WO2015098527A1 - Detection device, method for detecting object to be detected, and control program - Google Patents


Info

Publication number
WO2015098527A1
Authority
WO
WIPO (PCT)
Prior art keywords
detection
image
imaging
background
background model
Prior art date
Application number
PCT/JP2014/082712
Other languages
French (fr)
Japanese (ja)
Inventor
健太 西行
健太 永峰
Original Assignee
株式会社メガチップス
Priority date
Filing date
Publication date
Application filed by 株式会社メガチップス (MegaChips Corporation)
Publication of WO2015098527A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/254: Analysis of motion involving subtraction of images
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10024: Color image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20224: Image subtraction

Definitions

  • The present invention relates to a technique for detecting a detection target object.
  • As described in Patent Document 1 and Non-Patent Documents 1 to 3, various techniques have been proposed for detecting a detection target object such as a person.
  • The present invention has been made in view of the above points, and an object thereof is to provide a technique capable of improving the detection accuracy of a detection target object.
  • One aspect of the detection apparatus includes: a storage unit that stores a background model including background image information; a detection unit that uses the background model and an input image to detect a detection target object existing in the imaging region that appears in the input image; a specifying unit that, based on the detection result of the detection unit, specifies a region in the imaging region where the detection target object is likely to exist; and a determination unit that uses the specifying result of the specifying unit to determine whether or not to register image information obtained from the input image in the background model as background image information.
  • The specifying unit may obtain, based on the detection result of the detection unit, the detection frequency of the detection target object for each of a plurality of partial imaging regions constituting the imaging region.
  • Among the plurality of partial imaging regions, a partial imaging region whose detection frequency is greater than or equal to (or greater than) a first threshold value is specified as a region where the detection target object is likely to exist.
  • The specifying unit may also obtain, based on the detection result of the detection unit, the non-detection frequency of the detection target object for each of the plurality of partial imaging regions.
  • The specifying unit clears both the detection frequency and the non-detection frequency of any partial imaging region whose non-detection frequency is greater than or equal to (or greater than) a second threshold value.
  • When the detection target object is detected in a first partial imaging region among the plurality of partial imaging regions, the specifying unit may increase not only the detection frequency of the first partial imaging region but also the detection frequency of second partial imaging regions around the first partial imaging region.
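As a rough illustration of the bookkeeping described in the bullets above, the following Python sketch maintains a detection frequency and a non-detection frequency per imaging block, raises the detection frequency of a detected block and of its surrounding blocks, clears both counters once the non-detection frequency reaches the second threshold, and reports the blocks whose detection frequency has reached the first threshold. The function names, the dict-based maps, the 8-neighborhood, and the grid layout are illustrative assumptions, not details taken from the patent.

```python
def neighbors(bk, cols):
    """Assumed 8-neighborhood on a row-major grid of imaging blocks;
    out-of-grid indices are filtered out by the caller."""
    r, c = divmod(bk, cols)
    return [nr * cols + nc
            for nr in (r - 1, r, r + 1)
            for nc in (c - 1, c, c + 1)
            if (nr, nc) != (r, c) and nr >= 0 and 0 <= nc < cols]

def update_frequency_maps(det_freq, nondet_freq, detected, thresh1, thresh2, cols):
    """det_freq / nondet_freq: dicts mapping block index -> count.
    detected: block indices where a moving object was detected this frame.
    Returns the set of blocks where the detection target is likely to exist."""
    for bk in det_freq:
        if bk in detected:
            det_freq[bk] += 1                  # first partial imaging region
            for nb in neighbors(bk, cols):     # surrounding second regions
                if nb in det_freq:
                    det_freq[nb] += 1
        else:
            nondet_freq[bk] += 1
            if nondet_freq[bk] >= thresh2:     # second threshold: clear both
                det_freq[bk] = 0
                nondet_freq[bk] = 0
    # first threshold: blocks where the target is likely to exist
    return {bk for bk, f in det_freq.items() if f >= thresh1}
```

With a 3 × 3 grid and block 4 detected, block 4 and all of its neighbors gain detection frequency, while the undetected blocks accumulate non-detection frequency.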
  • One aspect of the method for detecting a detection target object is a detection method in a detection device, comprising: (a) a step of detecting, using a background model including background image information and an input image, a detection target object existing in the imaging region that appears in the input image; (b) a step of specifying, based on the detection result in step (a), a region in the imaging region where the detection target object is likely to exist; and (c) a step of determining, using the specifying result in step (b), whether or not to register image information obtained from the input image in the background model as background image information.
  • One aspect of the control program according to the present invention is a control program for controlling a detection device that detects a detection target object, causing the detection device to execute: (a) a step of detecting, using a background model including background image information and an input image, a detection target object existing in the imaging region that appears in the input image; (b) a step of specifying, based on the detection result in step (a), a region in the imaging region where the detection target object is likely to exist; and (c) a step of determining, using the specifying result in step (b), whether or not to register image information obtained from the input image in the background model as background image information.
  • FIG. 1 is a diagram showing an outline of the operation of the detection apparatus 1 according to the embodiment. Based on the input image, the detection apparatus 1 detects a detection target existing in an imaging region that appears in the image, that is, in an imaging region (field-of-view range) of an imaging unit that captures the image.
  • In this embodiment, the detection target is a moving object, more specifically a person.
  • The detection apparatus 1 detects a moving object image (an image showing a moving object) included in an input image, thereby detecting a moving object existing in the imaging region that appears in the image.
  • the detection apparatus 1 executes a background model generation process using a plurality of input images sequentially input in time series.
  • the background model is a model configured by collecting information included in a plurality of input images obtained by photographing the same scene (subject).
  • the background model is used when a moving body image is detected from each input image sequentially input in time series in a moving body detection process described later.
  • the preparation stage in which the background model generation process is executed is also referred to as a “learning stage”.
  • an input image used for generating a background model may be referred to as a “reference image”.
  • an input image that is a detection target of a moving body image may be referred to as a “detection target image”.
  • When the background model generation process is completed, the detection apparatus 1 shifts from the preparation stage to the actual operation stage.
  • In the actual operation stage, the detection apparatus 1 executes a moving object detection process for detecting a moving object image in an input image, a background model update process for updating the background model, and a determination period adjustment process for adjusting a registration determination period (described later) that is used in the background model update process.
  • the detection target is a person, but it may be other than a person.
  • FIG. 2 is a block diagram showing the configuration of the detection apparatus 1.
  • the detection device 1 includes an image input unit 2, an image processing unit 3, a detection result output unit 4, a background model storage unit 5, and a cache model storage unit 6.
  • FIG. 3 is a block diagram showing a configuration of the image processing unit 3.
  • the image input unit 2 inputs an input image 200 input from the outside of the detection apparatus 1 to the image processing unit 3.
  • the input image 200 is a captured image captured by the imaging unit.
  • FIG. 4 is a diagram illustrating an example of the imaging region 10 that appears in the input image 200, that is, the imaging region (field-of-view range) 10 of the imaging unit that captures the input image 200.
  • the imaging area 10 shown in FIG. 4 includes a conference room 100 as a subject. Therefore, in this case, the input image 200 input to the detection apparatus 1 is an image showing the conference room 100.
  • a plurality of desks 101 and a plurality of chairs 102 are arranged so as to surround the center of the floor.
  • the outside of the plurality of desks 101 is a passage 103.
  • a curtain 104 is provided on a part of the wall.
  • the detection apparatus 1 detects a person existing in the conference room 100 by detecting a moving body image with respect to the input image 200 showing the conference room 100, for example.
  • the image processing unit 3 performs various image processing on the input image 200 input from the image input unit 2.
  • the image processing unit 3 includes a CPU 300 and a storage unit 310.
  • the storage unit 310 includes a non-transitory recording medium that can be read by the CPU 300, such as a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • the storage unit 310 stores a control program 311 for controlling the detection device 1.
  • Various function blocks are formed in the image processing unit 3 by the CPU 300 executing the control program 311 in the storage unit 310.
  • the storage unit 310 may include a computer-readable non-transitory recording medium other than the ROM and RAM.
  • the storage unit 310 may include, for example, a small hard disk drive and an SSD (Solid State Drive).
  • the image processing unit 3 includes a plurality of functional blocks such as a background model generation unit 30, a moving object detection unit 31, a determination period adjustment unit 32, and a background model update unit 33.
  • These functional blocks need not be realized by the CPU executing the program; some or all of them may instead be realized by hardware circuits, such as logic circuits, that do not require a program to realize their functions.
  • the background model generation unit 30 generates a background model 500 including background image information using a plurality of input images 200 (a plurality of reference images 200) sequentially input from the image input unit 2.
  • the background model 500 generated by the background model generation unit 30 is stored in the background model storage unit 5.
  • the moving object detection unit 31 detects a moving object image in the input image 200 using the input image 200 input from the image input unit 2 and the background model 500 in the background model storage unit 5. In other words, the moving object detection unit 31 uses the input image 200 and the background model 500 to detect a moving object that exists in the imaging region 10 that appears in the input image 200.
  • the background model update unit 33 updates the background model 500.
  • The background model update unit 33 includes a specifying unit 330 and a registration determination unit 331. Based on the detection result of the moving object detection by the moving object detection unit 31, the specifying unit 330 specifies regions in the imaging region 10 where a moving object is likely to exist. Specifically, the specifying unit 330 generates a detection frequency map and a non-detection frequency map from the detection result of the moving object detection unit 31, and uses these maps to identify the regions in the imaging region 10 where a moving object is likely to exist. The detection frequency map shows the distribution of detection frequencies of moving objects (detection targets) over a plurality of regions in the imaging region 10.
  • The non-detection frequency map shows the distribution of non-detection frequencies of moving objects over a plurality of regions in the imaging region 10.
  • the registration determination unit 331 determines whether or not to register the image information obtained from the input image 200 as background image information in the background model 500 using the specifying result in the specifying unit 330.
  • the detection frequency map and the non-detection frequency map will be described in detail later.
  • the background model storage unit 5 stores the background model 500 generated by the background model generation unit 30.
  • the cache model storage unit 6 stores a cache model to be described later.
  • Each of the background model storage unit 5 and the cache model storage unit 6 includes rewritable storage means such as flash memory, EPROM (Erasable Programmable Read Only Memory), or hard disk (HD).
  • The background model storage unit 5 and the cache model storage unit 6 may be independent pieces of hardware; alternatively, one part of the storage area of a single storage device may be used as the background model storage unit 5 and another part as the cache model storage unit 6.
  • the determination period adjustment unit 32 adjusts the registration determination period used in the update of the background model 500.
  • the detection result output unit 4 outputs the detection result of the moving object detection by the moving object detection unit 31 to the outside.
  • the detection result output unit 4 includes, for example, a display unit that displays the state of the imaging region 10, that is, the state of the subject existing in the imaging region 10 (the conference room 100 in the example of FIG. 4) in real time.
  • the display unit displays the area where the moving object is detected in color or the like, so that the detection result of the moving object detection is output to the outside.
  • the detection result output unit 4 may output the detection result to the outside with a sound such as voice.
  • the detection result output unit 4 may output the detection result to the outside by outputting a signal indicating the detection result to an external device.
  • the external device performs an operation according to the detection result. For example, the external device generates an alarm.
  • Suppose, for example, that the imaging region 10 is the conference room 100 of FIG. 4.
  • The external device may then control the lighting equipment in the conference room 100 to brighten only the regions where people are present.
  • The external device may likewise control the air conditioner in the conference room 100 to cool or warm only the regions where people are present.
  • FIG. 5 is a diagram for explaining the background model 500.
  • In the learning stage, each input image 200 captured by the imaging unit serves as a reference image 200 used for generating the background model 500.
  • The background model 500 is generated based on A (A ≥ 2) reference images 200.
  • the imaging area 10 is divided into a plurality of rectangular imaging blocks (partial imaging areas).
  • Accordingly, the input image 200 is composed of a plurality of image blocks that respectively show the images of the plurality of imaging blocks constituting the imaging region 10.
  • the size of one image block is, for example, 3 pixels ⁇ 3 pixels.
  • Regarding an imaging block and the image block showing the image of that imaging block in the input image 200, the imaging block may be referred to as the imaging block corresponding to that image block, and the image block as the image block corresponding to that imaging block.
  • the background model 500 includes a plurality of codebooks CB corresponding to the plurality of imaging blocks BK, respectively.
  • Each code book CB includes a code word CW including image information and related information related to the image information.
  • the code word CW included in the code book CB is generated based on an image block indicating an image of the imaging block BK corresponding to the code book CB in one input image 200.
  • Each code book CB includes a plurality of code words CW.
  • the image information included in the code word CW in the background model 500 may be referred to as “background image information”.
  • In FIG. 5, the code book CB shown with sand hatching includes three code words CW1 to CW3, generated based on the three reference images 200a to 200c, respectively.
  • the code word CW1 included in the code book CB is generated based on the image block indicating the image of the imaging block BK corresponding to the code book CB in the reference image 200a.
  • the code word CW2 included in the code book CB is generated based on the image block indicating the image of the imaging block BK corresponding to the code book CB in the reference image 200b.
  • the code word CW3 included in the code book CB is generated based on the image block indicating the image of the imaging block BK corresponding to the code book CB in the reference image 200c.
  • FIG. 6 is a diagram for explaining the code word CW.
  • As the background image information, the code word CW contains the image information of the image block corresponding to the code book CB that includes that code word CW, that is, the pixel values PV of the plurality of pixels constituting that image block.
  • the code word CW includes the latest matching time Te and the code word generation time Ti as related information. As will be described later, it is determined whether or not the image information in the code word CW included in the background model 500 matches the image information acquired from the detection target image 200.
  • the latest matching time Te included in the code word CW indicates the latest time when it is determined that the image information included in the code word CW matches the image information acquired from the detection target image 200.
  • the code word generation time Ti included in the code word CW indicates the time at which the code word CW is generated.
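The codebook structure described above can be pictured with a minimal data layout such as the following; the class and field names are assumptions, since the patent specifies only the contents of each codeword (background image information, the latest match time Te, and the generation time Ti).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Codeword:
    pixel_values: List[float]   # background image information: pixel values PV of the image block
    latest_match_time: float    # Te: latest time this codeword matched the detection target image
    generation_time: float      # Ti: time at which this codeword was generated

@dataclass
class Codebook:
    codewords: List[Codeword] = field(default_factory=list)

# Background model: one codebook per imaging block BK (indexed here by block number).
background_model = {0: Codebook()}
background_model[0].codewords.append(
    # Te is provisionally set equal to Ti when a codeword is created.
    Codeword(pixel_values=[0.0] * 27, latest_match_time=10.0, generation_time=10.0))
```

A 3 × 3 image block with R, G, B values per pixel gives the 27 pixel values stored per codeword.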
  • FIG. 7 is a flowchart showing background model generation processing in which such a background model 500 is generated.
  • the background model generation process shown in FIG. 7 is executed when the background model 500 is not stored in the background model storage unit 5.
  • In step s2, the background model generation unit 30 selects one imaging block in the imaging region 10 and sets it as the imaging block of interest. The background model generation unit 30 then determines whether or not the code book CB corresponding to the imaging block of interest is stored in the background model storage unit 5.
  • When the background model generation unit 30 determines that the code book CB corresponding to the imaging block of interest is not stored in the background model storage unit 5, then in step s3 it generates the code book CB corresponding to the imaging block of interest based on the reference image 200 input in step s1, and stores it in the background model storage unit 5.
  • the background model generation unit 30 acquires image information from an image block indicating an image of the target imaging block in the reference image 200 input in step s1. Then, the background model generation unit 30 generates a code word CW including the acquired image information as background image information, and stores the code book CB including the code word CW in the background model storage unit 5. The latest match time Te included in the code word CW is provisionally set to the same time as the code word generation time Ti.
  • When the background model generation unit 30 determines that the code book CB corresponding to the imaging block of interest is stored in the background model storage unit 5, then in step s4 it acquires image information from the image block showing the image of the imaging block of interest in the reference image 200 input in step s1. The background model generation unit 30 then determines whether or not the acquired image information matches the background image information included in each code word CW of the code book CB corresponding to the imaging block of interest, stored in the background model storage unit 5. That is, the background model generation unit 30 determines whether or not the code words CW included in that code book CB contain a code word CW whose background image information matches the acquired image information.
  • If it is determined in step s4 that the background image information in each code word CW of the code book CB corresponding to the imaging block of interest does not match the acquired image information (step s5), that is, if that code book CB contains no code word CW whose background image information matches the acquired image information, then in step s6 the background model generation unit 30 generates a new code word CW including, as background image information, the image information acquired from the reference image 200 in step s4.
  • the latest match time Te included in the code word CW is provisionally set to the same time as the code word generation time Ti.
  • the background model generation unit 30 adds the generated code word CW to the code book CB corresponding to the target imaging block stored in the background model storage unit 5. As a result, new background image information is added to the code book CB corresponding to the imaging block of interest. Thereafter, step s7 is executed.
  • On the other hand, when the code book CB corresponding to the imaging block of interest does contain a code word CW whose background image information matches the acquired image information, step s7 is executed without executing step s6.
  • In step s7, the background model generation unit 30 determines whether or not all of the imaging blocks in the imaging region 10 have been processed, that is, whether or not every imaging block has been set as the imaging block of interest. If the determination in step s7 finds an unprocessed imaging block, the background model generation unit 30 sets an imaging block that has not yet been processed as the new imaging block of interest, and executes step s2 and the subsequent steps again.
  • If all the imaging blocks have been processed, then in step s8 the background model generation unit 30 determines whether or not the above processing has been performed on A reference images 200. If the determination in step s8 finds that the number of processed reference images 200 is smaller than A, a new reference image 200 is input to the image processing unit 3 in step s1, and the processing from step s2 onward is executed on that image. If the number of processed reference images 200 has reached A, the background model generation unit 30 ends the background model generation process. The background model 500 described above is thereby built up in the background model storage unit 5.
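The flow of steps s1 to s8 can be condensed into a short sketch like the following, assuming a `matches` predicate for the image-information comparison of step s4 and a `make_codeword` constructor for steps s3/s6 (both are placeholders for details given elsewhere in the patent):

```python
def generate_background_model(reference_images, matches, make_codeword, A):
    """Sketch of the background model generation process of FIG. 7.
    reference_images yields, per image, a list of per-block image
    information; matches(info, cw) is the match test of step s4;
    make_codeword(info) builds a codeword (steps s3/s6)."""
    background_model = {}  # imaging block index -> list of codewords (codebook CB)
    for img_no, image_blocks in enumerate(reference_images):
        if img_no >= A:                      # step s8: process A reference images
            break
        for bk, info in enumerate(image_blocks):          # steps s2-s7
            codebook = background_model.setdefault(bk, [])
            # no codebook yet, or no matching codeword: add a new codeword
            if not any(matches(info, cw) for cw in codebook):
                codebook.append(make_codeword(info))
    return background_model
```

For example, feeding three reference images whose block values repeat produces one codeword per distinct value in each block's codebook.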
  • FIG. 8 is a flowchart showing a schematic operation in the actual operation stage of the detection apparatus 1. In the detection apparatus 1, when the background model generation process ends, the process shown in FIG. 8 is executed.
  • When an input image 200 is input from the image input unit 2 to the image processing unit 3 in step s11, the series of processes from step s12 to step s14 is executed with that input image 200 as the processing target.
  • In step s12, the image processing unit 3 performs the moving object detection process, which detects moving object images in the input image 200 being processed.
  • In step s13, the image processing unit 3 performs the determination period adjustment process, which adjusts the registration determination period based on the result of the moving object detection process in step s12.
  • In step s14, the image processing unit 3 performs the background model update process, which updates the background model 500 in the background model storage unit 5.
  • Thereafter, each time a new input image 200 (a new detection target image 200) is input from the image input unit 2 to the image processing unit 3 in step s11, that input image 200 becomes the new processing target and the series of processes from step s12 to step s14 is executed on it; the image processing unit 3 operates in the same manner for every subsequent input image.
  • Thus, for each input image 200, the moving object detection process, the determination period adjustment process, and the background model update process are executed in this order.
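The per-image pipeline of FIG. 8 can be sketched as a simple loop; the three callables stand in for the processes of steps s12 to s14, and their signatures are assumptions for illustration:

```python
def actual_operation_stage(input_images, detect, adjust_period, update_model):
    """Per-image pipeline of FIG. 8 (steps s11-s14), in the order the
    text specifies: moving object detection, determination period
    adjustment, then background model update."""
    for image in input_images:                # step s11: next input image
        result = detect(image)                # step s12: moving object detection
        period = adjust_period(result)        # step s13: registration determination period
        update_model(image, result, period)   # step s14: background model update
```

A quick trace with stub callables confirms the ordering per frame.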
  • FIG. 9 is a flowchart showing the moving object detection process.
  • First, the moving object detection unit 31 sets one imaging block in the imaging region 10 (for example, the imaging block at the upper left of the imaging region 10) as the imaging block of interest.
  • In step s121, the moving object detection unit 31 performs moving object detection on the image block (hereinafter referred to as the "image block of interest") showing the image of the imaging block of interest in the processing-target input image 200 (detection target image 200) input in step s11. That is, the moving object detection unit 31 detects whether or not a moving object exists in the imaging block of interest.
  • Specifically, the moving object detection unit 31 determines whether or not the image block of interest is a moving object image by determining whether or not the image information acquired from the image block of interest in the input image 200 matches the background image information in each code word CW of the code book CB corresponding to the imaging block of interest in the background model 500.
  • the code book CB corresponding to the imaging block of interest may be referred to as a “corresponding code book CB”.
  • the code word CW included in the corresponding code book CB may be referred to as “corresponding code word CW”. A specific method of moving object detection will be described later.
  • When step s121 has been executed, the moving object detection unit 31 stores the result of the moving object detection in step s122. Then, in step s123, the moving object detection unit 31 determines whether or not all the imaging blocks in the imaging region 10 have been processed, that is, whether or not every imaging block has been set as the imaging block of interest. If the determination in step s123 finds an imaging block that has not been processed, the moving object detection unit 31 sets an imaging block that has not yet been processed as the new imaging block of interest, and then executes step s121 and the subsequent steps again.
  • If the determination in step s123 finds that all the imaging blocks in the imaging region 10 have been processed, that is, that detection of moving object images has been completed for the entire input image 200, the moving object detection unit 31 ends the moving object detection process.
  • the moving object detection unit 31 stores the results of moving object detection for a plurality of image blocks constituting the input image 200. That is, the moving object detection unit 31 stores the results of moving object detection for a plurality of imaging blocks constituting the imaging area 10. This detection result is input to the detection result output unit 4.
  • FIG. 10 is a diagram showing how vectors are extracted from each of the target image block of the input image 200 and the corresponding codeword CW of the background model 500.
  • FIG. 11 is a diagram illustrating a relationship between a vector extracted from the target image block of the input image 200 and a vector extracted from the corresponding codeword CW of the background model 500.
  • the image information of the target image block in the input image 200 is handled as a vector.
  • the background image information included in the corresponding codeword CW is treated as a vector.
  • Whether or not the image block of interest is a moving object image is determined by whether or not the vector for the image information of the image block of interest and the vector for the background image information of each corresponding code word CW point in the same direction. When these two kinds of vectors point in the same direction, the image information of the image block of interest can be considered to match the background image information of a corresponding code word CW.
  • In that case, the image block of interest in the input image 200 is determined to be an image showing the background, not a moving object image.
  • When the two kinds of vectors do not point in the same direction, the image information of the image block of interest can be considered not to match the background image information of any corresponding code word CW. In this case, the image block of interest in the input image 200 is determined to be a moving object image, not an image showing the background.
  • Specifically, the moving object detection unit 31 generates an image vector x_f whose components are the pixel values of the plurality of pixels included in the image block of interest in the input image 200.
  • FIG. 10 shows an image vector x_f whose components are the pixel values of the pixels of an image block of interest 210 consisting of nine pixels.
  • Since each pixel has R (red), G (green), and B (blue) pixel values, the image vector x_f consists of 27 components.
  • The moving object detection unit 31 also generates, from the background image information in each corresponding code word CW included in the corresponding code book CB of the background model 500, a background vector, that is, a vector for that background image information.
  • The background image information 510 of the corresponding code word shown in FIG. 10 consists of pixel values for nine pixels, so a background vector x_b having those pixel values as components is generated.
  • A background vector x_b is generated from each of the plurality of code words CW included in the corresponding code book CB; a plurality of background vectors x_b are therefore obtained for one image vector x_f.
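Concretely, building the 27-component image vector x_f and the background vectors x_b from 3 × 3 RGB pixel blocks might look like this (the (row, column, channel) array layout is an assumption for illustration):

```python
import numpy as np

def to_vector(block_3x3_rgb):
    """Flatten a 3x3 image block with R, G, B values per pixel into a
    27-component vector, as described for x_f and x_b above."""
    return np.asarray(block_3x3_rgb, dtype=float).reshape(-1)

# Image vector from the image block of interest:
x_f = to_vector(np.arange(27).reshape(3, 3, 3))

# One background vector x_b per corresponding codeword's pixel values:
codeword_pixel_values = [np.ones((3, 3, 3)), np.full((3, 3, 3), 0.5)]
x_bs = [to_vector(pv) for pv in codeword_pixel_values]
```

With two corresponding codewords, two background vectors are obtained for the single image vector.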
  • When the image block of interest in the input image 200 is an image showing the background, the image vector x_f and some background vector x_b should point in almost the same direction.
  • However, since the image vector x_f and each background vector x_b can be considered to contain a certain amount of noise components, they will not point in exactly the same direction.
  • In the present embodiment, therefore, in consideration of the noise components contained in the image vector x_f and each background vector x_b, the image block of interest in the input image 200 is determined to be an image showing the background even when the image vector x_f and a background vector x_b do not point in exactly the same direction.
  • The relationship of the image vector x_f and the background vector x_b to the true vector u can be expressed as in FIG. 11.
  • As an evaluation value indicating to what degree the image vector x_f and a background vector x_b point in the same direction, the evaluation value D² expressed by equation (1) below is considered.
  • The evaluation value D² is the minimum eigenvalue of the non-zero 2 × 2 matrix XX^T; accordingly, the evaluation value D² can be determined analytically. That the evaluation value D² is the minimum eigenvalue of the non-zero 2 × 2 matrix XX^T is described in Non-Patent Document 3 above.
  • to determine whether the target image block in the input image 200 is a moving object image, a moving object judgment formula, shown by the following formula (3), is used; it is expressed using the minimum value C of the plurality of values of the evaluation value D2, the mean value μ of those values, and their standard deviation σ. This moving object judgment formula is based on Chebyshev's inequality.
  • k in formula (3) is a constant whose value is determined based on, for example, the imaging environment of the imaging unit that captures the input image 200 (the environment in which the imaging unit is installed).
  • the constant k is determined by experiments or the like.
  • when the moving object determination formula (inequality) is satisfied, the moving object detection unit 31 considers that the image vector xf and each background vector xb do not point in the same direction, and determines that the target image block is not an image indicating the background but a moving body image. On the other hand, when the moving object determination formula is not satisfied, the moving object detection unit 31 considers that the image vector xf and each background vector xb point in the same direction, and determines that the target image block is not a moving body image but an image indicating the background.
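  • Formula (3) itself is not reproduced in this excerpt. As an assumption, a plausible Chebyshev-style test consistent with the surrounding description is used below: the block is judged a moving object image when the smallest evaluation value C is not separated from the mean μ by more than k standard deviations, i.e. when no single background codeword stands out as a significantly better match. Both the function name and the exact inequality are assumptions, and the constant k would be tuned experimentally as the text states.

```python
import statistics

def is_moving_object(d2_values, k):
    """Moving-object judgment sketch.  d2_values are the evaluation
    values D^2 computed against every corresponding codeword; C is
    their minimum, mu their mean, sigma their (population) standard
    deviation.  Assumed form of formula (3): moving object when
    mu - C < k * sigma (the minimum is not a significant outlier)."""
    C = min(d2_values)
    mu = statistics.mean(d2_values)
    sigma = statistics.pstdev(d2_values)
    return (mu - C) < k * sigma
```

  • With a well-matching codeword present, C sits far below the mean and the block is judged background; when all evaluation values are similarly large, the block is judged a moving object image.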
  • the moving object detection is performed based on whether the direction of the image vector obtained from the target image block and the direction of the background vector obtained from each corresponding codeword CW are the same. Therefore, the moving object detection method according to the present embodiment is a moving object detection method that is relatively robust against changes in brightness in the imaging region 10 such as sunshine changes or illumination changes.
  • the detection frequency map 600 includes the detection frequency 601 of the moving object (detection target) 700 for each of the plurality of imaging blocks BK constituting the imaging region 10.
  • the non-detection frequency map 610 includes a non-detection frequency 611 of the moving object (detection target) 700 for each of the plurality of imaging blocks BK constituting the imaging region 10.
  • in the detection frequency map 600, the plurality of detection frequencies 601 included therein are arranged in a matrix.
  • the detection frequency 601 of the moving object 700 for a certain imaging block BK is arranged at the same position as the position of the imaging block BK in the imaging area 10 in the detection frequency map 600.
  • in the non-detection frequency map 610, the plurality of non-detection frequencies 611 included therein are arranged in a matrix.
  • the non-detection frequency 611 of the moving object 700 for a certain imaging block BK is arranged at the same position as the position of the imaging block BK in the imaging area 10 in the non-detection frequency map 610.
  • when the moving object detection unit 31 determines that an image block included in the input image 200 is a moving body image, that is, when the specifying unit 330 determines that the moving body 700 exists in the imaging block corresponding to the image block, the detection frequency 601 of the moving object 700 for the imaging block BK corresponding to that image block in the detection frequency map 600 is increased by one.
  • the moving object detection unit 31 determines that an image block included in the input image 200 is not a moving object image, that is, the specifying unit 330 determines that the moving object 700 does not exist in the imaging block corresponding to the image block. Then, the non-detection frequency 611 of the moving object 700 for the imaging block BK corresponding to the image block in the non-detection frequency map 610 is increased by one.
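  • The two map updates above can be sketched as follows; the map dimensions and function name are illustrative assumptions, with each map cell mirroring the position of its imaging block BK in the imaging area.

```python
import numpy as np

# Detection / non-detection frequency maps for an imaging region split
# into ROWS x COLS imaging blocks (sizes are illustrative only).
ROWS, COLS = 4, 5
detection_map = np.zeros((ROWS, COLS), dtype=int)      # map 600
non_detection_map = np.zeros((ROWS, COLS), dtype=int)  # map 610

def update_frequency_maps(row, col, is_moving):
    """Increase by one the frequency for the imaging block at
    (row, col): the detection frequency when the block was judged a
    moving-body image, the non-detection frequency otherwise."""
    if is_moving:
        detection_map[row, col] += 1
    else:
        non_detection_map[row, col] += 1
```

  • Repeated detections in the same block accumulate, which is what later allows regions where a moving object is likely to exist to be identified by thresholding the map.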
  • FIGS. 12 and 13 show an example of the detection frequency map 600 and an example of the non-detection frequency map 610, respectively.
  • in the detection frequency map 600 shown in FIG. 12, the detection frequency 601 of the moving object 700 is large for the two imaging blocks BK where the moving object 700 exists.
  • in the non-detection frequency map 610 shown in FIG. 13, the non-detection frequency 611 of the moving object 700 is large for each imaging block BK other than the two imaging blocks BK in which the moving object 700 exists.
  • the detection frequency 601 for an imaging block BK in which the moving body 700 is frequently detected increases. Therefore, by referring to the detection frequency map 600, it is possible to identify areas where the moving object 700 is likely to exist in the imaging area 10. For example, when a passage used by people exists in the imaging region 10, the detection frequency of the imaging blocks corresponding to the passage increases, so it can be understood that a person is highly likely to exist in those imaging blocks.
  • the non-detection frequency 611 for an imaging block BK in which the moving body 700 is rarely detected increases. Therefore, by referring to the non-detection frequency map 610, it is possible to specify areas in which the moving object 700 is highly unlikely to exist in the imaging area 10.
  • ⁇ Background model update process> Next, the background model update process in step s14 will be described.
  • the cache model includes background image information candidates that are background image information candidates registered in the background model 500.
  • the brightness may change due to a change in sunlight or a change in illumination.
  • in this case, the image information of the input image 200 changes, so the moving object detection unit 31 may erroneously determine that an image block indicating the background included in the input image 200 is a moving object image. Therefore, the image information of an image block determined to be a moving object image by the moving object detection unit 31 may actually be background image information.
  • the background model update unit 33 once registers the image information of an image block determined to be a moving object image by the moving object detection unit 31 as a background image information candidate in the cache model. Then, the background model update unit 33 determines whether the background image information candidate registered in the cache model is background image information, based on the plurality of input images 200 input during the registration determination period.
  • when the background model update unit 33 determines that a background image information candidate registered in the cache model is background image information, it registers the background image information candidate in the background model 500 as background image information. That is, the background model update unit 33 determines, based on the plurality of input images 200 input during the registration determination period, whether or not to register the background image information candidates stored in the cache model storage unit 6 in the background model 500 as background image information.
  • the registration determination period is adjusted by the determination period adjustment process in step s13.
  • FIG. 14 is a flowchart showing the background model update process.
  • the background model update unit 33 sets one imaging block in the imaging region 10 as the target imaging block. Then, the background model update unit 33 determines whether or not the image block (target image block) indicating the image of the target imaging block in the processing target input image 200 input in step s11 has been determined to be a moving object image by the moving object detection unit 31. If it is determined in step s141 that the target image block has been determined not to be a moving object image by the moving object detection unit 31, that is, if the image information of the target image block has been determined to match the background image information of a corresponding codeword CW in the background model 500, the background model update unit 33 executes step s142.
  • in step s142, the background model update unit 33 changes the latest match time Te of the code word CW in the background model 500, which includes the background image information determined to match the image information of the target image block, to the current time.
  • after step s142, the specifying unit 330 increases the non-detection frequency 611 for the target imaging block corresponding to the target image block in the non-detection frequency map 610 by one in step s143.
  • on the other hand, when it is determined in step s141 that the target image block has been determined to be a moving object image by the moving object detection unit 31, the background model update unit 33 executes step s144.
  • in step s144, the cache model is updated. Specifically, when the image information of the target image block is not included in any corresponding codeword CW in the cache model in the cache model storage unit 6, the background model update unit 33 generates a code word CW that includes the image information as a background image information candidate and registers it in the corresponding code book CB in the cache model.
  • the code word CW includes the latest match time Te and the code word generation time Ti in addition to the image information (background image information candidate).
  • the latest match time Te included in a code word CW generated in step s144 is provisionally set to the same time as the code word generation time Ti. Further, when the image information of the target image block is included in a corresponding codeword CW in the cache model in the cache model storage unit 6, that is, when a corresponding code word CW including a background image information candidate that matches the image information of the target image block is included in the cache model, the background model update unit 33 changes the latest match time Te in that code word CW to the current time.
  • thus, in step s144, either a code word CW including image information not yet present in the cache model is added to the cache model, or the latest match time Te of an existing code word CW in the cache model is updated.
  • in step s144, when the code book CB corresponding to the target imaging block is not registered in the cache model in the cache model storage unit 6, the background model update unit 33 generates a code word CW that includes the image information of the target image block as a background image information candidate, generates a code book CB including that code word CW, and registers the code book CB in the cache model.
  • when step s144 is executed, the specifying unit 330 increases the detection frequency 601 for the target imaging block corresponding to the target image block in the detection frequency map 600 by one in step s145.
  • in step s146, the background model update unit 33 determines whether or not processing has been performed for all of the imaging blocks in the imaging region 10, that is, whether or not all of the imaging blocks have been set as the target imaging block. If it is determined in step s146 that there is an imaging block that has not been processed, the background model update unit 33 sets an imaging block that has not yet been processed as the new target imaging block, and thereafter executes step s141 and the subsequent steps. On the other hand, if it is determined in step s146 that the processing has been performed for all the imaging blocks in the imaging region 10, the background model update unit 33 executes step s147.
  • in step s147, each code word CW that is included in the cache model and whose latest match time Te has not been updated for a predetermined period (the deletion determination period) is deleted. That is, when the image information included in a code word CW in the cache model has not matched the image information acquired from the input image 200 for a certain period, the code word CW is deleted. If the image information included in a code word CW is background image information, that is, image information obtained from an image indicating the background included in the input image 200, the latest match time Te included in the code word CW is frequently updated. Therefore, the image information included in a code word CW whose latest match time Te has not been updated for the deletion determination period is highly likely to be image information acquired from a moving object image included in the input image 200.
  • the deletion determination period is a period set in advance to distinguish changes in image information caused by changes in brightness in the imaging region 10, such as sunlight or illumination changes, or by environmental changes such as the installation of a poster or a change in desk layout, from changes in image information that occur when a moving object such as a person to be detected moves.
  • for example, the deletion determination period is set to a period during which the input image 200 for several tens to several hundreds of frames is input.
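  • The step s147 deletion can be sketched as follows. The codeword data layout (a dict carrying the latest match time Te and generation time Ti, both measured in frame numbers) and the function name are illustrative assumptions.

```python
DELETION_DETERMINATION_PERIOD = 100  # frames; "several tens to several hundreds"

def prune_cache_model(cache_model, current_frame,
                      period=DELETION_DETERMINATION_PERIOD):
    """Delete every codeword CW whose latest match time Te has not been
    updated for the deletion determination period.

    cache_model maps an imaging-block id to a codebook, i.e. a list of
    codeword dicts {'info': ..., 'Te': frame, 'Ti': frame}."""
    for block_id, codebook in cache_model.items():
        cache_model[block_id] = [
            cw for cw in codebook if current_frame - cw['Te'] <= period
        ]
    return cache_model
```

  • A codeword whose image information keeps matching the input (a true background candidate) has its Te refreshed and survives, while a stale codeword, likely obtained from a moving object image, is dropped.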
  • in step s147, when each code word CW that is included in the cache model and whose latest match time Te has not been updated for the deletion determination period has been deleted, the background model update unit 33 executes step s148.
  • in step s148, the background model update unit 33 identifies, from the code words CW registered in the cache model, the code words CW for which the registration determination period has elapsed since registration in the cache model.
  • when a code word CW is generated in step s144, it is immediately registered in the cache model. Therefore, the code word generation time Ti included in the code word CW can be used as the time when the code word CW was registered in the cache model.
  • the registration determination period is set to a value larger than the deletion determination period; for example, it is set to a value several times the deletion determination period.
  • the registration determination period is represented by a number of frames. If the registration determination period is, for example, "500", it is a period during which 500 frames of the input image 200 are input.
  • in step s149, the background model update unit 33 performs the background model registration determination process using the detection frequency map 600.
  • in the background model registration determination process, it is determined whether or not to register the code words CW specified in step s148 in the background model 500 in the background model storage unit 5. The background model registration determination process will be described in detail below.
  • the specifying unit 330 uses the detection frequency map 600 to specify an area in the imaging area 10 where a moving object is highly likely to exist. Specifically, the specifying unit 330 specifies a detection frequency greater than the first threshold in the detection frequency map 600. Then, the specifying unit 330 sets an imaging block corresponding to a detection frequency larger than the first threshold as an area where there is a high possibility that a moving object exists in the imaging area 10. In other words, when the moving object detection frequency for the imaging block is greater than the first threshold, the specifying unit 330 determines that there is a high possibility that a moving object exists in the imaging block.
  • FIG. 15 is a diagram showing an example of the detection frequency map 600.
  • in the detection frequency map 600 shown in FIG. 15, when the first threshold value is, for example, 100, the two detection frequencies in the first and second rows from the bottom of the rightmost column are larger than the first threshold value. Therefore, in the imaging region 10, the region 120 (the area indicated by diagonal lines) composed of the imaging blocks BK in the first and second rows from the bottom of the rightmost column becomes the region where a moving object is highly likely to exist.
  • when there is no detection frequency larger than the first threshold in the detection frequency map 600, the specifying unit 330 determines that there is no region in the imaging region 10 where a moving object is highly likely to exist. Further, the specifying unit 330 may instead set imaging blocks corresponding to detection frequencies equal to or higher than the first threshold as the region where a moving object is highly likely to exist in the imaging area 10.
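  • The region specification by the specifying unit 330 reduces to thresholding the detection frequency map; the sketch below assumes the map is a NumPy array and uses the example first threshold of 100 from the text (the function name is an assumption).

```python
import numpy as np

FIRST_THRESHOLD = 100  # example value from the text

def likely_moving_object_region(detection_map, threshold=FIRST_THRESHOLD):
    """Boolean mask marking the imaging blocks whose detection
    frequency is larger than the first threshold, i.e. the blocks
    treated as highly likely to contain a moving object."""
    return detection_map > threshold
```

  • An all-False mask corresponds to the case where no region in the imaging region 10 is judged highly likely to contain a moving object.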
  • next, the registration determination unit 331 determines, for each code word CW identified in step s148, whether or not to register the code word CW in the background model 500, based on the specification result of the specifying unit 330.
  • specifically, if the imaging block corresponding to the image block from which the image information included in a code word CW identified in step s148 was acquired is included in the region of the imaging region 10 specified by the specifying unit 330 as highly likely to contain a moving object, that is, if a moving object is highly likely to exist in that imaging block, the registration determination unit 331 determines not to register the code word CW in the background model 500.
  • on the other hand, if the imaging block corresponding to the image block from which the image information included in a code word CW identified in step s148 was acquired is not included in the region specified by the specifying unit 330 as highly likely to contain a moving object, that is, if a moving object is highly unlikely to exist in that imaging block, the registration determination unit 331 determines to register the code word CW in the background model 500.
  • when the specifying unit 330 determines that there is no region where a moving object is highly likely to exist, the registration determination unit 331 determines to register all the code words CW specified in step s148 in the background model 500.
  • the background model update unit 33 registers each code word CW determined by the registration determination unit 331 to be registered in the background model 500 in the code book CB corresponding to that code word CW in the background model 500. Then, the background model update unit 33 deletes the code words CW registered in the background model 500 from the cache model.
  • note that the background model update unit 33 may delete a code word CW in the cache model before the registration determination period elapses after it is registered in the cache model.
  • the fact that the background model update unit 33 deletes a code word CW (background image information candidate) in the cache model before the registration determination period elapses after it is registered in the cache model means that the background model update unit 33 has determined, based on the plurality of input images 200 input during the registration determination period, not to register the code word CW (background image information candidate) registered in the cache model in the background model 500.
  • on the other hand, there are cases where the background model update unit 33 registers in the background model 500 a code word CW that has remained in the cache model, without being deleted, until the registration determination period elapses after it is registered in the cache model.
  • the fact that the background model update unit 33 registers such a code word CW (background image information candidate) in the background model 500 means that the background model update unit 33 has determined, based on the plurality of input images 200 input during the registration determination period, to register the code word CW (background image information candidate) registered in the cache model in the background model 500.
  • as described above, the background model update unit 33 determines, based on the plurality of input images 200 input during the registration determination period, whether or not to register the background image information candidates registered in the cache model in the background model 500 as background image information. Therefore, the image information of an image block erroneously determined to be a moving object image by the moving object detection unit 31 can be registered in the background model 500 as background image information. Thus, the background model 500 can be updated appropriately, and the accuracy of moving object detection by the moving object detection unit 31 is improved.
  • suppose that, unlike the present embodiment, every code word CW for which the registration determination period has elapsed after registration in the cache model were unconditionally registered in the background model 500.
  • in that case, a code word CW that is registered in the cache model and includes the image information of a moving object image could be registered in the background model 500.
  • in particular, while a moving object stays at the same position in the imaging region 10, the image information of the image block indicating the image of the imaging block where the moving object exists is unlikely to change. Accordingly, a code word CW that is registered in the cache model and includes the image information of a moving object image may not be deleted from the cache model for a long time and may finally be registered in the background model 500.
  • as a result, the accuracy of moving object detection by the moving object detection unit 31 may deteriorate.
  • in particular, in an environment where moving objects move little, a code word CW including the image information of a moving object image registered in the cache model is easily registered in the background model 500, so the possibility that the accuracy of moving object detection by the moving object detection unit 31 deteriorates increases.
  • in the present embodiment, instead of unconditionally registering in the background model 500 a code word CW for which the registration determination period has elapsed after registration in the cache model, whether to register the code word CW in the background model 500 is determined using the detection frequency map 600, as described above. This suppresses a code word CW containing the image information of a moving object image from being accidentally registered in the background model 500. This point will be described in detail below.
  • the specifying unit 330 uses the detection frequency map 600 to specify an imaging block that is highly likely to have a moving object in the imaging region 10.
  • when a moving object is highly likely to exist in an imaging block, the image information of the image block indicating the image of that imaging block in the input image 200 is highly likely to be the image information of a moving object image. Therefore, even for a code word CW for which the registration determination period has elapsed since registration in the cache model, if the imaging block corresponding to the image block from which the image information included in the code word CW was acquired is an imaging block where a moving object is highly likely to exist, the image information of the code word CW is highly likely to be the image information of a moving object image.
  • therefore, when the imaging block is one in which a moving object is highly likely to exist, by not adding the code word CW to the background model 500, it is possible to suppress a code word CW including the image information of a moving object image from being erroneously added to the background model 500.
  • in step s150, the background model update unit 33 performs the detection frequency clear process.
  • in the detection frequency clear process, first, the specifying unit 330 specifies non-detection frequencies larger than the second threshold in the non-detection frequency map 610. Then, the specifying unit 330 clears, that is, sets to zero, the detection frequency in the detection frequency map 600 for each imaging block corresponding to a non-detection frequency larger than the second threshold. Furthermore, the specifying unit 330 clears, that is, sets to zero, each non-detection frequency larger than the second threshold in the non-detection frequency map 610. For example, the second threshold is set to the same value as the first threshold.
  • the detection frequency increases each time the presence of a moving object in the corresponding imaging block is detected in the moving object detection process of step s12. Therefore, if the detection frequency were never cleared, a state could easily arise in which the detection frequency of an imaging block where a moving object is no longer highly likely to exist, for example due to a layout change in the imaging region 10, remains larger than the first threshold.
  • when a moving object is no longer detected in such an imaging block, the non-detection frequency of the imaging block in the non-detection frequency map 610 increases.
  • in the present embodiment, the detection frequency in the detection frequency map 600 for each imaging block corresponding to a non-detection frequency larger than the second threshold in the non-detection frequency map 610 is cleared, so the state described above can be suppressed.
  • the specifying unit 330 may clear the detection frequency and the non-detection frequency for the imaging block corresponding to the non-detection frequency equal to or higher than the second threshold.
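  • The step s150 clear process can be sketched as a masked reset of both maps; the NumPy array representation and the function name are assumptions (the strict inequality follows the main description, not the equal-or-higher variant just mentioned).

```python
import numpy as np

def detection_frequency_clear(detection_map, non_detection_map,
                              second_threshold):
    """Step s150 sketch: wherever the non-detection frequency is larger
    than the second threshold, set both the detection frequency and
    that non-detection frequency to zero (in place)."""
    mask = non_detection_map > second_threshold
    detection_map[mask] = 0
    non_detection_map[mask] = 0
    return detection_map, non_detection_map
```

  • This prevents a stale detection frequency, accumulated before an environmental change, from keeping an imaging block marked as likely to contain a moving object forever.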
  • as described above, the background model update unit 33 deletes, from the code words CW included in the background model 500, any code word CW including background image information that has not matched the image information of the input image 200 for a predetermined period. That is, the background model update unit 33 deletes code words CW included in the background model 500 whose latest match time Te has not been updated for a predetermined period. Thereby, code words CW including image information acquired from the images of imaging blocks that are no longer background, due to time-series changes in the imaging environment of the imaging region 10, can be deleted from the background model 500. Therefore, the information amount of the background model 500 can be reduced.
  • moreover, moving object detection can be performed using a background model 500 that follows changes in the imaging environment. Therefore, the accuracy of moving object detection is improved.
  • in the present embodiment, the detection frequency map 600 and the non-detection frequency map 610 are updated in the background model update process, but instead they may be updated, based on the result of moving object detection for each imaging block in the moving object detection unit 31, after the moving object detection process in step s12 ends and before the background model update process in step s14 starts. For example, the detection frequency map 600 and the non-detection frequency map 610 may be updated between the moving object detection process in step s12 and the determination period adjustment process in step s13.
  • ⁇ Determination period adjustment process> When the brightness in the imaging region 10 changes suddenly due to changes in sunlight or illumination, the image information of the input image 200 changes suddenly. Therefore, it may be erroneously determined that the image indicating the background included in the input image 200 is a moving image, and the image information of the image indicating the background may be registered in the cache model. In such a case, if the registration determination period used in the background model update process is long, the background image information in the cache model is not reflected in the background model 500 for a long time. As a result, the accuracy of moving object detection may deteriorate.
  • the registration determination period is adjusted in the determination period adjustment process in step s13.
  • specifically, the determination period adjustment unit 32 sets the registration determination period shorter as the ratio of the moving object region (the region determined to be a moving object image by the moving object detection unit 31) in the processing target input image 200 input in step s11 becomes larger.
  • thereby, when the brightness in the imaging region 10 changes suddenly and a large part of the input image 200 is determined to be a moving object region, the registration determination period used in the background model update process of step s14 is shortened. As a result, background image information in the cache model can be quickly reflected in the background model 500, and the accuracy of moving object detection is improved.
  • the registration determination period Dt is expressed by the following formula (4), where Rd is the ratio of the moving object region in the input image 200.
  • a in formula (4) is a threshold value, and Dmin and Dmax are constants, where Dmax > Dmin.
  • FIG. 16 shows the relationship between the registration determination period Dt and the ratio Rd of the moving object area in the input image 200 expressed by the equation (4).
  • the registration determination period Dt becomes shorter as the moving object region ratio Rd in the input image 200 becomes larger.
  • when the ratio Rd is equal to or larger than the threshold value a, the registration determination period Dt becomes the minimum value (Dmin).
  • the threshold value a is set in advance based on a criterion of whether the situation should be regarded as abnormal when a region of the input image 200 equal to or larger than the threshold is determined to be a moving body image (in other words, whether the brightness in the imaging region 10 should be regarded as having changed suddenly), and is set according to the subject in the imaging region 10.
  • the ratio Rd of the moving object region in the input image 200 is expressed by the following equation (5), where Pd is the number of pixels in the moving object region and Pa is the total number of pixels in the input image 200.
  • the number of pixels Pd in the moving object region can be obtained by multiplying the number of image blocks that are moving object images by the number of pixels included in one image block.
  • for example, the first threshold value is set so as to satisfy Dmin < first threshold value < Dmax.
  • for example, the first threshold value is set to half the sum of Dmin and Dmax.
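  • Equations (4) and (5) are not reproduced in this excerpt. The sketch below implements equation (5) as the ratio Rd = Pd / Pa, which follows directly from the stated definitions, and a hypothetical stand-in for formula (4): consistent with FIG. 16, Dt decreases as Rd grows and is clamped to the minimum Dmin once Rd reaches the threshold a. The linear ramp and both function names are assumptions, not the patent's actual expression.

```python
def moving_region_ratio(Pd, Pa):
    """Equation (5): the ratio Rd of the moving-object region, from the
    pixel count Pd of that region and the total pixel count Pa of the
    input image."""
    return Pd / Pa

def registration_determination_period(Rd, a, Dmin, Dmax):
    """Hypothetical stand-in for formula (4): Dt shortens as Rd grows
    (FIG. 16) and equals the minimum Dmin once Rd reaches the
    threshold a.  Dmax > Dmin, as stated in the text."""
    if Rd >= a:
        return Dmin
    return Dmax - (Dmax - Dmin) * (Rd / a)
```

  • For example, with a sudden brightness change most of the frame is judged a moving object region, Rd exceeds a, and the period drops to Dmin so the cache model is reflected in the background model 500 quickly.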
  • as described above, in the present embodiment, whether or not to register image information obtained from the input image 200 in the background model 500 as background image information is determined using the result of specifying the regions where the detection target is likely to exist in the imaging region 10. Therefore, it is possible to suppress image information obtained from an image of a region where the detection target actually exists in the imaging region 10 from being registered in the background model 500 as background image information. Therefore, the detection accuracy of the detection target is improved.
  • FIG. 17 is a diagram illustrating an example of a result of moving object detection by the moving object detection unit 31.
  • a moving object region 800 detected from the input image 200 by the moving object detection unit 31 is shown superimposed on the input image 200.
  • the input image 200 shown in FIG. 17 includes an image of a room 900 in which a plurality of cardboards 920 and a plurality of chairs 930 are arranged on a floor 910 as an image of a subject.
  • a window (not shown) is provided on the wall 940 of the room 900.
  • Two people 990a and 990b exist in the room 900.
  • each of the people 990a and 990b existing in the room 900 is appropriately detected as the moving object region 800.
  • FIG. 18 is a diagram showing a detection frequency map 600 when the persons 990a and 990b are stationary at the position shown in FIG.
  • the floor 910 and the wall 940 are indicated by broken lines.
  • the detection frequency map 600 is shown by dividing the magnitude of the detection frequency into, for example, three stages from the first stage to the third stage.
  • the region belonging to the third stage, where the detection frequency is highest, is shown with downward-left hatching, and the region belonging to the second stage, where the detection frequency is second highest, is shown with downward-right hatching. No hatching is shown for the region belonging to the first stage, where the detection frequency is lowest.
  • the detection frequency for the region where the people 990a and 990b exist in the imaging region 10 is large.
  • FIG. 19 is a diagram illustrating an example of a result of moving object detection by the moving object detection unit 31 when, unlike the present embodiment, the detection frequency map 600 is not used for updating the background model 500.
  • In FIG. 19, the moving object region 800 detected by the moving object detection unit 31 from the input image 200 is shown superimposed on the input image 200.
  • Here, the two people 990a and 990b are each sitting on one of the chairs 930.
  • In the moving object detection result shown in FIG. 19, the person 990a is detected as a moving object region 800, but the person 990b is not.
  • FIG. 20 is a diagram illustrating an example of the moving object detection result in the present embodiment, that is, the result of moving object detection by the moving object detection unit 31 when the detection frequency map 600 is used to update the background model 500.
  • As shown in FIG. 20, each of the people 990a and 990b is appropriately detected as a moving object region 800.
  • When the environment of the imaging region 10 is one in which moving objects move little, it is desirable that the first threshold value, used with the detection frequency map 600, be larger than the second threshold value, used with the non-detection frequency map 610.
  • If the first threshold value is made small, an imaging block in which a moving object merely happens to stay for a short time may be erroneously determined to be a region where a moving object is likely to exist.
  • If the second threshold value is made large, then even when a change in the environment turns an imaging block into a region where a moving object is unlikely to exist, the detection frequency of that imaging block may never be cleared. For these reasons, when the environment of the imaging region 10 involves little movement of moving objects, it is desirable that the first threshold value be large and the second threshold value be small.
  • Conversely, when the environment of the imaging region 10 is one in which moving objects move a great deal, the first threshold value is preferably smaller than the second threshold value.
  • In such an environment, a moving object rarely stays in one place, so the detection frequency of any single imaging block is unlikely to become large.
  • If the first threshold value is made large, regions where a moving object is likely to exist may therefore not be appropriately specified.
  • If the second threshold value is made small, the detection frequency for an imaging block may be cleared merely because the moving object moves slightly away from it. For these reasons, when the environment of the imaging region 10 involves much movement of moving objects, it is desirable that the first threshold value be small and the second threshold value be large.
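The threshold behavior described above can be made concrete with a small sketch. The function name, return convention, and sample values below are illustrative assumptions; the embodiment does not prescribe an implementation or fix concrete numbers.

```python
# Minimal sketch of the two-threshold logic: the first threshold identifies
# imaging blocks where a moving object is likely to exist; the second
# threshold clears both counters when non-detections accumulate.

def classify_block(detect_freq, nondetect_freq, first_th, second_th):
    """Return (is_likely_region, detect_freq, nondetect_freq) for one block."""
    # A block whose non-detection frequency reaches the second threshold is
    # treated as ordinary background again: both counters are cleared.
    if nondetect_freq >= second_th:
        return False, 0, 0
    # A block whose detection frequency reaches the first threshold is taken
    # as a region where a moving object is likely to exist.
    return detect_freq >= first_th, detect_freq, nondetect_freq

# Environment with little motion: large first threshold, small second threshold.
print(classify_block(detect_freq=12, nondetect_freq=1, first_th=10, second_th=3))
# Environment with much motion: small first threshold, large second threshold.
print(classify_block(detect_freq=4, nondetect_freq=5, first_th=3, second_th=20))
```

Tuning the pair of thresholds per environment, as the text suggests, then amounts to choosing `first_th` and `second_th` before calls like the two above.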
  • FIG. 21 is a diagram illustrating an example of a state in which a person 990 moves only slightly within the imaging region 10, together with an example of the detection frequency map 600 in that case.
  • In this case, in the detection frequency map 600, the detection frequency increases for the imaging block in which the person 990 mainly exists, but the detection frequencies for the imaging blocks surrounding it do not increase very much. Consequently, because the detection frequencies of those surrounding imaging blocks remain low, image information of a moving body image acquired from the image blocks in the cache model corresponding to those surrounding imaging blocks may end up being registered in the background model 500.
  • In view of this, the specifying unit 330 increases by one the detection frequency 601a for the target imaging block corresponding to the target image block in the detection frequency map 600, and also increases by one the detection frequency 601b of each imaging block around the target imaging block. That is, among the plurality of imaging blocks, the specifying unit 330 increases the detection frequency 601a of the imaging block in which a moving object is detected (the target imaging block) and also increases the detection frequency 601b of each imaging block around it.
  • Thus, when a moving object mainly exists in a certain imaging block, both the detection frequency for that imaging block and the detection frequencies for the imaging blocks around it can be increased. This suppresses registration in the background model 500 of image information of a moving body image acquired from the image blocks in the cache model corresponding to those surrounding imaging blocks, and the accuracy of moving object detection is further improved.
  • Similarly, the specifying unit 330 increases by one the non-detection frequency 611a for the target imaging block corresponding to the target image block in the non-detection frequency map 610, and also increases by one the non-detection frequency 611b of each imaging block around the target imaging block. That is, among the plurality of imaging blocks, the specifying unit 330 increases the non-detection frequency 611a of the imaging block in which no moving object was detected (the target imaging block) and also increases the non-detection frequency 611b of each imaging block around it.
  • As a result, the accuracy of moving object detection is further improved.
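The map update described above, in which the target imaging block and each imaging block around it are incremented together, can be sketched as follows. The grid representation, the 8-neighborhood, and the function name are illustrative assumptions.

```python
# Sketch of the frequency-map update: when a moving object is (or is not)
# detected in a target imaging block, increment that block's counter and the
# counters of the imaging blocks surrounding it.

def bump_with_neighbors(freq_map, row, col):
    """Increase by one the frequency of (row, col) and of each block around it."""
    rows, cols = len(freq_map), len(freq_map[0])
    for r in range(max(0, row - 1), min(rows, row + 2)):
        for c in range(max(0, col - 1), min(cols, col + 2)):
            freq_map[r][c] += 1

detection_map = [[0] * 4 for _ in range(3)]   # 3 x 4 grid of imaging blocks
bump_with_neighbors(detection_map, 1, 1)      # moving object detected here
# The target block and its eight surrounding blocks now all hold 1.
```

The same helper would serve the non-detection frequency map 610, called for blocks in which no moving object was detected.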
  • When the detection frequency map 600 is updated after the moving object detection process in step s12 is completed and before the background model update process in step s14 is started, then, as illustrated in FIG. , the detection frequency 601c of each of the plurality of imaging blocks BKc respectively corresponding to the plurality of image blocks identified as moving body images in step s12 may be increased by one, and the detection frequency 601d of each imaging block BKd around those imaging blocks BKc may be increased by one.
  • Likewise, when the non-detection frequency map 610 is updated after the moving object detection process in step s12 is completed and before the background model update process in step s14 is started, the non-detection frequency of each of the plurality of imaging blocks respectively corresponding to the plurality of image blocks identified in step s12 as not being moving body images may be increased by one, and the non-detection frequency of each imaging block around those imaging blocks may be increased by one.
  • In the above example, the size of an image block is 3 pixels × 3 pixels, but the size is not limited to this; the image block may be, for example, 4 pixels × 4 pixels or 5 pixels × 5 pixels.
  • In the above example, the code word CW for a certain image block includes, as image information, the pixel values of all the pixels in that image block, but the code word CW need not include the pixel values of all the pixels in the image block as image information.
  • For example, the code word CW may include pixel values for only 5 of the pixels as image information.
  • In the above example, each pixel in the input image 200 has R (red), G (green), and B (blue) pixel values, but the pixel representation is not limited to this. Specifically, the pixel value of each pixel in the input image 200 may be expressed using a color space other than RGB. For example, when the input image 200 is YUV-format image data, the luminance signal Y and the two color-difference signals U and V are used as the pixel values of each pixel.
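As a point of reference, one common way to derive Y, U, and V values from RGB pixel values is the analog ITU-R BT.601 form sketched below; the embodiment does not mandate any particular coefficients, so this is only an illustration.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (components in 0-255) to (Y, U, V), BT.601 weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal Y
    u = 0.492 * (b - y)                     # color-difference signal U
    v = 0.877 * (r - y)                     # color-difference signal V
    return y, u, v
```

For a YUV-format input image 200, the codewords would then simply store (Y, U, V) triples in place of (R, G, B) triples.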
  • While the detection apparatus 1 has been described in detail above, the above description is illustrative in all respects, and the present invention is not limited thereto.
  • The various modifications described above can be applied in combination as long as they do not contradict each other, and it is understood that countless modifications not illustrated here can be envisioned without departing from the scope of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A storage unit stores a background model which includes background image information. Using the background model and an inputted image, a detection unit detects an object to be detected which is present in a captured image region which appears in the inputted image. On the basis of the result of the detection by the detection unit, a specification unit specifies a region in the captured image region where there is a high probability that the object to be detected is present. Using the result of the specification by the specification unit, a determination unit determines whether to register the image information obtained from the inputted image into the background model as background image information.

Description

Detection apparatus, detection object detection method, and control program
 The present invention relates to a technique for detecting a detection target.
 As described in Patent Document 1 and Non-Patent Documents 1 to 3, various techniques for detecting a detection target such as a person have conventionally been proposed.
JP 2012-155917 A
 When detecting a detection target, improvement of the detection accuracy is desired.
 The present invention has been made in view of the above point, and an object thereof is to provide a technique capable of improving the detection accuracy of a detection target.
 One aspect of the detection apparatus according to the present invention includes: a storage unit that stores a background model including background image information; a detection unit that detects, using the background model and an input image, a detection target existing in an imaging region that appears in the input image; a specifying unit that specifies, based on the detection result of the detection unit, a region in the imaging region where the detection target is likely to exist; and a determination unit that determines, using the specification result of the specifying unit, whether to register image information obtained from the input image in the background model as background image information.
 In one aspect of the detection apparatus according to the present invention, the specifying unit obtains, based on the detection result of the detection unit, the detection frequency of the detection target for each of a plurality of partial imaging regions constituting the imaging region, and takes, as the region where the detection target is likely to exist, any of the plurality of partial imaging regions whose detection frequency is equal to or greater than a first threshold value, or greater than the first threshold value.
 In one aspect of the detection apparatus according to the present invention, the specifying unit obtains, based on the detection result of the detection unit, the non-detection frequency of the detection target for each of the plurality of partial imaging regions, and clears the detection frequency and the non-detection frequency of any of the plurality of partial imaging regions whose non-detection frequency is equal to or greater than the second threshold value, or greater than the second threshold value.
 In one aspect of the detection apparatus according to the present invention, among the plurality of partial imaging regions, the specifying unit increases the detection frequency of a first partial imaging region in which the detection target is detected, and also increases the detection frequency of a second partial imaging region around the first partial imaging region.
 One aspect of the detection target detection method according to the present invention is a method for detecting a detection target in a detection apparatus, comprising: (a) a step of detecting, using a background model including background image information and an input image, a detection target existing in an imaging region that appears in the input image; (b) a step of specifying, based on the detection result of step (a), a region in the imaging region where the detection target is likely to exist; and (c) a step of determining, using the specification result of step (b), whether to register image information obtained from the input image in the background model as background image information.
 One aspect of the control program according to the present invention is a control program for controlling a detection apparatus that detects a detection target, the control program causing the detection apparatus to execute: (a) a step of detecting, using a background model including background image information and an input image, a detection target existing in an imaging region that appears in the input image; (b) a step of specifying, based on the detection result of step (a), a region in the imaging region where the detection target is likely to exist; and (c) a step of determining, using the specification result of step (b), whether to register image information obtained from the input image in the background model as background image information.
 According to the present invention, the detection accuracy of a detection target is improved.
FIG. 1 is a diagram showing an outline of the processing performed by the detection apparatus.
FIG. 2 is a diagram showing the configuration of the detection apparatus.
FIG. 3 is a diagram showing the configuration of an image processing unit.
FIG. 4 is a diagram showing an example of an imaging region.
FIG. 5 is a diagram showing an example of a background model.
FIG. 6 is a diagram showing an example of a code word.
FIG. 7 is a flowchart showing the operation of the detection apparatus.
FIG. 8 is a flowchart showing the operation of the detection apparatus.
FIG. 9 is a flowchart showing the operation of the detection apparatus.
FIG. 10 is a diagram for explaining the operation of the detection apparatus.
FIG. 11 is a diagram showing the relationship between an image vector and a background vector.
FIG. 12 is a diagram showing an example of a detection frequency map.
FIG. 13 is a diagram showing an example of a non-detection frequency map.
FIG. 14 is a flowchart showing the operation of the detection apparatus.
FIG. 15 is a diagram showing an example of a region of the imaging region where a detection target is likely to exist.
FIG. 16 is a diagram for explaining a method of adjusting a registration determination period.
FIG. 17 is a diagram showing an example of a detection result of a detection target.
FIG. 18 is a diagram showing an example of a detection frequency map.
FIG. 19 is a diagram showing an example of a detection result of a detection target.
FIG. 20 is a diagram showing an example of a detection result of a detection target.
FIG. 21 is a diagram showing an example of a detection frequency map.
FIG. 22 is a diagram showing an update example of a detection frequency map.
FIG. 23 is a diagram showing an update example of a non-detection frequency map.
FIG. 24 is a diagram showing an update example of a detection frequency map.
 <Outline of the Operation of the Detection Apparatus>
 FIG. 1 is a diagram showing an outline of the operation of the detection apparatus 1 according to the embodiment. Based on an input image, the detection apparatus 1 detects a detection target existing in the imaging region that appears in the image, that is, in the imaging region (field-of-view range) of the imaging unit that captured the image.
 In the present embodiment, the detection target is a moving object, more specifically a person. The detection apparatus 1 performs moving object detection, that is, detects a moving object existing in the imaging region that appears in an input image, by detecting the moving body image (the image showing the moving object) included in that image.
 As shown in FIG. 1, in the present embodiment, the operation of the detection apparatus 1 has two stages: a preparation stage and an actual operation stage. In the preparation stage, the detection apparatus 1 executes a background model generation process using a plurality of input images that are input sequentially in time series. The background model is a model constructed by collecting information contained in a plurality of input images obtained by photographing the same scene (subject). The background model is used when moving body images are detected, in the moving object detection process described later, from the input images that are input sequentially in time series. The preparation stage, in which the background model generation process is executed, is also referred to as a "learning stage". Hereinafter, an input image used for generating the background model may be referred to as a "reference image", and an input image subject to moving body image detection may be referred to as a "detection target image".
 In the detection apparatus 1, when the generation of the background model is completed, the operation shifts from the preparation stage to the actual operation stage. In the actual operation stage, the detection apparatus 1 performs a moving object detection process that detects moving body images in input images, a background model update process that updates the background model, and a determination period adjustment process that adjusts a registration determination period, described later, used in the background model update process. Although the detection target in this embodiment is a person, it may be something other than a person.
 <Configuration of the Detection Apparatus>
 FIG. 2 is a block diagram showing the configuration of the detection apparatus 1. As shown in FIG. 2, the detection apparatus 1 includes an image input unit 2, an image processing unit 3, a detection result output unit 4, a background model storage unit 5, and a cache model storage unit 6. FIG. 3 is a block diagram showing the configuration of the image processing unit 3.
 The image input unit 2 supplies an input image 200, input from outside the detection apparatus 1, to the image processing unit 3. The input image 200 is a captured image taken by an imaging unit. FIG. 4 is a diagram showing an example of the imaging region 10 that appears in the input image 200, that is, the imaging region (field-of-view range) 10 of the imaging unit that captures the input image 200. The imaging region 10 shown in FIG. 4 includes a conference room 100 as the subject; in this case, therefore, the input image 200 input to the detection apparatus 1 is an image showing the conference room 100. In the conference room 100, a plurality of desks 101 and a plurality of chairs 102 are arranged so as to surround the central part of the floor, and the area outside the desks 101 is a passage 103. A curtain 104 is provided on part of one wall of the conference room 100. The detection apparatus 1 according to the present embodiment detects, for example, people present in the conference room 100 by performing moving body image detection on input images 200 showing the conference room 100.
 The image processing unit 3 performs various image processing on the input image 200 supplied from the image input unit 2. The image processing unit 3 includes a CPU 300 and a storage unit 310. The storage unit 310 is composed of non-transitory recording media readable by the CPU 300, such as a ROM (Read Only Memory) and a RAM (Random Access Memory). The storage unit 310 stores a control program 311 for controlling the detection apparatus 1. When the CPU 300 executes the control program 311 in the storage unit 310, various functional blocks are formed in the image processing unit 3.
 The storage unit 310 may include a computer-readable non-transitory recording medium other than the ROM and RAM, for example, a small hard disk drive or an SSD (Solid State Drive).
 As shown in FIG. 3, a plurality of functional blocks, such as a background model generation unit 30, a moving object detection unit 31, a determination period adjustment unit 32, and a background model update unit 33, are formed in the image processing unit 3. These functional blocks need not be realized by the CPU executing a program; they may instead be realized by hardware circuits, such as logic circuits, that require no program to realize their functions.
 The background model generation unit 30 generates a background model 500 including background image information, using a plurality of input images 200 (a plurality of reference images 200) sequentially supplied from the image input unit 2. The background model 500 generated by the background model generation unit 30 is stored in the background model storage unit 5.
 The moving object detection unit 31 detects moving body images in the input image 200, using the input image 200 supplied from the image input unit 2 and the background model 500 in the background model storage unit 5. In other words, the moving object detection unit 31 uses the input image 200 and the background model 500 to detect moving objects existing in the imaging region 10 that appears in the input image 200.
 The background model update unit 33 updates the background model 500. The background model update unit 33 includes a specifying unit 330 and a registration determination unit 331. Based on the detection results of the moving object detection unit 31, the specifying unit 330 specifies regions of the imaging region 10 where a moving object is likely to exist. Specifically, the specifying unit 330 generates a detection frequency map and a non-detection frequency map based on the detection results of the moving object detection unit 31, and uses these maps to specify the regions of the imaging region 10 where a moving object is likely to exist. The detection frequency map shows the distribution of the detection frequency of moving objects (detection targets) over a plurality of regions in the imaging region 10, and the non-detection frequency map shows the distribution of their non-detection frequency over those regions. The registration determination unit 331 determines whether to register image information obtained from the input image 200 in the background model 500 as background image information, using the specification results of the specifying unit 330. The detection frequency map and the non-detection frequency map are described in detail later.
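The interplay between the specifying unit 330 and the registration determination unit 331 can be sketched roughly as follows. The class, method names, and threshold value are illustrative assumptions, not taken from the embodiment.

```python
# Rough sketch: blocks where moving objects are detected often are treated as
# likely moving-object regions, and image information from such blocks is not
# registered in the background model as background image information.

class RegistrationDecider:
    def __init__(self, first_th):
        self.detection_freq = {}   # imaging block -> detection frequency
        self.first_th = first_th   # "first threshold" for the frequency map

    def record_detection(self, block):
        """Called by the specifying unit when a moving object is detected."""
        self.detection_freq[block] = self.detection_freq.get(block, 0) + 1

    def may_register_as_background(self, block):
        """Register only if the block is NOT a likely moving-object region."""
        return self.detection_freq.get(block, 0) < self.first_th

decider = RegistrationDecider(first_th=3)
for _ in range(3):
    decider.record_detection((2, 5))       # a person keeps being detected here
decider.may_register_as_background((2, 5))  # False: likely moving-object region
decider.may_register_as_background((0, 0))  # True: ordinary background block
```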
 The background model storage unit 5 stores the background model 500 generated by the background model generation unit 30, and the cache model storage unit 6 stores a cache model, described later. Each of the background model storage unit 5 and the cache model storage unit 6 is composed of rewritable storage means such as flash memory, EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD). Although the background model storage unit 5 and the cache model storage unit 6 are independent pieces of hardware in this example, part of the storage area of a single storage device may be used as the background model storage unit 5 and another part of that storage area as the cache model storage unit 6.
 The determination period adjustment unit 32 adjusts the registration determination period used in updating the background model 500. The detection result output unit 4 outputs the detection results of the moving object detection unit 31 to the outside. For example, the detection result output unit 4 includes a display unit that displays in real time the state of the imaging region 10, that is, the state of the subject present in the imaging region 10 (the conference room 100 in the example of FIG. 4). By displaying the regions in which moving objects are detected in color or the like, this display unit outputs the detection results to the outside. The detection result output unit 4 may also output the detection results as sound, such as voice. Alternatively, the detection result output unit 4 may output the detection results by outputting a signal indicating them to an external device, which then performs an operation corresponding to the detection results. For example, the external device raises an alarm. Alternatively, when the imaging region 10 is the conference room 100 of FIG. 4, the external device may control the lighting fixtures of the conference room 100 so as to brighten only the areas where people are present, or control the air conditioner of the conference room 100 so as to cool or warm only those areas.
 <Preparation Stage (Background Model Generation Process)>
 Next, the background model generation process performed in the preparation stage of the detection apparatus 1 will be described. FIG. 5 is a diagram for explaining the background model 500. In the present embodiment, input images 200 captured by the imaging unit when no person is present in the imaging region 10 (when the conference room 100 is not in use) are used as the reference images 200 for generating the background model 500. In the background model generation process, the background model 500 is generated based on A reference images 200 (A ≧ 2).
 In the present embodiment, the imaging region 10 is divided into a plurality of rectangular imaging blocks (partial imaging regions). If a region of the input image 200 showing the image of a certain imaging block is called an "image block", then the input image 200 is composed of a plurality of image blocks that respectively show the images of the plurality of imaging blocks constituting the imaging region 10. In the present embodiment, the size of one image block is, for example, 3 pixels × 3 pixels. Hereinafter, for an imaging block and the image block in the input image 200 that shows its image, the imaging block may be referred to as the imaging block corresponding to that image block.
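The division of an input image into image blocks, each corresponding to an imaging block, can be sketched as follows. This is pure Python; the helper name is an assumption, and the image dimensions are assumed to be divisible by the block size.

```python
def split_into_blocks(image, block=3):
    """Split a 2-D list of pixel values into block x block image blocks.

    Returns a dict mapping (block_row, block_col) -> flat list of pixel values,
    mirroring the correspondence between imaging blocks and image blocks."""
    h, w = len(image), len(image[0])
    blocks = {}
    for br in range(h // block):
        for bc in range(w // block):
            blocks[(br, bc)] = [image[br * block + r][bc * block + c]
                                for r in range(block) for c in range(block)]
    return blocks

img = [[10 * r + c for c in range(6)] for r in range(6)]  # 6x6 toy image
blocks = split_into_blocks(img)   # four 3x3 image blocks
```

Each flat list of 9 pixel values corresponds to the image information that a code word stores for one image block.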
 As shown in FIG. 5, the background model 500 includes a plurality of code books (Codebooks) CB respectively corresponding to the plurality of imaging blocks BK. Each code book CB contains code words (Codewords) CW, each including image information and related information associated with that image information. A code word CW in a code book CB is generated based on the image block, in one input image 200, that shows the image of the imaging block BK to which that code book CB corresponds. Each code book CB contains a plurality of code words CW. Hereinafter, the image information included in a code word CW in the background model 500 may be referred to as "background image information".
 The codebook CB shown with sand-pattern hatching in FIG. 5 contains three codewords CW1 to CW3 generated from the three reference images 200a to 200c, respectively. The codeword CW1 is generated based on the image block of the reference image 200a that shows the image of the imaging block BK corresponding to that codebook CB. The codeword CW2 is generated based on the corresponding image block of the reference image 200b, and the codeword CW3 based on the corresponding image block of the reference image 200c.
 FIG. 6 is a diagram for explaining a codeword CW. A codeword CW contains, as background image information, the image information of the image block showing the image of the imaging block to which the codebook CB containing that codeword CW corresponds, that is, the pixel values PV of the plurality of pixels constituting that image block. A codeword CW further contains, as related information, a latest match time Te and a codeword generation time Ti. As will be described later, for the image information in each codeword CW of the background model 500, it is determined whether it matches image information acquired from a detection target image 200. The latest match time Te of a codeword CW indicates the most recent time at which the image information in that codeword CW was determined to match image information acquired from a detection target image 200. The codeword generation time Ti indicates the time at which the codeword CW was generated.
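As an illustration only (not part of the disclosure), the data layout described above, a background model holding one codebook CB per imaging block BK, each codebook holding codewords CW that pair background image information with the times Ti and Te, could be sketched in Python as follows. The class and field names and the 4 × 4 block grid are assumptions of the sketch.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Codeword:
    # Background image information: the pixel values PV of one image block
    # (e.g., 3 x 3 pixels x RGB = 27 values).
    pixel_values: tuple
    # Related information: codeword generation time Ti and latest match time Te.
    created_at: float = field(default_factory=time.time)   # Ti
    last_matched_at: float = 0.0                           # Te

    def __post_init__(self):
        # Te is provisionally set to the same time as Ti at creation.
        if self.last_matched_at == 0.0:
            self.last_matched_at = self.created_at

@dataclass
class Codebook:
    # One codebook CB per imaging block BK; it accumulates one codeword CW
    # per distinct background appearance of that block.
    codewords: list = field(default_factory=list)

# The background model maps each imaging-block position to its codebook
# (a 4 x 4 grid of imaging blocks is assumed purely for illustration).
background_model = {(r, c): Codebook() for r in range(4) for c in range(4)}
```

A codeword created at time Ti therefore starts with Te = Ti, matching the provisional setting described in steps s3 and s6 below.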
 FIG. 7 is a flowchart showing the background model generation processing by which such a background model 500 is generated. The background model generation processing shown in FIG. 7 is executed when no background model 500 is stored in the background model storage unit 5.
 As shown in FIG. 7, when a reference image 200 is input from the image input unit 2 to the image processing unit 3 in step s1, the background model generation unit 30 sets one imaging block of the imaging region 10 as the imaging block of interest in step s2. The background model generation unit 30 then determines whether a codebook CB corresponding to the imaging block of interest is stored in the background model storage unit 5.
 When the background model generation unit 30 determines that no codebook CB corresponding to the imaging block of interest is stored in the background model storage unit 5, it generates, in step s3, a codebook CB corresponding to the imaging block of interest based on the reference image 200 input in step s1, and stores it in the background model storage unit 5.
 Specifically, the background model generation unit 30 acquires image information from the image block of the reference image 200 input in step s1 that shows the image of the imaging block of interest. The background model generation unit 30 then generates a codeword CW containing the acquired image information as background image information, and stores a codebook CB containing that codeword CW in the background model storage unit 5. The latest match time Te of this codeword CW is provisionally set to the same time as its codeword generation time Ti.
 On the other hand, when the background model generation unit 30 determines that a codebook CB corresponding to the imaging block of interest is stored in the background model storage unit 5, it acquires, in step s4, image information from the image block of the reference image 200 input in step s1 that shows the image of the imaging block of interest. The background model generation unit 30 then determines whether the acquired image information matches the background image information in any codeword CW of the codebook CB, stored in the background model storage unit 5, that corresponds to the imaging block of interest. That is, the background model generation unit 30 determines whether the codebook CB corresponding to the imaging block of interest contains a codeword CW whose background image information matches the acquired image information.
 If the determination in step s4 shows, in step s5, that the acquired image information matches none of the background image information in the codewords CW of the codebook CB corresponding to the imaging block of interest, that is, if that codebook CB contains no codeword CW whose background image information matches the acquired image information, the background model generation unit 30 generates, in step s6, a codeword CW containing, as background image information, the image information acquired from the reference image 200 in step s4. The latest match time Te of this codeword CW is provisionally set to the same time as its codeword generation time Ti. The background model generation unit 30 then adds the generated codeword CW to the codebook CB, stored in the background model storage unit 5, that corresponds to the imaging block of interest. New background image information is thereby added to the codebook CB corresponding to the imaging block of interest. Thereafter, step s7 is executed.
 On the other hand, if the acquired image information matches the background image information in some codeword CW of the codebook CB corresponding to the imaging block of interest in step s5, that is, if that codebook CB contains a codeword CW whose background image information matches the acquired image information, step s7 is executed without executing step s6.
 In step s7, the background model generation unit 30 determines whether all imaging blocks of the imaging region 10 have been processed, that is, whether every imaging block has been set as the imaging block of interest. If the determination in step s7 shows that an unprocessed imaging block remains, the background model generation unit 30 sets an imaging block that has not yet been processed as the new imaging block of interest and executes step s2 and the subsequent steps.
 On the other hand, if the determination in step s7 shows that all imaging blocks of the imaging region 10 have been processed, the background model generation unit 30 determines, in step s8, whether the same processing has been performed for A reference images 200. If the determination in step s8 shows that the number of processed reference images 200 is smaller than A, the background model generation unit 30 executes step s2 and the subsequent steps on a new reference image 200 input to the image processing unit 3 in step s1. If the determination in step s8 shows that the number of processed reference images 200 equals A, the background model generation unit 30 ends the background model generation processing. The background model 500 described above is thereby generated in the background model storage unit 5.
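The flow of steps s1 to s8 can be condensed into the following illustrative sketch, which builds one codeword list per block position from A reference images. It is a simplification: the match test here is plain equality of pixel arrays, whereas the embodiment's vector-based match criterion is described later, and all function names are assumptions.

```python
import numpy as np

BLOCK = 3  # image-block size in pixels (3 x 3 in the embodiment)

def split_into_blocks(image):
    """Yield ((row, col), flattened block pixels) for each image block."""
    h, w, _ = image.shape
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            yield (r // BLOCK, c // BLOCK), image[r:r + BLOCK, c:c + BLOCK].ravel()

def generate_background_model(reference_images, matches):
    """Steps s1-s8 in outline: build one codeword list per block position
    from A reference images; a new codeword is added only when no stored
    background image information matches (steps s4-s6)."""
    model = {}  # block position -> list of codewords (pixel arrays)
    for image in reference_images:                    # steps s1 / s8
        for pos, pixels in split_into_blocks(image):  # steps s2, s7
            codebook = model.setdefault(pos, [])
            if not any(matches(pixels, cw) for cw in codebook):
                codebook.append(pixels.copy())        # steps s3 / s6
    return model

# Three reference images: two identical, one brighter, over a 6 x 6 frame
# (hence 2 x 2 block positions); each codebook ends up with two codewords.
refs = [np.zeros((6, 6, 3), dtype=np.uint8)] * 2 + [np.full((6, 6, 3), 50, dtype=np.uint8)]
model = generate_background_model(refs, matches=lambda a, b: np.array_equal(a, b))
```

Repeated appearances thus collapse into a single codeword, while each distinct background appearance of a block adds one more.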
 <Actual operation stage>
 Next, the operation of the detection apparatus 1 in the actual operation stage will be described. FIG. 8 is a flowchart showing the schematic operation of the detection apparatus 1 in the actual operation stage. In the detection apparatus 1, when the background model generation processing ends, the processing shown in FIG. 8 is executed.
 As shown in FIG. 8, when an input image 200 is input from the image input unit 2 to the image processing unit 3 in step s11, a series of processes from steps s12 to s14 is executed with that input image 200 as the processing target.
 In step s12, the image processing unit 3 performs moving object detection processing, which detects moving object images in the input image 200 being processed. In step s13, the image processing unit 3 performs determination period adjustment processing, which adjusts the registration determination period based on the result of the moving object detection processing in step s12. Thereafter, in step s14, the image processing unit 3 performs background model update processing, which updates the background model 500 in the background model storage unit 5.
 Thereafter, when a new input image 200 (a new detection target image 200) is input from the image input unit 2 to the image processing unit 3 in step s11, the series of processes from steps s12 to s14 is executed with that input image 200 as the new processing target. The image processing unit 3 subsequently operates in the same manner.
 As described above, in the detection apparatus 1 according to the present embodiment, each time an input image 200 is input, the moving object detection processing, the determination period adjustment processing, and the background model update processing are executed in this order.
 <Moving object detection processing>
 Next, the moving object detection processing in step s12 will be described in detail. FIG. 9 is a flowchart showing the moving object detection processing. As shown in FIG. 9, in step s121, the moving object detection unit 31 sets one imaging block of the imaging region 10 (for example, the upper-left imaging block of the imaging region 10) as the imaging block of interest. The moving object detection unit 31 then performs moving object image detection on the image block (hereinafter sometimes called the "image block of interest") of the processing target input image 200 (detection target image 200) input in step s11 that shows the image of the imaging block of interest. That is, the moving object detection unit 31 detects whether a moving object is present in the imaging block of interest.
 In the moving object detection according to the present embodiment, whether the image block of interest is a moving object image is determined by determining whether the image information acquired from the image block of interest of the input image 200 matches the background image information in each codeword CW of the codebook CB of the background model 500 corresponding to the imaging block of interest. Hereinafter, the codebook CB corresponding to the imaging block of interest may be referred to as the "corresponding codebook CB", and a codeword CW contained in the corresponding codebook CB as a "corresponding codeword CW". A specific method of the moving object detection will be described later.
 When step s121 has been executed, the moving object detection unit 31 stores the result of the moving object detection of step s121 in step s122. In step s123, the moving object detection unit 31 then determines whether all imaging blocks of the imaging region 10 have been processed, that is, whether every imaging block has been set as the imaging block of interest. If the determination in step s123 shows that an unprocessed imaging block remains, the moving object detection unit 31 sets an imaging block that has not yet been processed as the new imaging block of interest and executes step s121 and the subsequent steps. On the other hand, if the determination in step s123 shows that all imaging blocks of the imaging region 10 have been processed, that is, if moving object image detection has been completed for the entire input image 200, the moving object detection unit 31 ends the moving object detection processing. The moving object detection unit 31 thereby stores the results of moving object image detection for the plurality of image blocks constituting the input image 200, in other words, the results of moving object detection for the plurality of imaging blocks constituting the imaging region 10. These detection results are input to the detection result output unit 4.
 <Details of moving object detection>
 Next, a specific method of the moving object detection in step s121 will be described with reference to FIGS. 10 and 11. FIG. 10 shows how vectors are extracted from the image block of interest of the input image 200 and from each corresponding codeword CW of the background model 500. FIG. 11 shows the relationship between the vector extracted from the image block of interest of the input image 200 and a vector extracted from a corresponding codeword CW of the background model 500.
 In the present embodiment, the image information of the image block of interest of the input image 200 is treated as a vector. Likewise, for each corresponding codeword CW of the background model 500, the background image information contained in that corresponding codeword CW is treated as a vector. Whether the image block of interest is a moving object image is then determined based on whether the vector of the image information of the image block of interest and the vectors of the background image information of the corresponding codewords CW point in the same direction. When these two kinds of vectors point in the same direction, the image information of the image block of interest can be considered to match the background image information of the corresponding codeword CW; in this case, the image block of interest of the input image 200 does not differ from an image showing the background, and is determined not to be a moving object image. When the two kinds of vectors do not point in the same direction, the image information of the image block of interest can be considered not to match the background image information of the corresponding codeword CW; in this case, the image block of interest of the input image 200 is determined to be a moving object image rather than an image showing the background.
 Specifically, the moving object detection unit 31 generates an image vector x_f whose components are the pixel values of the plurality of pixels contained in the image block of interest of the input image 200. FIG. 10 shows an image vector x_f whose components are the pixel values of the pixels of an image block of interest 210 having nine pixels. In the example of FIG. 10, each pixel has R (red), G (green), and B (blue) pixel values, so the image vector x_f consists of 27 components.
 Similarly, the moving object detection unit 31 uses the background image information in each corresponding codeword CW of the corresponding codebook CB of the background model 500 to generate a background vector, that is, a vector of that background image information. The background image information 510 of the corresponding codeword shown in FIG. 10 contains the pixel values of nine pixels, so a background vector x_b whose components are the pixel values of those nine pixels is generated. A background vector x_b is generated from each of the plurality of codewords CW contained in the corresponding codebook CB; a plurality of background vectors x_b are therefore generated for a single image vector x_f.
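A minimal sketch of how the 27-component image vector x_f and the background vectors x_b might be assembled follows; the function names are illustrative assumptions.

```python
import numpy as np

def image_vector(block_rgb):
    """Flatten a 3 x 3 RGB image block of interest into the 27-component
    image vector x_f (pixel values as vector components)."""
    assert block_rgb.shape == (3, 3, 3)  # 3 x 3 pixels, R/G/B per pixel
    return block_rgb.astype(float).ravel()

def background_vectors(codebook):
    """One background vector x_b per codeword of the corresponding codebook:
    a single x_f is compared against several x_b."""
    return [np.asarray(cw, dtype=float) for cw in codebook]

block = np.arange(27, dtype=np.uint8).reshape(3, 3, 3)
x_f = image_vector(block)
# A codebook with two codewords yields two background vectors for one x_f.
x_bs = background_vectors([np.zeros(27), np.ones(27)])
```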
 As described above, when the image vector x_f and each background vector x_b point in the same direction, the image block of interest of the input image 200 does not differ from an image showing the background. However, since the image vector x_f and each background vector x_b can be considered to contain a certain amount of noise components, the image block of interest of the input image 200 can be determined to be an image showing the background even when the image vector x_f and each background vector x_b do not point in exactly the same direction.
 Accordingly, in the present embodiment, taking into account that the image vector x_f and each background vector x_b contain a certain amount of noise components, the image block of interest of the input image 200 is determined to be an image showing the background even when the image vector x_f and each background vector x_b do not point in exactly the same direction.
 Assuming that the image vector x_f and a background vector x_b contain noise components, the relationship of the image vector x_f and the background vector x_b to the true vector u can be expressed as in FIG. 11. In the present embodiment, an evaluation value D², expressed by the following equation (1), is considered as an evaluation value indicating to what extent the image vector x_f and the background vector x_b point in the same direction.
  D² = min_{‖u‖ = 1} ( ‖x_f − (x_f · u)u‖² + ‖x_b − (x_b · u)u‖² )   … (1)
 Then, expressing the matrix X with the image vector x_f and the background vector x_b as in equation (2), the evaluation value D² equals the smallest non-zero eigenvalue of the 2 × 2 matrix XXᵀ. The evaluation value D² can therefore be obtained analytically. That the evaluation value D² equals the smallest non-zero eigenvalue of the 2 × 2 matrix XXᵀ is described in Non-Patent Document 3 above.
  X = ( x_f  x_b )ᵀ   … (2)
 As described above, since a plurality of background vectors x_b are generated for a single image vector x_f, as many values of the evaluation value D², each expressed with the image vector x_f and one background vector x_b, are obtained as there are background vectors x_b.
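As an illustrative sketch of the computation above (names are assumptions), D² can be obtained as the smaller eigenvalue of the 2 × 2 matrix XXᵀ, where X stacks x_f and x_b as its rows. The key property is that D² is near zero for parallel vectors (same direction, e.g., the same background under different brightness) and grows as the directions diverge.

```python
import numpy as np

def evaluation_value(x_f, x_b):
    """Evaluation value D^2: the smaller eigenvalue of the 2 x 2 matrix X X^T,
    where X has x_f and x_b as its rows. For noisy, non-parallel vectors this
    is the smallest non-zero eigenvalue referred to in the text."""
    X = np.vstack([x_f, x_b])               # 2 x N matrix
    eigvals = np.linalg.eigvalsh(X @ X.T)   # ascending, both non-negative
    return float(eigvals[0])

# Parallel vectors (same direction, different brightness): D^2 is ~0.
x_f = np.array([1.0, 2.0, 3.0])
d_parallel = evaluation_value(x_f, 2.5 * x_f)

# Orthogonal vectors: D^2 equals the smaller squared norm.
d_ortho = evaluation_value(np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]))
```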
 Whether the image block of interest of the input image 200 is a moving object image is determined using the moving object judgment formula shown in the following equation (3), expressed with the minimum value C of the plurality of values of the evaluation value D² and the mean value μ and standard deviation σ of the plurality of values of the evaluation value D². This moving object judgment formula is called Chebyshev's inequality.
  C ≥ μ − kσ   … (3)
 Here, k in equation (3) is a constant determined based on, for example, the imaging environment of the imaging unit that captures the input image 200 (the environment in which the imaging unit is installed). The constant k is determined by experiment or the like.
 When the moving object judgment formula (the inequality) is satisfied, the moving object detection unit 31 considers that the image vector x_f and each background vector x_b do not point in the same direction, and determines that the image block of interest is a moving object image rather than an image showing the background. On the other hand, when the moving object judgment formula is not satisfied, the moving object detection unit 31 considers that the image vector x_f and each background vector x_b point in the same direction, and determines that the image block of interest is an image showing the background rather than a moving object image.
 As described above, in the present embodiment, moving object detection is performed based on whether the direction of the image vector obtained from the image block of interest and the directions of the background vectors obtained from the corresponding codewords CW are the same. The moving object detection method according to the present embodiment is therefore relatively robust against changes in brightness in the imaging region 10, such as changes in sunlight or illumination.
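Putting the pieces together, a per-block judgment could look like the sketch below. The sketch assumes a Chebyshev-style reading of judgment formula (3) in which the block is judged a moving object image when even the best match C is not well below the mean μ of the D² values (C ≥ μ − kσ); the exact form of the inequality, the value of k, and all names are assumptions of the sketch.

```python
import numpy as np

def evaluation_values(x_f, codewords):
    """D^2 of x_f against each background vector x_b: the smaller eigenvalue
    of the 2 x 2 matrix X X^T, where X stacks the two vectors as rows."""
    values = []
    for x_b in codewords:
        X = np.vstack([x_f, x_b])
        values.append(float(np.linalg.eigvalsh(X @ X.T)[0]))
    return np.array(values)

def is_moving_block(x_f, codewords, k=2.0):
    """Assumed Chebyshev-style judgment: the block is a moving-object image
    when even the best match C is not well below the mean of the D^2 values,
    i.e. no codeword stands out as explaining the block."""
    v = evaluation_values(x_f, codewords)
    c, mu, sigma = v.min(), v.mean(), v.std()
    return bool(c >= mu - k * sigma)

x_f = np.array([0.0, 30.0, 0.0])
others = [np.array([5.0, 0.0, 1.0])] * 5
# One codeword parallel to x_f (same background under different brightness):
background_codebook = [2.0 * x_f] + others
```

With one codeword parallel to x_f, C is an outlier near zero and the block is judged background; when no codeword matches, the D² values cluster together and the block is judged a moving object image.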
 <Detection frequency map and non-detection frequency map>
 FIGS. 12 and 13 show examples of a detection frequency map 600 and a non-detection frequency map 610, respectively. The detection frequency map 600 contains the detection frequency 601 of the moving object (detection target) 700 for each of the plurality of imaging blocks BK constituting the imaging region 10. The non-detection frequency map 610 contains the non-detection frequency 611 of the moving object (detection target) 700 for each of the plurality of imaging blocks BK constituting the imaging region 10.
 In the detection frequency map 600, the plurality of detection frequencies 601 contained in it are arranged in a matrix. The detection frequency 601 of the moving object 700 for a certain imaging block BK is placed, in the detection frequency map 600, at the same position as that imaging block BK occupies in the imaging region 10. Similarly, in the non-detection frequency map 610, the plurality of non-detection frequencies 611 contained in it are arranged in a matrix, and the non-detection frequency 611 of the moving object 700 for a certain imaging block BK is placed at the same position as that imaging block BK occupies in the imaging region 10.
 When the moving object detection unit 31 determines that an image block of the input image 200 is a moving object image, that is, that the moving object 700 is present in the imaging block corresponding to that image block, the specifying unit 330 increments by one the detection frequency 601, in the detection frequency map 600, for the imaging block BK corresponding to that image block.
 When the moving object detection unit 31 determines that an image block of the input image 200 is not a moving object image, that is, that the moving object 700 is not present in the imaging block corresponding to that image block, the specifying unit 330 increments by one the non-detection frequency 611, in the non-detection frequency map 610, for the imaging block BK corresponding to that image block.
 FIGS. 12 and 13 show examples of the detection frequency map 600 and the non-detection frequency map 610 generated when the moving object (person) 700 is stationary in the two lower-center imaging blocks BK among the plurality of imaging blocks BK constituting the imaging region 10. In the detection frequency map 600 shown in FIG. 12, the detection frequencies 601 of the moving object 700 for the two imaging blocks BK where the moving object 700 is present are large. In the non-detection frequency map 610 shown in FIG. 13, the non-detection frequencies 611 of the moving object 700 for the imaging blocks BK other than the two imaging blocks BK where the moving object 700 is present are large.
 In such a detection frequency map 600, the detection frequency 601 is large for an imaging block BK in which the moving object 700 is frequently detected. Referring to the detection frequency map 600 therefore makes it possible to identify regions of the imaging region 10 where the moving object 700 is likely to be present. For example, when a passage for people exists in the imaging region 10, the detection frequencies for the imaging blocks corresponding to the passage become large, indicating that a person is likely to be present in those imaging blocks.
 In the non-detection frequency map 610, the non-detection frequency 611 is large for an imaging block BK in which the moving object 700 is rarely detected. Referring to the non-detection frequency map 610 therefore makes it possible to identify regions of the imaging region 10 where the moving object 700 is unlikely to be present.
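The bookkeeping for the two maps amounts to two counters per imaging block, each incremented at the block's own position. A minimal sketch follows; the grid size and names are illustrative assumptions.

```python
import numpy as np

ROWS, COLS = 4, 4  # imaging blocks per side (illustrative)

detection_map = np.zeros((ROWS, COLS), dtype=int)      # detection frequencies 601
non_detection_map = np.zeros((ROWS, COLS), dtype=int)  # non-detection frequencies 611

def record_result(pos, is_moving):
    # Each frequency is stored at the same position the imaging block BK
    # occupies in the imaging region 10.
    if is_moving:
        detection_map[pos] += 1
    else:
        non_detection_map[pos] += 1

# A person standing still on block (3, 1) over 10 frames:
for _ in range(10):
    for r in range(ROWS):
        for c in range(COLS):
            record_result((r, c), is_moving=((r, c) == (3, 1)))
```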
 <Background model update processing>
 Next, the background model update processing in step s14 will be described. The background model update processing uses the cache model storage unit 6, which stores a cache model. The cache model contains background image information candidates, that is, candidates for background image information to be registered in the background model 500.
 Here, the brightness of the imaging region 10 may change due to changes in sunlight, illumination, and the like. When the brightness of the imaging region 10 changes, the image information of the input image 200 changes, so the moving object detection unit 31 may erroneously determine that an image block in the input image 200 that actually shows the background is a moving object image. Therefore, the image information of an image block determined by the moving object detection unit 31 to be a moving object image may in fact be background image information.
 In the present embodiment, therefore, the background model update unit 33 first registers the image information of an image block determined by the moving object detection unit 31 to be a moving object image in the cache model as a background image information candidate. The background model update unit 33 then determines, based on a plurality of input images 200 input during a registration determination period, whether the background image information candidate registered in the cache model is background image information. When the background model update unit 33 determines that the candidate is background image information, it registers the candidate in the background model 500 as background image information. In other words, based on the plurality of input images 200 input during the registration determination period, the background model update unit 33 determines whether to register the background image information candidate stored in the cache model storage unit 6 in the background model 500 as background image information. The registration determination period is adjusted by the determination period adjustment process in step s13.
 FIG. 14 is a flowchart showing the background model update process. As shown in FIG. 14, in step s141 the background model update unit 33 selects one imaging block of the imaging region 10 as the target imaging block. The background model update unit 33 then determines whether the image block showing the image of the target imaging block (the target image block) in the processing-target input image 200 input in step s11 was determined by the moving object detection unit 31 to be a moving object image. If it is determined in step s141 that the target image block was determined by the moving object detection unit 31 not to be a moving object image, that is, that the image information of the target image block was determined to match the background image information of a corresponding codeword CW in the background model 500, the background model update unit 33 executes step s142.
 In step s142, the background model update unit 33 changes the latest match time Te of the codeword CW in the background model 500 containing the background image information determined to match the image information of the target image block to the current time. After step s142 is executed, the specifying unit 330 of the background model update unit 33 increments, in step s143, the non-detection frequency in the non-detection frequency map 610 for the target imaging block corresponding to the target image block.
 If, on the other hand, it is determined in step s141 that the target image block was determined by the moving object detection unit 31 to be a moving object image, the background model update unit 33 executes step s144. In step s144, the cache model is updated. Specifically, when the image information of the target image block is not contained in any corresponding codeword CW in the cache model in the cache model storage unit 6, the background model update unit 33 generates a codeword CW containing that image information as a background image information candidate and registers it in the corresponding codebook CB in the cache model. In addition to the image information (the background image information candidate), this codeword CW contains the latest match time Te and the codeword generation time Ti. The latest match time Te of a codeword CW generated in step s144 is provisionally set to the same time as the codeword generation time Ti. When the image information of the target image block is contained in a corresponding codeword CW in the cache model in the cache model storage unit 6, that is, when the cache model contains a corresponding codeword CW whose background image information candidate matches the image information of the target image block, the background model update unit 33 changes the latest match time Te of that codeword CW in the cache model to the current time.
 In this way, in step s144, either a codeword CW containing the missing image information is added to the cache model, or the latest match time Te of a codeword CW in the cache model is updated.
 Note that in step s144, when no codebook CB corresponding to the target imaging block is registered in the cache model in the cache model storage unit 6, the background model update unit 33 generates a codeword CW containing the image information of the target image block as a background image information candidate, generates a codebook CB containing that codeword CW, and registers the codebook CB in the cache model.
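The cache update in step s144 can be sketched as follows. The data structures are assumptions for illustration (a dict of codebooks keyed by block position, codewords as dicts holding the candidate image information plus Te and Ti), and the matching test is reduced to simple equality, whereas the embodiment compares image information with a tolerance.

```python
import time

def update_cache(cache, block, info, now=None):
    """Step s144 sketch: if a codeword whose candidate matches `info` already
    exists in the block's codebook, refresh its latest match time Te;
    otherwise add a new codeword whose Te is provisionally set equal to its
    generation time Ti. A missing codebook for the block is created on the
    fly (the note about codebook CB creation above)."""
    now = time.time() if now is None else now
    codebook = cache.setdefault(block, [])      # codebook CB per block
    for cw in codebook:
        if cw["info"] == info:                  # real matching uses a
            cw["Te"] = now                      # tolerance, not equality
            return cw
    cw = {"info": info, "Te": now, "Ti": now}   # new codeword CW
    codebook.append(cw)
    return cw

cache = {}
update_cache(cache, (2, 1), "dark-pixel-stats", now=100.0)
update_cache(cache, (2, 1), "dark-pixel-stats", now=130.0)  # refresh Te only
```

The second call does not create a duplicate codeword; it only advances Te, while Ti keeps recording when the candidate first entered the cache model.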
 After step s144 is executed, the specifying unit 330 increments, in step s145, the detection frequency in the detection frequency map 600 for the target imaging block corresponding to the target image block.
 After step s143 or step s145 is executed, the background model update unit 33 determines in step s146 whether all the imaging blocks in the imaging region 10 have been processed, that is, whether every imaging block has been set as the target imaging block. If it is determined in step s146 that an unprocessed imaging block remains, the background model update unit 33 sets that imaging block as the new target imaging block and executes step s141 and the subsequent steps. If, on the other hand, it is determined in step s146 that all the imaging blocks in the imaging region 10 have been processed, the background model update unit 33 executes step s147.
 In step s147, every codeword CW in the cache model whose latest match time Te has not been updated for a predetermined period is deleted. That is, when the image information contained in a codeword CW in the cache model has not matched the image information acquired from the input images 200 for a certain period, that codeword CW is deleted. If the image information contained in a codeword CW is background image information, that is, image information acquired from an image showing the background contained in the input images 200, the latest match time Te of that codeword CW is updated frequently. Therefore, the image information of a codeword CW whose latest match time Te has not been updated for the predetermined period is likely to be image information acquired from a moving object image contained in the input images 200. By deleting from the cache model the codewords CW whose latest match time Te has not been updated for the predetermined period, the image information of moving object images is removed from the cache model. Hereinafter, this predetermined period may be referred to as the "deletion determination period". The deletion determination period is set in advance so as to distinguish changes in image information caused by changes in brightness in the imaging region 10 (such as changes in sunlight or illumination) or by changes in the environment (such as putting up a poster or rearranging desks) from changes in image information that occur when a moving object to be detected, such as a person, moves. For example, if the frame rate of the imaging unit that captures the input images 200 is 30 fps and the imaging region 10 is the conference room 100 (see FIG. 4), the deletion determination period is set to the period during which several tens to several hundreds of frames of the input image 200 are input.
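Step s147 can be sketched as a sweep that drops every cached codeword whose Te has not been refreshed within the deletion determination period. The codeword representation and the concrete period value below are illustrative assumptions.

```python
def prune_stale(cache, now, deletion_period):
    """Step s147 sketch: delete cached codewords CW whose latest match time
    Te is older than the deletion determination period. Codewords that keep
    matching the background are refreshed often and survive; codewords taken
    from a moving object image soon go stale and are removed."""
    for block, codebook in cache.items():
        cache[block] = [cw for cw in codebook
                        if now - cw["Te"] <= deletion_period]

# Illustrative values: at 30 fps, a 90-frame deletion period is 3 seconds.
cache = {(2, 1): [{"info": "bg",     "Te": 9.9, "Ti": 0.0},
                  {"info": "person", "Te": 5.0, "Ti": 4.0}]}
prune_stale(cache, now=10.0, deletion_period=3.0)
```

Here the "bg" codeword, matched 0.1 s ago, survives, while the "person" codeword, last matched 5 s ago, is deleted.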
 When, in step s147, the codewords CW in the cache model whose latest match time Te has not been updated for the deletion determination period have been deleted, the background model update unit 33 executes step s148. In step s148, the background model update unit 33 identifies, among the codewords CW registered in the cache model, those for which the registration determination period has elapsed since their registration in the cache model. Since a codeword CW generated in step s144 is registered in the cache model immediately after it is generated, the codeword generation time Ti contained in a codeword CW can be used as the time at which that codeword CW was registered in the cache model.
 The registration determination period is set to a larger value than the deletion determination period, for example several times larger. In the present embodiment, the registration determination period is expressed as a number of frames. If the registration determination period is, for example, "500", it is the period during which 500 frames of the input image 200 are input.
 After step s148 is executed, the background model update unit 33 performs, in step s149, a background model registration determination process using the detection frequency map 600. In the background model registration determination process, it is determined whether each codeword CW identified in step s148 is to be registered in the background model 500 in the background model storage unit 5. The background model registration determination process is described in detail below.
 In the background model registration determination process, the specifying unit 330 first uses the detection frequency map 600 to identify areas of the imaging region 10 where a moving object is likely to be present. Specifically, the specifying unit 330 identifies, in the detection frequency map 600, the detection frequencies that exceed a first threshold value. The specifying unit 330 then treats the imaging blocks corresponding to the detection frequencies exceeding the first threshold value as the areas of the imaging region 10 where a moving object is likely to be present. In other words, when the detection frequency of moving objects for an imaging block exceeds the first threshold value, the specifying unit 330 determines that a moving object is likely to be present in that imaging block.
 FIG. 15 shows an example of the detection frequency map 600. If the first threshold value is 100, then in the detection frequency map 600 shown in FIG. 15 the two detection frequencies in the first and second rows from the bottom of the rightmost column exceed the first threshold value. Therefore, the area consisting of the imaging blocks BK in the first and second rows from the bottom of the rightmost column (the hatched area) is the area 120 of the imaging region 10 where a moving object is likely to be present.
 Note that when no detection frequency in the detection frequency map 600 exceeds the first threshold value, the specifying unit 330 determines that the imaging region 10 contains no area where a moving object is likely to be present. The specifying unit 330 may also treat the imaging blocks whose detection frequencies are equal to or greater than the first threshold value as the areas of the imaging region 10 where a moving object is likely to be present.
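The identification performed by the specifying unit 330 amounts to thresholding the detection frequency map. A minimal sketch, with assumed map contents and threshold value:

```python
def likely_blocks(detection_map, first_threshold):
    """Return the set of block positions whose detection frequency exceeds
    the first threshold, i.e. blocks where a moving object is likely to be
    present. An empty set means no such area exists in the imaging region."""
    return {(r, c)
            for r, row in enumerate(detection_map)
            for c, freq in enumerate(row)
            if freq > first_threshold}

det = [[0,  3, 0,   0],
       [0,  2, 0, 120],
       [0, 10, 0, 250]]
region = likely_blocks(det, first_threshold=100)
```

With the values above, only the two rightmost lower blocks exceed the threshold, mirroring the FIG. 15 example. Using `freq >= first_threshold` instead would give the "equal to or greater than" variant mentioned above.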
 When the specifying unit 330 has identified the areas of the imaging region 10 where a moving object is likely to be present, the registration determination unit 331 determines, for each codeword CW identified in step s148, whether to register that codeword CW in the background model 500, based on the identification result of the specifying unit 330.
 Specifically, when the imaging block corresponding to the image block from which the image information contained in a codeword CW identified in step s148 was acquired is included in the areas identified by the specifying unit 330 as areas of the imaging region 10 where a moving object is likely to be present, that is, when a moving object is likely to be present in that imaging block, the registration determination unit 331 decides not to register that codeword CW in the background model 500.
 Conversely, when the imaging block corresponding to the image block from which the image information contained in a codeword CW identified in step s148 was acquired is not included in the areas identified by the specifying unit 330 as areas of the imaging region 10 where a moving object is likely to be present, that is, when a moving object is not likely to be present in that imaging block, the registration determination unit 331 decides to register that codeword CW in the background model 500.
 Note that when the specifying unit 330 determines that the imaging region 10 contains no area where a moving object is likely to be present, the registration determination unit 331 decides to register all the codewords CW identified in step s148 in the background model 500.
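Combining the rules above, the decision of the registration determination unit 331 for each aged codeword reduces to a set-membership test against the likely-moving-object region. Block keys and variable names are illustrative assumptions:

```python
def decide_registration(aged, likely):
    """For each codeword whose registration determination period has elapsed
    (paired with the imaging block its image information came from), decide
    to register it in the background model only if its block lies outside
    the likely-moving-object region. If that region is empty, everything
    is registered."""
    register, reject = [], []
    for block, cw in aged:
        (reject if block in likely else register).append(cw)
    return register, reject

aged = [((2, 1), "cw-a"), ((0, 0), "cw-b")]
reg, rej = decide_registration(aged, likely={(2, 1)})
```

Here "cw-a" came from a block inside the likely region and is rejected, while "cw-b" is promoted to the background model; with an empty `likely` set, both would be promoted.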
 The background model update unit 33 registers each codeword CW that the registration determination unit 331 has decided to register in the background model 500 into the codebook CB corresponding to that codeword CW in the background model 500. The background model update unit 33 then deletes the codewords CW registered in the background model 500 from the cache model.
 As can be understood from the above description, in the present embodiment the background model update unit 33 may delete a codeword CW from the cache model before the registration determination period has elapsed since its registration in the cache model. That the background model update unit 33 deletes a codeword CW (background image information candidate) from the cache model before the registration determination period has elapsed means that, based on the plurality of input images 200 input during the registration determination period, the background model update unit 33 has determined not to register that codeword CW (background image information candidate) in the background model 500.
 In the present embodiment, the background model update unit 33 may also register a codeword CW in the cache model in the background model 500 without deleting it before the registration determination period has elapsed since its registration in the cache model. That the background model update unit 33 registers a codeword CW (background image information candidate) in the background model 500 without deleting it from the cache model before the registration determination period has elapsed means that, based on the plurality of input images 200 input during the registration determination period, the background model update unit 33 has determined to register that codeword CW (background image information candidate) in the background model 500.
 In this way, the background model update unit 33 determines, based on the plurality of input images 200 input during the registration determination period, whether to register a background image information candidate registered in the cache model in the background model 500 as background image information. Therefore, the image information of an image block erroneously determined by the moving object detection unit 31 to be a moving object image can be registered in the background model 500 as background image information. The background model 500 can thus be updated appropriately, and the accuracy of moving object detection by the moving object detection unit 31 improves.
 Unlike the present embodiment, if the codewords CW for which the registration determination period has elapsed since registration in the cache model were registered in the background model 500 unconditionally, a codeword CW containing the image information of a moving object image could be registered in the background model 500. Specifically, when a moving object is temporarily stationary in the imaging region 10, the image information of the image blocks showing the imaging blocks where the moving object is present is unlikely to change. Therefore, a codeword CW registered in the cache model that contains the image information of the moving object image showing that moving object could remain in the cache model for a long time without being deleted and eventually be registered in the background model 500. This could degrade the accuracy of moving object detection by the moving object detection unit 31. In particular, as described later, when the registration determination period is adjusted to be shorter, codewords CW registered in the cache model that contain the image information of moving object images are more easily registered in the background model 500, which further increases the possibility that the accuracy of moving object detection by the moving object detection unit 31 degrades.
 In the present embodiment, therefore, the codewords CW for which the registration determination period has elapsed since registration in the cache model are not registered in the background model 500 unconditionally; instead, as described above, the detection frequency map 600 is used to decide whether to register each such codeword CW in the background model 500. This suppresses the erroneous registration in the background model 500 of codewords CW containing the image information of moving object images. This point is explained in detail below.
 As described above, the specifying unit 330 uses the detection frequency map 600 to identify the imaging blocks of the imaging region 10 where a moving object is likely to be present. The image information of an image block in the input image 200 that shows an imaging block where a moving object is likely to be present is itself likely to be the image information of a moving object image. Therefore, even for a codeword CW for which the registration determination period has elapsed since registration in the cache model, if the imaging block corresponding to the image block from which the image information contained in that codeword CW was acquired is an imaging block where a moving object is likely to be present, the image information of that codeword CW is likely to be the image information of a moving object image. Thus, by not adding such a codeword CW to the background model 500, as in the present embodiment, the erroneous addition to the background model 500 of codewords CW containing the image information of moving object images can be suppressed.
 After step s149 ends, the background model update unit 33 performs, in step s150, a detection frequency clear process. In the detection frequency clear process, the specifying unit 330 first identifies, in the non-detection frequency map 610, the non-detection frequencies that exceed a second threshold value. The specifying unit 330 then clears to zero the detection frequencies in the detection frequency map 600 for the imaging blocks corresponding to the non-detection frequencies exceeding the second threshold value. Furthermore, the specifying unit 330 clears to zero the non-detection frequencies in the non-detection frequency map 610 that exceed the second threshold value. The second threshold value is set, for example, to the same value as the first threshold value.
 Here, a detection frequency is incremented each time the presence of a moving object is detected, in the moving object detection process in step s12, in the imaging block corresponding to that detection frequency. Therefore, if the detection frequencies were never cleared, the detection frequency of an imaging block where a moving object is currently unlikely to be present, for instance because the layout of the imaging region 10 has been changed, could easily remain above the first threshold value.
 On the other hand, when an imaging block transitions from a state in which a moving object is likely to be present to a state in which it is not, the non-detection frequency of that imaging block in the non-detection frequency map 610 increases.
 In the present embodiment, therefore, the detection frequencies in the detection frequency map 600 are cleared for the imaging blocks corresponding to the non-detection frequencies in the non-detection frequency map 610 that exceed the second threshold value. This suppresses the situation in which the detection frequency of an imaging block where a moving object is currently unlikely to be present exceeds the first threshold value. The areas of the imaging region 10 where a moving object is likely to be present can therefore be identified accurately, which further suppresses the registration of the image information of moving object images in the background model 500 and further improves the accuracy of moving object detection. Note that the specifying unit 330 may instead clear the detection frequencies and non-detection frequencies for the imaging blocks whose non-detection frequencies are equal to or greater than the second threshold value.
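The clear process of step s150 can be sketched as follows; the map contents and the concrete second threshold are illustrative assumptions.

```python
def clear_frequencies(det, non, second_threshold):
    """Step s150 sketch: wherever the non-detection frequency exceeds the
    second threshold, zero both that block's detection frequency and its
    non-detection frequency, so a block that has stopped seeing motion
    cannot stay above the first threshold indefinitely."""
    for r, row in enumerate(non):
        for c, freq in enumerate(row):
            if freq > second_threshold:
                det[r][c] = 0
                non[r][c] = 0

det = [[150,   2], [0, 130]]
non = [[120,   5], [90,  0]]
clear_frequencies(det, non, second_threshold=100)
```

Only the top-left block has a non-detection frequency above the threshold, so only its counters are cleared; the block that is still actively detecting motion keeps its detection frequency.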
 After step s150 ends, the background model update unit 33 deletes, in step s151, the codewords CW in the background model 500 that contain background image information that has not matched the image information of the input images 200 for a predetermined period. That is, the background model update unit 33 deletes the codewords CW in the background model 500 whose latest match time Te has not been updated for the predetermined period. This makes it possible to delete from the background model 500 the codewords CW containing image information acquired from the images of imaging blocks that, due to changes in the imaging environment over time, are no longer background. The amount of information in the background model 500 can thus be reduced.
 By performing such a background model update process, even when the imaging environment changes, for example when the brightness of the imaging region 10 changes, moving object detection can be performed using a background model 500 that tracks the change in the imaging environment. The accuracy of moving object detection therefore improves.
 In the present embodiment, the detection frequency map 600 and the non-detection frequency map 610 are updated during the background model update process. Alternatively, they may be updated after the moving object detection process in step s12 ends and before the background model update process in step s14 starts, based on the result of moving object detection for each imaging block in the moving object detection unit 31. For example, the detection frequency map 600 and the non-detection frequency map 610 may be updated between the moving object detection process in step s12 and the determination period adjustment process in step s13.
 <Determination Period Adjustment Process>
 When the brightness in the imaging region 10 changes suddenly due to a change in sunlight or illumination, the image information of the input image 200 also changes suddenly. As a result, an image showing the background in the input image 200 may be erroneously determined to be a moving object image, and the image information of that background image may be registered in the cache model. In such a case, if the registration determination period used in the background model update process is long, the background image information in the cache model is not reflected in the background model 500 for a long time, and the accuracy of moving object detection may deteriorate.
 Therefore, the registration determination period is adjusted in the determination period adjustment process in step s13. Specifically, the determination period adjustment unit 32 makes the registration determination period shorter as the proportion of the moving object region (the region determined by the moving object detection unit 31 to be a moving object image) in the processing-target input image 200 input in step s11 becomes larger. Consequently, when the proportion of the moving object region in the input image 200 is large, the registration determination period used in the background model update process in step s14 becomes short. When the brightness in the imaging region 10 changes suddenly, the image information changes suddenly over the entire input image 200, so the proportion of the moving object region in the input image 200 becomes large. A sudden brightness change in the imaging region 10 therefore shortens the registration determination period used in the background model update process. As a result, the background image information in the cache model can be reflected in the background model immediately, and the accuracy of moving object detection is improved.
 In the present embodiment, the registration determination period Dt is expressed by the following equation (4), where Rd is the proportion of the moving object region in the input image 200.
  Dt = Dmax − (Dmax − Dmin) × (Rd / a)   (Rd ≤ a)
  Dt = Dmin                              (Rd > a)   … (4)
 Here, a in equation (4) is a threshold value, and Dmin and Dmax are constants, with Dmax > Dmin.
 FIG. 16 illustrates the relationship, expressed by equation (4), between the registration determination period Dt and the proportion Rd of the moving object region in the input image 200. As shown in FIG. 16, according to equation (4), the registration determination period Dt becomes shorter as the proportion Rd of the moving object region in the input image 200 becomes larger. In particular, when the proportion Rd of the moving object region in the input image 200 exceeds the fixed threshold value a, the registration determination period Dt takes its minimum value (Dmin).
 Note that the threshold value a is set in advance based on the criterion of what percentage of the input image 200 must be determined to be a moving object image before the situation is considered abnormal (that is, before the brightness in the imaging region 10 is considered to have changed suddenly), and it is therefore set according to the subject in the imaging region 10.
 The proportion Rd of the moving object region in the input image 200 is expressed by the following equation (5), where Pd is the number of pixels in the moving object region and Pa is the total number of pixels in the input image 200.
  Rd = Pd / Pa   … (5)
 The number of pixels Pd in the moving object region can be obtained by multiplying the number of image blocks determined to be moving object images by the number of pixels contained in one image block.
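 The proportion Rd and the registration determination period Dt can be sketched as follows. The linear taper between Dmax and Dmin below the threshold a is an assumption about the shape of equation (4), whose exact form appears only in the patent figure; the function names are likewise illustrative:

```python
def moving_ratio(num_moving_blocks, block_pixels, total_pixels):
    """Equation (5): Rd = Pd / Pa, where Pd is the number of moving
    image blocks times the pixels per block."""
    pd = num_moving_blocks * block_pixels
    return pd / total_pixels

def registration_period(rd, a, d_min, d_max):
    """Sketch of equation (4): Dt shrinks as Rd grows and is clamped
    to Dmin once Rd exceeds the threshold a (linear taper assumed)."""
    if rd > a:
        return d_min
    return d_max - (d_max - d_min) * (rd / a)
```

 For example, with Dmin = 10 frames, Dmax = 100 frames, and a = 0.5, an image with no moving object region keeps the full 100-frame period, while a sudden scene-wide brightness change (Rd > 0.5) drops the period to 10 frames.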
 <Relationship Between the Registration Determination Period and the First Threshold for the Detection Frequency Map>
 When the number of frames representing the registration determination period is small, then for an imaging block in the imaging region 10 where a moving object is likely to exist, a codeword CW in the cache model containing image information acquired from the image block showing that imaging block is likely to be registered in the background model 500 before the detection frequency of that imaging block exceeds the first threshold. Therefore, when the number of frames representing the registration determination period is small, a codeword CW in the cache model containing image information of a moving object image may be erroneously added to the background model 500. To suppress this, one could consider making Dmin, the minimum value of the registration determination period, larger than the first threshold.
 However, increasing Dmin weakens the above-described effect that the background image information in the cache model can be immediately reflected in the background model.
 Therefore, in the present embodiment, the first threshold is set so that Dmin < first threshold < Dmax; for example, the first threshold is set to half the sum of Dmin and Dmax. Consequently, when the proportion Rd of the moving object region in the input image 200 is small, the registration determination period is larger than the first threshold, which suppresses erroneous addition to the background model 500 of a codeword CW in the cache model containing image information of a moving object image. Conversely, when the proportion Rd of the moving object region in the input image 200 is large, the registration determination period is smaller than the first threshold, so the background image information in the cache model can be registered immediately.
 As described above, in the present embodiment, the result of identifying regions of the imaging region 10 where a detection target is likely to exist is used to decide whether image information obtained from the input image 200 is registered in the background model 500 as background image information. This suppresses registration in the background model 500, as background image information, of image information obtained from an image of a region of the imaging region 10 where a detection target actually exists. The detection accuracy for the detection target is thus improved.
 FIG. 17 is a diagram showing an example of the result of moving object detection by the moving object detection unit 31. In FIG. 17, the moving object regions 800 detected from the input image 200 by the moving object detection unit 31 are shown superimposed on the input image 200. The input image 200 shown in FIG. 17 contains, as the subject image, an image of a room 900 in which a plurality of cardboard boxes 920 and a plurality of chairs 930 are placed on a floor 910. A window (not shown) is provided in the wall 940 of the room 900, and two people 990a and 990b are present in the room 900. In the example of FIG. 17, each of the people 990a and 990b present in the room 900 is appropriately detected as a moving object region 800.
 FIG. 18 is a diagram showing the detection frequency map 600 when the people 990a and 990b remain stationary at the positions shown in FIG. 17; the floor 910 and the wall 940 are indicated by broken lines. For ease of understanding, FIG. 18 shows the detection frequency map 600 with the magnitude of the detection frequency divided into three levels, from a first level to a third level. In the detection frequency map 600 shown in FIG. 18, regions belonging to the third level, where the detection frequency is highest, are hatched with lines rising to the left; regions belonging to the second level, with the next-highest detection frequency, are hatched with lines rising to the right; and regions belonging to the first level, where the detection frequency is lowest, are not hatched. In the detection frequency map 600 shown in FIG. 18, the detection frequency is high for the regions of the imaging region 10 where the people 990a and 990b exist.
 FIG. 19 is a diagram showing an example of the result of moving object detection by the moving object detection unit 31 when, unlike the present embodiment, the detection frequency map 600 is not used to update the background model 500. In FIG. 19, the moving object region 800 detected from the input image 200 by the moving object detection unit 31 is shown superimposed on the input image 200. In the input image 200 shown in FIG. 19, the two people 990a and 990b are each sitting on one of two chairs 930. In the moving object detection result shown in FIG. 19, the person 990a is detected as a moving object region 800, but the person 990b is not.
 FIG. 20 is a diagram showing an example of the moving object detection result in the present embodiment, that is, the result of moving object detection by the moving object detection unit 31 when the detection frequency map 600 is used to update the background model 500. As shown in FIG. 20, when the detection frequency map 600 is used to update the background model 500, each of the people 990a and 990b is appropriately detected as a moving object region 800.
 When the environment of the imaging region 10 is one with little movement of moving objects, such as a conference room, the first threshold for the detection frequency map 600 is preferably larger than the second threshold for the non-detection frequency map 610. In such an environment a moving object is likely to stay in the same place, so the detection frequency of an imaging block where a moving object is likely to exist tends to become large. If the first threshold is made small in such an environment, an imaging block where a moving object happened to stay only briefly may be erroneously judged to be one where a moving object is likely to exist. On the other hand, if the second threshold is made large, then even when an environmental change turns an imaging block into a region where a moving object is no longer likely to exist, the detection frequency of that imaging block may never be cleared. Therefore, when the environment of the imaging region 10 involves little movement of moving objects, it is desirable that the first threshold be large and the second threshold be small.
 Conversely, when the environment of the imaging region 10 is one with much movement of moving objects, such as a busy thoroughfare, the first threshold is preferably smaller than the second threshold. In such an environment a moving object is unlikely to stay in the same place, so even for an imaging block where a moving object is likely to exist, the detection frequency of that block does not readily become large. If the first threshold is made large in such an environment, regions where a moving object is likely to exist may not be identified appropriately. On the other hand, if the second threshold is made small, the detection frequency of an imaging block may be cleared merely because the moving object moves slightly away from that block. Therefore, when the environment of the imaging region 10 involves much movement of moving objects, it is desirable that the first threshold be small and the second threshold be large.
 <Modification>
 Even when a moving object such as a person is said to be stationary, it is rarely completely still and may move slightly. For example, a person sitting at a desk, or a person who has stopped in a corridor to talk with someone, does not change overall position (is macroscopically stationary), but the hands, head, body, legs, and so on move slightly.
 FIG. 21 is a diagram showing a person 990 moving slightly within the imaging region 10, together with an example of the detection frequency map 600 in that case. As shown in FIG. 21, when the person 990 moves slightly, the detection frequency in the detection frequency map 600 becomes large for the imaging block where the person 990 mainly exists, but the detection frequencies of the surrounding imaging blocks do not become very large. Consequently, image information of the moving object image acquired from the image blocks corresponding to those surrounding imaging blocks may be registered from the cache model into the background model 500, because the detection frequencies of the surrounding imaging blocks are small.
 Therefore, in this modification, as shown in FIG. 22, in step s145 the specifying unit 330 increments by one the detection frequency 601a in the detection frequency map 600 for the target imaging block corresponding to the target image block, and also increments by one the detection frequency 601b of each imaging block surrounding the target imaging block. In other words, among the plurality of imaging blocks, the specifying unit 330 increases the detection frequency 601a of the imaging block where a moving object was detected (the target imaging block) and also increases the detection frequency 601b of each surrounding imaging block. As a result, even when a moving object such as a person is not completely still but moves slightly, both the detection frequency of the imaging block where the moving object mainly exists and the detection frequencies of the surrounding imaging blocks can be made large. This suppresses registration in the background model 500 of image information of the moving object image acquired from the image blocks in the cache model corresponding to those surrounding imaging blocks, and further improves the accuracy of moving object detection.
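 A minimal sketch of this neighborhood update, assuming the detection frequency map is kept as a list of lists of per-block counts (the name `bump_with_neighbors` and the 8-neighborhood are illustrative assumptions):

```python
def bump_with_neighbors(freq_map, row, col):
    """Increment the count of the block where a moving object was
    detected and of each surrounding block, clamped at the map border
    (illustrative sketch)."""
    rows, cols = len(freq_map), len(freq_map[0])
    for r in range(max(0, row - 1), min(rows, row + 2)):
        for c in range(max(0, col - 1), min(cols, col + 2)):
            freq_map[r][c] += 1
    return freq_map
```

 The same routine applies unchanged to the non-detection frequency map update of FIG. 23.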
 Likewise, in this modification, as shown in FIG. 23, in step s143 the specifying unit 330 increments by one the non-detection frequency 611a in the non-detection frequency map 610 for the target imaging block corresponding to the target image block, and also increments by one the non-detection frequency 611b of each imaging block surrounding the target imaging block. In other words, among the plurality of imaging blocks, the specifying unit 330 increases the non-detection frequency 611a of the imaging block where no moving object was detected (the target imaging block) and also increases the non-detection frequency 611b of each surrounding imaging block. As a result, even when a moving object is not completely still but moves slightly, the detection frequency of an imaging block where a moving object is currently unlikely to exist can be cleared appropriately, and the accuracy of moving object detection is further improved.
 As described above, when the detection frequency map 600 is updated after the moving object detection process in step s12 ends and before the background model update process in step s14 starts, then, as shown in FIG. 24, the detection frequency 601c of each of the plurality of imaging blocks BKc corresponding to the plurality of image blocks identified as moving object images in step s12 may be incremented by one, and the detection frequency 601d of each imaging block BKd surrounding those imaging blocks BKc may also be incremented by one.
 Similarly, when the non-detection frequency map 610 is updated after the moving object detection process in step s12 ends and before the background model update process in step s14 starts, the non-detection frequency of each of the plurality of imaging blocks corresponding to the plurality of image blocks identified in step s12 as not being moving object images may be incremented by one, and the non-detection frequency of each imaging block surrounding those imaging blocks may also be incremented by one.
 In the above example, the size of an image block is 3 pixels × 3 pixels, but this is not limiting; the size of an image block may instead be, for example, 4 pixels × 4 pixels or 5 pixels × 5 pixels.
 In the above example, the codeword CW for a given image block contains, as image information, the pixel values of all the pixels in that image block, but this is not limiting; the codeword CW need not contain the pixel values of all the pixels in the image block as image information. Specifically, when the size of an image block is 3 pixels × 3 pixels, the codeword CW may contain the pixel values of only 5 pixels as image information. Reducing the amount of information in the codeword CW in this way reduces the amount of processing, so the moving object detection process can be sped up.
 The above example also assumes that each pixel in the input image 200 has R (red), G (green), and B (blue) pixel values, but this is not limiting; the pixel value of each pixel in the input image 200 may be expressed using a color space other than RGB. For example, when the input image 200 is image data in YUV format, the luminance signal Y and the two color difference signals U and V are used as the pixel values of each pixel.
 Although the detection device 1 has been described above in detail, the foregoing description is illustrative in all aspects, and the present invention is not limited thereto. The various modifications described above can be applied in combination as long as they do not contradict one another, and it is understood that innumerable modifications not illustrated here can be envisaged without departing from the scope of the present invention.
 DESCRIPTION OF REFERENCE SIGNS
 1 detection device
 5 background model storage unit
 31 moving object detection unit
 311 control program
 330 specifying unit
 331 registration determination unit

Claims (7)

  1.  A detection device comprising:
      a storage unit that stores a background model including background image information;
      a detection unit that uses the background model and an input image to detect a detection target existing in an imaging region shown in the input image;
      a specifying unit that specifies, based on a detection result of the detection unit, a region of the imaging region where a detection target is likely to exist; and
      a determination unit that determines, using a specification result of the specifying unit, whether to register image information obtained from an input image in the background model as background image information.
  2.  The detection device according to claim 1, wherein the specifying unit obtains, based on the detection result of the detection unit, a detection frequency of the detection target for each of a plurality of partial imaging regions constituting the imaging region, and treats, among the plurality of partial imaging regions, a partial imaging region whose detection frequency is equal to or greater than a first threshold, or greater than the first threshold, as a region where the detection target is likely to exist.
  3.  The detection device according to claim 2, wherein the specifying unit obtains, based on the detection result of the detection unit, a non-detection frequency of the detection target for each of the plurality of partial imaging regions, and clears the detection frequency and the non-detection frequency of any partial imaging region, among the plurality of partial imaging regions, whose non-detection frequency is equal to or greater than a second threshold, or greater than the second threshold.
  4.  The detection device according to claim 2, wherein the specifying unit increases the detection frequency of a first partial imaging region, among the plurality of partial imaging regions, in which the detection target was detected, and also increases the detection frequency of a second partial imaging region surrounding the first partial imaging region.
  5.  The detection device according to claim 3, wherein the specifying unit increases the detection frequency of a first partial imaging region, among the plurality of partial imaging regions, in which the detection target was detected, and also increases the detection frequency of a second partial imaging region surrounding the first partial imaging region.
  6.  A method for detecting a detection target in a detection device, comprising:
      (a) detecting, using a background model including background image information and an input image, a detection target existing in an imaging region shown in the input image;
      (b) specifying, based on a detection result of step (a), a region of the imaging region where a detection target is likely to exist; and
      (c) determining, using a specification result of step (b), whether to register image information obtained from an input image in the background model as background image information.
  7.  A control program for controlling a detection device that detects a detection target, the control program causing the detection device to execute:
      (a) detecting, using a background model including background image information and an input image, a detection target existing in an imaging region shown in the input image;
      (b) specifying, based on a detection result of step (a), a region of the imaging region where a detection target is likely to exist; and
      (c) determining, using a specification result of step (b), whether to register image information obtained from an input image in the background model as background image information.
PCT/JP2014/082712 2013-12-27 2014-12-10 Detection device, method for detecting object to be detected, and control program WO2015098527A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013271387A JP6378483B2 (en) 2013-12-27 2013-12-27 Detection apparatus, detection object detection method, and control program
JP2013-271387 2013-12-27

Publications (1)

Publication Number Publication Date
WO2015098527A1 true WO2015098527A1 (en) 2015-07-02

Family

ID=53478391

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/082712 WO2015098527A1 (en) 2013-12-27 2014-12-10 Detection device, method for detecting object to be detected, and control program

Country Status (2)

Country Link
JP (1) JP6378483B2 (en)
WO (1) WO2015098527A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11296653A (en) * 1998-04-06 1999-10-29 Sanyo Electric Co Ltd Image processor and human body detector using the same
JP2008250366A (en) * 2007-03-29 2008-10-16 Pioneer Electronic Corp Person extraction method, its device, its program, and recording medium with same program recorded
JP2010238032A (en) * 2009-03-31 2010-10-21 Nohmi Bosai Ltd Smoke detection device
JP2012048477A (en) * 2010-08-26 2012-03-08 Canon Inc Image processing apparatus, image processing method, and program
JP2013254291A (en) * 2012-06-06 2013-12-19 Mega Chips Corp Moving object detection device, moving object detection method and program

Also Published As

Publication number Publication date
JP6378483B2 (en) 2018-08-22
JP2015125696A (en) 2015-07-06

Similar Documents

Publication Publication Date Title
CN105404884B (en) Image analysis method
KR102500265B1 (en) Determining the variance of a block in an image based on the block&#39;s motion vector
JP2019145174A (en) Image processing system, image processing method and program storage medium
KR102094506B1 (en) Method for measuring changes of distance between the camera and the object using object tracking , Computer readable storage medium of recording the method and a device measuring changes of distance
AU2016225841B2 (en) Predicting accuracy of object recognition in a stitched image
WO2013005815A1 (en) Object detection device, object detection method, and program
JP6652051B2 (en) Detection system, detection method and program
US10643096B2 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
JP2014197342A (en) Object position detection device, object position detection method and program
JP2014110020A (en) Image processor, image processing method and image processing program
JP6378483B2 (en) Detection apparatus, detection object detection method, and control program
JP2021111228A (en) Learning device, learning method, and program
KR102413043B1 (en) Method and apparatus for seperating shot of moving picture content
JP6177708B2 (en) Moving object detection device, moving object detection method, and control program
JP6396051B2 (en) Area state estimation device, area state estimation method, program, and environment control system
JP6162492B2 (en) Moving object detection device, moving object detection method, and control program
JP6362939B2 (en) Detection apparatus, detection method, and control program
KR101631023B1 (en) Neighbor-based intensity correction device, background acquisition device and method thereof
Chen et al. Real-time people counting method with surveillance cameras implemented on embedded system
JP6044130B2 (en) Image region dividing apparatus, method, and program
JP6598943B2 (en) Image processing apparatus and method, and monitoring system
JP6709761B2 (en) Image processing apparatus, image processing method, and image processing program
CN116468977A (en) Method and device for evaluating antagonism robustness of visual detection model
JP5886075B2 (en) Image processing apparatus and program
JP2021026311A (en) Image processing apparatus, image processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14875452; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 14875452; Country of ref document: EP; Kind code of ref document: A1)