WO2015146582A1 - Area state estimation device, area state estimation method, and environment control system - Google Patents

Area state estimation device, area state estimation method, and environment control system

Info

Publication number
WO2015146582A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
state
moving object
period
image
Application number
PCT/JP2015/057099
Other languages
French (fr)
Japanese (ja)
Inventor
健太 西行
Original Assignee
株式会社メガチップス
Application filed by 株式会社メガチップス
Publication of WO2015146582A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01V: GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 8/00: Prospecting or detecting by optical means
    • G01V 8/10: Detecting, e.g. by using light barriers

Definitions

  • The present invention relates to an area state estimation device, an area state estimation method, and an environment control system.
  • Patent Documents 1 and 2 and Non-Patent Document 1 are disclosed as techniques related to the present invention.
  • An object of the present application is to provide an area state estimation device that contributes to finer control over an area by estimating the state of a moving object for each area.
  • A first aspect of the area state estimation device includes an area-unit moving object detection unit that detects a moving object in each of a plurality of areas in an input image by using the input image and a background image, and an area state estimation unit that estimates an area state of each area, wherein (i) when a moving object is detected over a first period in an attention area that is one of the areas, the area state estimation unit transitions the area state of the attention area from an absence state to an entry state.
  • A second aspect of the area state estimation device is the area state estimation device according to the first aspect, wherein (iii') when no moving object is detected in the attention area over the third period in the stay state, the area state estimation unit transitions the area state of the attention area from the stay state to the leaving state.
  • A third aspect of the area state estimation device is the area state estimation device according to the second aspect, wherein in the entry state the area state estimation unit transitions the area state from the entry state to the leaving state on the sole condition that no moving object is detected in the attention area over a fifth period.
  • A fourth aspect of the area state estimation device is the area state estimation device according to any one of the first to third aspects, wherein (vi) when a moving object is detected in the attention area over a sixth period shorter than the sum of the first period and the second period, the area state estimation unit transitions the area state of the attention area from the leaving state to the stay state or the entry state.
  • A fifth aspect of the area state estimation device is the area state estimation device according to any one of the first to fourth aspects, wherein the area-unit moving object detection unit detects a moving object in the attention area with a detection sensitivity corresponding to the area state.
  • A sixth aspect of the area state estimation device is the area state estimation device according to the fifth aspect, wherein the area-unit moving object detection unit sets the detection sensitivity in an area whose area state is the leaving state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
  • A seventh aspect of the area state estimation device is the area state estimation device according to the fifth or sixth aspect, wherein the area-unit moving object detection unit sets the detection sensitivity in the areas surrounding an area whose area state is the entry state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
  • An eighth aspect of the area state estimation device is the area state estimation device according to any one of the second to seventh aspects, further including a background model update unit that registers image information obtained from the input image as background image information of the background image when a change of the image information over a registration determination period is smaller than a reference value, wherein the area state estimation unit (I) holds, when the area state of an area has transitioned to the entry state, an entry flag indicating that fact over a holding period longer than the registration determination period, and (II) transitions the area state of the attention area to the leaving state or the absence state when, in the stay state, no moving object is detected in the attention area over the third period and the entry flag is held for a surrounding area.
  • A ninth aspect of the area state estimation device is the area state estimation device according to the eighth aspect, further including a determination period adjustment unit that, (I') when the area state of the attention area has maintained the stay state since before the area state of a surrounding area transitioned to the entry state, and while the entry flag for that surrounding area is held, sets the registration determination period of the image information belonging to the attention area shorter than the registration determination period of the image information belonging to other areas.
  • An aspect of the environment control system according to the present invention includes the area state estimation device according to any one of the first to ninth aspects and an environment control device that controls the environment of an area according to its area state.
  • An aspect of the area state estimation method includes (a) a step of detecting a moving object in each of a plurality of areas in an input image by using the input image and a background image, (b) a step of transitioning the area state of an attention area that is one of the areas from an absence state to an entry state when a moving object is detected in the attention area over a first period, (c) a step of transitioning the area state of the attention area to a stay state when a moving object is detected in the attention area over a second period in the entry state, (d) a step of transitioning the area state of the attention area to a leaving state when no moving object is detected in the attention area over a third period in the stay state, and (f) a step of transitioning the area state of the attention area to the absence state when no moving object is detected in the attention area over a fourth period in the leaving state.
  • According to another aspect, an area state estimation device that estimates the area states of a plurality of areas in an input image executes (a) a step of detecting a moving object in each of the plurality of areas in the input image by using the input image and a background image, (b) a step of transitioning the area state of an attention area that is one of the areas from an absence state to an entry state when a moving object is detected in the attention area over a first period, (c) a step of transitioning the area state of the attention area to a stay state when a moving object is detected in the attention area over a second period in the entry state, and (d) a step of transitioning the area state of the attention area to a leaving state when no moving object is detected in the attention area over a third period in the stay state.
  • According to these aspects, the presence of a moving object is detected while distinguishing between the entry state and the stay state, and the absence of a moving object is detected while distinguishing between the leaving state and the absence state. It is therefore possible to know the state of the moving object in more detail.
  • FIG. 1 is a block diagram conceptually showing an example of an image processing unit (area state estimation device) 1 according to the first embodiment.
  • An input image D1 is input from the image input unit 2 to the image processing unit 1.
  • The image input unit 2 supplies an input image D1 received from the outside to the image processing unit 1.
  • The input image D1 is an image captured by an imaging unit (not shown).
  • FIG. 2 schematically shows an example of the input image D1.
  • A room appears in the input image D1.
  • The input image D1 in FIG. 2 is an example captured with an omnidirectional camera. Objects therefore appear increasingly curved with distance from the center of the input image D1.
  • For example, the wall 101 located at the edge of the input image D1 appears strongly curved, although the actual wall 101 is planar and not curved. Regions far from the center of the input image D1 are also displayed smaller in the input image D1.
  • The person 103 is sitting on a chair (not shown) paired with the desk 102, and the upper body of the person 103 is visible.
  • The lower body of the person 103 and the chair are hidden by the desk 102 and do not appear in the input image D1.
  • The image processing unit 1 performs various kinds of image processing on the input image D1 input from the image input unit 2.
  • The image processing unit 1 includes a CPU 100 and a storage unit 110.
  • The storage unit 110 is configured by a non-transitory recording medium readable by the CPU 100, such as a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • The storage unit 110 stores a control program 111. When the CPU 100 executes the control program 111 in the storage unit 110, the functional blocks described later are formed in the image processing unit 1.
  • The storage unit 110 may include a computer-readable non-transitory recording medium other than the ROM and RAM.
  • The storage unit 110 may include, for example, a small hard disk drive or an SSD (Solid State Drive).
  • FIG. 4 is a flowchart illustrating an example of a schematic operation of the image processing unit 1.
  • A series of processing from steps s12 to s14 is executed with the input image D1 as the processing target. Since input images D1 are sequentially input at predetermined intervals, the operation of FIG. 4 is repeatedly executed.
  • The moving object detection unit 10 receives the input image D1 and the background image D0 (see also FIG. 3).
  • While the background image D0, like the input image D1, is an image captured by the imaging unit, the background image D0 does not include a moving object (for example, the person 103).
  • The background image D0 is, for example, captured in advance and stored in the background model storage unit 3.
  • The background model storage unit 3 includes a rewritable non-transitory recording medium such as a flash memory, an EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD).
  • The moving object detection unit 10 detects a moving object for each pixel using the background image D0 and the input image D1. More specifically, a pixel having a large difference between the input image D1 and the background image D0 is determined to be a pixel representing a moving object (hereinafter also referred to as a moving object pixel), whereby the moving object is detected.
  • As the moving object detection method, any method such as CodeBook, Colinear, or statistical background subtraction can be employed.
  • For example, the pixel value difference between the input image D1 and the background image D0 is calculated for each pixel, and a pixel whose difference is larger than a predetermined reference value is detected as a moving object pixel.
  • Alternatively, the difference between the input image D1 and the background image D0 can be calculated for each predetermined image block.
  • An image block is obtained by dividing the input image D1 (background image D0) into a plurality of regions.
  • Here, each image block of the input image D1 and the background image D0 consists of 3 × 3 = 9 pixels in total.
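  • To make the per-pixel differencing and the block division concrete, here is a minimal Python/NumPy sketch; the threshold value and all function names are illustrative assumptions, not part of the publication:

```python
import numpy as np

def moving_object_pixels(input_img, background_img, threshold=30.0):
    """Per-pixel detection: a pixel whose difference from the
    background exceeds a reference value is a moving object pixel."""
    diff = np.abs(input_img.astype(np.float32) - background_img.astype(np.float32))
    # For RGB images, take the summed difference over the channels.
    if diff.ndim == 3:
        diff = diff.sum(axis=2)
    return diff > threshold  # binary map per pixel

def split_into_blocks(img, block=3):
    """Divide an image into 3x3-pixel image blocks (height and width
    are assumed to be multiples of the block size for simplicity)."""
    h, w = img.shape[:2]
    return [img[y:y + block, x:x + block]
            for y in range(0, h, block)
            for x in range(0, w, block)]
```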
  • A specific example of this method will be described below.
  • FIG. 5 is a flowchart showing an example of the moving object detection process.
  • In step s121, the moving object detection unit 10 performs moving object detection on a certain image block (hereinafter referred to as the target image block) in the processing-target input image D1 input in step s11 described above. That is, the moving object detection unit 10 detects whether a moving object appears in the target image block. More specifically, the moving object detection unit 10 determines whether the image information of the target image block of the input image D1 and the image information of the target image block of the background image D0 (hereinafter also referred to as background image information) match each other.
  • If they do not match, the target image block is determined to be an image showing a moving object (hereinafter also referred to as a moving object image).
  • A specific method for determining whether the image information and the background image information of the target image block match each other will be described later.
  • In the next step, the moving object detection unit 10 stores the result of the moving object detection in step s121.
  • In step s123, the moving object detection unit 10 determines whether all image blocks have been processed, that is, whether every image block has been set as the target image block. If the determination in step s123 finds an image block that has not yet been processed, the moving object detection unit 10 sets that image block as the new target image block and executes step s121 and the subsequent steps again.
  • If the determination in step s123 finds that all image blocks have been processed, that is, if moving object detection has been completed for all regions of the input image D1, the moving object detection unit 10 ends the moving object detection process.
  • A specific method of the moving object detection in step s121 will now be described.
  • In the present embodiment, a plurality of background images D0 are used.
  • For example, a plurality of images having different brightness are recorded in the background model storage unit 3 as background images.
  • A plurality of images newly registered as background images are also recorded in the background model storage unit 3 as background images D0.
  • Each codebook CB includes three codewords CW (CW1 to CW3).
  • Each codeword CW includes the image information (background image information) of one background image D0 in the image block corresponding to the codebook CB to which the codeword CW belongs. This background image information is used for moving object detection.
  • Each codeword CW also includes a latest match time Te and a codeword generation time Ti. These are used for the addition and update of background image information described in the third embodiment.
  • The codebook CB shown with sand hatching includes three codewords CW1 to CW3 generated based on the background images D0a to D0c, respectively.
  • The codeword CW1 included in the codebook CB is generated based on the image block corresponding to the codebook CB in the background image D0a.
  • The codeword CW2 included in the codebook CB is generated based on the image block corresponding to the codebook CB in the background image D0b.
  • The codeword CW3 included in the codebook CB is generated based on the image block corresponding to the codebook CB in the background image D0c.
  • In the following, the codebook CB corresponding to the target image block may be referred to as the corresponding codebook CB,
  • and a codeword CW belonging to the corresponding codebook CB may be referred to as a corresponding codeword CW.
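  • The codebook bookkeeping described above can be pictured with a small data structure. The following sketch uses assumed names; the publication does not prescribe an implementation:

```python
from dataclasses import dataclass, field

@dataclass
class CodeWord:
    """One background candidate for an image block."""
    background_info: list   # pixel values of the block (e.g., 9 RGB pixels)
    latest_match_time: int  # Te: last time the input matched this codeword
    generation_time: int    # Ti: time this codeword was created

@dataclass
class CodeBook:
    """All codewords for one image block; one codebook per block."""
    codewords: list = field(default_factory=list)

# One codebook per image block of the background model, e.g. built from
# three background images D0a-D0c with different brightness:
# codebook.codewords == [CW1 (from D0a), CW2 (from D0b), CW3 (from D0c)]
```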
  • FIG. 8 is a diagram showing how vectors are extracted from each of the target image block of the input image D1 and the corresponding codeword CW of the background model.
  • FIG. 9 is a diagram illustrating a relationship between a vector extracted from the target image block of the input image D1 and a vector extracted from the corresponding codeword CW of the background model.
  • In the present embodiment, the image information of the target image block in the input image D1 is handled as a vector.
  • Similarly, the background image information included in each corresponding codeword CW is treated as a vector.
  • By determining whether these two types of vectors point in the same direction, it is determined whether the target image block is a moving object image.
  • If the two types of vectors point in the same direction, the image information of the target image block can be considered to match the background image information, so the target image block in the input image D1 is determined not to be a moving object image but an image showing the background.
  • If the two types of vectors do not point in the same direction, it can be considered that the image information of the target image block does not match the background image information of any corresponding codeword CW. In this case, therefore, the target image block in the input image D1 is determined to be not an image showing the background but a moving object image.
  • First, the moving object detection unit 10 generates an image vector x_f whose components are the pixel values of the pixels included in the target image block of the input image D1.
  • FIG. 8 shows an image vector x_f whose components are the pixel values of the pixels of the target image block 210, which consists of nine pixels.
  • Since each pixel has R (red), G (green), and B (blue) pixel values, the image vector x_f consists of 27 components.
  • The moving object detection unit 10 also generates background vectors, which are vectors of the background image information, using the background image information in the corresponding codewords CW included in the corresponding codebook CB of the background model.
  • The background image information 510 of the corresponding codeword shown in FIG. 8 includes pixel values for nine pixels, so a background vector x_b having those pixel values as components is generated.
  • A background vector x_b is generated from each of the plurality of codewords CW included in the corresponding codebook CB. Therefore, a plurality of background vectors x_b are generated for one image vector x_f.
  • If the image vector x_f and each background vector x_b point in the same direction, the target image block in the input image D1 does not differ from an image showing the background.
  • However, since the image vector x_f and each background vector x_b can be considered to contain a certain amount of noise, they never point in exactly the same direction.
  • Taking this noise into account, the target image block in the input image D1 is determined to be an image showing the background as long as the image vector x_f and each background vector x_b point in approximately the same direction, even if they do not point in exactly the same direction.
  • Assuming a noise-free true vector u, the relationship between the image vector x_f, the background vector x_b, and the true vector u can be expressed as in FIG. 9.
  • As an evaluation value indicating to what degree the image vector x_f and a background vector x_b point in the same direction, consider the evaluation value D² expressed by equation (1).
  • The evaluation value D² is the minimum eigenvalue of the 2 × 2 matrix XXᵀ. Accordingly, the evaluation value D² can be obtained analytically. That the evaluation value D² is the minimum eigenvalue of the 2 × 2 matrix XXᵀ is described in Non-Patent Document 1 above.
  • Whether the target image block in the input image D1 is a moving object image is determined by the moving object determination formula shown in expression (3), C > μ + kσ, expressed using the minimum value C among the plural values of the evaluation value D², their mean value μ, and their standard deviation σ. This moving object determination formula is based on Chebyshev's inequality.
  • In expression (3), k is a constant determined based on the imaging environment (the environment in which the imaging unit is installed) of the imaging unit that captures the input image D1.
  • For example, the constant k is determined by experiment.
  • When the moving object determination formula (inequality) is satisfied, the moving object detection unit 10 considers that the image vector x_f and the background vectors x_b do not point in the same direction, and determines that the target image block is not an image showing the background but a moving object image. Conversely, when the moving object determination formula is not satisfied, the moving object detection unit 10 considers that the image vector x_f and the background vectors x_b point in the same direction, and determines that the target image block is not a moving object image but an image showing the background.
  • In the present embodiment, moving object detection is thus performed based on whether the direction of the image vector obtained from the target image block and the directions of the background vectors obtained from the corresponding codewords CW coincide. The moving object detection method according to the present embodiment is therefore relatively robust against brightness changes such as changes in sunlight or illumination.
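  • Below is a compact sketch of this collinearity test. It assumes X is the 2 × N matrix whose rows are x_f and one x_b (consistent with D² being the minimum eigenvalue of the 2 × 2 matrix XXᵀ) and reads expression (3) as C > μ + kσ; everything else is an illustrative assumption:

```python
import numpy as np

def evaluation_value(x_f, x_b):
    """D^2: minimum eigenvalue of the 2x2 matrix X X^T, where the rows
    of X are the image vector and one background vector. D^2 is near 0
    when the two vectors are collinear (same direction up to brightness)."""
    X = np.stack([x_f, x_b])             # shape (2, N)
    eigvals = np.linalg.eigvalsh(X @ X.T)
    return eigvals[0]                    # smallest eigenvalue

def is_moving_object(x_f, background_vectors, k):
    """Chebyshev-style decision over all corresponding codewords."""
    d2 = np.array([evaluation_value(x_f, x_b) for x_b in background_vectors])
    C, mu, sigma = d2.min(), d2.mean(), d2.std()
    # Expression (3): the block is a moving object image when even the
    # best-matching codeword deviates strongly from the typical match.
    return C > mu + k * sigma
```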
  • The moving object detection unit 10 then determines that the pixels belonging to an image block determined to be a moving object image are moving object pixels.
  • As a result, binary information indicating whether or not each pixel represents a moving object is obtained for every pixel.
  • The information containing this binary information for all pixels is referred to as the pixel-unit moving object image D2.
  • In step s13, the pixel-unit moving object image D2 is input to the area presence/absence determination unit 20 (FIG. 3), and based on it, a moving object is detected in each of the plurality of areas.
  • The plurality of areas are obtained by dividing the input image D1 and are set, for example, so that their actual (real-space) areas are equal to each other. More specifically, areas obtained by dividing the imaged room into a grid of equal area in top view can be adopted as the plurality of areas.
  • Since the imaging unit is an omnidirectional camera, the contour of such an area appears more curved in the input image D1 as its distance from the center of the input image D1 increases.
  • Each area also appears smaller as its distance from the center of the input image D1 increases, so the numbers of pixels included in the individual areas differ from one another.
  • Such area setting is performed based on the correspondence between actual (real-space) coordinates and coordinates in the input image D1. This correspondence can be known in advance from the specifications of the imaging unit.
  • The area setting information is stored in advance in the storage unit 110, for example, and the area presence/absence determination unit 20 can recognize each area based on this setting information.
  • The area presence/absence determination unit 20 determines, based on the number of moving object pixels (hereinafter, the moving object pixel count) PN1 included in each area, whether a moving object appears in each area, that is, whether a moving object exists in each area.
  • FIG. 10 is a flowchart showing an example of the schematic operation of the area presence/absence determination unit 20.
  • In step s131, the area presence/absence determination unit 20 determines whether the moving object pixel count PN1 in a certain area (hereinafter also referred to as the attention area) is larger than a reference value PNref.
  • If an affirmative determination is made in step s131, it is determined in step s132 that a moving object exists in the attention area; that is, a moving object is detected in the attention area. If a negative determination is made in step s131, it is determined in step s133 that no moving object exists in the attention area; that is, no moving object is detected in the attention area.
  • Since the imaging unit is an omnidirectional camera, the pixel count PN2 included in each area may differ from area to area.
  • Therefore, the reference value PNref is set for each area,
  • for example in proportion to the area's pixel count PN2 with a positive proportionality coefficient.
  • Accordingly, even if the moving object detection unit 10 determines that some pixels in a certain area are moving object pixels, it is determined that no moving object exists in that area when the moving object pixel count PN1 in the area is small. That is, a moving object of small size is judged to be noise, and it is determined that no moving object exists. This improves the detection accuracy for moving objects.
  • In this sense, the moving object detection by the moving object detection unit 10 can be understood as a provisional detection.
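  • The area-level decision thus reduces to counting moving object pixels against a size-dependent reference. A minimal sketch follows, where the proportionality coefficient alpha is an assumed name and value:

```python
import numpy as np

def area_has_moving_object(d2_binary, area_mask, alpha=0.1):
    """Area presence/absence determination: an area contains a moving
    object when its moving-object-pixel count PN1 exceeds PNref,
    with PNref proportional to the area's pixel count PN2."""
    pn1 = int(d2_binary[area_mask].sum())  # moving object pixels in area
    pn2 = int(area_mask.sum())             # total pixels in the area
    pn_ref = alpha * pn2                   # per-area reference value
    return pn1 > pn_ref
```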
  • In step s134, the area presence/absence determination unit 20 determines whether every area has been set as the attention area, that is, whether the processing has been completed for all areas. If a negative determination is made in step s134, the processing from step s131 onward is executed again with an unprocessed area as the attention area. On the other hand, if an affirmative determination is made in step s134, it is determined that the processing has been completed for all areas, and the area presence/absence determination process ends.
  • In step s14, the area-unit moving object image D3 is input to the area state estimation unit 30 (FIG. 3), and based on it, the state of each area (hereinafter, the area state) is estimated to be one of at least four states: the entry state, the stay state, the leaving state, and the absence state.
  • The entry state indicates a state in which a moving object has just entered the area,
  • the stay state indicates a state in which a moving object has stayed in the area for a relatively long period,
  • the leaving state indicates a state in which the moving object has left the area, and
  • the absence state indicates a state in which no moving object has been in the area for a relatively long period.
  • The area state estimation unit 30 outputs the estimated information (information indicating the area state of each area; hereinafter referred to as the area state image D4).
  • The area state image D4 is stored in the storage unit 110, for example.
  • FIG. 11 is a flowchart showing the area state estimation process.
  • In step s141, the area state estimation unit 30 estimates the area state of the attention area. A specific estimation method will be described in detail later.
  • In step s142, the area state estimation unit 30 records the result, and in step s143 it determines whether every area has been set as the attention area, that is, whether the area states of all areas have been estimated. If a negative determination is made, an unprocessed area is set as the attention area and the processing from step s141 onward is executed again. If an affirmative determination is made in step s143, it is determined that the area states of all areas have been estimated, and the area state estimation process ends.
  • FIG. 12 is a diagram showing an example of area state transition.
  • When the area state is the absence state, the area state estimation unit 30 determines, based on the area-unit moving object image D3, whether a moving object exists in the attention area. If it is determined that a moving object exists in the attention area, the area state estimation unit 30 transitions the area state of the attention area from the absence state to the noise state.
  • The noise state indicates a state in which a detected moving object is judged to be noise. That is, even if the area presence/absence determination unit 20 determines that a moving object exists in the area, it is not concluded that a moving object actually exists there unless that determination persists over a predetermined period. In other words, a moving object detected only for a short period is judged to be noise, and the area state is transitioned to the noise state.
  • In this sense, the moving object detection by the area presence/absence determination unit 20 can also be understood as a provisional detection.
  • When a moving object continues to be detected in the attention area, the area state estimation unit 30 transitions the area state of the attention area to the entry state. That is, if it is determined that a moving object exists in the attention area in each of a predetermined number of the sequentially input area-unit moving object images D3, the area state is transitioned to the entry state.
  • The length of a period can be expressed by the number of input images D1 (the number of frames), so periods are considered here in terms of frame counts.
  • The area state estimation unit 30 counts, for each area, the number of frames FN1 (hereinafter also the detected frame count) of the input image D1 over which it is consecutively determined that a moving object exists in the attention area. More specifically, if it is determined in the area-unit moving object image D3 that a moving object exists in the attention area, 1 is added to the detected frame count FN1 of the attention area; conversely, if it is determined that no moving object exists in the attention area, the detected frame count FN1 of the attention area is initialized to zero. The detected frame count FN1 thus corresponds to the period over which it is consecutively determined that a moving object exists in the area.
  • The area state estimation unit 30 determines whether the detected frame count FN1 is larger than an entry determination value FNref11, and transitions the area state from the absence state to the entry state when an affirmative determination is made.
  • In FIG. 12, "FN1 > 0" is shown immediately above the arrow indicating the transition from the absence state to the noise state; this is the condition for that transition. That is, when the detected frame count FN1 is determined to be larger than zero in the absence state, the area state is transitioned from the absence state to the noise state. In short, when a moving object is detected in an attention area in the absence state, the area state is promptly transitioned to the noise state.
  • When no moving object is then detected, the area state estimation unit 30 transitions the area state of the attention area to the absence state. The area state can thereby be returned from the noise state to the absence state.
  • A frame count can also be used for this transition condition.
  • For example, the area state estimation unit 30 counts, for each area, the number of frames FN2 (hereinafter also the undetected frame count) of the input image D1 over which it is determined that no moving object exists in the attention area. If it is determined in the input area-unit moving object image D3 that no moving object exists in the attention area, 1 is added to the undetected frame count FN2 of the attention area; conversely, if it is determined that a moving object exists in the attention area, the undetected frame count FN2 of the attention area is initialized to zero. The undetected frame count FN2 thus corresponds to the period over which it is consecutively determined that no moving object exists in the area.
  • In the noise state, the area state estimation unit 30 determines whether the undetected frame count FN2 is larger than zero, and transitions the area state to the absence state when an affirmative determination is made.
  • Although the noise state is provided here to make noise explicit, the noise state need not be provided.
  • In that case, the area state may be transitioned from the absence state to the entry state only when it is determined that a moving object has existed over a predetermined period.
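  • All the transitions below are driven by these two counters. A minimal sketch of their per-frame update (function and argument names assumed):

```python
def update_frame_counters(fn1, fn2, detected):
    """FN1: consecutive frames with a moving object in the area.
    FN2: consecutive frames without one. Each new area-unit result
    increments one counter and resets the other."""
    if detected:
        return fn1 + 1, 0
    return 0, fn2 + 1
```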
  • When a moving object continues to be detected in the attention area in the entry state, the area state estimation unit 30 transitions the area state of the attention area to the stay state. That is, if it is determined that a moving object exists in the attention area in each of a predetermined number of the sequentially input area-unit moving object images D3, the area state is transitioned to the stay state. In more detail, in the entry state the area state estimation unit 30 determines whether the detected frame count FN1 is larger than a stay determination value FNref12 (> FNref11), and transitions the area state to the stay state when an affirmative determination is made.
  • The area state is thus set to the entry state in the initial period after a moving object starts to be detected, and to the stay state in the period following that initial period. That is, not merely the information that a moving object exists is obtained; the moving object is detected while distinguishing between the entry state and the stay state.
  • When no moving object continues to be detected in the attention area in the entry state, the area state estimation unit 30 transitions the area state of the attention area to the leaving state. That is, if it is determined that no moving object exists in the attention area in each of a predetermined number of the sequentially input area-unit moving object images D3, the area state estimation unit 30 transitions the area state to the leaving state. In more detail, the area state estimation unit 30 determines whether the undetected frame count FN2 is larger than a leaving determination value FNref21, and transitions the area state to the leaving state when an affirmative determination is made.
  • In the stay state, the area state estimation unit 30 transitions the area state of the attention area to the leaving state when the following two conditions are both satisfied.
  • The first condition is that it is determined that no moving object exists in the attention area over a predetermined period; this is the same as the condition for the transition from the entry state to the leaving state described above.
  • The second condition is that the area state of at least one of the areas around the attention area (hereinafter, the surrounding areas) is the entry state.
  • FIG. 13 is a diagram illustrating an example of a state in which the area state transitions from the staying state to the leaving state.
  • In FIG. 13, each area is schematically shown as a rectangular area in the actual space.
  • The plurality of areas are arranged in a grid pattern.
  • When the imaging unit is, for example, an omnidirectional camera, these rectangular areas appear curved in the input image D1, as in FIG. 2.
  • For simplicity, 16 areas in a 4 × 4 arrangement are shown.
  • The numbers "0" and "1" shown in each area indicate the result of the moving object detection by the area presence/absence determination unit 20: "1" indicates that a moving object was determined to exist, and "0" indicates that no moving object was determined to exist.
  • In the area state images D4 of FIG. 13, the area state is indicated by the type of hatching in each area:
  • a blank indicates the absence state (including the noise state), horizontal-line hatching indicates the stay state, vertical-line hatching indicates the entry state, and diagonal hatching indicates the leaving state.
  • In the leftmost area-unit moving object image D3 of FIG. 13, it is determined that a moving object exists only in one area A1.
  • In the corresponding area state image D4, the area state of the area A1 is the stay state, and the area states of the other areas are the absence state.
  • Suppose the moving object then moves, for example, to the area A2 on its right.
  • In the second area-unit moving object image D3 from the left, it is determined that no moving object exists in the area A1 and that a moving object exists in the area A2.
  • At this point, the area state of the area A2 is the noise state.
  • When a moving object continues to be detected in the area A2, the area state of the area A2 transitions to the entry state, as shown in the third area state image D4 from the left. The second condition is thereby satisfied for the area A1.
  • Then, when no moving object continues to be detected in the area A1, the first condition is also satisfied, and the area state of the area A1 transitions to the leaving state, as shown in the rightmost area state image D4.
  • Next, suppose that the person 103 in an attention area in the stay state is merely hidden by a shielding object such as the desk 102. In this case the moving object detection unit 10 does not detect the person 103 as a moving object,
  • and consequently the area presence/absence determination unit 20 does not detect the person 103 as a moving object either. If the person 103 is not detected as a moving object over the predetermined period, the first condition is satisfied for the attention area containing the person 103. However, since the moving object (person 103) is merely shielded, it does not enter any surrounding area, so the second condition is not satisfied. The area state of the attention area therefore appropriately maintains the stay state.
  • If the area state were transitioned from the stay state to the leaving state under the first condition alone, the area state would become the leaving state even though the moving object is merely hidden by the shield. According to the present embodiment, such erroneous estimation of the area state can be avoided.
  • In the entry state, by contrast, the area state is transitioned to the leaving state using only the first condition (see FIG. 12).
  • This is processing that takes into account that a moving object moves across a plurality of areas: when the attention area is in the entry state, the moving object may simply be passing through to another area, so only the first condition is adopted.
  • The area state can thereby be transitioned quickly from the entry state to the leaving state; that is, the area state can be estimated with high responsiveness.
  • In the above description, the leaving determination value FNref21 used for the transition from the entry state to the leaving state and the leaving determination value FNref21 used for the transition from the stay state to the leaving state are equal to each other; however, they may be different.
  • When a moving object is detected in the attention area in the leaving state, the area state estimation unit 30 transitions the area state to the stay state. More specifically, the detected frame count FN1 is compared with zero, and when FN1 is larger than zero, the area state estimation unit 30 transitions the area state to the stay state. This is processing that takes into account the possibility that the moving object returns to an area that was in the stay state: since the moving object may well return to an area in the leaving state, the area state is allowed to transition easily back to the stay state. The area state of the attention area can thereby be returned from the leaving state to the stay state relatively quickly; that is, the area state can be estimated with high responsiveness.
  • Alternatively, detection of a moving object over a predetermined period may be employed as the condition for the transition from the leaving state to the stay state.
  • By adopting as this predetermined period a period shorter than the period required for the transition from the absence state to the stay state,
  • the area state can still be transitioned from the leaving state to the stay state relatively quickly.
  • In the above description, when a moving object is detected in the leaving state, the area state is transitioned from the leaving state to the stay state; however, the area state may instead be transitioned from the leaving state to the entry state.
  • When no moving object continues to be detected in the attention area in the leaving state, the area state estimation unit 30 transitions the area state of the attention area to the absence state. That is, if it is determined that no moving object exists in the attention area in each of a predetermined number of the sequentially input area-unit moving object images D3, the area state estimation unit 30 transitions the area state to the absence state. In more detail, it is determined whether the undetected frame count FN2 is larger than an absence determination value FNref22 (> leaving determination value FNref21), and when an affirmative determination is made, the area state estimation unit 30 transitions the area state to the absence state.
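  • Putting the transitions of FIG. 12 together, the per-area estimation can be sketched as a small state machine driven by the counters FN1 and FN2. The threshold names follow the text (FNref11, FNref12, FNref21, FNref22); their values and the reduction of the second condition to a boolean argument are illustrative assumptions:

```python
from enum import Enum

class AreaState(Enum):
    ABSENCE = 0
    NOISE = 1
    ENTRY = 2
    STAY = 3
    LEAVING = 4

FNREF11, FNREF12 = 5, 30    # entry / stay determination values (illustrative)
FNREF21, FNREF22 = 10, 60   # leaving / absence determination values (illustrative)

def update_area_state(state, fn1, fn2, surrounding_entry):
    """One update of an area's state from the detected/undetected frame
    counters FN1/FN2; surrounding_entry is True when at least one
    surrounding area is in the entry state (the second condition)."""
    if state is AreaState.ABSENCE and fn1 > 0:
        return AreaState.NOISE                    # quick provisional transition
    if state is AreaState.NOISE:
        if fn1 > FNREF11:
            return AreaState.ENTRY                # moving object confirmed
        if fn2 > 0:
            return AreaState.ABSENCE              # detection was noise
    if state is AreaState.ENTRY:
        if fn1 > FNREF12:
            return AreaState.STAY
        if fn2 > FNREF21:
            return AreaState.LEAVING              # first condition alone
    if state is AreaState.STAY and fn2 > FNREF21 and surrounding_entry:
        return AreaState.LEAVING                  # first + second condition
    if state is AreaState.LEAVING:
        if fn1 > 0:
            return AreaState.STAY                 # moving object came back
        if fn2 > FNREF22:
            return AreaState.ABSENCE
    return state
```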
  • FIG. 14 is a diagram schematically showing an example of the input image D1 and the area state image D4.
  • In the input image D1 of FIG. 14, the person 103 is sitting slightly to the upper left of the center, and a person 104 moving to the right is shown at the lower right.
  • Through the processing of the image processing unit 1, the area state of the area containing the person 103 becomes the stay state (horizontal-line hatching), and the area state of the area containing the person 104 becomes the entry state (vertical-line hatching).
  • As described above, the present image processing unit 1 does not simply detect the presence or absence of a moving object, but estimates the entry state, the stay state, the leaving state, and the absence state as area states. Therefore, as described below, the external device 4 can perform finer control according to the area state.
  • The area state image D4 output by the area state estimation unit 30 has area state information for each area and is input to the external device 4 (see FIGS. 1 and 3).
  • The external device 4 performs control according to the area state based on the area state image D4.
  • For example, the external device 4 is an environment control device that controls the environment of the areas (temperature, humidity, brightness, sound, display, and the like).
  • More specifically, the external device 4 is, for example, an air conditioner or a lighting device.
  • The external device 4 adjusts the spatial state (brightness, temperature, humidity, and the like) of each area according to its area state, so finer control can be performed compared with the prior art.
  • For example, the external device 4 has a plurality of lighting devices, one provided for each area.
  • The external device 4 controls the lighting devices in areas whose area state is the stay state at the highest illuminance, controls the lighting devices in areas whose area state is the entry state or the leaving state at a lower illuminance,
  • and controls the lighting devices in areas in the absence state at the lowest illuminance.
  • In the case of an air conditioner, for example, a flap that adjusts the air delivery direction is pointed toward areas in the stay state, and is not pointed toward areas whose area state is the entry state, the leaving state, or the absence state.
  • Alternatively, many flaps may be pointed toward areas in the stay state and some flaps toward areas in the entry state or the leaving state.
  • Likewise, the air conditioning for areas in the stay state may operate at the desired target value (temperature or humidity target value), the air conditioning for areas in the entry state or the leaving state
  • may operate at a smaller target value, and the air conditioning for areas in the absence state may operate at the smallest target value.
  • A small target value here means a target value with a small difference from the current value (temperature or humidity).
  • In short, in areas whose area state is the stay state, the environment is controlled at the target value requiring the most power; in areas whose area state is the entry state or the leaving state, the environment is controlled at a target value requiring less power; and in areas whose area state is the absence state, the environment is controlled at the target value requiring the least power. Effective environment control can thereby be performed with low power consumption.
  • Moreover, since the area state is updated with high responsiveness, the external device 4 can perform control with high responsiveness according to the actual state of each area.
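  • As an illustration of such per-area environment control, the mapping from area state to a lighting command could look like the following sketch, reusing the AreaState enum from the state machine sketch above; the illuminance values are invented placeholders, not values from the publication:

```python
# Illuminance levels per area state (placeholder values in percent).
ILLUMINANCE = {
    AreaState.STAY:    100,  # highest illuminance
    AreaState.ENTRY:    60,  # lower illuminance
    AreaState.LEAVING:  60,  # lower illuminance
    AreaState.NOISE:    10,  # treated like absence
    AreaState.ABSENCE:  10,  # lowest illuminance
}

def control_lighting(area_states):
    """Map the area state image D4 to one lighting command per area."""
    return {area: ILLUMINANCE[state] for area, state in area_states.items()}
```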
  • The surrounding areas only need to be areas in a region surrounding the attention area; for example, the eight areas immediately surrounding the attention area may be used.
  • However, if the speed of the moving object is high, it may happen that none of the eight surrounding areas enters the entry state and instead an area adjacent to one of them enters the entry state.
  • In view of this, the surrounding areas may be set over a wider range.
  • For example, the eight areas and the sixteen areas immediately surrounding those eight may together be adopted as the surrounding areas.
  • The portion comprising the moving object detection unit 10 and the area presence/absence determination unit 20 can be understood as a specific example of the area-unit moving object detection unit that detects a moving object in each of the plurality of areas in the input image D1.
  • In the above description, the image block and the area are described separately, but the same range as the image block may be adopted as the area.
  • In that case, the operation of the area presence/absence determination unit 20 is unnecessary.
  • <Area at the end of the input image D1> In the above description, the area state is transitioned from the stay state to the leaving state when the first condition and the second condition are satisfied. However, when the attention area is located at the edge of the input image D1, the area state may be transitioned from the stay state to the leaving state without the second condition. If the attention area is located at the edge, the moving object may leave the input image D1 without passing through another area, in which case no surrounding area enters the entry state.
  • FIG. 15 is a functional block diagram conceptually illustrating an example of the image processing unit 1 according to the second embodiment.
  • In the second embodiment, the moving object detection unit 10 and the area presence/absence determination unit 20 receive the area state image D4 from the area state estimation unit 30.
  • The moving object detection unit 10 and the area presence/absence determination unit 20 then detect moving objects with a detection sensitivity corresponding to the area state.
  • Detection sensitivity here indicates the ease of detecting a moving object: the higher the detection sensitivity, the more easily a moving object is detected.
  • As described above, the area presence/absence determination unit 20 detects a moving object in area units based on the magnitude of the moving object pixel count PN1 of each area relative to the reference value PNref, and this reference value PNref expresses the detection sensitivity.
  • The moving object detection unit 10 detects a moving object in pixel units based on the difference between the background image D0 and the input image D1. More specifically, a moving object is detected when a parameter representing the difference (for example, the above-described minimum value C) is larger than a reference value (for example, μ + kσ). This reference value also expresses the detection sensitivity: the larger the reference value, the lower the detection sensitivity.
  • Since the mean μ and the standard deviation σ are values determined by the background images D0 of the background model, a plurality of values may be prepared for the constant k.
  • For example, a plurality of detection sensitivity values are recorded in advance in the storage unit 110, and the moving object detection unit 10 and the area presence/absence determination unit 20 select the detection sensitivity according to the area state and detect a moving object based on it.
  • For example, the moving object detection unit 10 detects a moving object with a higher detection sensitivity for the pixels included in areas whose area state is the leaving state than for the pixels included in areas whose area state is the absence state (including the noise state), the entry state, or the stay state;
  • that is, a smaller reference value is adopted. Since the moving object may return to an area whose area state is the leaving state, a moving object is relatively likely to appear there. A moving object can thus be detected with high sensitivity exactly where it is likely to appear in the attention area, so it can be detected more quickly. As a result, the area state can be estimated with high responsiveness.
  • Further, for pixels included in an area having a surrounding area whose area state is the entry state, the moving object may be detected with a higher detection sensitivity than for the pixels included in stay-state areas. This is because, when the area state of a surrounding area is the entry state, there is a high possibility that a moving object will enter the area. This also allows a moving object to be detected more quickly.
  • Likewise, the area presence/absence determination unit 20 may detect a moving object with a higher detection sensitivity for areas whose area state is the leaving state; that is, a smaller reference value PNref may be adopted. This also allows a moving object to be detected more quickly.
  • A moving object may also be detected with a higher detection sensitivity in areas having a surrounding area whose area state is the entry state; that is, a smaller reference value PNref is adopted. This also allows a moving object to be detected more quickly.
  • In the above description, both the moving object detection unit 10 and the area presence/absence determination unit 20 employ a detection sensitivity corresponding to the area state, but only one of them may do so.
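  • A sketch of this state-dependent sensitivity selection follows, again reusing the AreaState enum; the two-level coefficient table and all names are assumptions:

```python
def sensitivity_coefficient(area, area_states, neighbors,
                            alpha_low=0.05, alpha_high=0.1):
    """Choose the coefficient in PNref = alpha * PN2 per area: a smaller
    PNref (higher detection sensitivity) for leaving-state areas and for
    areas having a surrounding area in the entry state."""
    surrounding_entry = any(
        area_states[n] is AreaState.ENTRY for n in neighbors[area])
    if area_states[area] is AreaState.LEAVING or surrounding_entry:
        return alpha_low   # higher sensitivity
    return alpha_high      # normal sensitivity
```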
  • In the third embodiment, a background model update unit 40 and a cache model storage unit 6 are further provided.
  • The background model update unit 40 receives the input image D1 and the pixel-unit moving object image D2.
  • When the image information of a portion of the input image D1 containing pixels determined to be moving object pixels does not change over a predetermined registration determination period, the background model update unit 40 determines that the information indicates not a moving object but the background, and registers it in the background model.
  • For this purpose, a cache model storage unit 6 that stores a cache model is used.
  • The cache model contains background image information candidates, which are candidates for the background image information registered in the background model.
  • The cache model storage unit 6 includes a rewritable storage means such as a flash memory, an EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD).
  • The background model storage unit 3 and the cache model storage unit 6 may be independent of each other in terms of hardware, or part of the storage area of a single storage device may be used as the background model storage unit 3 and another part of the storage area as the cache model storage unit 6.
  • The background model update unit 40 first registers the image information of an image block determined by the moving object detection unit 10 to be a moving object image in the cache model as a background image information candidate. The background model update unit 40 then determines, based on the plurality of input images D1 input during the registration determination period, whether to register each background image information candidate stored in the cache model storage unit 6 in the background model as background image information. More specifically, when a background image information candidate obtained from the input images D1 does not change over the registration determination period, that candidate is determined to be background image information, and the background model update unit 40 registers it in the background model as background image information.
  • FIG. 17 is a flowchart showing the background model update process, which is performed after step s12 of FIG. 4. As shown in FIG. 17, in step s151 the background model update unit 40 determines whether the target image block in the processing-target input image D1 input in step s11 was determined by the moving object detection unit 10 to be a moving object image. If it is determined in step s151 that the target image block was not a moving object image, that is, that the image information of the target image block was determined to match the background image information of a corresponding codeword CW in the background model, the background model update unit 40 executes step s152.
  • In step s152, the background model update unit 40 changes the latest match time Te of the codeword CW in the background model containing the background image information determined to match the image information of the target image block to the current time.
  • On the other hand, if it is determined in step s151 that the target image block was determined by the moving object detection unit 10 to be a moving object image, the background model update unit 40 executes step s153.
  • In step s153, the cache model is updated. Specifically, if the image information of the target image block is not included in any corresponding codeword CW of the cache model in the cache model storage unit 6, the background model update unit 40
  • generates a codeword CW containing that image information as a background image information candidate and registers it in the corresponding codebook CB of the cache model.
  • This codeword CW contains, in addition to the image information (background image information candidate), the latest match time Te and the codeword generation time Ti.
  • The latest match time Te contained in a codeword CW generated in step s153 is provisionally set to the same time as the codeword generation time Ti.
  • If the image information of the target image block is included in a corresponding codeword CW of the cache model in the cache model storage unit 6, that is, if the cache model already contains a background image information candidate matching the image information of the target image block,
  • the background model update unit 40 changes the latest match time Te of that codeword CW to the current time.
  • Thus, in step s153, a codeword CW is added to the cache model, or the latest match time Te of a codeword CW in the cache model is updated.
  • Also in step s153, if no codebook CB corresponding to the target image block is registered in the cache model in the cache model storage unit 6, the background model update unit 40
  • generates a codeword CW containing the image information of the target image block as a background image information candidate, generates a codebook CB containing that codeword CW, and registers it in the cache model.
  • In step s154, the background model update unit 40 determines whether all image blocks have been set as the target image block. If it is determined in step s154 that an image block has not yet been processed, the background model update unit 40 sets that image block as the new target image block and executes step s151 and the subsequent steps again. On the other hand, if it is determined in step s154 that all image blocks have been processed, the background model update unit 40 executes step s155.
  • In step s155, codewords CW contained in the cache model whose latest match time Te has not been updated for a predetermined period are deleted. That is, when the image information contained in a codeword CW of the cache model has not matched the image information acquired from the input images D1 for a certain period, that codeword CW is deleted. If the image information contained in a codeword CW is background image information, that is, image information acquired from an image showing the background in the input image D1, its latest match time Te is updated frequently. Therefore, the image information of a codeword CW whose latest match time Te has not been updated for the predetermined period is image information acquired from a moving object image in the input image D1.
  • By deleting from the cache model the codewords CW whose latest match time Te has not been updated for the predetermined period, the image information of moving object images is removed from the cache model.
  • Hereinafter, this predetermined period is referred to as the deletion determination period.
  • The deletion determination period is set in advance to distinguish changes in image information caused by brightness changes, such as changes in sunlight or illumination, or by environmental changes, such as a repositioned poster or desk, from changes in image information caused by the movement of a detection target such as a person. For example, when the imaging frame rate of the imaging unit that captures the input image D1 is 30 fps, the deletion determination period is set to a period in which, for example, several tens to several hundreds of input images D1 are input.
In step s155, when the code words CW in the cache model whose latest match time Te has not been updated for the deletion determination period have been deleted, the background model update unit 40 executes step s156.
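Continuing the same illustrative sketch, step s155 could prune stale candidates as follows. The deletion determination period is expressed here as a frame count, and the names are again assumptions.

```python
def prune_cache(cache, now, deletion_period):
    """Sketch of step s155: delete code words whose latest match time Te
    has not been updated for the deletion determination period."""
    for block_idx, cb in cache.codebooks.items():
        cache.codebooks[block_idx] = [
            cw for cw in cb if now - cw.te <= deletion_period
        ]
```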
In step s156, the background model update unit 40 identifies, from the code words CW registered in the cache model, those for which the registration determination period has elapsed since their registration in the cache model. A code word CW is registered in the cache model immediately after it is generated, so the code word generation time Ti included in a code word CW can be used as the time at which that code word CW was registered in the cache model.
The registration determination period is set to a larger value than the deletion determination period, for example a value several times larger. The registration determination period is expressed as a number of frames; if the registration determination period is, for example, "500", it is the period in which input images D1 for 500 frames are input.
When step s156 has been executed, the background model update unit 40 performs background model registration processing in step s157. In the background model registration processing, the code words CW identified in step s156 are registered in the background model in the background model storage unit 3.
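Steps s156 and s157 could then promote long-lived candidates to the background model along the following lines. Registering into the background model storage is abstracted as a dictionary append, and removing a promoted code word from the cache is an assumption of this sketch rather than something the description specifies.

```python
def promote_to_background(cache, background_model, now, registration_period):
    """Sketch of steps s156-s157: identify code words that have remained in
    the cache model for the registration determination period and register
    them in the background model. Ti doubles as the registration time."""
    for block_idx, cb in cache.codebooks.items():
        kept = []
        for cw in cb:
            if now - cw.ti >= registration_period:
                background_model.setdefault(block_idx, []).append(cw)
            else:
                kept.append(cw)
        cache.codebooks[block_idx] = kept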
The background model update unit 40 may thus delete a code word CW from the cache model before the registration determination period has elapsed since its registration in the cache model. Deleting a code word CW (background image information candidate) from the cache model in this way means that the background model update unit 40 has determined, based on the plurality of input images D1 input during the registration determination period, not to register that background image information candidate in the background model.
Conversely, the background model update unit 40 can register in the background model a code word CW that has remained in the cache model, without being deleted, until the registration determination period has elapsed. Registering a code word CW (background image information candidate) in this way means that the background model update unit 40 has determined, based on the plurality of input images D1 input during the registration determination period, to register that background image information candidate in the background model. In short, the background model update unit 40 decides, based on the plurality of input images D1 input during the registration determination period, whether or not to register a background image information candidate held in the cache model as background image information in the background model. Consequently, the image information of an image block erroneously determined to be a moving body image by the moving object detection unit 10 can later be registered in the background model as background image information. The background model can therefore be updated appropriately, and the accuracy of moving object detection by the moving object detection unit 10 improves.
With the background model update processing described above, when, for example, a person (moving object) enters a certain area, leaves belongings behind, and then leaves the area, the image information containing the belongings will eventually be registered in the background model. FIG. 18 shows an example of a series of area unit moving body images D3 and area state images D4 in such a case where a person departs, leaving belongings behind.
At first, both the person and the belongings are present in area A1 for a relatively long period. Accordingly, in the example of FIG. 18, a moving object is initially detected in area A1 ("1" is indicated in area A1), and the area state of area A1 is the staying state (horizontal-line hatching).
When the person subsequently moves to area A3 on the right side, a moving object is no longer detected in area A2, while a moving object is detected in area A3. As this state continues, the area states of areas A2 and A3 become the leaving state and the entry state, respectively. Since no moving object is detected in area A2 thereafter, its area state eventually becomes the absence state.
Eventually, the background model update unit 40 registers the image information of the image blocks containing the belongings in the background model. As a result, there is no longer any difference between the input image D1 and the background model in those image blocks, and the belongings are no longer detected as a moving object in area A1. This is indicated by "0" in area A1 in the area unit moving body image D3 at the right end of FIG. 18.
At this time, however, the area states of the areas surrounding area A1 are not the entry state, so the second condition is not satisfied for area A1. The area state of area A1 therefore continues to be the staying state. This is indicated by the horizontal-line hatching of area A1 in the area state image D4 at the right end of FIG. 18.
Hence, when image information representing a moving object is registered in the background model, it is desired to appropriately change the area state of the area containing that moving object from the staying state to the leaving state.
To this end, when the area state of an area transitions to the entry state, the area state estimation unit 30 retains information indicating that the area has entered the entry state (hereinafter referred to as an entry flag). More specifically, the entry flag is recorded in, for example, the storage unit 110. The entry flag is held for a holding period longer than the registration determination period and is erased when the holding period elapses. An entry flag is held for each area.
The area state estimation unit 30 changes the area state from the staying state to the leaving state not only when both the first and second conditions are satisfied, but also when the following condition is satisfied: the first condition and a third condition, namely that the entry flag is held for a surrounding area, are both satisfied.
In the example of FIG. 18, when the area state of area A2 transitions to the entry state, the entry flag is held for area A2. The entry flag is held for a holding period longer than the registration determination period after area A2 enters the entry state. Therefore, even at the point when the image information of the image blocks belonging to area A1 is registered in the background model (see the state at the right end of FIG. 18), the entry flag of area A2 is retained without being erased. The third condition is thereby satisfied.
Accordingly, the area state estimation unit 30 changes the area state of area A1 from the staying state to the leaving state. Area A1 can thus be appropriately transitioned from the staying state to the leaving state.
In the above description, when both the first and third conditions are satisfied in the staying state, the area state is changed from the staying state to the leaving state; however, the area state may instead be changed from the staying state to the absence state. In this way, the area state can be brought to the actual state promptly. In other words, the area state can be estimated with high responsiveness.
The problem described above arises because the second condition is adopted as the condition for the transition from the staying state to the leaving state, and the entry flag is employed to solve this problem. The problem does not occur when no area enters the staying state. In that case, therefore, the entry flag need not be held for an area even if its area state becomes the entry state. For example, when a moving object simply crosses the imaging area, no area is expected to enter the staying state, and hence no entry flag is required.
Therefore, as a condition for holding the entry flag for an area, a condition may be added that the area state of a surrounding area (for example, one of the eight areas immediately surrounding that area) is the staying state.
With this condition, when an object is left behind and some area is in the staying state, the entry flag is held for the areas surrounding that area. Therefore, even if the image information of the area in the staying state is registered in the background model, the third condition is satisfied, and the area state of that area can accordingly be changed appropriately to the leaving state (or the absence state).
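The entry-flag bookkeeping described above might be sketched as follows. The frame-count representation of the holding period and the helper names are assumptions; the sketch encodes only that a flag set on an entry-state transition (when a neighbor is staying) outlives the registration determination period and enables the stay-to-leaving transition via the third condition.

```python
def on_transition_to_entry(flags, area, now, neighbors, states):
    """Hold the entry flag for 'area' when it enters the entry state,
    provided some surrounding area is in the staying state."""
    if any(states[n] == "staying" for n in neighbors[area]):
        flags[area] = now  # remember when the flag was set

def third_condition_met(flags, neighbors, area, now, holding_period):
    """Third condition: an unexpired entry flag is held for a surrounding
    area of the attention area (holding_period > registration period)."""
    return any(
        n in flags and now - flags[n] <= holding_period
        for n in neighbors[area]
    )
```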
In addition, the period required for the transition from the entry state to the leaving state (the period corresponding to the withdrawal determination value FNref21) may be set shorter than the period required for the transition from the entry state to the staying state (the period corresponding to the entrance determination value FNref11). Consider, for example, first and second areas adjacent in the left-right direction; when a moving object moves between them, both areas may temporarily be in the entry state. Without the above setting, the second area, far from the movement source, could enter the staying state before the first area, close to the movement source, changes from the entry state to the leaving state, that is, while the first area is still in the entry state. In that case the entry flag would be held for the first area close to the movement source; in other words, the entry flag would be held even though the moving object is simply moving. Such a situation can be avoided by setting the periods as described above.
Alternatively, rather than holding the entry flag unconditionally, the entry flag may be held for a surrounding area on the condition that the area state of that surrounding area has transitioned to the entry state while the area state of the attention area is in the staying state.
In this embodiment, a determination period adjustment unit 50 is further provided. The determination period adjustment unit 50 adjusts the registration determination period according to the area state and the presence or absence of the entry flag. Briefly, the determination period adjustment unit 50 determines that the image information of an area where an object has been left behind indicates the background, and sets the registration determination period for that area to a short value.
The area state estimation unit 30 holds the entry flag as described in the third embodiment. That is, when the area state of a surrounding area transitions to the entry state while the area state of the attention area has maintained the staying state from before (at least one frame earlier), the entry flag is held for that surrounding area. This avoids holding unnecessary entry flags: the entry flag is not held when a moving object is simply moving. In other words, when there is an area that is in the staying state because an object has been left behind, the entry flag is held for its surrounding areas.
When the area state of the attention area is the staying state and the entry flag is held for a surrounding area, the determination period adjustment unit 50 determines that the attention area contains a left-behind object. It then sets the registration determination period of the image blocks belonging to the attention area shorter than the registration determination period of the image blocks belonging to the other areas, and outputs this registration determination period to the background model update unit 40. In short, the registration determination period is set short so that image information belonging to an area that is likely to contain a left-behind object is registered in the background model at an early stage. The area state can thereby be estimated with high responsiveness.
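A minimal sketch of the determination period adjustment unit 50's decision, under the same assumed representation as the flag sketch above (the period values are arbitrary assumptions expressed in frames):

```python
def registration_period_for(area, states, flags, neighbors, now,
                            holding_period, normal_period, short_period):
    """Return a shortened registration determination period for an area
    judged to contain a left-behind object (staying state plus an entry
    flag held for a surrounding area); otherwise the normal period."""
    if states[area] == "staying" and third_condition_met(
            flags, neighbors, area, now, holding_period):
        return short_period
    return normal_period
```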
While the image processing unit 1 has been described in detail above, the foregoing description is illustrative in all aspects, and the present invention is not limited thereto. The various modifications described above can be applied in combination as long as they do not contradict one another. It is understood that countless modifications not illustrated here can be devised without departing from the scope of the present invention.

Abstract

Provided is an area state estimation device in which an area state estimation unit (i) causes the area state of an area of interest, which is one area in an input image, to transition from an absence state to an entry state when a moving body is detected during a first period in the area of interest, (ii) causes the area state of the area of interest to transition to a stay state when the moving body is detected in the area of interest during a second period while in the entry state, (iii) causes the area state of the area of interest to transition to a departure state when the moving body is not detected in the area of interest during a third period while in the stay state, and (iv) causes the area state of the area of interest to transition to the absence state when the moving body is not detected in the area of interest during a fourth period while in the departure state.

Description

Area state estimation device, area state estimation method, and environment control system

The present invention relates to an area state estimation device, an area state estimation method, and an environment control system.
For example, there is a demand for detecting a person in an office and controlling lighting and air conditioning according to the person's state. Conventionally, devices that detect a person using infrared rays have often been employed, but such devices have problems such as a narrow detection range and restrictions on the temperature of the target area to be monitored. Human detection systems using an image sensor (camera) have therefore been proposed. In such a human detection system, a captured image taken by the image sensor is processed to detect a moving object (for example, a person).
Patent Documents 1 and 2 and Non-Patent Document 1 are disclosed as techniques related to the present invention.

Japanese Patent No. 5294562; Japanese Patent No. 4715604
Conventionally, however, only the presence or absence of a moving object is determined, which makes it difficult to perform fine-grained control (for example, lighting control or air-conditioning control) according to the state of the moving object.

Accordingly, an object of the present application is to provide an area state estimation device that contributes to finer control over an area by estimating the state of a moving object for each area.
A first aspect of the area state estimation device includes an area unit moving body detection unit that detects a moving object in each of a plurality of areas in an input image using the input image and a background image, and an area state estimation unit that estimates an area state of each area. The area state estimation unit (i) causes the area state of an attention area, which is one of the areas, to transition from an absence state to an entry state when a moving object is detected in the attention area over a first period, (ii) causes the area state of the attention area to transition to a stay state when a moving object is detected in the attention area over a second period in the entry state, (iii) causes the area state of the attention area to transition to a leaving state when no moving object is detected in the attention area over a third period in the stay state, and (iv) causes the area state of the attention area to transition to the absence state when no moving object is detected in the attention area over a fourth period in the leaving state.
A second aspect of the area state estimation device is the area state estimation device according to the first aspect, wherein the area state estimation unit (iii') causes the area state of the attention area to transition from the stay state to the leaving state when no moving object is detected in the attention area over the third period in the stay state and the area state of a surrounding area located around the attention area is the entry state.
A third aspect of the area state estimation device is the area state estimation device according to the second aspect, wherein the area state estimation unit (v) causes the area state to transition from the entry state to the leaving state on the sole condition that no moving object is detected in the attention area over a fifth period in the entry state.
A fourth aspect of the area state estimation device is the area state estimation device according to any one of the first to third aspects, wherein the area state estimation unit (vi) causes the area state of the attention area to transition from the leaving state to the stay state or the entry state when a moving object is detected in the attention area over a sixth period shorter than the sum of the first period and the second period.
A fifth aspect of the area state estimation device is the area state estimation device according to any one of the first to fourth aspects, wherein the area unit moving body detection unit detects a moving object in the attention area with a detection sensitivity that depends on the area state.
A sixth aspect of the area state estimation device is the area state estimation device according to the fifth aspect, wherein the area unit moving body detection unit sets the detection sensitivity in an area whose area state is the leaving state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
A seventh aspect of the area state estimation device is the area state estimation device according to the fifth or sixth aspect, wherein the area unit moving body detection unit sets the detection sensitivity in an area that has, among its surrounding areas, an area whose area state is the entry state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
An eighth aspect of the area state estimation device is the area state estimation device according to any one of the second to seventh aspects, further comprising a background model update unit that registers image information obtained from the input image as background image information of the background image when the change in the image information over a registration determination period is smaller than a reference value, wherein the area state estimation unit (I) holds, when the area state of an area transitions to the entry state, an entry flag indicating that fact over a holding period longer than the registration determination period, and (II) causes the area state of the attention area to transition to the leaving state or the absence state when no moving object is detected in the attention area over the third period in the stay state and the entry flag for the surrounding area is held.
A ninth aspect of the area state estimation device is the area state estimation device according to the eighth aspect, wherein the area state estimation unit (I') holds the entry flag for the surrounding area when the area state of the surrounding area transitions to the entry state and the area state of the attention area has maintained the stay state since before that transition, and the area state estimation device further comprises a determination period adjustment unit that sets the registration determination period of the image information belonging to the attention area shorter than the registration determination period of the image information belonging to the other areas.
An aspect of the environment control system according to the present invention comprises the area state estimation device according to any one of the first to ninth aspects, and an environment control device that controls the environment of the areas according to the area states.
One aspect of the area state estimation method comprises: (a) detecting a moving object in each of a plurality of areas in an input image using the input image and a background image; (b) causing the area state of an attention area, which is one of the areas, to transition from an absence state to an entry state when a moving object is detected in the attention area over a first period; (c) causing the area state of the attention area to transition to a stay state when a moving object is detected in the attention area over a second period in the entry state; (d) causing the area state of the attention area to transition to a leaving state when no moving object is detected in the attention area over a third period in the stay state; and (f) causing the area state of the attention area to transition to the absence state when no moving object is detected in the attention area over a fourth period in the leaving state.
One aspect of the program causes an area state estimation device that estimates the area states of a plurality of areas in an input image to execute: (a) detecting a moving object in each of the plurality of areas in the input image using the input image and a background image; (b) causing the area state of an attention area, which is one of the areas, to transition from an absence state to an entry state when a moving object is detected in the attention area over a first period; (c) causing the area state of the attention area to transition to a stay state when a moving object is detected in the attention area over a second period in the entry state; (d) causing the area state of the attention area to transition to a leaving state when no moving object is detected in the attention area over a third period in the stay state; and (f) causing the area state of the attention area to transition to the absence state when no moving object is detected in the attention area over a fourth period in the leaving state.
The presence of a moving object is detected while distinguishing between the entry state and the stay state, and the absence of a moving object is detected while distinguishing between the leaving state and the absence state. The state of a moving object can therefore be known in finer detail.
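To make the transitions in aspects (i) through (vi) concrete, the following Python sketch implements one plausible per-area state machine. The frame-counter representation of the periods and all names are assumptions of this sketch, not the patent's notation.

```python
class AreaStateMachine:
    """Per-area state machine for transitions (i)-(iv) and (vi).
    All periods are consecutive-frame counts (an assumed representation)."""

    def __init__(self, p1, p2, p3, p4, p6):
        self.state = "absence"
        self.hits = 0      # consecutive frames with a detection
        self.misses = 0    # consecutive frames without a detection
        self.p1, self.p2, self.p3, self.p4, self.p6 = p1, p2, p3, p4, p6

    def _go(self, state):
        self.state = state
        self.hits = self.misses = 0

    def step(self, detected):
        """Feed one frame's per-area detection result; return the state."""
        self.hits = self.hits + 1 if detected else 0
        self.misses = self.misses + 1 if not detected else 0
        if self.state == "absence" and self.hits >= self.p1:
            self._go("entry")                      # (i)
        elif self.state == "entry" and self.hits >= self.p2:
            self._go("stay")                       # (ii)
        elif self.state == "stay" and self.misses >= self.p3:
            self._go("leaving")                    # (iii)
        elif self.state == "leaving":
            if self.hits >= self.p6:               # (vi), p6 < p1 + p2
                self._go("stay")
            elif self.misses >= self.p4:
                self._go("absence")                # (iv)
        return self.state
```

In practice, step() would be driven once per frame by the per-area detection results of the area presence/absence determination described below.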
The objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram showing an example of a conceptual configuration of an image processing apparatus.
FIG. 2 is a diagram schematically showing an example of an input image.
FIG. 3 is a block diagram showing an example of a conceptual configuration of the image processing apparatus.
FIG. 4 is a flowchart showing an example of the operation of the image processing apparatus.
FIG. 5 is a flowchart showing an example of the operation of a moving object detection unit.
FIG. 6 is a diagram schematically showing codebooks and code words.
FIG. 7 is a diagram schematically showing an example of information contained in a code word.
FIG. 8 is a diagram for explaining the operation of the moving object detection unit.
FIG. 9 is a diagram showing the relationship between an image vector and a background vector.
FIG. 10 is a flowchart showing an example of the operation of an area presence/absence determination unit.
FIG. 11 is a flowchart showing an example of the operation of an area state estimation unit.
FIG. 12 is a diagram showing an example of an area state transition diagram.
FIG. 13 is a diagram showing an area state transitioning from the staying state to the leaving state.
FIG. 14 is a diagram schematically showing an example of an input image and an area state image.
FIG. 15 is a block diagram showing an example of a conceptual configuration of an image processing apparatus.
FIG. 16 is a block diagram showing an example of a conceptual configuration of an image processing apparatus.
FIG. 17 is a flowchart showing an example of the operation of a background model update processing unit.
FIG. 18 is a diagram showing an example of how area states change.
FIG. 19 is a block diagram showing an example of a conceptual configuration of an image processing apparatus.
First embodiment.

FIG. 1 is a block diagram conceptually showing an example of an image processing unit (area state estimation device) 1 according to the first embodiment. As shown in FIG. 1, an input image D1 is input from an image input unit 2 to the image processing unit 1. The image input unit 2 inputs the input image D1, supplied from the outside, to the image processing unit 1. The input image D1 is a captured image captured by an imaging unit (not shown).
FIG. 2 schematically shows an example of the input image D1, in which a room appears. The input image D1 in FIG. 2 is an example for the case where the imaging unit is an omnidirectional camera. Objects therefore appear increasingly curved with distance from the center of the input image D1. For example, the wall 101 located at the edge of the input image D1 appears strongly curved, although the actual wall 101 has a flat, uncurved shape. In addition, regions far from the center of the input image D1 appear smaller in the input image D1.
In the example of FIG. 2, a person 103 is sitting on a chair (not shown) provided as a pair with a desk 102, and the upper body of the person 103 is visible. The lower body of the person 103 and the chair are hidden by the desk 102 and do not appear in the input image D1.
The image processing unit 1 performs various kinds of image processing on the input image D1 input from the image input unit 2. The image processing unit 1 includes a CPU 100 and a storage unit 110. The storage unit 110 is configured by a non-transitory recording medium readable by the CPU 100, such as a ROM (Read Only Memory) and a RAM (Random Access Memory). The storage unit 110 stores a control program 111. When the CPU 100 executes the control program 111 in the storage unit 110, the functional blocks described later are formed in the image processing unit 1.
The storage unit 110 may include a computer-readable non-transitory recording medium other than the ROM and the RAM. For example, the storage unit 110 may include a small hard disk drive, an SSD (Solid State Drive), or the like.
As shown in FIG. 3, the functional blocks of a moving object detection unit 10, an area presence/absence determination unit 20, and an area state estimation unit 30 are formed in the image processing unit 1. These functional blocks need not be realized by the CPU 100 executing the control program 111; they may instead be realized by hardware circuits such as logic circuits.
FIG. 4 is a flowchart showing an example of the schematic operation of the image processing unit 1. When the input image D1 is input from the image input unit 2 to the image processing unit 1 in step s11, a series of processing from steps s12 to s14 is executed with that input image D1 as the processing target. Since input images D1 are input sequentially at predetermined time intervals, the operation of FIG. 4 is executed repeatedly.
<Pixel unit moving object detection>

In step s12, the moving object detection unit 10 receives the input image D1 and a background image D0 (see also FIG. 3). The background image D0 has in common with the input image D1 that it is an image captured by the imaging unit, but it contains no moving object (for example, the person 103). The background image D0 is, for example, captured in advance and stored in a background model storage unit 3. The background model storage unit 3 is configured by a rewritable non-transitory recording medium such as a flash memory, an EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD).
The moving object detection unit 10 detects moving objects pixel by pixel using the background image D0 and the input image D1. More specifically, a pixel at which the difference between the input image D1 and the background image D0 is large is determined to be a pixel representing a moving object (hereinafter also called a moving object pixel), and a moving object is detected in this way. Any method such as CodeBook, Colinear, or statistical background subtraction can be employed as the moving object detection method.

For example, as a simple moving object detection, the difference in pixel value between the input image D1 and the background image D0 is calculated for each pixel, and a pixel whose difference is larger than a predetermined reference value is detected as a moving object pixel.
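A minimal sketch of this simple per-pixel scheme using NumPy; the threshold value is an arbitrary assumption.

```python
import numpy as np

def detect_moving_pixels(input_img, background_img, threshold=30):
    """Return a boolean mask of moving object pixels: True where the
    per-pixel difference from the background exceeds the reference value."""
    diff = np.abs(input_img.astype(np.int32) - background_img.astype(np.int32))
    if diff.ndim == 3:          # color image: combine the channel differences
        diff = diff.sum(axis=2)
    return diff > threshold
```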
Alternatively, the difference between the input image D1 and the background image D0 can be calculated for each predetermined image block. An image block here is one of the regions obtained by dividing the input image D1 (background image D0); for example, an image block has a total of nine pixels, 3 vertically by 3 horizontally, in the input image D1 and the background image D0. A specific example of this method is described below.
FIG. 5 is a flowchart showing an example of the moving object detection processing. As shown in FIG. 5, in step s121 the moving object detection unit 10 performs moving object detection on a certain image block (hereinafter sometimes called the "target image block") in the input image D1 to be processed that was input in step s11 described above. That is, the moving object detection unit 10 detects whether a moving object appears in the target image block. More specifically, the moving object detection unit 10 determines whether the image information of the target image block of the input image D1 and the image information of the target image block of the background image D0 (hereinafter also called background image information) match each other. When it determines that they do not match, that is, that the difference between them is larger than a reference value, it determines that the target image block is an image showing a moving object (hereinafter also called a moving object image). A specific method for determining whether the image information and the background image information of the target image block match each other is described later.
After step s121 is executed, in step s122 the moving object detection unit 10 stores the result of the moving object detection in step s121. Then, in step s123, the moving object detection unit 10 determines whether all image blocks have been processed, that is, whether all image blocks have been set as the target image block. If, as a result of the determination in step s123, an unprocessed image block exists, the moving object detection unit 10 sets the image block that has not yet been processed as the new target image block and executes step s121 and the subsequent steps. On the other hand, if, as a result of the determination in step s123, all image blocks have been processed, that is, moving object detection has been completed for the entire region of the input image D1, the moving object detection unit 10 ends the moving object detection processing.
<Details of moving object detection>

Next, a specific method for the moving object detection in step s121 is described. Here, a plurality of background images D0 are used. For example, since the brightness differs between daytime and nighttime, a plurality of images with different brightnesses are recorded in the background model storage unit 3 as background images. Other images to be registered as backgrounds are also recorded in the background model storage unit 3 as background images D0.
Since the input image D1 and the background images D0 are compared for each image block as described above, the plurality of background images D0 are handled for each image block. For this purpose, the concepts of a codebook CB (see FIG. 6) corresponding to each image block and code words CW (see FIG. 6) contained in each codebook CB are introduced. There are as many codebooks CB as there are image blocks. In the illustration of FIG. 6, a total of 12 codebooks CB, 3 vertically by 4 horizontally, are shown to avoid cluttering the drawing. Each codebook CB contains as many code words CW as there are background images D0. In the illustration of FIG. 6, three background images D0a to D0c constitute the background model, so each codebook CB contains three code words CW (CW1 to CW3). As shown in FIG. 7, each code word CW contains the image information (background image information) of each background image D0 in the image block corresponding to the codebook CB to which the code word CW belongs. This background image information is used for moving object detection. Each code word CW also contains a latest match time Te and a code word generation time Ti, which are used for the addition and updating of background image information described in the third embodiment.
In FIG. 6, the codebook CB shown with sand-pattern hatching contains three code words CW1 to CW3 generated from the background images D0a to D0c, respectively. The code word CW1 contained in the codebook CB is generated based on the image block of the background image D0a to which that codebook CB corresponds. Likewise, the code word CW2 is generated based on the corresponding image block of the background image D0b, and the code word CW3 is generated based on the corresponding image block of the background image D0c.
Hereinafter, the codebook CB corresponding to the target image block is sometimes called the corresponding codebook CB, and a code word CW belonging to the corresponding codebook CB is sometimes called a corresponding code word CW.
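As a structural illustration, the background model could be held as follows. The dictionary layout, the function name, and the block representation (a list of pixel coordinates per block) are assumptions of this sketch.

```python
def build_background_model(background_images, blocks):
    """Build the codebooks: one codebook CB per image block, containing one
    code word CW per background image (e.g., D0a-D0c yield CW1-CW3)."""
    model = {}
    for block_idx, pixel_coords in enumerate(blocks):
        model[block_idx] = [
            {
                "info": [img[y][x] for (y, x) in pixel_coords],  # background image information
                "Te": 0,  # latest match time (maintained during updates)
                "Ti": 0,  # code word generation time
            }
            for img in background_images
        ]
    return model
```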
In considering the difference between the target image block of the input image D1 and the target image block of the background image D0, the vectors described below are used. FIG. 8 shows how vectors are extracted from the target image block of the input image D1 and from each corresponding code word CW of the background model. FIG. 9 shows the relationship between the vector extracted from the target image block of the input image D1 and a vector extracted from a corresponding code word CW of the background model.
In the present embodiment, the image information of the target image block in the input image D1 is treated as a vector. Likewise, for each corresponding code word CW in the background model, the background image information contained in that code word CW is treated as a vector. Whether the target image block is a moving object image is then determined based on whether the vector for the image information of the target image block and the vectors for the background image information of the corresponding code words CW point in the same direction. When these two kinds of vectors point in the same direction, the image information of the target image block can be considered to match the background image information of the corresponding code words CW; in that case, the target image block in the input image D1 does not differ from an image showing the background and is determined not to be a moving object image. When the two kinds of vectors do not point in the same direction, the image information of the target image block can be considered not to match the background image information of the corresponding code words CW; in that case, the target image block in the input image D1 is determined to be a moving object image rather than an image showing the background.
Specifically, the moving object detection unit 10 generates an image vector x_f whose components are the pixel values of the plurality of pixels contained in the target image block of the input image D1. FIG. 8 shows an image vector x_f whose components are the pixel values of the pixels of a target image block 210 having nine pixels. In the example of FIG. 8, each pixel has R (red), G (green), and B (blue) pixel values, so the image vector x_f consists of 27 components.
Similarly, the moving object detection unit 10 generates, from the background image information in each corresponding code word CW contained in the corresponding codebook CB of the background model, a background vector, that is, a vector relating to that background image information. The background image information 510 of the corresponding code word shown in FIG. 8 contains pixel values for nine pixels, so a background vector x_b whose components are those pixel values is generated. A background vector x_b is generated from each of the plurality of code words CW contained in the corresponding codebook CB; a plurality of background vectors x_b are therefore generated for one image vector x_f.
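For instance, the 27-component vector for a 3x3 RGB block could be formed like this (a sketch; the slicing convention and names are assumptions):

```python
import numpy as np

def block_vector(image, top, left, size=3):
    """Flatten a size x size RGB block into a vector x_f (or x_b) whose
    components are the R, G, B pixel values: size*size*3 = 27 for 3x3."""
    block = image[top:top + size, left:left + size, :]
    return block.reshape(-1).astype(np.float64)
```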
As described above, when the image vector x_f and each background vector x_b point in the same direction, the target image block in the input image D1 does not differ from an image showing the background. However, the image vector x_f and each background vector x_b can be considered to contain a certain amount of noise. Therefore, in the present embodiment, taking this noise into account, the target image block in the input image D1 is determined to be an image showing the background even when the image vector x_f and a background vector x_b do not point in exactly the same direction.
Assuming that the image vector x_f and a background vector x_b contain noise components, the relationship of the image vector x_f and the background vector x_b to the true vector u can be expressed as in FIG. 9. In the present embodiment, the evaluation value D^2 expressed by the following equation (1) is considered as an evaluation value indicating to what extent the image vector x_f and the background vector x_b point in the same direction.
[Equation (1): definition of the evaluation value D^2, reproduced only as an image in the original publication]
When a matrix X is expressed using the image vector x_f and the background vector x_b as in equation (2), the evaluation value D^2 is the non-zero minimum eigenvalue of the 2x2 matrix XX^T. The evaluation value D^2 can therefore be obtained analytically. That the evaluation value D^2 is the non-zero minimum eigenvalue of the 2x2 matrix XX^T is described in Non-Patent Document 1 above.
[Equation (2): definition of the matrix X from the image vector x_f and the background vector x_b, reproduced only as an image in the original publication]
As described above, a plurality of background vectors x_b are generated for one image vector x_f, so as many values of the evaluation value D^2, each expressed using the image vector x_f and one background vector x_b, are obtained as there are background vectors x_b.
Whether the target image block in the input image D1 is a moving object image is determined using the moving object judgment formula shown in the following expression (3), which is expressed using the minimum value C among the plurality of values of the evaluation value D^2 and the mean value μ and standard deviation σ of those values. This moving object judgment formula is called Chebyshev's inequality.
[Equation (3): the moving object judgment formula expressed with C, μ, σ, and the constant k, reproduced only as an image in the original publication]
Here, k in expression (3) is a constant whose value is determined based on, for example, the imaging environment of the imaging unit that captures the input image D1 (the environment in which the imaging unit is installed). The constant k is determined by experiments or the like.
When the moving object judgment formula (inequality) is satisfied, the moving object detection unit 10 considers that the image vector x_f and the background vectors x_b do not point in the same direction, and determines that the target image block is a moving object image rather than an image showing the background. On the other hand, when the moving object judgment formula is not satisfied, the moving object detection unit 10 considers that the image vector x_f and the background vectors x_b point in the same direction, and determines that the target image block is not a moving object image but an image showing the background.
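The equations reproduced only as images above can be summarized in LaTeX as follows. The form of X and the eigenvalue characterization follow directly from the surrounding text; the concrete form of inequality (3) is an assumption consistent with the description of a Chebyshev-type test on C, μ, σ, and k, not a verified reproduction.

```latex
% Equation (2): X stacks the image vector and one background vector,
% so that X X^T is a 2x2 matrix.
X = \begin{pmatrix} x_f^{\top} \\ x_b^{\top} \end{pmatrix}

% The evaluation value D^2 of equation (1) is then obtained analytically
% as the non-zero minimum eigenvalue of X X^T:
D^2 = \lambda_{\min \neq 0}\left( X X^{\top} \right)

% Assumed concrete form of the moving object judgment formula (3),
% satisfied when the block is judged to be a moving object image:
C > \mu + k\,\sigma
```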
Thus, in the present embodiment, moving object detection is performed based on whether the direction of the image vector obtained from the target image block and the direction of the background vector obtained from each corresponding code word CW are the same. The moving object detection method according to the present embodiment is therefore relatively robust against changes in brightness, such as changes in sunlight or illumination.
The moving object detection unit 10 then determines that the pixels belonging to an image block determined to be a moving object image are moving object pixels. Through this moving object detection, binary information indicating whether each pixel represents a moving object is obtained for every pixel. Hereinafter, the information containing this binary information for all pixels is called the pixel unit moving body image D2.
<Area presence/absence determination>

Next, in step s13 (FIG. 4), the pixel unit moving body image D2 is input to the area presence/absence determination unit 20 (FIG. 3), and based on it, a moving object is detected in each of a plurality of areas.
The plurality of areas referred to here are areas obtained by dividing the input image D1, and they are set so that, for example, their actual floor areas are equal to one another. More specifically, areas obtained by dividing the room to be imaged into a grid of equal areas as seen from above can be adopted as the plurality of areas. When the imaging unit is an omnidirectional camera, the outline of such an area in the input image D1 becomes more curved with distance from the center of the input image D1, and each area appears smaller the farther it is from the center. The number of pixels contained in each area therefore differs from area to area.
Such areas are set based on the correspondence between real coordinates and coordinates in the input image D1, and this correspondence can be known in advance from the specifications of the imaging unit. The area setting information is stored in advance in, for example, the storage unit 110, and the area presence/absence determination unit 20 can recognize each area based on this setting information.
The area presence/absence determination unit 20 determines, based on the number of moving object pixels contained in each area (hereinafter called the moving object pixel count) PN1, whether a moving object appears in each area, that is, whether a moving object exists in each area. FIG. 10 is a flowchart showing an example of the schematic operation of the area presence/absence determination unit 20. In step s131, the area presence/absence determination unit 20 determines whether the moving object pixel count PN1 of a certain area (hereinafter also called the attention area) is larger than a reference value PNref. When an affirmative determination is made in step s131, it is determined in step s132 that a moving object exists in the attention area; that is, a moving object is detected in the attention area. When a negative determination is made in step s131, it is determined in step s133 that no moving object exists in the attention area; that is, no moving object is detected in the attention area.
When the imaging unit is an omnidirectional camera, the number of pixels PN2 contained in each area can differ from area to area. In this case, it is desirable to adopt a larger reference value PNref for areas with a larger pixel count PN2; for example, the reference value PNref is proportional to the pixel count PN2 with a positive proportionality coefficient. Moving objects can thereby be detected under equivalent conditions in every area.
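A sketch of this per-area decision with a size-proportional threshold follows; the coefficient value and names are arbitrary assumptions.

```python
def area_has_moving_object(moving_mask, area_pixels, coeff=0.1):
    """Decide presence for one area: the moving object pixel count PN1 must
    exceed PNref, taken proportional to the area's pixel count PN2."""
    pn1 = sum(1 for (y, x) in area_pixels if moving_mask[y][x])
    pn2 = len(area_pixels)
    pnref = coeff * pn2
    return pn1 > pnref
```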
As described above, according to the present embodiment, even if the moving object detection unit 10 determines that pixels contained in a certain area are moving object pixels, it is determined that no moving object exists in that area when the moving object pixel count PN1 of the area is small. That is, a detected moving object of small size is judged to be noise, and it is determined that no moving object exists. The detection accuracy for moving objects can be improved in this way.
Given that the moving object detection result of the moving object detection unit 10 can thus be overturned by the subsequent area presence/absence determination unit 20, the moving object detection by the moving object detection unit 10 can be regarded as a provisional detection.
After step s132 or step s133, in step s134 the area presence/absence determination unit 20 determines whether all areas have been set as the attention area, that is, whether the processing has been completed for all areas. When a negative determination is made in step s134, an unprocessed area is set as the attention area and the processing from step s131 is executed again. When an affirmative determination is made in step s134, it is determined that the processing has been completed for all areas, and the area presence/absence determination processing ends.
 このようなエリア在/不在判定部20の動作により、そのエリアに動体が存在しているか否かの二値情報を、エリア毎に得ることができる。以下では、全てのエリアについての当該二値情報を含む情報を、エリア単位動体画像D3と呼ぶ。 By such operation of the area presence / absence determination unit 20, binary information indicating whether or not a moving object exists in the area can be obtained for each area. Below, the information including the said binary information about all the areas is called area unit moving body image D3.
<Estimation of the area state>
Next, in step s14 (FIG. 4), the area-unit moving body image D3 is input to the area state estimation unit 30 (FIG. 3), and based on it the state of each area (hereinafter called the area state) is estimated as at least one of four states: an entry state, a stay state, a leaving state, and an absence state. The entry state here indicates that a moving object has entered the area, the stay state indicates that a moving object has stayed in the area for a relatively long period, the leaving state indicates that a moving object has left the area, and the absence state indicates that no moving object has been present in the area for a relatively long period. The area state estimation unit 30 outputs the estimated information (information indicating the area state for each area, hereinafter called the area state image D4). The area state image D4 is stored, for example, in the storage unit 110.
FIG. 11 is a flowchart showing the area state estimation process. In step s141, the area state estimation unit 30 estimates the area state of the attention area; the specific estimation method is described in detail later. In step s142, the area state estimation unit 30 records the result, and in step s143 it determines whether all areas have been set as the attention area, that is, whether the area states of all areas have been estimated. When a negative determination is made, an unprocessed area is set as the attention area and the processing from step s141 onward is executed again. When an affirmative determination is made in step s143, it is determined that the area states of all areas have been estimated, and the area state estimation process ends.
FIG. 12 is a diagram showing an example of area state transitions. First, the case where the area state of the attention area is the absence state is described. In this case, the area state estimation unit 30 determines, based on the area-unit moving body image D3, whether a moving object exists in the attention area. When it is determined that a moving object exists in the attention area, the area state estimation unit 30 transitions the area state of the attention area from the absence state to a noise state. The noise state is a state in which the detected moving object is judged to be noise. That is, even when the area presence/absence determination unit 20 determines that a moving object exists in an area, a moving object is not taken to exist in that area unless it is determined that a moving object exists there over a predetermined period. In other words, even if a moving object is detected for only a short time, it is judged to be noise and the area state is transitioned to the noise state.
Considering that the moving-object detection result of the area presence/absence determination unit 20 can thus be overturned later by the area state estimation unit 30, the moving-object detection by the area presence/absence determination unit 20 can also be regarded as a provisional detection.
In the noise state, when it is determined that a moving object has existed in the attention area over a predetermined period, the area state estimation unit 30 transitions the area state of the attention area to the entry state. That is, when it is determined that a moving object exists in the attention area in every one of a predetermined number of sequentially input area-unit moving body images D3, the area state is transitioned to the entry state.
Since the input images D1 are input at predetermined time intervals, the length of a period can be judged by the number of input images D1 (referred to as the number of frames). Periods are therefore expressed here in numbers of frames.
The area state estimation unit 30 counts, for each area, the number of frames FN1 of input images D1 over which it is continuously determined that a moving object exists in the attention area (hereinafter also called the detected frame count). More specifically, when the area-unit moving body image D3 indicates that a moving object exists in the attention area, 1 is added to the detected frame count FN1 of that attention area; when it indicates that no moving object exists in the attention area, the detected frame count FN1 of the attention area is initialized to 0. The detected frame count FN1 thus corresponds to the period over which a moving object is continuously determined to exist in that area.
The area state estimation unit 30 then determines whether the detected frame count FN1 is larger than an entry determination value FNref11, and when an affirmative determination is made, transitions the area state from the absence state to the entry state.
In FIG. 12, "FN1 > 0" is shown directly above the arrow indicating the transition from the absence state to the noise state; this is the condition for that transition. That is, when it is determined in the absence state that the detected frame count FN1 is larger than zero, the area state is transitioned from the absence state to the noise state. In short, as soon as a moving object is detected in the attention area in the absence state, the area state is promptly transitioned to the noise state.
On the other hand, in the noise state, when it is determined that no moving object exists in the attention area, the area state estimation unit 30 may transition the area state of the attention area to the absence state. The area state can thereby be returned from the noise state to the absence state.
The number of frames can also be used for this transition condition. The area state estimation unit 30 counts, for each area, the number of frames FN2 of input images D1 over which it is determined that no moving object exists in the attention area (hereinafter also called the undetected frame count). For example, when an input area-unit moving body image D3 indicates that no moving object exists in the attention area, 1 is added to the undetected frame count FN2 of that attention area; when it is determined that a moving object exists in the attention area, the undetected frame count FN2 of the attention area is initialized to 0. The undetected frame count FN2 thus corresponds to the period over which it is continuously determined that no moving object exists in that area.
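The following Python sketch shows one plausible way of maintaining the two per-area counters; the names are hypothetical, and the mutual reset behavior follows the description above.

```python
# Hypothetical per-area frame counters: FN1 (detected) and FN2 (undetected).
def update_counters(fn1, fn2, presence):
    # presence: area -> bool, taken from the area-unit moving body image D3.
    for area, detected in presence.items():
        if detected:
            fn1[area] = fn1.get(area, 0) + 1  # consecutive frames with a moving object
            fn2[area] = 0                     # reset the undetected counter
        else:
            fn2[area] = fn2.get(area, 0) + 1  # consecutive frames without a moving object
            fn1[area] = 0                     # reset the detected counter
    return fn1, fn2
```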
The area state estimation unit 30 then determines whether, in the noise state, the undetected frame count FN2 is larger than zero, and when an affirmative determination is made, transitions the area state to the absence state.
Although a noise state is provided here to make the noise explicit, the noise state need not be provided. For example, in the absence state, the area state may be transitioned from the absence state to the entry state only when it is determined that a moving object has existed over a predetermined period.
Next, in the entry state, when it is determined that a moving object has existed in the attention area over a predetermined period, the area state estimation unit 30 transitions the area state of the attention area to the stay state. That is, when it is determined that a moving object exists in the attention area in every one of a predetermined number of sequentially input area-unit moving body images D3, the area state is transitioned to the stay state. In more detail, the area state estimation unit 30 determines whether, in the entry state, the detected frame count FN1 is larger than a stay determination value FNref12 (> FNref11), and when an affirmative determination is made, transitions the area state to the stay state.
As described above, in the present embodiment the area state is set to the entry state during the initial period in which a moving object begins to be detected, and to the stay state in the period after this initial period. That is, a moving object is detected not merely as information that a moving object exists, but with a distinction between the entry state and the stay state.
In the entry state, when it is determined that no moving object has existed in the attention area over a predetermined period, the area state estimation unit 30 transitions the area state of the attention area to the leaving state. That is, when it is determined that no moving object exists in the attention area in any of a predetermined number of sequentially input area-unit moving body images D3, the area state estimation unit 30 transitions the area state to the leaving state. In more detail, the area state estimation unit 30 determines whether the undetected frame count FN2 is larger than a leaving determination value FNref21, and when an affirmative determination is made, transitions the area state to the leaving state.
In the stay state, on the other hand, the area state estimation unit 30 transitions the area state of the attention area to the leaving state when the following two conditions are satisfied. The first condition is that it is determined that no moving object has existed in the attention area over a predetermined period; as described above, this is also a condition for the transition from the entry state to the leaving state.
The second condition is that the area state of at least one of the areas around the attention area (hereinafter called the surrounding areas) is the entry state. The area state estimation unit 30 determines in turn whether the area state of each surrounding area is the entry state, and determines that the second condition is satisfied when the area state of any one of the surrounding areas is the entry state. In the illustration of FIG. 12, the fact that the area state of a surrounding area is the entry state is represented by "FE = True".
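A minimal sketch of the second condition follows, assuming areas are laid out on a grid and indexed by (row, column) pairs; the default neighborhood is the eight immediately adjacent areas, and all names are hypothetical.

```python
ENTRY = "entry"

# True when any surrounding area of `area` is in the entry state (FE = True).
def surrounding_entry(area, states, radius=1):
    r, c = area
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if (dr, dc) == (0, 0):
                continue  # skip the attention area itself
            if states.get((r + dr, c + dc)) == ENTRY:
                return True  # second condition satisfied
    return False
```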
FIG. 13 is a diagram showing an example of how the area state transitions from the stay state to the leaving state. Here, each area is schematically shown as a rectangular area in the actual space, and the plurality of areas are arranged in a grid. When the imaging unit is, for example, an omnidirectional camera, these rectangular areas are curved in the input image D1 as in FIG. 2. In the illustration of FIG. 13, sixteen areas (4 × 4) are shown for simplicity. The numbers "0" and "1" shown in the areas indicate the moving-object detection results of the area presence/absence determination unit 20: "1" indicates that a moving object is determined to exist, and "0" indicates that no moving object is determined to exist. In FIG. 13, the area state is also indicated by the type of hatching in each area: blank indicates the absence state (including the noise state), horizontal-line hatching indicates the stay state, vertical-line hatching indicates the entry state, and diagonal-line hatching indicates the leaving state.
In FIG. 13, the temporal changes of the area-unit moving body image D3 and the area state image D4 are shown from left to right.
In the leftmost area-unit moving body image D3 of FIG. 13, a moving object is determined to exist only in a single area A1. In the corresponding area state image D4, the area state of the area A1 is the stay state, and the area states of the other areas are the absence state.
Now consider the case where the moving object moves, for example, to the area A2 on its right. As shown in the second area-unit moving body image D3 from the left, it is then determined that no moving object exists in the area A1 and that a moving object exists in the area A2. At this point the area state of the area A2 is the noise state.
When it continues to be determined that a moving object exists in the area A2, in other words when the detected frame count FN1 of the area A2 exceeds the entry determination value FNref11, the area state of the area A2 transitions to the entry state, as shown in the third area state image D4 from the left. The second condition is thereby satisfied for the area A1.
When it is also determined that no moving object has existed in the area A1 over the predetermined period, that is, when the first condition is satisfied as well, the area state of the area A1 transitions to the leaving state, as shown in the rightmost area state image D4.
This transition rule from the stay state to the leaving state suppresses the erroneous estimation described next. When a moving object in the attention area is blocked by an obstruction, the moving object is no longer detected in the attention area. In FIG. 2, for example, when the person 103 crouches down and is hidden by the desk 102, the person 103 is no longer detected as a moving object. In this case, since the moving object detection unit 10 does not detect the person 103 as a moving object, the area presence/absence determination unit 20 does not detect the person 103 as a moving object either. When the person 103 is not detected as a moving object over the predetermined period, the first condition is satisfied for the attention area containing the person 103. However, when the moving object is merely occluded in this way, the moving object (the person 103) does not enter the surrounding areas, so the second condition does not hold. The area state of the attention area therefore appropriately maintains the stay state.
If, for example, the area state were transitioned out of the stay state on the first condition alone, the area state would become the leaving state even though the moving object is merely hidden behind an obstruction. According to the present embodiment, such an erroneous estimation of the area state can be avoided.
In the entry state, on the other hand, the area state is transitioned to the leaving state using only the first condition, as described above (see FIG. 12). This processing takes into account that a moving object moves across a plurality of areas: when the attention area is in the entry state, the moving object may be in the middle of moving to another area, so only the first condition is adopted. Compared with the transition from the stay state to the leaving state, the area state can thereby be transitioned from the entry state to the leaving state more promptly; that is, the area state can be estimated with high responsiveness.
In the illustration of FIG. 12, the leaving determination value FNref21 used for the transition from the entry state to the leaving state and the leaving determination value FNref21 used for the transition from the stay state to the leaving state are equal to each other; however, they may be made different.
Next, in the leaving state, when it is determined that a moving object exists in the attention area, the area state estimation unit 30 transitions the area state to the stay state. More specifically, the detected frame count FN1 is compared with zero, and when the detected frame count FN1 is larger than zero, the area state estimation unit 30 transitions the area state to the stay state. This processing takes into account the possibility that a moving object returns to an area that was in the stay state: since a moving object may return to an area in the leaving state, the transition to the stay state is made correspondingly easier. The area state of the attention area can thereby be transitioned from the leaving state to the stay state relatively promptly; that is, the area state can be estimated with high responsiveness.
As the condition for the transition from the leaving state to the stay state, moving-object detection over a predetermined period (a period shorter than the period required for the transition from the absence state to the stay state) may also be adopted. For example, the area state may be transitioned from the leaving state to the stay state when the detected frame count FN1 is larger than a stay determination value FNref13 (< the stay determination value FNref12). This also allows the area state to be transitioned from the leaving state to the stay state relatively promptly.
In the illustration of FIG. 12, when a moving object is detected in the leaving state, the area state is transitioned from the leaving state to the stay state; however, the area state may instead be transitioned from the leaving state to the entry state.
In the leaving state, when it is determined that no moving object has existed in the attention area over a predetermined period, the area state estimation unit 30 transitions the area state of the attention area to the absence state. That is, when it is determined that no moving object exists in the attention area in any of a predetermined number of sequentially input area-unit moving body images D3, the area state estimation unit 30 transitions the area state to the absence state. In more detail, it is determined whether the undetected frame count FN2 is larger than an absence determination value FNref22 (> the leaving determination value FNref21), and when an affirmative determination is made, the area state estimation unit 30 transitions the area state to the absence state.
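Putting the transitions of FIG. 12 together, the following Python sketch advances one area's state given the counters FN1 and FN2 and the surrounding-entry flag FE. The threshold names mirror the determination values above, but their numeric values are illustrative placeholders, not values from this description.

```python
ABSENT, NOISE, ENTRY, STAY, LEAVE = "absent", "noise", "entry", "stay", "leave"

def step_state(state, fn1, fn2, fe,
               fnref11=5, fnref12=60, fnref21=15, fnref22=120):
    if state == ABSENT and fn1 > 0:
        return NOISE              # a detection appeared; treat as noise at first
    if state == NOISE:
        if fn1 > fnref11:
            return ENTRY          # sustained detection: a genuine entry
        if fn2 > 0:
            return ABSENT         # detection vanished: it was noise
    if state == ENTRY:
        if fn1 > fnref12:
            return STAY           # detection kept up: the object is staying
        if fn2 > fnref21:
            return LEAVE          # first condition alone suffices in the entry state
    if state == STAY and fn2 > fnref21 and fe:
        return LEAVE              # first condition AND a surrounding entry (FE = True)
    if state == LEAVE:
        if fn1 > 0:
            return STAY           # the moving object came back
        if fn2 > fnref22:
            return ABSENT
    return state
```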
FIG. 14 is a diagram schematically showing an example of the input image D1 and the area state image D4. In this input image D1, a person 103 is sitting slightly to the upper left of the center, and a person 104 moving to the right is shown at the lower right. Through the processing of the image processing unit 1, the area state of the area containing the person 103 becomes the stay state (horizontal-line hatching), and the area state of the area containing the person 104 becomes the entry state (vertical-line hatching).
As described above, the image processing unit 1 does not merely detect the presence or absence of a moving object; it estimates the entry state, the stay state, the leaving state, and the absence state as area states. Therefore, as described below, the external device 4 can perform finer control according to the area state.
<External device 4>
The area state image D4 output by the area state estimation unit 30 contains area state information for each area and is input to the external device 4 (see FIGS. 1 and 3). The external device 4 performs control according to the area state based on the area state image D4. The external device 4 is an environment control device that controls the environment of the areas (temperature, humidity, brightness, sound, display, and so on); for example, the external device 4 is an air conditioner or a lighting device. The external device 4 adjusts the spatial conditions of each area (brightness, temperature, humidity, and so on) according to the area state. Finer control can therefore be performed than in the prior art.
For example, the external device 4 has a plurality of lighting devices, one provided for each area. The external device 4, for example, controls the lighting device of an area whose area state is the stay state at the highest illuminance, controls the lighting devices of areas whose area state is the entry state or the leaving state at a lower illuminance, and controls the lighting devices of areas whose area state is the absence state at the lowest illuminance.
When the external device 4 is an air conditioner, for example, a flap that adjusts the air delivery direction is pointed toward areas in the stay state, and is not pointed toward areas whose area state is the entry state, the leaving state, or the absence state. Alternatively, when a plurality of flaps are provided, many flaps may be pointed toward areas in the stay state and a few flaps toward areas in the entry state or the leaving state.
Alternatively, when an air conditioner is provided for each area, the air conditioner of an area in the stay state may be operated at a desired target value (a temperature target value or a humidity target value), the air conditioners of areas in the entry state or the leaving state at smaller target values, and the air conditioners of areas in the absence state at the smallest target value. A small target value here means a target value whose difference from the current value (temperature or humidity) is small.
Stated more generally, the environment is controlled at the target value requiring the most power in areas whose area state is the stay state, at target values requiring less power in areas whose area state is the entry state or the leaving state, and at the target value requiring the least power in areas whose area state is the absence state. Effective environment control can thereby be performed with low power consumption.
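A minimal sketch of this generalization follows, mapping each area state to a relative power level; the level values and names are hypothetical placeholders for whatever target (illuminance, temperature, humidity) the environment control device actually uses.

```python
# Relative power levels per area state: stay > entry/leave > absent.
POWER_LEVEL = {"stay": 1.0, "entry": 0.5, "leave": 0.5, "noise": 0.0, "absent": 0.0}

def control_targets(states):
    # states: the area state image D4 as a dict mapping area -> state name.
    return {area: POWER_LEVEL[state] for area, state in states.items()}
```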
Since the area state is updated with high responsiveness as described above, the external device 4 can perform control with high responsiveness according to the actual state of the areas.
<Range of the surrounding areas>
In the example described above, the surrounding areas need only be areas existing in the region surrounding the attention area, and may be the eight areas immediately surrounding the attention area. However, when a moving object moves out of an area in the stay state, the moving object may be fast enough that none of those eight surrounding areas enters the entry state and an area one further out enters the entry state instead. To handle this, the surrounding areas may be set over a wider range; for example, the eight areas above and the sixteen areas immediately surrounding them may be adopted as the surrounding areas. In this way, even when the area next to the attention area does not enter the entry state but the area next to that one does, the area state of the attention area can be transitioned to the leaving state.
In short, areas that adjoin the attention area across one or more intervening areas may be included in the surrounding areas.
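In terms of the earlier surrounding_entry sketch, this wider range corresponds simply to a larger radius; the call below is a hypothetical usage example.

```python
# radius=1 checks the 8 immediate neighbors; radius=2 also checks the
# surrounding ring of 16 areas, matching the wider range described above.
fe = surrounding_entry((2, 3), states, radius=2)
```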
<Moving object detection unit and area presence/absence determination unit>
In the example described above, the portion consisting of the moving object detection unit 10 and the area presence/absence determination unit 20 can be understood as a specific example of an area-unit moving object detection unit that detects a moving object in each of a plurality of areas in the input image D1.
Although the example above distinguishes between image blocks and areas, the same extent as an image block may be adopted as an area. In this case, since the moving object detection unit 10 detects moving objects in area units, the operation of the area presence/absence determination unit 20 is unnecessary.
<Areas at the edge of the input image D1>
In the example described above, the area state is transitioned from the stay state to the leaving state when the first condition and the second condition are satisfied. However, when the attention area is located at an edge of the input image D1, the area state may be transitioned from the stay state to the leaving state without the second condition. When the attention area is located at an edge, a moving object can leave to the outside of the input image D1 without passing through another area, in which case no surrounding area enters the entry state.
Second embodiment.
FIG. 15 is a functional block diagram conceptually showing an example of the image processing unit 1 according to the second embodiment. In the second embodiment, the moving object detection unit 10 and the area presence/absence determination unit 20 receive the area state image D4 from the area state estimation unit 30, and detect moving objects with a detection sensitivity that depends on the area state.
Detection sensitivity here indicates how easily a moving object is detected; the higher the detection sensitivity, the more easily a moving object is detected. For example, the area presence/absence determination unit 20 detects moving objects in area units based on the comparison between the moving-object pixel count PN1 of each area and the reference value PNref, and this reference value PNref expresses the detection sensitivity. Since a moving object becomes harder to detect as the reference value PNref becomes larger, the detection sensitivity becomes lower as the reference value PNref becomes larger.
Also, for example, the moving object detection unit 10 detects moving objects in pixel units based on the difference between the background image D0 and the input image D1. More specifically, a moving object is detected when a parameter representing this difference (for example, the above-mentioned minimum value C) is larger than a reference value (for example, μ + kσ). This reference value also expresses the detection sensitivity, which becomes lower as the reference value becomes larger. In the moving-object determination formula, the average value μ and the standard deviation σ are values determined by the background image D0 of the background model, so a plurality of values may be adopted for the constant k.
A plurality of detection sensitivity values (the reference values described above) are recorded in advance, for example, in the storage unit 110, and the moving object detection unit 10 and the area presence/absence determination unit 20 each select a detection sensitivity according to the area state and detect moving objects based on it.
For example, the moving object detection unit 10 detects moving objects with a higher detection sensitivity for pixels contained in areas whose area state is the leaving state than for pixels contained in areas whose area state is the absence state (including the noise state), the entry state, or the stay state; that is, a smaller reference value is adopted. Since a moving object may return to an area whose area state is the leaving state, a moving object is relatively likely to appear there. A moving object can therefore be detected with high detection sensitivity when it is likely to appear in the attention area, so that moving objects can be detected more promptly and, in turn, the area state can be estimated with high responsiveness.
Moving objects may also be detected with a higher detection sensitivity for pixels contained in areas whose surrounding areas include one in the entry state (areas that have an area in the entry state around them) than for pixels contained in areas whose area state is the absence state (including the noise state), the entry state, or the stay state. When the area state of a surrounding area is the entry state, a moving object is likely to enter the area in question. This also allows moving objects to be detected more promptly.
Similarly, the area presence/absence determination unit 20 may detect moving objects with a higher detection sensitivity for areas whose area state is the leaving state; that is, a smaller reference value PNref may be adopted. This also allows moving objects to be detected more promptly.
Moving objects may also be detected with a higher detection sensitivity for areas whose surrounding areas include one in the entry state; that is, a smaller reference value PNref is adopted. This also allows moving objects to be detected more promptly.
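The following Python sketch selects per-area sensitivity parameters along these lines; the concrete values of k and of the PNref scale factor are hypothetical (smaller values mean higher sensitivity), and surrounding_entry is the hypothetical helper from the earlier sketch.

```python
# Choose the pixel-level constant k (threshold mu + k*sigma) and an
# area-level scale factor for PNref, per area, from the area state.
def select_sensitivity(area, states):
    if states.get(area) == "leave" or surrounding_entry(area, states):
        return {"k": 2.0, "pnref_scale": 0.5}  # higher sensitivity: smaller thresholds
    return {"k": 3.0, "pnref_scale": 1.0}      # default sensitivity
```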
In the second embodiment, both the moving object detection unit 10 and the area presence/absence determination unit 20 adopt a detection sensitivity according to the area state, but only one of them may do so.
Third embodiment.
The third embodiment assumes that image information from the input image D1 is registered in the background model as appropriate. This premise is described first. In the third embodiment, as shown in FIG. 16, a background model update unit 40 and a cache model storage unit 6 are further provided. The background model update unit 40 receives the input image D1 and the pixel-unit moving body image D2. Briefly, when an input image D1 containing pixels determined to be moving-object pixels does not change over a predetermined registration determination period, the background model update unit 40 determines that the input image D1 shows not a moving object but the background, and registers it in the background model.
A specific example of this background model update process is described in detail below. The background model update process uses the cache model storage unit 6, which stores a cache model. The cache model contains background image information candidates, which are candidates for the background image information registered in the background model. The cache model storage unit 6 is composed of rewritable storage means such as a flash memory, an EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD). In this example, the background model storage unit 3 and the cache model storage unit 6 are independent in terms of hardware, but part of the storage region of a single storage device may be used as the background model storage unit 3 and another part of that storage region as the cache model storage unit 6.
The background model update unit 40 first registers, in the cache model, the image information of an image block determined by the moving object detection unit 10 to be a moving-object image, as a background image information candidate. The background model update unit 40 then determines, based on the plurality of input images D1 input during the registration determination period, whether to register the background image information candidate stored in the cache model storage unit 6 in the background model as background image information. More specifically, when the background candidate image obtained from the input images D1 does not change over the registration determination period, the background image information candidate is determined to be background image information, and upon this determination the background model update unit 40 registers it in the background model as background image information.
FIG. 17 is a flowchart showing the background model update process, which is performed after step s12 of FIG. 4. As shown in FIG. 17, in step s151 the background model update unit 40 determines whether the attention image block in the processing-target input image D1 input in step s11 was determined by the moving object detection unit 10 to be a moving-object image. When it is determined in step s151 that the attention image block was not determined to be a moving-object image, that is, when the image information of the attention image block is determined to match the background image information of a corresponding codeword CW in the background model, the background model update unit 40 executes step s152.
In step s152, the background model update unit 40 changes, to the current time, the latest match time Te of the codeword CW in the background model that contains the background image information determined to match the image information of the attention image block.
On the other hand, when it is determined in step s151 that the attention image block was determined by the moving object detection unit 10 to be a moving-object image, the background model update unit 40 executes step s153, in which the cache model is updated. Specifically, when the image information of the attention image block is not contained in any corresponding codeword CW in the cache model in the cache model storage unit 6, the background model update unit 40 generates a codeword CW containing that image information as a background image information candidate and registers it in the corresponding codebook CB in the cache model. Besides the image information (the background image information candidate), this codeword CW contains a latest match time Te and a codeword generation time Ti; the latest match time Te of a codeword CW generated in step s153 is provisionally set to the same time as the codeword generation time Ti. When the image information of the attention image block is contained in a corresponding codeword CW in the cache model in the cache model storage unit 6, that is, when the cache model contains a corresponding codeword CW whose background image information candidate matches the image information of the attention image block, the background model update unit 40 changes the latest match time Te in that codeword CW in the cache model to the current time.
Thus, in step s153, either a codeword CW is added to the cache model or the latest match time Te of a codeword CW in the cache model is updated.
In step s153, when no codebook CB corresponding to the attention image block is registered in the cache model in the cache model storage unit 6, the background model update unit 40 generates a codeword CW containing the image information of the attention image block as a background image information candidate, generates a codebook CB containing that codeword CW, and registers it in the cache model.
When step s152 or step s153 has been executed, in step s154 the background model update unit 40 determines whether all image blocks have been set as the attention image block. When it is determined in step s154 that an unprocessed image block exists, the background model update unit 40 executes step s151 and the subsequent steps with an image block not yet processed as the new attention image block. On the other hand, when it is determined in step s154 that processing has been performed for all image blocks, the background model update unit 40 executes step s155.
In step s155, codewords CW in the cache model whose latest match time Te has not been updated for a predetermined period are deleted. That is, when the image information contained in a codeword CW in the cache model has not matched the image information acquired from the input images D1 for a certain period, that codeword CW is deleted. When the image information contained in a codeword CW is background image information, that is, image information acquired from an image showing the background contained in the input images D1, the latest match time Te in that codeword CW is updated frequently. Image information contained in a codeword CW whose latest match time Te has not been updated for the predetermined period can therefore be considered highly likely to be image information acquired from a moving-object image contained in the input images D1. By deleting from the cache model the codewords CW whose latest match time Te has not been updated for the predetermined period, the image information of moving-object images is deleted from the cache model. This predetermined period is hereinafter sometimes called the deletion determination period. The deletion determination period is set in advance so as to distinguish changes in image information caused by changes in brightness, such as changes in sunlight or illumination, and by changes in the environment, such as the placement of a poster or a rearrangement of desks, from changes in image information caused by the movement of moving objects such as the people to be detected. For example, when the imaging frame rate of the imaging unit that captures the input images D1 is 30 fps, the deletion determination period is set to a period during which, for example, several tens to several hundreds of frames of input images D1 are input.
When the codewords CW whose latest match time Te has not been updated for the deletion determination period have been deleted from the cache model in step s155, the background model update unit 40 executes step s156. In step s156, the background model update unit 40 identifies, among the codewords CW registered in the cache model, those for which the registration determination period has elapsed since their registration in the cache model. Since a codeword CW is registered in the cache model immediately after it is generated, the codeword generation time Ti contained in the codeword CW can be used as the time at which the codeword CW was registered in the cache model.
The registration determination period is set to a value larger than the deletion determination period, for example several times larger. In the present embodiment, the registration determination period is expressed as a number of frames; if the registration determination period is, for example, "500", it is the period during which 500 frames of input images D1 are input.
When step s156 has been executed, in step s157 the background model update unit 40 performs a background model registration process, in which the codewords CW identified in step s156 are registered in the background model in the background model storage unit 3.
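A minimal sketch of steps s155 to s157 follows, assuming a hypothetical codeword record holding a latest match time te and a generation time ti, timestamps counted in frames, and the deletion and registration periods passed as parameters (the deletion period value is illustrative; 500 frames matches the example above).

```python
from dataclasses import dataclass

@dataclass
class CodeWord:
    image_info: bytes  # background image information candidate
    te: int            # latest match time Te (frame index)
    ti: int            # codeword generation time Ti (frame index)

def maintain_cache(cache, background_model, now, delete_period=100, register_period=500):
    # cache / background_model: dict mapping image block -> list of CodeWord.
    for block, codewords in cache.items():
        # Step s155: drop codewords not matched within the deletion determination period.
        codewords[:] = [cw for cw in codewords if now - cw.te <= delete_period]
        # Steps s156/s157: promote codewords that survived the registration
        # determination period into the background model.
        matured = [cw for cw in codewords if now - cw.ti > register_period]
        background_model.setdefault(block, []).extend(matured)
        codewords[:] = [cw for cw in codewords if now - cw.ti <= register_period]
```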
As can be understood from the above description, in the present embodiment the background model update unit 40 may delete a codeword CW in the cache model before the registration determination period has elapsed since its registration in the cache model. That the background model update unit 40 deletes a codeword CW (background image information candidate) in the cache model before the registration determination period elapses means that the background model update unit 40 has determined, based on the plurality of input images D1 input during the registration determination period, not to register that codeword CW (background image information candidate) in the background model.
In the present embodiment, the background model update unit 40 may also register a codeword CW in the cache model in the background model without deleting it before the registration determination period has elapsed since its registration in the cache model. That the background model update unit 40 registers a codeword CW (background image information candidate) in the background model without deleting it before the registration determination period elapses means that the background model update unit 40 has determined, based on the plurality of input images D1 input during the registration determination period, to register that codeword CW (background image information candidate) in the background model.
Since the background model update unit 40 thus determines, based on the plurality of input images D1 input during the registration determination period, whether to register the background image information candidates registered in the cache model in the background model as background image information, the image information of an image block erroneously determined by the moving object detection unit 10 to be a moving-object image can be registered in the background model as background image information. The background model can therefore be updated appropriately, and the accuracy of moving-object detection by the moving object detection unit 10 improves.
According to the background model update process described above, when, for example, a person (a moving object) enters a certain area carrying a belonging and then places the belonging in that area and walks away, the image information containing the belonging is eventually registered in the background model.
FIG. 18 shows an example of a series of area-unit moving body images D3 and area state images D4 in the case where a person places a belonging and walks away. Initially, both the person and the belonging exist in the area A1 for a relatively long period. Accordingly, in the illustration of FIG. 18, a moving object is initially detected in the area A1 ("1" is shown in the area A1), and the area state of the area A1 is the stay state (horizontal-line hatching).
Next, when the person places the belonging in the area A1 and moves to the area A2, the belonging is detected as a moving object in the area A1 and the person is detected as a moving object in the area A2. Accordingly, in the second area-unit moving body image D3 from the left in FIG. 18, "1" is shown in the areas A1 and A2. When the person is detected as a moving object in the area A2 over the predetermined time, the area state of the area A2 becomes the entry state (vertical-line hatching).
When the person subsequently moves to the area A3 on the right, a moving object is no longer detected in the area A2 and a moving object is detected in the area A3. As this state continues, the area states of the areas A2 and A3 become the leaving state and the entry state, respectively. Since no moving object is detected in the area A2 thereafter, its area state eventually becomes the absence state.
When the person moves further to the right, a moving object is no longer detected in the area A3 either, and as this state continues, the area state of the area A3 passes through the leaving state and becomes the absence state. This state is shown in the second area state image D4 from the right in FIG. 18.
Meanwhile, if the image information of the image blocks belonging to area A1 (more precisely, those of its image blocks judged to be moving object images) does not change over the registration determination period, the background model update unit 40 registers that image information in the background model. The input image D1 and the background model then no longer differ in those image blocks, so the belonging is no longer detected as a moving object in area A1. This is indicated by the "0" in area A1 of the rightmost area unit moving object image D3 in FIG. 18.
In the example of FIG. 18, however, none of the areas surrounding area A1 is in the entry state, so the second condition is not satisfied for area A1. The area state of area A1 therefore remains in the stay state. This is indicated by the horizontal hatching of area A1 in the rightmost area state image D4 in FIG. 18.
The third embodiment therefore aims, when image information representing a moving object is registered in the background model, to appropriately transition the area state of the area containing that moving object from the stay state to the leave state.
When the area state estimation unit 30 transitions the area state of an attention area to the entry state, it holds information indicating that the attention area has entered the entry state (hereinafter called an entry flag). More specifically, the entry flag is recorded in, for example, the storage unit 110. The entry flag is held for a holding period longer than the registration determination period and is erased when the holding period elapses. An entry flag is held per area.
The area state estimation unit 30 then transitions the area state from the stay state to the leave state not only when both the first and second conditions are satisfied, but also when another condition is satisfied, namely that both the first condition and a third condition, that an entry flag for a surrounding area is held, are satisfied.
In terms of the example of FIG. 18, when the area state of area A2 becomes the entry state (the second state from the left in FIG. 18), an entry flag is held for area A2. This entry flag is held for a holding period, longer than the registration determination period, starting when area A2 enters the entry state. Accordingly, even at the time the image information of the image blocks belonging to area A1 is registered in the background model (see the rightmost state in FIG. 18), the entry flag for area A2 is still held rather than erased. The third condition is thereby satisfied. When no moving object is detected in area A1, so that the first condition is also satisfied, the area state estimation unit 30 transitions the area state of area A1 from the stay state to the leave state.
Thus, when the image information of image blocks belonging to area A1, which is in the stay state, is registered in the background model, area A1 can be appropriately transitioned from the stay state to the leave state.
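The transition rule using the entry flag can be sketched as follows. This is a minimal sketch assuming a dict-based flag store keyed by area id and frame counts as time; the names and period values are illustrative assumptions, not the actual implementation of the area state estimation unit 30.

```python
HOLD_PERIOD = 150  # frames; assumed holding period, longer than the registration determination period

entry_flags = {}   # area id -> frame at which that area entered the entry state


def hold_entry_flag(area, frame):
    entry_flags[area] = frame


def flag_held(area, frame):
    """True while the entry flag for `area` is within its holding period."""
    start = entry_flags.get(area)
    return start is not None and frame - start < HOLD_PERIOD


def stay_to_leave(neighbors, no_motion_frames, third_period, frame):
    """Decide the stay -> leave transition for an attention area.

    First condition: no moving object detected over the third period.
    Third condition: an entry flag is still held for some surrounding area.
    """
    first = no_motion_frames >= third_period
    third = any(flag_held(n, frame) for n in neighbors)
    return first and third
```

In the scenario of FIG. 18, for instance, `hold_entry_flag("A2", f)` would be called when area A2 enters the entry state, and `stay_to_leave` for area A1 would later return True once the belonging has been absorbed into the background model.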
In the example above, the area state is transitioned from the stay state to the leave state when both the first and third conditions are satisfied in the stay state, but it may instead be transitioned from the stay state to the absent state. This brings the area state to the actual situation promptly; in other words, the area state can be estimated with high responsiveness.
Note that the problem described above arises precisely because the second condition is adopted as a condition for the transition from the stay state to the leave state, and the entry flag is adopted to solve it. Put the other way around, the problem does not arise when no area state ever becomes the stay state. In that case, even if an area's state becomes the entry state, there is no need to hold an entry flag for that area. For example, when a moving object merely crosses the imaged region, no area is expected to reach the stay state, so no entry flag is required.
Accordingly, a condition is added for holding the entry flag: the area state of a surrounding area (for example, one of the eight areas immediately surrounding the area in question) must be the stay state. Seen from an area in the stay state, entry flags are then held for its surrounding areas. Therefore, even if the image information of an area in the stay state is registered in the background model, the third condition is satisfied, and the area state of that area can be appropriately transitioned to the leave state (or the absent state).
On the other hand, as long as no two adjacent areas take on the stay state and the entry state, respectively, while a moving object is merely moving, the above condition prevents an entry flag from being held for mere movement. To achieve this, the periods required for the area state transitions must be tuned. More specifically, the entry-to-leave period required to transition from the entry state to the leave state (the period corresponding to the leave determination value FNref21) must be set shorter than the entry-to-stay period required to transition from the entry state to the stay state (the entry determination value FNref11). If these are set the other way around, an entry flag is held even for the mere movement of a moving object, as follows. Consider, for example, a moving object moving to the right. In this case, first and second areas adjacent in the left-right direction may both come to be in the entry state. With the reversed setting, the second area, farther from the movement origin, could reach the stay state before the first area, nearer the movement origin, transitions from the entry state to the leave state, that is, while the first area is still in the entry state. In that case an entry flag would be held for the first area, nearer the movement origin. In other words, an entry flag would be held even though the moving object is merely moving. Setting the periods as described above avoids this situation.
In short, an entry flag should be held for an area (a surrounding area) when the area state of that area transitions to the entry state and, from before that transition (at least one frame earlier), the area state of an area located around it (the attention area) has been maintained in the stay state. In other words, the entry flag should be held for the surrounding area on the condition that its area state transitions to the entry state while the area state of the attention area is in the stay state. Holding unnecessary entry flags is thereby avoided.
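The flag-holding condition just summarized, together with the period constraint from the preceding paragraph, can be sketched as follows; the function and state names and the concrete period values are assumptions for illustration.

```python
# Assumed example period values. The entry -> leave period (FNref21) is set
# shorter than the entry -> stay period (FNref11); otherwise an entry flag
# could be held during mere movement, as explained above.
FNREF11 = 30  # frames for the entry state to become the stay state
FNREF21 = 10  # motionless frames for the entry state to become the leave state
assert FNREF21 < FNREF11


def should_hold_entry_flag(prev_states, new_states, area, neighbors):
    """prev_states/new_states map area ids to 'absent'/'entry'/'stay'/'leave'."""
    # The area must transition to the entry state on this frame...
    entered_now = prev_states[area] != "entry" and new_states[area] == "entry"
    # ...while some neighboring area was already in the stay state at least
    # one frame before the transition and remains in it.
    neighbor_staying = any(
        prev_states[n] == "stay" and new_states[n] == "stay" for n in neighbors
    )
    return entered_now and neighbor_staying
```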
Fourth embodiment.
In the fourth embodiment, as shown in FIG. 19, a determination period adjustment unit 50 is further provided. The determination period adjustment unit 50 adjusts the registration determination period according to the area state and the presence or absence of the entry flag. Briefly, for an area in which an object has been left behind, the determination period adjustment unit 50 judges that the image information represents the background and sets a short registration determination period for that area.
As described in the third embodiment, the area state estimation unit 30 holds the entry flag. That is, when the area state of a surrounding area transitions to the entry state and, from before that transition (at least one frame earlier), the area state of the attention area has been maintained in the stay state, an entry flag is held for the surrounding area. Holding unnecessary entry flags is thereby avoided; that is, no entry flag is held when a moving object is merely moving. Conversely, when an area is in the stay state because an object has been left behind there, entry flags are held for the areas around it.
Accordingly, when the area state of the attention area is the stay state and an entry flag is held for a surrounding area, the determination period adjustment unit 50 judges that the attention area contains a left-behind object and sets the registration determination period of the image blocks belonging to the attention area shorter than that of the image blocks belonging to the other areas. This registration determination period is output to the background model update unit 40.
As described above, in the fourth embodiment the registration determination period is set short for image information belonging to an area likely to contain a left-behind object, so that this image information is registered in the background model early. The area state can thereby be estimated with high responsiveness.
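A sketch of this adjustment is shown below; the function name, state labels, and period values are assumptions for illustration, not the actual interface of the determination period adjustment unit 50.

```python
DEFAULT_PERIOD = 100  # frames; assumed default registration determination period
SHORT_PERIOD = 20     # frames; assumed shortened period for left-behind objects


def registration_period(area_state, neighbors, flag_held):
    """Return the registration determination period for one area.

    area_state: this area's state ('absent'/'entry'/'stay'/'leave')
    neighbors:  ids of the surrounding areas
    flag_held:  callable returning True if an entry flag is held for an area
    """
    # An area in the stay state whose surroundings hold an entry flag is
    # judged to contain a left-behind object, so its image information is
    # registered as background early.
    if area_state == "stay" and any(flag_held(n) for n in neighbors):
        return SHORT_PERIOD
    return DEFAULT_PERIOD
```

The shortened period would then be passed to the background model update unit 40 in place of the default when it evaluates candidates for the corresponding image blocks.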
The image processing unit 1 has been described above in detail, but the above description is illustrative in all respects, and the present invention is not limited to it. The various modifications described above can be combined as long as they do not contradict one another, and it is understood that countless modifications not illustrated here can be envisaged without departing from the scope of the present invention.
DESCRIPTION OF REFERENCE SYMBOLS
1 Area state estimation device
10 Moving object detection unit
20 Area presence/absence determination unit
30 Area state estimation unit
40 Background model update unit
50 Determination period adjustment unit

Claims (11)

1. An area state estimation device comprising:
an area unit moving object detection unit that detects a moving object in each of a plurality of areas in an input image, using the input image and a background image; and
an area state estimation unit that estimates an area state of each of the areas,
wherein the area state estimation unit
(i) transitions the area state of an attention area, which is one of the areas, from an absent state to an entry state when a moving object is detected in the attention area over a first period,
(ii) transitions the area state of the attention area to a stay state when a moving object is detected in the attention area over a second period in the entry state,
(iii) transitions the area state of the attention area to a leave state when no moving object is detected in the attention area over a third period in the stay state, and
(iv) transitions the area state of the attention area to the absent state when no moving object is detected in the attention area over a fourth period in the leave state.
2. The area state estimation device according to claim 1, wherein the area state estimation unit (iii') transitions the area state of the attention area from the stay state to the leave state when, in the stay state, no moving object is detected in the attention area over the third period and the area state of a surrounding area located around the attention area is the entry state.
3. The area state estimation device according to claim 2, wherein the area state estimation unit (v) transitions the area state from the entry state to the leave state on the sole condition that no moving object is detected in the attention area over a fifth period in the entry state.
4. The area state estimation device according to claim 1, wherein the area state estimation unit (vi) transitions the area state of the attention area from the leave state to the stay state or the entry state when, in the leave state, a moving object is detected in the attention area over a sixth period shorter than the sum of the first period and the second period.
5. The area state estimation device according to claim 1, wherein the area unit moving object detection unit detects a moving object in the attention area with a detection sensitivity according to the area state.
6. The area state estimation device according to claim 5, wherein the area unit moving object detection unit sets the detection sensitivity in an area whose area state is the leave state higher than the detection sensitivity in an area whose area state is the absent state, the entry state, or the stay state.
7. The area state estimation device according to claim 5, wherein the area unit moving object detection unit sets the detection sensitivity in an area that has, among the areas surrounding it, an area whose area state is the entry state higher than the detection sensitivity in an area whose area state is the absent state, the entry state, or the stay state.
8. The area state estimation device according to claim 2, further comprising a background model update unit that registers image information obtained from the input image as background image information of the background image when the change in the image information over a registration determination period is smaller than a reference value,
wherein the area state estimation unit
(I) holds, when the area state of one of the areas transitions to the entry state, an entry flag indicating that transition over a holding period longer than the registration determination period, and
(II) transitions the area state of the attention area to the leave state or the absent state when, in the stay state, no moving object is detected in the attention area over the third period and the entry flag for the surrounding area is held.
9. The area state estimation device according to claim 8, wherein the area state estimation unit (I') holds the entry flag for the surrounding area when the area state of the surrounding area transitions to the entry state and, from before that transition, the area state of the attention area has been maintained in the stay state,
the area state estimation device further comprising a determination period adjustment unit that sets the registration determination period of the image information belonging to the attention area shorter than the registration determination period of the image information belonging to the other areas.
10. An environment control system comprising:
the area state estimation device according to claim 1; and
an environment control device that controls an environment of each of the areas according to the area state.
11. An area state estimation method comprising the steps of:
(a) detecting a moving object in each of a plurality of areas in an input image, using the input image and a background image;
(b) transitioning the area state of an attention area, which is one of the areas, from an absent state to an entry state when a moving object is detected in the attention area over a first period;
(c) transitioning the area state of the attention area to a stay state when a moving object is detected in the attention area over a second period in the entry state;
(d) transitioning the area state of the attention area to a leave state when no moving object is detected in the attention area over a third period in the stay state; and
(f) transitioning the area state of the attention area to the absent state when no moving object is detected in the attention area over a fourth period in the leave state.
PCT/JP2015/057099 2014-03-26 2015-03-11 Area state estimation device, area state estimation method, and environment control system WO2015146582A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-063118 2014-03-26
JP2014063118A JP6396051B2 (en) 2014-03-26 2014-03-26 Area state estimation device, area state estimation method, program, and environment control system

Publications (1)

Publication Number Publication Date
WO2015146582A1 true WO2015146582A1 (en) 2015-10-01

Family

ID=54195106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/057099 WO2015146582A1 (en) 2014-03-26 2015-03-11 Area state estimation device, area state estimation method, and environment control system

Country Status (2)

Country Link
JP (1) JP6396051B2 (en)
WO (1) WO2015146582A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7005285B2 (en) * 2017-11-01 2022-01-21 株式会社東芝 Image sensors, sensing methods, control systems and programs
JP6948759B2 (en) * 2018-08-14 2021-10-13 Kddi株式会社 Devices, programs and methods for determining the presence or absence of a moving object using sensors with different detection methods

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008077361A (en) * 2006-09-20 2008-04-03 Kanazawa Inst Of Technology Monitoring method and monitoring system
JP2010256045A (en) * 2009-04-21 2010-11-11 Taisei Corp Wide range/high accuracy human body detection sensor
JP2012209214A (en) * 2011-03-30 2012-10-25 Panasonic Corp Lighting system
JP2013096947A (en) * 2011-11-04 2013-05-20 Panasonic Corp Human sensor and load control system

Also Published As

Publication number Publication date
JP2015184233A (en) 2015-10-22
JP6396051B2 (en) 2018-09-26

Similar Documents

Publication Publication Date Title
TWI759286B (en) System and method for training object classifier by machine learning
JP6509275B2 (en) Method and apparatus for updating a background model used for image background subtraction
US20230394808A1 (en) Image processing system, image processing method, and program storage medium
JP5675233B2 (en) Information processing apparatus, recognition method thereof, and program
JP6482195B2 (en) Image recognition apparatus, image recognition method, and program
CN105404884B (en) Image analysis method
JP2007323572A (en) Object detector, object detection method, and object detection program
EP3092619A1 (en) Information processing apparatus and information processing method
JP6024658B2 (en) Object detection apparatus, object detection method, and program
CN112703533A (en) Object tracking
Setitra et al. Background subtraction algorithms with post-processing: A review
JP6652051B2 (en) Detection system, detection method and program
JP2020149642A (en) Object tracking device and object tracking method
JP6809613B2 (en) Image foreground detection device, detection method and electronic equipment
JPWO2018061976A1 (en) Image processing device
WO2015146582A1 (en) Area state estimation device, area state estimation method, and environment control system
US9824462B2 (en) Method for detecting object and object detecting apparatus
JP6326622B2 (en) Human detection device
JP6177708B2 (en) Moving object detection device, moving object detection method, and control program
Sidnev et al. Efficient camera tampering detection with automatic parameter calibration
JP5241687B2 (en) Object detection apparatus and object detection program
Bien et al. Detection and recognition of indoor smoking events
JP2012104872A (en) Image processing unit, and image processing program
JP6162492B2 (en) Moving object detection device, moving object detection method, and control program
JP7435298B2 (en) Object detection device and object detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15768949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15768949

Country of ref document: EP

Kind code of ref document: A1