WO2015146582A1 - Area state estimation device, area state estimation method, and environment control system

Area state estimation device, area state estimation method, and environment control system

Info

Publication number
WO2015146582A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
state
moving object
period
image
Prior art date
Application number
PCT/JP2015/057099
Other languages
English (en)
Japanese (ja)
Inventor
健太 西行
Original Assignee
株式会社メガチップス
Application filed by 株式会社メガチップス
Publication of WO2015146582A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers

Definitions

  • the present invention relates to an area state estimation device, an area state estimation method, and an environment control system.
  • Patent Documents 1 and 2 and Non-Patent Document 1 disclose techniques related to the present invention.
  • an object of the present application is to provide an area state estimation device that contributes to finer control over an area by estimating the state of a moving object for each area.
  • A first aspect of the area state estimation device includes an area unit moving object detection unit that detects a moving object in each of a plurality of areas in an input image using the input image and a background image, and an area state estimation unit that estimates an area state of each area, wherein (i) when a moving object is detected over a first period in an attention area that is one of the areas, the area state estimation unit transitions the area state of the attention area from the absence state to the entry state.
  • A second aspect of the area state estimation device is the area state estimation device according to the first aspect, wherein (iii′) the area state estimation unit transitions the area state of the attention area from the stay state to the leaving state when no moving object is detected in the attention area over the third period in the stay state.
  • A third aspect of the area state estimation device is the area state estimation device according to the second aspect, wherein the area state estimation unit transitions the area state from the entry state to the leaving state on the sole condition that no moving object is detected in the attention area over a fifth period in the entry state.
  • A fourth aspect of the area state estimation device is the area state estimation device according to any one of the first to third aspects, wherein (vi) the area state estimation unit transitions the area state of the attention area from the leaving state to the stay state or the entry state when a moving object is detected in the attention area over a sixth period shorter than the sum of the first period and the second period.
  • A fifth aspect of the area state estimation device is the area state estimation device according to any one of the first to fourth aspects, wherein the area unit moving object detection unit detects a moving object in the attention area with a detection sensitivity corresponding to the area state.
  • A sixth aspect of the area state estimation device is the area state estimation device according to the fifth aspect, wherein the area unit moving object detection unit sets the detection sensitivity in an area whose area state is the leaving state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
  • A seventh aspect of the area state estimation device is the area state estimation device according to the fifth or sixth aspect, wherein the area unit moving object detection unit sets the detection sensitivity in areas surrounding an area whose area state is the entry state higher than the detection sensitivity in an area whose area state is the absence state, the entry state, or the stay state.
  • An eighth aspect of the area state estimation device is the area state estimation device according to any one of the second to seventh aspects, further including a background model update unit that registers image information obtained from the input image as background image information of the background image when a change in the image information over a registration determination period is smaller than a reference value, wherein the area state estimation unit (I) holds, when the area state of an area has transitioned to the entry state, an entry flag indicating that fact over a holding period longer than the registration determination period, and (II) transitions the area state of the attention area to the leaving state or the absence state when, in the stay state, no moving object is detected in the attention area over the third period and the entry flag is held for a surrounding area.
  • A ninth aspect of the area state estimation device is the area state estimation device according to the eighth aspect, further including a determination period adjustment unit that, (I′) when the entry flag for the surrounding area is held and the area state of the attention area maintained the stay state before the area state of the surrounding area transitioned to the entry state, sets the registration determination period of the image information belonging to the attention area shorter than the registration determination period of the image information belonging to other areas.
  • The aspect of the environment control system according to the present invention includes the area state estimation device according to any one of the first to ninth aspects, and an environment control device that controls the environment of each area according to its area state.
  • The area state estimation method includes (a) a step of detecting a moving object in each of a plurality of areas in an input image using the input image and a background image, (b) a step of transitioning the area state of an attention area, which is one of the areas, from the absence state to the entry state when a moving object is detected in the attention area over a first period, (c) a step of transitioning the area state of the attention area to the stay state when a moving object is detected in the attention area over a second period in the entry state, (d) a step of transitioning the area state of the attention area to the leaving state when no moving object is detected in the attention area over a third period in the stay state, and (f) a step of transitioning the area state of the attention area to the absence state when no moving object is detected in the attention area over a fourth period in the leaving state.
  • In the area state estimation method, an area state estimation device that estimates the area states of a plurality of areas in an input image executes (a) a step of detecting a moving object in each of the plurality of areas in the input image using the input image and a background image, (b) a step of transitioning the area state of an attention area, which is one of the areas, from the absence state to the entry state when a moving object is detected in the attention area over a first period, (c) a step of transitioning the area state of the attention area to the stay state when a moving object is detected in the attention area over a second period in the entry state, and (d) a step of transitioning the area state of the attention area to the leaving state when no moving object is detected in the attention area over a third period in the stay state.
  • According to these aspects, the presence of a moving object is detected while distinguishing between the entry state and the stay state, and the absence of a moving object is detected while distinguishing between the leaving state and the absence state. Therefore, a more detailed state of the moving object can be known.
  • FIG. 1 is a block diagram conceptually showing an example of an image processing unit (area state estimation device) 1 according to the first embodiment.
  • An input image D1 is input from the image input unit 2 to the image processing unit 1.
  • the image input unit 2 inputs an input image D1 input from the outside to the image processing unit 1.
  • the input image D1 is a captured image captured by an imaging unit (not shown).
  • FIG. 2 schematically shows an example of the input image D1.
  • a room appears in the input image D1.
  • The input image D1 in FIG. 2 is an example of the case where the imaging unit is an omnidirectional camera. Therefore, objects appear increasingly curved as their distance from the center of the input image D1 increases.
  • For example, the wall 101 located at the edge of the input image D1 is greatly curved, although in reality the wall 101 is planar and not curved. Also, a region farther from the center of the input image D1 appears smaller in the input image D1.
  • the person 103 is sitting on a chair (not shown) provided in a pair with the desk 102, and the upper body of the person 103 appears.
  • the lower body and chair of the person 103 are hidden by the desk 102 and do not appear in the input image D1.
  • the image processing unit 1 performs various image processing on the input image D1 input from the image input unit 2.
  • the image processing unit 1 includes a CPU 100 and a storage unit 110.
  • the storage unit 110 is configured by a non-transitory recording medium that can be read by the CPU 100, such as a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • the storage unit 110 stores a control program 111. When the CPU 100 executes the control program 111 in the storage unit 110, a functional block described later is formed in the image processing unit 1.
  • the storage unit 110 may include a computer-readable non-transitory recording medium other than the ROM and RAM.
  • the storage unit 110 may include, for example, a small hard disk drive and an SSD (Solid State Drive).
  • FIG. 4 is a flowchart illustrating an example of a schematic operation of the image processing unit 1.
  • a series of processing from steps s12 to s14 is executed with the input image D1 as a processing target. Since the input image D1 is sequentially input every predetermined time, the operation of FIG. 4 is repeatedly executed.
  • the moving object detection unit 10 inputs the input image D1 and the background image D0 (see also FIG. 3).
  • The background image D0 is the same as the input image D1 in that it is an image captured by the imaging unit; however, the background image D0 does not include a moving object (for example, the person 103).
  • the background image D0 is captured in advance and stored in the background model storage unit 3, for example.
  • The background model storage unit 3 includes a rewritable non-transitory recording medium such as a flash memory, an EPROM (Erasable Programmable Read Only Memory), or a hard disk (HD).
  • the moving object detection unit 10 detects a moving object for each pixel using the background image D0 and the input image D1. More specifically, a pixel having a large difference between the input image D1 and the background image D0 is determined as a pixel representing a moving object (hereinafter also referred to as a moving object pixel), and the moving object is detected.
  • As the moving object detection method, any method such as CodeBook, Colinear, or statistical background subtraction can be employed.
  • a pixel value difference between the input image D1 and the background image D0 is calculated for each pixel, and a pixel whose difference is larger than a predetermined reference value is detected as a moving object pixel.
  • the difference between the input image D1 and the background image D0 can be calculated for each predetermined image block.
  • the image block is obtained by dividing the input image D1 (background image D0) into a plurality of regions.
  • For example, each image block of the input image D1 and the background image D0 consists of a total of nine pixels, 3 × 3 in the vertical and horizontal directions.
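  • The following Python sketch illustrates this simple difference-based detection; the threshold value, block size, and function names are illustrative assumptions, not values from the publication.

```python
import numpy as np

def detect_moving_pixels(input_d1: np.ndarray, background_d0: np.ndarray,
                         ref: float = 30.0) -> np.ndarray:
    """Per-pixel detection: mark pixels whose difference from the
    background image D0 exceeds a reference value as moving object pixels."""
    diff = np.abs(input_d1.astype(np.int32) - background_d0.astype(np.int32))
    if diff.ndim == 3:              # RGB input: sum channel differences per pixel
        diff = diff.sum(axis=2)
    return diff > ref               # pixel unit moving body image D2 (binary)

def split_into_blocks(image: np.ndarray, block: int = 3):
    """Divide an image into block x block image blocks (3 x 3 = 9 pixels each)."""
    h, w = image.shape[:2]
    return [image[y:y + block, x:x + block]
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
```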
  • A specific example of this method will be described below.
  • FIG. 5 is a flowchart showing an example of the moving object detection process.
  • In step s121, the moving object detection unit 10 performs moving object detection on a certain image block (hereinafter referred to as the target image block) in the processing target input image D1 input in step s11 described above. That is, the moving object detection unit 10 detects whether a moving object appears in the target image block. More specifically, the moving object detection unit 10 determines whether the image information of the target image block of the input image D1 and the image information of the target image block of the background image D0 (hereinafter also referred to as background image information) match each other.
  • When they do not match each other, the target image block is determined to be an image showing a moving object (hereinafter also referred to as a moving object image).
  • A specific method for determining whether the image information and the background image information of the target image block match each other will be described later.
  • Next, the moving object detection unit 10 stores the result of the moving object detection performed in step s121.
  • In step s123, the moving object detection unit 10 determines whether or not processing has been performed for all image blocks, that is, whether or not every image block has been set as the target image block. If the determination in step s123 finds an unprocessed image block, the moving object detection unit 10 sets that image block as the new target image block and executes step s121 and subsequent steps again.
  • If the result of the determination in step s123 is that processing has been performed for all image blocks, that is, if moving object detection has been completed for all regions of the input image D1, the moving object detection unit 10 ends the moving object detection process.
  • a specific method of moving object detection in step s121 will be described.
  • a plurality of background images D0 are used.
  • a plurality of images having different brightnesses are recorded in the background model storage unit 3 as background images.
  • a plurality of images to be registered as background images are also recorded in the background model storage unit 3 as background images D0.
  • each codebook CB includes three codewords CW (CW1 to CW3).
  • each codeword CW includes image information (background image information) of each background image D0 in the image block corresponding to the codebook CB to which the codeword CW belongs. This background image information is used for moving object detection.
  • the code word CW includes the latest match time Te and the code word generation time Ti. These are used for addition or update of background image information described in the third embodiment.
  • the code book CB showing sand hatching includes three code words CW1 to CW3 generated based on the background images D0a to D0c, respectively.
  • the code word CW1 included in the code book CB is generated based on the image block corresponding to the code book CB in the background image D0a.
  • the code word CW2 included in the code book CB is generated based on the image block corresponding to the code book CB in the background image D0b.
  • the code word CW3 included in the code book CB is generated based on the image block corresponding to the code book CB in the background image D0c.
  • the code book CB corresponding to the target image block may be referred to as a corresponding code book CB
  • the code word CW belonging to the corresponding code book CB may be referred to as a corresponding code word CW.
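  • The codebook organization described above can be pictured with the following minimal data-structure sketch; the field names are hypothetical and simply mirror the text (background image information, latest match time Te, codeword generation time Ti).

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class CodeWord:
    """One codeword CW: background image information for one image block,
    plus the times used for addition/update in the third embodiment."""
    background_info: np.ndarray   # pixel values of the block (e.g. 3 x 3 x 3 RGB)
    latest_match_time: float      # Te: last time the input matched this codeword
    generation_time: float        # Ti: time this codeword was generated

@dataclass
class CodeBook:
    """One codebook CB per image block, holding one codeword per background image D0."""
    codewords: List[CodeWord] = field(default_factory=list)

# The background model holds one codebook per image block position.
BackgroundModel = Dict[Tuple[int, int], CodeBook]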
  • FIG. 8 is a diagram showing how vectors are extracted from each of the target image block of the input image D1 and the corresponding codeword CW of the background model.
  • FIG. 9 is a diagram illustrating a relationship between a vector extracted from the target image block of the input image D1 and a vector extracted from the corresponding codeword CW of the background model.
  • the image information of the target image block in the input image D1 is handled as a vector.
  • the background image information included in the corresponding codeword CW is treated as a vector.
  • Based on these vectors, whether or not the target image block is a moving object image is determined.
  • When the two types of vectors point in the same direction, the image information of the target image block can be considered to match the background image information, and therefore it is determined that the target image block in the input image D1 is not a moving object image but an image showing the background.
  • Conversely, when the two types of vectors do not point in the same direction, it can be considered that the image information of the target image block does not match the background image information of each corresponding codeword CW. Therefore, in this case, it is determined that the target image block in the input image D1 is not an image showing the background but a moving object image.
  • The moving object detection unit 10 generates an image vector xf whose components are the pixel values of the plurality of pixels included in the target image block in the input image D1.
  • FIG. 8 shows an image vector xf whose components are the pixel values of the respective pixels of the target image block 210 having nine pixels.
  • each pixel has pixel values of R (red), G (green), and B (blue), so the image vector xf is composed of 27 components.
  • the moving object detection unit 10 generates a background vector, which is a vector related to the background image information, using the background image information in the corresponding codeword CW included in the corresponding codebook CB of the background model.
  • the background image information 510 of the corresponding code word shown in FIG. 8 includes pixel values for nine pixels. Therefore, a background vector xb having the pixel values for the nine pixels as components is generated.
  • The background vector xb is generated from each of the plurality of codewords CW included in the corresponding codebook CB. Therefore, a plurality of background vectors xb are generated for one image vector xf.
  • Even when the target image block in the input image D1 is no different from an image showing the background, the image vector xf and each background vector xb do not point in completely the same direction, since they are considered to contain a certain amount of noise components.
  • Therefore, in consideration of these noise components, the target image block in the input image D1 is determined to be an image showing the background even if the image vector xf and each background vector xb do not point in completely the same direction.
  • Introducing a true vector u that is free of noise components, the relationship among the image vector xf, the background vector xb, and the true vector u can be expressed as in FIG. 9.
  • As an evaluation value indicating to what degree the image vector xf and the background vector xb point in the same direction, consider the evaluation value D² expressed by expression (1) below, where X is the 2-row matrix whose rows are xf and xb:

    D² = λmin(XXᵀ)   …(1)

  • That is, the evaluation value D² is the minimum eigenvalue of the non-zero 2 × 2 matrix XXᵀ. Accordingly, the evaluation value D² can be obtained analytically. The fact that the evaluation value D² is the minimum eigenvalue of the non-zero 2 × 2 matrix XXᵀ is described in Non-Patent Document 1 above.
  • Whether or not the target image block in the input image D1 is a moving object image is determined using the moving object determination formula shown in expression (3) below, which is expressed using the minimum value C among the plurality of values of the evaluation value D², and the mean value μ and the standard deviation σ of the plurality of values of the evaluation value D². This moving object determination formula is based on Chebyshev's inequality:

    C − μ > kσ   …(3)
  • k in Expression (3) is a constant, and is a value determined based on the imaging environment (environment in which the imaging unit is installed) of the imaging unit that captures the input image D1.
  • the constant k is determined by experiments or the like.
  • When the moving object determination formula (the inequality) is satisfied, the moving object detection unit 10 considers that the image vector xf and each background vector xb do not point in the same direction, and determines that the target image block is not an image showing the background but a moving object image. On the other hand, when the moving object determination formula is not satisfied, the moving object detection unit 10 considers that the image vector xf and each background vector xb point in the same direction, and determines that the target image block is not a moving object image but an image showing the background.
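  • A minimal sketch of this decision follows, assuming X is the 2-row matrix formed from xf and xb, that the vectors are used without normalization, and that μ, σ, and the constant k are supplied from the background model and experiments as described above; all names are illustrative.

```python
import numpy as np

def evaluation_value(x_f: np.ndarray, x_b: np.ndarray) -> float:
    """D^2: the minimum eigenvalue of the 2 x 2 matrix X X^T, where the rows
    of X are xf and xb. It is near zero when the vectors are collinear."""
    X = np.vstack([x_f, x_b])                     # shape (2, n)
    return float(np.linalg.eigvalsh(X @ X.T)[0])  # ascending order: [0] is smallest

def is_moving_object(block_rgb: np.ndarray, background_blocks,
                     mu: float, sigma: float, k: float = 2.0) -> bool:
    """Moving object determination: satisfied when C - mu > k * sigma, with C
    the minimum D^2 over all corresponding codewords (Chebyshev-style test)."""
    x_f = block_rgb.reshape(-1).astype(float)     # 3 x 3 RGB block -> 27 components
    C = min(evaluation_value(x_f, b.reshape(-1).astype(float))
            for b in background_blocks)
    return C - mu > k * sigma
```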
  • As described above, the moving object detection is performed based on whether the direction of the image vector obtained from the target image block and the directions of the background vectors obtained from the corresponding codewords CW are the same. Therefore, the moving object detection method according to the present embodiment is relatively robust against changes in brightness, such as changes in sunlight or illumination.
  • The moving object detection unit 10 determines that a pixel belonging to an image block determined to be a moving object image is a moving object pixel.
  • binary information indicating whether or not the pixel represents a moving object can be obtained for each pixel.
  • Information including this binary information for all pixels is called the pixel unit moving object image D2.
  • In step s13, the pixel unit moving object image D2 is input to the area presence/absence determination unit 20 (FIG. 3), and based on this, a moving object is detected in each of the plurality of areas.
  • The plurality of areas are obtained by dividing the input image D1; for example, they are set so that their actual areas are equal to each other. More specifically, for example, regions obtained by dividing the imaged room into a lattice of equal areas when viewed from above can be adopted as the plurality of areas.
  • When the imaging unit is an omnidirectional camera, the contour of such an area is curved more strongly in the input image D1 as its distance from the center of the input image D1 increases.
  • Also, each area appears smaller as its distance from the center of the input image D1 increases; therefore, the numbers of pixels included in the respective areas differ from each other.
  • Such area setting is performed based on the correspondence between the actual coordinates and the coordinates in the input image D1. This correspondence can be known in advance based on the specifications of the imaging unit.
  • the area setting information is stored in advance in the storage unit 110, for example, and the area presence / absence determination unit 20 can recognize each area based on the setting information.
  • The area presence/absence determination unit 20 determines, based on the number of moving object pixels included in each area (hereinafter, the moving object pixel count PN1), whether a moving object appears in each area, that is, whether a moving object exists in each area.
  • FIG. 10 is a flowchart showing an example of a schematic operation of the area presence / absence determination unit 20.
  • In step s131, the area presence/absence determination unit 20 determines whether the moving object pixel count PN1 in a certain area (hereinafter also referred to as the attention area) is larger than a reference value PNref.
  • If an affirmative determination is made in step s131, it is determined in step s132 that a moving object exists in the attention area; that is, a moving object is detected in the attention area. If a negative determination is made in step s131, it is determined in step s133 that no moving object exists in the attention area; that is, no moving object is detected in the attention area.
  • When the imaging unit is an omnidirectional camera, the number of pixels PN2 included in each area can differ from area to area. Therefore, the reference value PNref is set in proportion to the pixel count PN2 of each area, using a positive proportionality coefficient.
  • Even if the moving object detection unit 10 determines that pixels included in a certain area are moving object pixels, when the number of moving object pixels PN1 included in that area is small, it is determined that no moving object exists in the area. That is, a moving object of small size is judged to be noise, and it is determined that no moving object exists. Thereby, the detection accuracy of moving objects can be improved.
  • In this sense, the moving object detection by the moving object detection unit 10 can be regarded as a provisional detection.
  • In step s134, the area presence/absence determination unit 20 determines whether all areas have been set as the attention area, that is, whether processing has been completed for all areas. If a negative determination is made in step s134, the processes from step s131 onward are executed again with an unprocessed area as the attention area. On the other hand, when an affirmative determination is made in step s134, it is determined that processing has been completed for all areas, and the area presence/absence determination process ends.
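  • The per-area determination of steps s131 to s134 can be sketched as follows; the proportionality coefficient and the label-image representation of the areas are assumptions.

```python
import numpy as np

def determine_area_presence(moving_mask: np.ndarray, area_labels: np.ndarray,
                            coeff: float = 0.1) -> dict:
    """Steps s131-s134: decide per area whether a moving object exists.
    moving_mask: boolean pixel unit moving body image D2.
    area_labels: integer area id per pixel (the precomputed area setting).
    coeff: assumed positive proportionality coefficient for PNref."""
    presence = {}
    for area_id in np.unique(area_labels):
        in_area = area_labels == area_id
        pn2 = int(in_area.sum())                   # pixel count PN2 of the area
        pn1 = int((moving_mask & in_area).sum())   # moving object pixel count PN1
        presence[area_id] = pn1 > coeff * pn2      # PNref proportional to PN2
    return presence                                # area unit moving body image D3
```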
  • In step s14, the area unit moving object image D3 is input to the area state estimation unit 30 (FIG. 3), and based on this, the state of each area (hereinafter referred to as the area state) is estimated to be one of at least four states: the entry state, the stay state, the leaving state, and the absence state.
  • the entry state here indicates a state in which the moving object has entered the area
  • the stay state indicates a state in which the moving object has stayed in the area for a relatively long period
  • the leaving state indicates a state in which the moving object has left the area.
  • the absence state indicates a state in which the moving object is absent in the area for a relatively long period.
  • the area state estimation unit 30 outputs estimated information (information indicating an area state for each area, hereinafter referred to as an area state image D4).
  • the area state image D4 is stored in the storage unit 110, for example.
  • FIG. 11 is a flowchart showing the area state estimation process.
  • In step s141, the area state estimation unit 30 estimates the area state of the attention area. A specific estimation method will be described in detail later.
  • In step s142, the area state estimation unit 30 records the result, and in step s143 determines whether all areas have been set as the attention area, that is, whether the area states of all areas have been estimated. If a negative determination is made, an unprocessed area is set as the attention area, and the processes from step s141 onward are executed again. If an affirmative determination is made in step s143, it is determined that the area states of all areas have been estimated, and the area state estimation process ends.
  • FIG. 12 is a diagram showing an example of area state transition.
  • the area state estimation unit 30 determines whether there is a moving object in the attention area based on the area unit moving object image D3. If it is determined that there is a moving object in the attention area, the area state estimation unit 30 changes the area state of the attention area from the absence state to the noise state.
  • The noise state indicates a state in which the detected moving object is judged to be noise. That is, even if the area presence/absence determination unit 20 determines that a moving object exists in the area, it is not concluded that a moving object exists in the area unless that determination continues over a predetermined period. In other words, a moving object detected only over a short period is judged to be noise, and the area state is transitioned to the noise state.
  • In this sense, the moving object detection by the area presence/absence determination unit 20 can also be regarded as a provisional detection.
  • When a moving object is detected in the attention area over the first period, the area state estimation unit 30 transitions the area state of the attention area to the entry state. That is, if it is determined in every one of a predetermined number of sequentially input area unit moving object images D3 that a moving object exists in the attention area, the area state is transitioned to the entry state.
  • The length of a period can be expressed by the number of input images D1 (referred to as the number of frames). Therefore, periods are treated here in terms of numbers of frames.
  • The area state estimation unit 30 counts, for each area, the number of frames of the input image D1 over which it is continuously determined that a moving object exists in the attention area (hereinafter also referred to as the detected frame count FN1). More specifically, if it is determined in the area unit moving object image D3 that a moving object exists in the attention area, 1 is added to the detected frame count FN1 of the attention area. On the other hand, if it is determined that no moving object exists in the attention area, the detected frame count FN1 of the attention area is initialized to zero. This detected frame count FN1 corresponds to the period over which it is continuously determined that a moving object exists in the area.
  • the area state estimation unit 30 determines whether or not the number of detected frames FN1 is larger than the entry determination value FNref11, and when an affirmative determination is made, the area state transitions from the absent state to the entry state.
  • FN1> 0 is shown immediately above the arrow indicating the transition from the absence state to the noise state. This is a condition for transition from the absence state to the noise state. That is, when it is determined that the detected frame number FN1 is larger than zero in the absence state, the area state is changed from the absence state to the noise state. In short, when a moving object is detected in the attention area in the absence state, the area state is quickly changed to the noise state.
  • When no moving object is detected in the attention area in the noise state, the area state estimation unit 30 transitions the area state of the attention area to the absence state. Thereby, the area state can be returned from the noise state to the absence state.
  • the number of frames can also be used for this transition condition.
  • The area state estimation unit 30 also counts, for each area, the number of frames of the input image D1 over which it is determined that no moving object exists in the attention area (hereinafter also referred to as the undetected frame count FN2). If it is determined in the input area unit moving object image D3 that no moving object exists in the attention area, 1 is added to the undetected frame count FN2 of the attention area. On the other hand, if it is determined that a moving object exists in the attention area, the undetected frame count FN2 of the attention area is initialized to zero. This undetected frame count FN2 corresponds to the period over which it is continuously determined that no moving object exists in the area.
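  • The counting of FN1 and FN2 for one attention area is a direct transcription of the two rules above; a minimal sketch:

```python
def update_frame_counters(has_moving_object: bool, fn1: int, fn2: int):
    """Update (FN1, FN2) for one attention area from one area unit image D3.
    Each counter counts consecutive frames and resets the other."""
    if has_moving_object:
        return fn1 + 1, 0    # add to detected count FN1, reset FN2 to zero
    return 0, fn2 + 1        # reset FN1 to zero, add to undetected count FN2
```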
  • the area state estimation unit 30 determines whether or not the number of undetected frames FN2 is larger than zero in the noise state, and transitions the area state to the absent state when a positive determination is made.
  • Although the noise state is provided here to make the handling of noise explicit, the noise state need not be provided.
  • the area state may be changed from the absence state to the entry state only when it is determined that a moving object exists for a predetermined period.
  • When a moving object is detected in the attention area over the second period in the entry state, the area state estimation unit 30 transitions the area state of the attention area to the stay state. That is, if it is determined in every one of a predetermined number of sequentially input area unit moving object images D3 that a moving object exists in the attention area, the area state is transitioned to the stay state. As a more detailed operation, the area state estimation unit 30 determines, in the entry state, whether the detected frame count FN1 is larger than the stay determination value FNref12 (> FNref11), and when an affirmative determination is made, transitions the area state to the stay state.
  • In short, the area state is set to the entry state in the initial period after the moving object starts to be detected, and to the stay state in the period after that initial period. That is, not only the mere presence of the moving object is obtained; the moving object is detected while distinguishing between the entry state and the stay state.
  • When no moving object is detected in the attention area over a predetermined period in the entry state, the area state estimation unit 30 transitions the area state of the attention area to the leaving state. That is, if it is determined in every one of a predetermined number of sequentially input area unit moving object images D3 that no moving object exists in the attention area, the area state estimation unit 30 transitions the area state to the leaving state. As a more detailed operation, the area state estimation unit 30 determines whether the undetected frame count FN2 is larger than the leaving determination value FNref21, and when an affirmative determination is made, transitions the area state to the leaving state.
  • In the stay state, the area state estimation unit 30 transitions the area state of the attention area to the leaving state when the following two conditions are both satisfied.
  • the first condition is that it is determined that there is no moving object in the attention area over a predetermined period. This is also a condition for transitioning from the entering state to the leaving state as described above.
  • The second condition is that the area state of at least one area around the attention area (hereinafter referred to as the surrounding areas) is the entry state.
  • FIG. 13 is a diagram illustrating an example of a state in which the area state transitions from the staying state to the leaving state.
  • a rectangular area in an actual space is schematically shown as each area.
  • the plurality of areas are arranged in a grid pattern.
  • When the imaging unit is, for example, an omnidirectional camera, these rectangular areas appear curved in the input image D1, as in FIG. 2.
  • In FIG. 13, 16 areas (4 × 4) are shown in a simplified manner.
  • the numbers “0” and “1” shown in the area indicate the result of moving object detection by the area presence / absence determination unit 20. “1” indicates that it is determined that a moving object exists, and “0” indicates that it is determined that no moving object exists.
  • the area state is shown according to the type of hatching shown in each area.
  • A blank indicates the absence state (including the noise state), horizontal hatching indicates the stay state, vertical hatching indicates the entry state, and diagonal hatching indicates the leaving state.
  • In the area unit moving object image D3 at the left end of FIG. 13, it is determined that a moving object exists only in one area A1.
  • In the corresponding area state image D4, the area state of the area A1 is the stay state.
  • The area states of the other areas are the absence state.
  • Suppose that the moving object then moves, for example, to the area A2 on the right.
  • In the second area unit moving object image D3 from the left, it is determined that no moving object exists in the area A1 and that a moving object exists in the area A2.
  • At this point, the area state of the area A2 is the noise state.
  • When the moving object continues to be detected in the area A2, the area state of the area A2 transitions to the entry state, as shown in the third area state image D4 from the left. Thereby, the second condition is satisfied for the area A1.
  • The area state of the area A1 then transitions to the leaving state, as shown in the rightmost area state image D4.
  • For example, when the person 103 is hidden by a shielding object, the moving object detection unit 10 does not detect the person 103 as a moving object, and accordingly the area presence/absence determination unit 20 also does not detect the person 103 as a moving object. If the person 103 is not detected as a moving object over the predetermined period, the first condition is satisfied for the attention area containing the person 103. However, if the moving object is merely shielded by the shielding object, the moving object (person 103) does not enter the surrounding areas, and therefore the second condition is not satisfied. Accordingly, the area state of the attention area appropriately maintains the stay state.
  • If the area state were transitioned from the stay state to the leaving state under the first condition alone, the area state would become the leaving state even though the moving object is merely hidden by the shield. According to the present embodiment, such an erroneous estimation of the area state can be avoided.
  • In contrast, when the attention area is in the entry state, the area state is transitioned to the leaving state using only the first condition (see FIG. 12).
  • This is a process considering that the moving body moves across a plurality of areas. That is, when the attention area is in the approaching state, there is a possibility that the moving body is moving to another area, so only the first condition is adopted.
  • the area state can be quickly transitioned from the entering state to the leaving state. That is, the area state can be estimated with high responsiveness.
  • In the above description, the leaving determination value FNref21 used for the transition from the entry state to the leaving state and the leaving determination value FNref21 used for the transition from the stay state to the leaving state are equal to each other; however, they may differ.
  • When a moving object is detected in the attention area in the leaving state, the area state estimation unit 30 transitions the area state to the stay state. More specifically, the detected frame count FN1 is compared with zero, and when the detected frame count FN1 is larger than zero, the area state estimation unit 30 transitions the area state to the stay state. This is processing that takes into account the possibility that the moving object returns to the area that was in the stay state. That is, since the moving object may well return to an area in the leaving state, the area state is made easy to transition back to the stay state. Thereby, the area state of the attention area can be transitioned from the leaving state to the stay state relatively quickly; that is, the area state can be estimated with high responsiveness.
  • Note that detection of a moving object over a predetermined period may be adopted as the condition for the transition from the leaving state to the stay state. In this case, by adopting as this predetermined period a period shorter than the period required for the transition from the absence state to the stay state, the area state can still be transitioned from the leaving state to the stay state relatively quickly.
  • In the above description, when a moving object is detected in the leaving state, the area state is transitioned from the leaving state to the stay state; however, the area state may instead be transitioned from the leaving state to the entry state.
  • When no moving object is detected in the attention area over the fourth period in the leaving state, the area state estimation unit 30 transitions the area state of the attention area to the absence state. That is, if it is determined in every one of a predetermined number of sequentially input area unit moving object images D3 that no moving object exists in the attention area, the area state estimation unit 30 transitions the area state to the absence state. As a more detailed operation, it is determined whether the undetected frame count FN2 is larger than the absence determination value FNref22 (> the leaving determination value FNref21), and when an affirmative determination is made, the area state estimation unit 30 transitions the area state to the absence state.
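  • The transitions described above (see FIG. 12) can be summarized in the following sketch. The threshold values are invented for illustration; the publication fixes only their ordering (FNref11 < FNref12, FNref21 < FNref22). The sketch routes the absence-to-entry transition through the noise state and includes the image-edge exception described later.

```python
from enum import Enum

class AreaState(Enum):
    ABSENT = 0
    NOISE = 1
    ENTRY = 2
    STAY = 3
    LEAVING = 4

# Illustrative thresholds (frame counts); only their ordering is fixed by the text.
FNREF11, FNREF12 = 5, 30   # entry / stay determination values (FNref11 < FNref12)
FNREF21, FNREF22 = 5, 60   # leaving / absence determination values (FNref21 < FNref22)

def next_state(state: AreaState, fn1: int, fn2: int,
               surrounding_entry: bool, at_image_edge: bool = False) -> AreaState:
    """One update of the transition diagram of FIG. 12 for one attention area."""
    if state is AreaState.ABSENT:
        return AreaState.NOISE if fn1 > 0 else state      # quick move to noise state
    if state is AreaState.NOISE:
        if fn1 > FNREF11:
            return AreaState.ENTRY                        # detected over first period
        return AreaState.ABSENT if fn2 > 0 else state     # short detection = noise
    if state is AreaState.ENTRY:
        if fn1 > FNREF12:
            return AreaState.STAY                         # detected over second period
        return AreaState.LEAVING if fn2 > FNREF21 else state  # first condition only
    if state is AreaState.STAY:
        # Stay -> leaving needs the first condition (no detection over the third
        # period) AND the second condition (a surrounding area in the entry state),
        # except for areas at the edge of the input image D1.
        if fn2 > FNREF21 and (surrounding_entry or at_image_edge):
            return AreaState.LEAVING
        return state
    if state is AreaState.LEAVING:
        if fn1 > 0:
            return AreaState.STAY     # moving object returned (could also be ENTRY)
        return AreaState.ABSENT if fn2 > FNREF22 else state  # fourth period elapsed
    return state
```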
  • FIG. 14 is a diagram schematically showing an example of the input image D1 and the area state image D4.
  • In the input image D1 of FIG. 14, the sitting person 103 is shown slightly to the upper left of the center, and the person 104, who is moving to the right, is shown at the lower right.
  • the area state of the area including the person 103 becomes the stay state (horizontal line hatching) and the area state of the area including the person 104 becomes the entry state (vertical line hatching) by the processing of the image processing unit 1.
  • the present image processing unit 1 does not simply detect the presence / absence of moving objects, but estimates the entry state, stay state, leaving state, and absence state as area states. Therefore, as described below, the external device 4 can perform finer control according to the area state.
  • the area state image D4 output by the area state estimation unit 30 has area state information for each area and is input to the external device 4 (see FIGS. 1 and 3).
  • the external device 4 performs control according to the area state based on the area state image D4.
  • the external device 4 is an environment control device that controls the area environment (temperature, humidity, brightness, sound, display, etc.).
  • the external device 4 is an air conditioner or a lighting device.
  • the external device 4 adjusts the space state (brightness, temperature, humidity, etc.) of each area according to the area state. Therefore, finer control can be performed as compared with the prior art.
  • the external device 4 has a plurality of lighting devices, and this lighting device is provided for each area.
  • The external device 4 controls the lighting device in an area whose area state is the stay state at the highest illuminance, and controls the lighting devices in areas whose area state is the entry state or the leaving state at a lower illuminance.
  • The lighting device in an area whose area state is the absence state is controlled at the lowest illuminance.
  • For example, in the case of an air conditioner, a flap for adjusting the air delivery direction is directed toward areas in the stay state, and the flap is not directed toward areas whose area state is the entry state, the leaving state, or the absence state.
  • many flaps may be directed to the staying area and some flaps may be directed to the entering or leaving area.
  • Alternatively, the air conditioner for an area in the stay state may operate at a desired target value (temperature target value or humidity target value), the air conditioner for an area in the entry state or the leaving state may be operated with a smaller target value, and the air conditioner for an area in the absence state may be operated with the smallest target value.
  • the small target value here means a target value having a small difference from the current value (temperature or humidity).
  • In short, in an area whose area state is the stay state, the environment is controlled with the target value that requires the most power; in an area whose area state is the entry state or the leaving state, the environment is controlled with a target value that requires less power; and in an area whose area state is the absence state, the environment is controlled with the target value that requires the least power. Thereby, effective environment control can be performed with low power consumption.
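  • As a sketch of this state-dependent control, the mapping below assigns an illuminance fraction per area state; the numeric values are assumptions, since the text fixes only their ordering.

```python
# Assumed illuminance fractions; the text fixes only the ordering
# stay > entry = leaving > absence.
ILLUMINANCE = {"stay": 1.0, "entry": 0.6, "leaving": 0.6, "absent": 0.1}

def control_lighting(area_states: dict) -> dict:
    """Map each area's state to a lighting level so that power is spent
    mainly where someone stays (noise is treated like absence)."""
    return {area: ILLUMINANCE.get(state, 0.1)
            for area, state in area_states.items()}

# Example: control_lighting({"A1": "stay", "A2": "entry"}) -> {"A1": 1.0, "A2": 0.6}
```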
  • the area state is updated with high responsiveness, so that the external device 4 can perform control with high responsiveness according to the actual state of the area.
  • The surrounding areas may be any areas existing in a region surrounding the attention area; for example, the eight areas that immediately surround the attention area may be adopted.
  • However, when the speed of the moving object is high, it can happen that none of the eight immediately surrounding areas is in the entry state while an area adjacent to one of those eight areas is in the entry state.
  • the surrounding area may be set in a wider range.
  • For example, those eight areas and the sixteen areas that immediately surround them may be adopted as the surrounding areas.
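  • On a grid of areas, the surrounding areas can be computed as below; ring=1 gives the nearest eight areas, and ring=2 additionally includes the sixteen areas around them.

```python
def surrounding_areas(row: int, col: int, rows: int, cols: int,
                      ring: int = 1) -> list:
    """Area grid neighbors of the attention area at (row, col).
    ring=1: the eight immediately surrounding areas;
    ring=2: those eight plus the sixteen areas around them."""
    return [(row + dr, col + dc)
            for dr in range(-ring, ring + 1)
            for dc in range(-ring, ring + 1)
            if (dr, dc) != (0, 0)
            and 0 <= row + dr < rows and 0 <= col + dc < cols]

# surrounding_areas(1, 1, 4, 4) returns the 8 neighbors of area (1, 1) on a 4 x 4 grid.
```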
  • The portion including the moving object detection unit 10 and the area presence/absence determination unit 20 can be understood as a specific example of an area unit moving object detection unit that detects a moving object in each of the plurality of areas in the input image D1.
  • the image block and the area are described separately, but the same range as the image block may be adopted as the area.
  • In this case, the operation of the area presence/absence determination unit 20 is unnecessary.
  • <Area at the edge of the input image D1> In the above description, the area state is transitioned from the stay state to the leaving state when the first condition and the second condition are satisfied. However, when the attention area is located at the edge of the input image D1, the area state may be transitioned from the stay state to the leaving state without the second condition. This is because, if the attention area is located at the edge, the moving object may leave the input image D1 without passing through another area, in which case no surrounding area enters the entry state.
  • FIG. 15 is a functional block diagram conceptually illustrating an example of the image processing unit 1 according to the second embodiment.
  • the moving object detection unit 10 and the area presence / absence determination unit 20 receive the area state image D4 from the area state estimation unit 30.
  • the moving object detection unit 10 and the area presence / absence determination unit 20 detect moving objects with detection sensitivity corresponding to the area state.
  • the detection sensitivity here indicates the ease of detecting a moving object, and the higher the detection sensitivity, the easier the moving object is detected.
  • The area presence/absence determination unit 20 detects moving objects in area units based on the comparison between the moving object pixel count PN1 of each area and the reference value PNref; this reference value PNref determines the detection sensitivity.
  • the moving object detection unit 10 detects a moving object in units of pixels based on the difference between the background image D0 and the input image D1. More specifically, a moving object is detected when a parameter representing the difference (for example, the above-described minimum value C) is larger than a reference value (for example, ⁇ + k ⁇ ). This reference value also indicates the detection sensitivity. The detection sensitivity decreases as the reference value increases.
  • the average value ⁇ and the standard deviation ⁇ are values determined by the background image D0 of the background model, and therefore a plurality of values may be adopted as the constant k.
  • For example, a plurality of detection sensitivity values are recorded in advance in the storage unit 110, and the moving object detection unit 10 and the area presence/absence determination unit 20 select the detection sensitivity according to the area state and detect moving objects based on it.
  • The moving object detection unit 10 detects moving objects with a higher detection sensitivity for pixels included in an area whose area state is the leaving state than for pixels included in an area whose area state is the absence state (including the noise state), the entry state, or the stay state. That is, a smaller reference value is adopted for such pixels. Since the moving object may return to an area whose area state is the leaving state, a moving object is relatively likely to appear there. Thus, when a moving object is highly likely to appear in the attention area, it can be detected with high detection sensitivity. Thereby, moving objects can be detected more quickly, and as a result, the area state can be estimated with high responsiveness.
  • Also, for pixels included in an area whose surrounding area has the entry state as its area state, the moving object may be detected with a higher detection sensitivity than for pixels included in an area in the stay state. This is because, if the area state of a surrounding area is the entry state, a moving object is highly likely to enter the area. This, too, allows moving objects to be detected more quickly.
  • Similarly, the area presence/absence determination unit 20 may detect moving objects with a higher detection sensitivity for an area whose area state is the leaving state; that is, a smaller reference value PNref may be adopted. This, too, allows moving objects to be detected more quickly.
  • Moving objects may also be detected with a higher detection sensitivity in an area whose surrounding area is in the entry state; that is, a smaller reference value PNref is adopted. This, too, allows moving objects to be detected more quickly.
  • In the above description, both the moving object detection unit 10 and the area presence/absence determination unit 20 employ a detection sensitivity corresponding to the area state; however, only one of them may employ such a detection sensitivity.
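  • A sketch of such state-dependent sensitivity selection follows; the scale factors are assumptions, since the text only requires that the leaving state (and areas around an entry-state area) get a smaller reference value.

```python
# Assumed scale factors: a smaller reference value means a higher detection
# sensitivity; the text only requires these cases to be more sensitive.
SENSITIVITY_SCALE = {"leaving": 0.5, "near_entry": 0.7, "default": 1.0}

def reference_value_for(area_state: str, surrounding_has_entry: bool,
                        base_ref: float) -> float:
    """Pick the detection reference value for an area from its area state
    (used analogously for the pixel-level and area-level reference values)."""
    if area_state == "leaving":
        return base_ref * SENSITIVITY_SCALE["leaving"]
    if surrounding_has_entry:
        return base_ref * SENSITIVITY_SCALE["near_entry"]
    return base_ref * SENSITIVITY_SCALE["default"]
```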
  • a background model update unit 40 and a cache model storage unit 6 are further provided.
  • The background model update unit 40 receives the input image D1 and the pixel unit moving object image D2 as inputs.
  • When image information of the input image D1 containing pixels determined to be moving object pixels does not change over a predetermined registration determination period, the background model update unit 40 determines that the image information indicates not a moving object but the background, and registers it in the background model.
  • a cache model storage unit 6 that stores a cache model is used.
  • the cache model includes background image information candidates that are candidates for background image information registered in the background model.
  • the cache model storage unit 6 includes rewritable storage means such as flash memory, EPROM (Erasable Programmable Read Only Memory), or hard disk (HD).
  • Although the background model storage unit 3 and the cache model storage unit 6 are independent of each other in terms of hardware here, a part of the storage area of one storage device may be used as the background model storage unit 3 and another part of the storage area as the cache model storage unit 6.
  • The background model update unit 40 first registers the image information of an image block determined to be a moving object image by the moving object detection unit 10 in the cache model as a background image information candidate. The background model update unit 40 then determines, based on the plurality of input images D1 input during the registration determination period, whether to register the background image information candidates stored in the cache model storage unit 6 in the background model as background image information. More specifically, when a background image information candidate obtained from the input image D1 does not change over the registration determination period, the candidate is determined to be background image information. Upon this determination, the background model update unit 40 registers the background image information candidate in the background model as background image information.
  • FIG. 17 is a flowchart showing the background model update process. This background model update process is performed after step s12 of FIG. 4. As shown in FIG. 17, in step s151, the background model update unit 40 determines whether the target image block in the processing target input image D1 input in step s11 was determined to be a moving object image by the moving object detection unit 10. If it is determined in step s151 that the target image block was not determined to be a moving object image, that is, if the image information of the target image block was determined to match the background image information of a corresponding codeword CW in the background model, the background model update unit 40 executes step s152.
  • In step s152, the background model update unit 40 changes, to the current time, the latest match time Te of the codeword CW in the background model that includes the background image information determined to match the image information of the target image block.
  • On the other hand, if it is determined in step s151 that the target image block was determined to be a moving object image by the moving object detection unit 10, the background model update unit 40 executes step s153.
  • In step s153, the cache model is updated. Specifically, if the image information of the target image block is not included in any corresponding codeword CW in the cache model in the cache model storage unit 6, the background model update unit 40 generates a codeword CW that includes the image information as a background image information candidate, and registers it in the corresponding codebook CB in the cache model.
  • the code word CW includes the latest match time Te and the code word generation time Ti in addition to the image information (background image information candidate).
  • the latest matching time Te included in the code word CW generated in step s153 is provisionally set to the same time as the code word generation time Ti.
  • On the other hand, when the image information of the target image block is included in a corresponding codeword CW in the cache model in the cache model storage unit 6, that is, when a codeword CW in the cache model includes the image information of the target image block as its background image information candidate, the background model update unit 40 changes the latest match time Te in that codeword CW to the current time.
  • In this way, in step s153, a codeword CW is added to the cache model, or the latest match time Te of a codeword CW in the cache model is updated.
  • Note that, in step s153, if the codebook CB corresponding to the target image block is not registered in the cache model in the cache model storage unit 6, the background model update unit 40 generates a codeword CW that includes the image information of the target image block as a background image information candidate, generates a codebook CB that includes the codeword CW, and registers it in the cache model.
  • In step s154, the background model update unit 40 determines whether all image blocks have been set as the target image block. If it is determined in step s154 that there is an unprocessed image block, the background model update unit 40 sets that image block as the new target image block and executes step s151 and subsequent steps again. On the other hand, if it is determined in step s154 that processing has been performed for all image blocks, the background model update unit 40 executes step s155.
  • In step s155, codewords CW that are included in the cache model and whose latest match time Te has not been updated for a predetermined period are deleted. That is, when the image information included in a codeword CW in the cache model has not matched the image information acquired from the input image D1 for a certain period, that codeword CW is deleted. If the image information included in a codeword CW is background image information, that is, image information acquired from an image showing the background in the input image D1, the latest match time Te included in that codeword CW is updated frequently. Therefore, the image information included in a codeword CW whose latest match time Te has not been updated for the predetermined period can be considered to be image information acquired from a moving object image in the input image D1.
  • Hereinafter, this predetermined period is referred to as the "deletion determination period". By deleting from the cache model the codewords CW whose latest match time Te has not been updated for the deletion determination period, the image information of moving object images is deleted from the cache model.
  • The deletion determination period is a period set in advance so as to distinguish changes in image information caused by changes in brightness, such as changes in sunlight or illumination, or by changes in the environment, such as changes in the placement of posters or desks, from changes in image information caused by the movement of detection targets such as people. For example, when the imaging frame rate of the imaging unit that captures the input image D1 is 30 fps, the deletion determination period is set to a period in which, for example, several tens to several hundreds of input images D1 are input.
When the code words CW in the cache model whose latest match time Te has not been updated for the deletion determination period have been deleted in step s155, the background model update unit 40 executes step s156.
In step s156, the background model update unit 40 identifies, among the code words CW registered in the cache model, those that have remained registered for at least the registration determination period. Because a code word CW is registered in the cache model as soon as it is generated, the code word generation time Ti contained in the code word CW can be used as the time at which it was registered in the cache model. The registration determination period is set to a larger value than the deletion determination period, for example a value several times larger. The registration determination period is expressed as a number of frames; if it is, for example, "500", it is the period in which 500 frames of the input image D1 are input.
After step s156, the background model update unit 40 performs the background model registration process in step s157, in which the code words CW identified in step s156 are registered in the background model in the background model storage unit 3.
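Steps s156 and s157 might then be sketched together as below, again under the same hypothetical data model; registration_period is the registration determination period in frames.

```python
def register_matured_code_words(cache: dict, background: dict, now: int,
                                registration_period: int) -> None:
    """Steps s156-s157 (sketch): code words that have survived in the cache
    model for the registration determination period, measured from their
    generation time Ti, are moved into the background model."""
    for block_id, book in cache.items():
        matured = [cw for cw in book.code_words
                   if now - cw.creation_time >= registration_period]
        for cw in matured:
            background.setdefault(block_id, CodeBook()).code_words.append(cw)
            book.code_words.remove(cw)
```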
The background model update unit 40 may delete a code word CW from the cache model before the registration determination period has elapsed since its registration there. Deleting a code word CW (background image information candidate) from the cache model before the registration determination period elapses means that the background model update unit 40 has determined, based on the plurality of input images D1 input during that period, that the candidate registered in the cache model should not be registered in the background model. Conversely, the background model update unit 40 may keep a code word CW in the cache model until the registration determination period elapses and then register it in the background model. Doing so means that the background model update unit 40 has determined, based on the plurality of input images D1 input during the registration determination period, that the candidate registered in the cache model should be registered in the background model. In this way, the background model update unit 40 decides, on the basis of the plurality of input images D1 input during the registration determination period, whether a background image information candidate registered in the cache model is to be registered in the background model as background image information. Consequently, the image information of an image block that the moving object detection unit 10 erroneously judged to be a moving object image can eventually be registered in the background model as background image information. The background model is thus updated appropriately, and the accuracy of moving object detection by the moving object detection unit 10 improves.
With the background model update process described above, when a person (moving object) enters a certain area, leaves belongings behind, and then exits the area, for example, the image information containing the belongings is eventually registered in the background model. FIG. 18 shows an example of a series of area unit moving body images D3 and area state images D4 for the case where a person leaves belongings behind.
In this example, both the person and the belongings are present in area A1 for a relatively long period at first. Accordingly, in FIG. 18 a moving object is initially detected in area A1 ("1" is indicated in area A1), and the area state of area A1 is the staying state (horizontal-line hatching). When the person then moves to area A3 on the right, a moving object is no longer detected in area A2 but is detected in area A3. As this continues, the area states of areas A2 and A3 become the leaving state and the entering state, respectively, and since no moving object is detected in area A2 thereafter, its area state eventually becomes the absence state. Once the registration determination period has elapsed, the background model update unit 40 registers the image information of the image blocks containing the belongings in the background model. As a result, there is no longer any difference between the input image D1 and the background model in those image blocks, and the belongings are no longer detected as a moving object in area A1; this is indicated by "0" in area A1 of the area unit moving body image D3 at the right end of FIG. 18. At that point, however, no surrounding area of area A1 is in the entering state, so the second condition is not satisfied for area A1, and the area state of area A1 remains the staying state. This is indicated by the horizontal-line hatching of area A1 in the area state image D4 at the right end of FIG. 18.
An object of the present embodiment is therefore to appropriately change the area state of an area containing a moving object from the staying state to the leaving state even when image information representing that moving object has been registered in the background model.
To this end, the area state estimation unit 30 retains, when the area state of an area transitions to the entering state, information indicating that the area has entered the entering state (hereinafter referred to as an entry flag). More specifically, the entry flag is recorded in the storage unit 110, for example. The entry flag is retained for a holding period longer than the registration determination period and is erased when the holding period elapses. An entry flag is held for each area.
The area state estimation unit 30 then changes the area state from the staying state to the leaving state not only when both the first and second conditions are satisfied, but also when the first condition and a third condition, namely that an entry flag is held for a surrounding area, are both satisfied. A sketch of this relaxed test is given below.
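Purely as an illustration, the test might be written as follows. The Area class and its fields are our own assumptions, and we assume, following the earlier description, that the first condition is the absence of moving object detection in the attention area over the third period.

```python
from dataclasses import dataclass

@dataclass
class Area:
    id: int
    state: str                      # "absence", "entering", "staying", "leaving"
    frames_without_motion: int = 0  # consecutive frames with no detection

def stay_to_leave(area: Area, neighbors: list, entry_flags: dict,
                  third_period: int) -> bool:
    """Relaxed stay->leave test (sketch): the first condition must hold
    together with either the second condition (a surrounding area is in the
    entering state) or the third condition (an entry flag is still held for
    a surrounding area)."""
    first = area.frames_without_motion >= third_period
    second = any(n.state == "entering" for n in neighbors)
    third = any(n.id in entry_flags for n in neighbors)
    return first and (second or third)
```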
In the example of FIG. 18, an entry flag is held for area A2 when area A2 transitions to the entering state. The entry flag is retained for a holding period longer than the registration determination period after area A2 enters the entering state. Therefore, even when the image information of the image blocks belonging to area A1 is registered in the background model (see the state at the right end of FIG. 18), the entry flag of area A2 remains without being erased, and the third condition is satisfied. The area state estimation unit 30 accordingly changes the area state of area A1 from the staying state to the leaving state. In this way, area A1 can be appropriately transitioned from the staying state to the leaving state.
In the above description, the area state is changed from the staying state to the leaving state when both the first and third conditions are satisfied in the staying state, but it may instead be changed from the staying state directly to the absence state. This brings the area state to the actual state promptly; in other words, the area state can be estimated with high responsiveness. Note that the problem described above arises because the second condition is adopted as the condition for the transition from the staying state to the leaving state, and the entry flag is employed to solve it. The problem does not occur when no area ever reaches the staying state, so in that case there is no need to hold an entry flag even if an area enters the entering state. For example, when a moving object simply crosses the imaging area, no area is expected to reach the staying state, and no entry flag is required.
Accordingly, a condition may be added for holding the entry flag: the area state of a surrounding area (for example, one of the eight areas immediately surrounding the area in question) must be the staying state. With this condition, when an area is in the staying state, entry flags are held for its surrounding areas. Even if the image information of the area in the staying state is then registered in the background model, the third condition is satisfied, and the area state of that area can be changed appropriately to the leaving state (or the absence state).
Furthermore, the period required for the transition from the entering state to the leaving state (the period corresponding to the leaving determination value FNref21) is preferably set shorter than the entry stay period required for the transition from the entering state to the staying state (the period corresponding to the entrance determination value FNref11). Otherwise, when a moving object passes through, for example, first and second areas adjacent in the left-right direction, both areas may be in the entering state at the same time, and the second area, farther from the movement source, can reach the staying state before the first area, nearer the movement source, changes from the entering state to the leaving state, that is, while the first area is still in the entering state. In that case an entry flag would be held for the first area even though the moving object is merely moving through. Setting the periods as above avoids this situation.
Alternatively, the entry flag may be held for a surrounding area on the condition that the area state of the surrounding area transitioned to the entering state while the area state of the attention area was the staying state, as sketched below.
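A sketch of this holding rule, under the same assumptions as the Area sketch above, might be:

```python
def maybe_hold_entry_flag(area: Area, neighbors: list, entry_flags: dict,
                          now: int) -> None:
    """Sketch of the narrowed holding rule: when an area transitions to the
    entering state, record an entry flag for it only if some immediately
    surrounding area is in the staying state."""
    if area.state == "entering" and any(n.state == "staying" for n in neighbors):
        entry_flags[area.id] = now

def expire_entry_flags(entry_flags: dict, now: int, holding_period: int) -> None:
    """Erase flags once the holding period (longer than the registration
    determination period) has elapsed."""
    for area_id, held_at in list(entry_flags.items()):
        if now - held_at > holding_period:
            del entry_flags[area_id]
```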
In the present embodiment, a determination period adjustment unit 50 is further provided. The determination period adjustment unit 50 adjusts the registration determination period according to the area state and the presence or absence of the entry flag. Briefly, for an area judged to contain a left-behind object, the determination period adjustment unit 50 treats the image information as indicating the background and sets the registration determination period for that area to a short value.
The area state estimation unit 30 holds the entry flag as described in the third embodiment. That is, when the area state of a surrounding area transitions to the entering state while the area state of the attention area has been maintained in the staying state since at least one frame earlier, an entry flag is held for that surrounding area. This avoids holding unnecessary entry flags: no entry flag is held when a moving object is merely passing through, and entry flags are held for the surrounding areas only when there is an area that contains a left-behind object and is in the staying state.
When the area state of the attention area is the staying state and an entry flag is held for a surrounding area, the determination period adjustment unit 50 judges that the attention area contains a left-behind object and sets the registration determination period of the image blocks belonging to the attention area shorter than that of the image blocks belonging to the other areas. The adjusted registration determination period is output to the background model update unit 40. In this way, the registration determination period is shortened so that image information belonging to an area likely to contain a left-behind object is registered in the background model at an early stage, and the area state can be estimated with high responsiveness. A sketch of this adjustment follows.
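Continuing the same hypothetical sketch, the adjustment might look as follows; base_period and short_period are assumed configuration values.

```python
def adjusted_registration_period(area: Area, neighbors: list, entry_flags: dict,
                                 base_period: int, short_period: int) -> int:
    """Sketch of the determination period adjustment unit 50: an attention
    area in the staying state whose surrounding area holds an entry flag is
    treated as containing a left-behind object, so its image blocks get the
    shorter registration determination period."""
    left_behind = (area.state == "staying"
                   and any(n.id in entry_flags for n in neighbors))
    return short_period if left_behind else base_period
```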
While the image processing unit 1 has been described in detail above, the foregoing description is illustrative in all respects, and the present invention is not limited to it. The various modifications described above may be combined as long as they do not contradict one another, and it is understood that countless modifications not illustrated here can be devised without departing from the scope of the present invention.

Abstract

The present invention relates to an area state estimation device in which an area state estimation unit (i) causes the area state of an area of interest, which is one area in an input image, to transition from an absence state to an entering state when a moving object is detected in the area of interest over a first period, (ii) causes the area state of the area of interest to transition to a staying state when the moving object is detected in the area of interest over a second period while in the entering state, (iii) causes the area state of the area of interest to transition to a leaving state when the moving object is not detected in the area of interest over a third period while in the staying state, and (iv) causes the area state of the area of interest to transition to the absence state when the moving object is not detected in the area of interest over a fourth period while in the leaving state.
PCT/JP2015/057099 2014-03-26 2015-03-11 Dispositif d'estimation d'état de zone, procédé d'estimation d'état de zone, et système de commande d'environnement WO2015146582A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014063118A JP6396051B2 (ja) 2014-03-26 2014-03-26 エリア状態推定装置、エリア状態推定方法、プログラムおよび環境制御システム
JP2014-063118 2014-03-26

Publications (1)

Publication Number Publication Date
WO2015146582A1 true WO2015146582A1 (fr) 2015-10-01

Family

ID=54195106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/057099 WO2015146582A1 (fr) 2014-03-26 2015-03-11 Dispositif d'estimation d'état de zone, procédé d'estimation d'état de zone, et système de commande d'environnement

Country Status (2)

Country Link
JP (1) JP6396051B2 (fr)
WO (1) WO2015146582A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7005285B2 (ja) * 2017-11-01 2022-01-21 株式会社東芝 画像センサ、センシング方法、制御システム及びプログラム
JP6948759B2 (ja) * 2018-08-14 2021-10-13 Kddi株式会社 異なる検知方式のセンサ用いて移動体の存在又は不在を判定する装置、プログラム及び方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008077361A (ja) * 2006-09-20 2008-04-03 Kanazawa Inst Of Technology 監視方法および監視システム
JP2010256045A (ja) * 2009-04-21 2010-11-11 Taisei Corp 広域・高精度人体検知センサ
JP2012209214A (ja) * 2011-03-30 2012-10-25 Panasonic Corp 照明システム
JP2013096947A (ja) * 2011-11-04 2013-05-20 Panasonic Corp 人センサ及び負荷制御システム


Also Published As

Publication number Publication date
JP2015184233A (ja) 2015-10-22
JP6396051B2 (ja) 2018-09-26

Similar Documents

Publication Publication Date Title
TWI759286B (zh) 用於藉由機器學習訓練物件分類器之系統及方法
JP6509275B2 (ja) 画像の背景差分に用いられる背景モデルを更新する方法及び装置
JP5675233B2 (ja) 情報処理装置、その認識方法及びプログラム
JP6482195B2 (ja) 画像認識装置、画像認識方法及びプログラム
CN105404884B (zh) 图像分析方法
JP2019145174A (ja) 画像処理システム、画像処理方法及びプログラム記憶媒体
JP6024658B2 (ja) 物体検出装置、物体検出方法及びプログラム
JP2007323572A (ja) 物体検出装置、物体検出方法および物体検出プログラム
EP3092619A1 (fr) Appareil de traitement d'informations et procédé de traitement d'informations
CN112703533A (zh) 对象跟踪
JP6652051B2 (ja) 検出システム、検出方法及びプログラム
JPWO2018061976A1 (ja) 画像処理装置
JP6809613B2 (ja) 画像前景の検出装置、検出方法及び電子機器
JP2009140307A (ja) 人物検出装置
Liu et al. Scene background estimation based on temporal median filter with Gaussian filtering
WO2015146582A1 (fr) Dispositif d'estimation d'état de zone, procédé d'estimation d'état de zone, et système de commande d'environnement
US9824462B2 (en) Method for detecting object and object detecting apparatus
Vishnyakov et al. Fast moving objects detection using ilbp background model
JP6326622B2 (ja) 人物検出装置
JP6177708B2 (ja) 動体検出装置、動体検出方法及び制御プログラム
JP5241687B2 (ja) 物体検出装置及び物体検出プログラム
JP6162492B2 (ja) 動体検出装置、動体検出方法及び制御プログラム
JP7435298B2 (ja) 物体検出装置および物体検出方法
JP6378483B2 (ja) 検出装置、検出対象物の検出方法及び制御プログラム
Lech The detection of horizontal lines based on the Monte Carlo reduced resolution images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15768949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15768949

Country of ref document: EP

Kind code of ref document: A1