WO2019082652A1 - Image sensor, person detection method, program, and control system - Google Patents

Image sensor, person detection method, program, and control system

Info

Publication number
WO2019082652A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
image sensor
unit
identification
person
Prior art date
Application number
PCT/JP2018/037693
Other languages
French (fr)
Japanese (ja)
Inventor
榎原 孝明
禎敏 齋藤
Original Assignee
株式会社 東芝
東芝インフラシステムズ株式会社
Priority date
Filing date
Publication date
Application filed by 株式会社 東芝, 東芝インフラシステムズ株式会社
Publication of WO2019082652A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • Embodiments of the present invention relate to an image sensor, a person detection method, a program and a control system.
  • a recent image sensor has a CPU (Central Processing Unit) and a memory, and can be regarded as an embedded computer with a lens. It also has advanced image processing functions and can analyze captured image data to calculate, for example, the presence or absence of people, or the number of people.
  • the interframe difference method is known as one of the methods of detecting moving objects from image data.
  • the principle is that a background image to be a reference is stored in advance, a change in luminance from the background image is evaluated for each pixel, and a person or the like is detected from the result.
  • the inter-frame difference method is prone to errors under different conditions from the scene in which the reference background image was taken. That is, it has a weak point that a change in luminance of the background image is erroneously detected as a person. Therefore, in the related art, false detection is prevented by selecting a background image for each scene.
  • an object of the present invention is to provide an image sensor, a person detection method, a program and a control system capable of preventing false detection due to a change in luminance.
  • the image sensor comprises an imaging unit, an extraction unit, an identification unit, and a detection unit.
  • the imaging unit captures an image of a target space to obtain image data.
  • the extraction unit extracts a motion feature amount from the image data.
  • the identification unit identifies a motion type based on the motion feature amount.
  • the detection unit detects a person in the target space based on the identification result.
  • FIG. 1 is a schematic view showing an example of a building management system provided with an image sensor according to the embodiment.
  • FIG. 2 is a diagram illustrating the appearance in the floor of the building.
  • FIG. 3 is a diagram showing an example of a communication network in a building.
  • FIG. 4 is a block diagram showing an example of the image sensor according to the embodiment.
  • FIG. 5 is a diagram showing an example of the flow of data in the image sensor according to the first embodiment.
  • FIG. 6 is a diagram for explaining processing in the motion extraction unit 33a.
  • FIG. 7 is a diagram showing an example of the movement type.
  • FIG. 8 is a diagram for explaining the process in the motion identification unit 33b.
  • FIG. 9 is a flowchart showing an example of the processing procedure in the image sensor shown in FIG. 5.
  • FIG. 10 is a diagram showing an example of the flow of data in the image sensor according to the second embodiment.
  • FIG. 11 is a diagram showing an example of the flow of data in the image sensor according to the third embodiment.
  • FIG. 12 is a diagram showing an example of the flow of data in the image sensor according to the fourth embodiment.
  • the image sensor can acquire various information as compared with a human sensor, a light sensor, an infrared sensor, and the like. If a fisheye lens or an ultra-wide-angle lens is used, an area that can be photographed by one image sensor can be enlarged, and distortion of the image can be corrected by calculation processing. It is also possible to give the image sensor a learning function.
  • FIG. 1 is a schematic view showing an example of a building management system provided with an image sensor according to the embodiment.
  • the lighting device 1, the air conditioner 2, and the image sensor 3 are provided for each floor of the building 100 and are communicably connected to the control device 40.
  • the control device 40 on each floor is communicably connected to a building monitoring device 50 provided, for example, in a building management center via the in-building network 500.
  • as a communication protocol of the in-building network 500, the Building Automation and Control Networking protocol (BACnet (registered trademark)) is representative.
  • the building monitoring device 50 can be connected to the cloud computing system (cloud) 200 via, for example, a Transmission Control Protocol / Internet Protocol (TCP / IP) -based communication network 600.
  • the cloud 200 includes a server 300 and a database 400, and provides services related to building management.
  • the lighting device 1, the outlet of the air conditioner 2, and the image sensor 3 are disposed, for example, on the ceiling of each floor.
  • the image sensor 3 captures an image captured within the field of view to acquire image data.
  • This image data is processed in the image sensor 3 to generate environmental information and / or personal information.
  • the lighting device 1 and the air conditioner 2 can be controlled using these pieces of information.
  • the image sensor 3 processes image data to obtain environmental information and personal information.
  • the environment information is information on the environment of the space (zone) of the imaging target.
  • the environmental information is information indicating the illuminance and temperature of the office.
  • Person information is information on humans in the target space.
  • the personal information is information indicating the presence or absence of a person (referred to as presence / absence), the number of people, the behavior of the person, the amount of activity of the person, and the like.
  • Each of the small areas obtained by dividing the zone into a plurality of areas is called an area.
  • environmental information and personal information can be calculated for each area.
  • walking / staying as one of personal information will be described.
  • the walking / staying is information indicating whether a person is walking or staying at one place.
  • FIG. 3 is a diagram showing an example of a communication network in the building 100.
  • the lighting device 1, the air conditioner 2, and the image sensor 3 are connected in a daisy chain shape via a signal line L.
  • one image sensor 3 is connected to the in-building network 500 via the gateway (GW) 7-1.
  • Each image sensor 3 is connected to the in-building network 500 via a LAN (Local Area Network) 10, a hub (Hub) 6 and a gateway (GW) 7-2. Thereby, the image data, the environment information and the person information acquired by the image sensor 3 are transmitted to the control device 4, the display device 11 and the building monitoring device 5 via the in-building network 500 independently of the signal line L.
  • each image sensor 3 can communicate with each other via the LAN 10.
  • the control device 4 generates control information for controlling the lighting device 1 and the air conditioner 2 based on the environment information and the person information sent from the image sensor 3.
  • the control information is sent to the lighting device 1 and the air conditioner 2 via the gateway 7-1 and the signal line L.
  • the display device 11 visually displays environment information and person information acquired from the image sensor 3 or various information acquired from the building monitoring device 5.
  • the wireless access point 8 is connected to, for example, the gateway 7-2.
  • the notebook computer 9 or the like having a wireless communication function can access the image sensor 3 via the gateway 7-2.
  • FIG. 4 is a block diagram showing an example of the image sensor 3 according to the embodiment.
  • the image sensor 3 includes a camera unit 31 as an imaging unit, a memory 32, a processor 33, and a communication unit 34. These are connected to one another via an internal bus 35.
  • the camera unit 31 includes a fisheye lens 31a, an aperture mechanism 31b, an image sensor 31c, and a register 30.
  • the fisheye lens 31a captures a space (target space) in the office floor in a view looking down from the ceiling and forms an image on the image sensor 31c.
  • the light quantity from the fisheye lens 31a is adjusted by the diaphragm mechanism 31b.
  • the image sensor 31c is, for example, a CMOS (complementary metal oxide semiconductor) sensor, and generates, for example, a video signal at a frame rate of 30 frames per second. This video signal is digitally encoded and output as image data.
  • the register 30 stores camera information 30a.
  • the camera information 30a is, for example, information on the camera unit 31 such as the state of the auto gain control function, the value of gain, exposure time, or information on the image sensor 3 itself.
  • the memory 32 is a semiconductor memory such as an SDRAM (Synchronous Dynamic RAM) or a non-volatile memory such as an EPROM (Erasable Programmable ROM), and stores a program 32b for causing the processor 33 to execute various functions according to the embodiment, as well as the image data 32a acquired by the camera unit 31.
  • the memory 32 further stores mask setting data 32c and a motion dictionary 32d.
  • the mask setting data 32 c is data used to distinguish an area to be image-processed and an area not to be image-processed in the field of view captured by the camera unit 31.
  • the mask setting data 32 c can be set to each image sensor 3 via the communication unit 34 from the notebook computer 9 (FIG. 3), for example.
  • the motion dictionary 32d is data in a table format in which motion feature quantities and motion types are associated with each other, and can be generated by, for example, a method such as machine learning.
  • the processor 33 loads and executes the program stored in the memory 32 to implement various functions described in the embodiment.
  • the processor 33 is, for example, a large scale integration (LSI) that includes a multi-core CPU (central processing unit) and is tuned to execute image processing at high speed.
  • the processor 15 can also be configured by an FPGA (Field Programmable Gate Array) or the like.
  • An MPU (Micro Processing Unit) is also one of the processors.
  • the communication unit 34 is connectable to the signal line L and the LAN 10, and mediates exchange of data with a communication partner including the building monitoring device 5, the notebook computer 9, and the other image sensor 3.
  • the processor 33 includes, as processing functions according to the embodiment, a motion extraction unit 33a, a motion identification unit 33b, a person detection unit 33c, a sensitivity setting unit 33d, and a camera information acquisition unit 33e.
  • the program 32b stored in the memory 32 is loaded into the registers of the processor 33, and accordingly these units can be understood as processes generated by the processor 33 performing arithmetic processing as the program proceeds. That is, the program 32b includes a motion extraction program, a motion identification program, a person detection program, a sensitivity setting program, and a camera information acquisition program.
  • the motion extraction unit 33a performs image processing on the image data 32a stored in the memory 32 according to a predetermined algorithm to extract a motion feature amount. For example, it is possible to calculate a motion feature amount by tracing a change in luminance of a frame included in image data for each pixel and analyzing its time series. For example, feature quantities such as histograms of oriented gradients (HOG) feature quantities, contrast, resolution, S / N ratio, and color tone are known. In addition, a luminance gradient direction co-occurrence histogram (Co-occurrence HOG: Co-HOG) feature, a Haar-Like feature, and the like are also known as a feature. The motion extraction unit 33a particularly extracts a motion feature that indicates the motion of the object in the field of view.
  • the motion identification unit 33 b identifies the motion type of the object by, for example, rule-based identification processing or machine learning identification processing based on the extracted motion feature amount.
  • the human detection unit 33c detects a human in the target space based on the result of the motion identification.
  • the sensitivity setting unit 33 d sets the sensitivity of the motion identification unit 33 b to identify the motion type.
  • the set value of the sensitivity is input, for example, from the notebook computer 9 (FIG. 3) via the communication unit 34.
  • the camera information acquisition unit 33 e acquires the camera information 30 a from the register 30 of the camera unit 31. Next, several embodiments will be described based on the above configuration.
  • FIG. 5 is a diagram showing an example of the flow of data in the image sensor according to the first embodiment.
  • the image data 32a acquired by the camera unit 31 is temporarily stored in the memory 32, and then sent to the motion extraction unit 33a.
  • the motion extraction unit 33a performs image processing on the image data 32a to extract a motion feature amount representing the type of motion from the image data 32a.
  • motion feature amounts can generally be calculated for each image frame. In addition, as shown in FIG. 6(a), it is also possible to divide the image frame or the target space into a plurality of small regions (blocks) and calculate a motion feature amount for each block, or for each medium region (area) that includes a plurality of blocks. Ultimately, a motion feature amount can be calculated for each pixel constituting the image frame.
  • usually, the unit for extracting a motion feature amount and the calculation range used in the extraction are set to be the same, but the two may differ as shown in FIG. 6(b).
  • for example, when the hatched block in FIG. 6(b) is the processing target, the region within the bold-line frame surrounding that block may be set as the calculation range for extracting the motion feature amount.
  • furthermore, a mask region may be set from the notebook computer 9, for example, and excluded from the calculation target. Limiting the region from which motion feature amounts are extracted in this way reduces the amount of computation and shortens the processing time.
  • the motion feature quantity extracted by the motion extraction unit 33a is passed to the motion identification unit 33b.
  • the motion identification unit 33b identifies the motion type in the image data 32a based on the motion feature amount. For example, as shown in FIG. 7, motion types relating to a person can be roughly classified into (office work) and (walking), and, including (other), three motion types can be identified. Each of these exhibits its own characteristic motion feature amounts.
  • the motion identification unit 33 b can extract motion feature amounts for each of a plurality of motion types.
  • the motion types detected by the change in luminance are roughly classified into moving and non-moving objects.
  • lighting control (turning lights on/off, dimming), changes in daylight, and the movement of a screen such as a display, projector, or television all exhibit motion feature amounts as non-moving objects.
  • These items all cause changes in the brightness of the image data, and may be detected as motion even though they themselves do not move.
  • for example, a screen saver on a personal computer screen or the motion of the blades of an electric fan corresponds to this.
  • moving objects are roughly divided into persons and non-persons.
  • non-persons (objects that are not persons) are distinguished, for example, into those whose motion has periodicity and those whose motion does not.
  • a person's office work and walking each exhibit their own characteristic motion feature amounts, and motion with feature amounts corresponding to neither is classified as (other).
  • since an object detected in the target space thus exhibits its own characteristic motion feature amount, this fact can be used to improve the accuracy of person detection.
  • when, as shown in FIG. 8(a), a feature amount indicating (person walking) is detected in, for example, three blocks (hatched regions), the motion extraction unit 33a passes the motion identification result (person walking) for these blocks to the person detection unit 33c (FIG. 5).
  • the person detection unit 33c determines that a person has been detected for the block, as shown in FIG. 8 (b).
  • the communication unit 34 in FIG. 5 sends the person detection result from the person detection unit 33c, the processing results of the motion extraction unit 33a and the motion identification unit 33b, processing data, parameters, and the like to the signal line L or the LAN 10 serving as a communication network.
  • these data and information can thereby be shared with the other image sensors 3, the building monitoring device 5, the notebook computer 9, and the like via the in-building network 500 and the like.
  • FIG. 9 is a flowchart showing an example of the processing procedure in the image sensor 3 shown in FIG. 5.
  • the image sensor 3 acquires image data by the camera unit 31 (step S1)
  • the image sensor 3 stores the image data in the memory 32 (step S2).
  • the image sensor 3 performs image processing on the image data 32a in the memory 32 to extract a motion feature amount (step S3).
  • the extracted motion feature may be stored in the memory 32.
  • the image sensor 3 identifies a motion type based on the extracted motion feature amount (step S4).
  • the image sensor 3 detects a person in the target space based on the result of motion identification (step S5).
  • the result of the human detection obtained in this step is communicated with another image sensor via the communication unit 34 (step S6) and shared.
  • the image sensor according to the first embodiment thus includes the motion extraction unit 33a, which extracts a motion feature amount based on, for example, the luminance change of the image data, and the motion identification unit 33b, which identifies the motion type of the target based on the extracted motion feature amount. When a motion type corresponding to the motion of a person is detected, the presence of a person in the target space is detected.
  • existing technology that relies on the inter-frame difference method is vulnerable to scene changes, so a large number of reference background images have to be prepared. If not enough background images are accumulated, the expected accuracy can be obtained only in a very limited environment, for example with a single light, no windows, and no outside light. Furthermore, in a home, for example, there is a risk that motion other than a person, such as a flickering television screen or a swaying curtain, may be detected as the motion of a person.
  • in contrast, in the first embodiment, the results of the motion identification are combined and it is determined whether the identified motion type is the motion of a person, thereby preventing non-human targets such as lighting control, brightness changes due to daylight, the paper feed of an office printer, screen savers, and electric fans from being falsely detected as persons. Furthermore, by installing the monitoring camera so that the target space is viewed from directly above through a fisheye lens, identification of office work versus walking can be facilitated.
  • FIG. 10 is a diagram showing an example of the flow of data in the image sensor according to the second embodiment.
  • the motion dictionary 32d of the image sensor shown in FIG. 10 includes a plurality of motion identification data (motion identification 1, motion identification 2, ...). Each motion identification data is prepared in advance, for example, for each motion type shown in FIG. 7.
  • the sensitivity setting unit 33 d sets, for each motion type, the sensitivity for identifying the motion type by the motion identification unit 33 b.
  • the sensitivity of identification can be varied for each movement type by individually setting the threshold value related to movement identification for each movement identification data.
  • the sensitivity of motion type identification may be set in units of the entire area of the captured image.
  • alternatively, the sensitivity may be variably set for each of a plurality of areas into which the target space is divided. That is, the identification sensitivity for a motion type can be set for each of the mesh-shaped areas in FIG. 6.
  • it can also be set more finely, in units of pixels or blocks.
  • the identification sensitivity may also be variably set based on the time series of the identification results for each motion type. That is, the sensitivity is raised at and around the position where a moving object was detected in the previous state, so that in the next identification that position and its surroundings are more easily identified as a "moving object". If no moving object was present, identification as a "non-moving object" is made easier.
  • the identification sensitivity for each motion type can also be variably set according to the state of the target space. For example, an image sensor installed near a south-facing window tends to detect changes in daylight during the daytime. For an image sensor in such an environment, the identification error can therefore be reduced by raising the sensitivity for identifying "daylight change" during the daytime, for example by a setting made from the notebook computer 9.
  • external information such as weather acquired from the communication unit 34 may be referred to.
  • the sensitivity of the "daylight change" identification is increased in the daytime time zone and when the weather is fine.
  • the sensitivity can also be set to change depending on the environment or time zone.
  • the sensitivity of identification can be variably set for each of a plurality of motion types. As a result, it is possible to improve the accuracy of the motion identification and further enhance the effect of preventing false detection due to a change in luminance.
  • FIG. 11 is a diagram showing an example of the flow of data in the image sensor according to the third embodiment.
  • the motion identification unit 33b acquires the camera information 30a from the register 30 of the camera unit 31. The identification of the motion type can thereby be linked with functions of the camera unit 31 such as automatic gain control. That is, the motion identification unit 33b identifies the motion type based on the extracted motion feature amount and the camera information 30a.
  • the sensitivity of identification of “lighting control” and / or “daylight change” is increased to make it easy to identify these motion types. This also improves the accuracy of motion identification, and can further enhance the effect of preventing false detection due to a change in luminance.
  • FIG. 12 is a diagram showing an example of the flow of data in the image sensor according to the fourth embodiment.
  • the memory 32 of the image sensor shown in FIG. 12 stores a motion dictionary 32d-1 including a plurality of motion identification data (motion identification 1, motion identification 2, ...) and a countermeasure dictionary 32d-2 for coping with non-detection / false detection.
  • the countermeasure dictionary 32d-2 also includes a plurality of motion identification data (motion identification 1, motion identification 2, ...). Each motion identification data is prepared in advance, for example, for each motion type in FIG. 7.
  • the motion identification unit 33b changes the motion identification data in the memory 32 (parameters, rules, dictionaries, and the like related to motion identification) so that the scene concerned is identified as a specific motion type in subsequent identification (see the illustrative sketch following these definitions).
  • for example, the motion of a person that was missed is made to be identified as "person",
  • an erroneously detected illumination change is made to be identified as "illumination change",
  • the paper feed of an erroneously detected printer or a fan is made to be identified as "non-person",
  • and the motion identification data is fed back so that a screen saver animation is identified as "non-moving".
  • motion identification data such as parameters / rules / dictionaries related to motion identification can be copied between the plurality of image sensors 3 by the communication unit 34.
  • adjustment man-hours can be shortened by copying the motion identification data of a base unit, for which adjustment has been completed at a certain property, to other image sensors.
  • the motion identification data can also be modified by online learning.
  • the sensitivity setting unit 33d can set, as fixed or variable, the number of image frames and/or the time span referred to by the motion extraction unit 33a in motion extraction processing. For example, when the frame rate of the camera unit 31 changes, the time span is fixed and the number of frames is made variable. Further, for example, when the previous state was identified as walking, identification is performed with a small number of frames, and when the previous state was identified as office work, identification is performed with a large number of frames. Furthermore, even if the reference number is fixed by default, the parameter setting can be changed from the outside by the notebook computer 9 or the like.
  • the sensitivity setting unit 33d may select the type of image data referred to by the motion extraction unit 33a during motion extraction processing from among an original image, an average image, an edge image, and the like. Only one type of image data may be referenced, or a plurality of types may be referenced. Furthermore, even if the type of reference image is fixed by default, it can be changed from the outside by the notebook computer 9 or the like.
  • the present invention is not limited to the above embodiment.
  • processing results such as inter-frame difference, background difference, and human shape recognition may also be given to the person detection unit 33c, and a person may be detected by comprehensively combining these pieces of information.
  • the unit of person detection may be an image unit or an area unit. Also, as shown in FIG. 8B, detection may be performed in individual person units. Furthermore, coordinates on the image of a person may be acquired as a result of person detection.
  • the motion identification unit 33 b and the camera unit 31 are linked.
  • the motion extraction unit 33a and the camera unit 31 may be linked.
  • the items of motion types may be increased to correspond to specific scenes and specific motion types.
  • the motion dictionaries used for identification may themselves be added to, for example a motion dictionary for persons who were missed, a dictionary for erroneously detected illumination changes, and the like.
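As referenced in the definitions above, the following hedged sketch shows one way the feedback against non-detection / false detection could work: the dictionary entry for the desired motion type is nudged toward the feature vector observed in the mis-handled scene so that the scene is identified correctly next time. The dictionary contents, the nearest-entry assumption, and the learning rate are all invented for illustration and are not the patented procedure.

    import numpy as np

    # Hypothetical motion identification data: motion type -> representative feature vector.
    motion_dictionary = {
        "person":              np.array([0.8, 0.6, 0.1]),
        "illumination_change": np.array([0.1, 0.1, 0.8]),
        "non_person":          np.array([0.2, 0.7, 0.3]),
    }

    def feed_back_misdetection(dictionary, observed_feature, correct_type, rate=0.2):
        # Pull the entry for the correct motion type toward the observed feature so that
        # the same scene is identified as that type in subsequent identification.
        observed = np.asarray(observed_feature, dtype=float)
        dictionary[correct_type] = (1 - rate) * dictionary[correct_type] + rate * observed
        return dictionary

    # Example: a person's motion was missed; adjust the "person" entry toward it.
    feed_back_misdetection(motion_dictionary, [0.9, 0.5, 0.2], "person")
    print(motion_dictionary["person"])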

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

According to an embodiment, this image sensor is equipped with an imaging unit, an extraction unit, an identification unit, and a detection unit. The imaging unit images a target space to acquire image data. The extraction unit extracts a motion feature amount from the image data. The identification unit identifies a motion type on the basis of the motion feature amount. The detection unit detects a person in the target space on the basis of the result of identification.

Description

Image sensor, person detection method, program, and control system
Embodiments of the present invention relate to an image sensor, a person detection method, a program, and a control system.
A recent image sensor has a CPU (Central Processing Unit) and a memory, and can be regarded as an embedded computer with a lens. It also has advanced image processing functions and can analyze captured image data to calculate, for example, the presence or absence of people, or the number of people.
The inter-frame difference method is known as one of the methods for detecting moving objects from image data. Its principle is that a reference background image is stored in advance, the change in luminance from the background image is evaluated for each pixel, and a person or the like is detected from the result.
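As an illustration of that principle, the following is a minimal per-pixel background-difference sketch in Python; the threshold value, array sizes, and function name are assumptions for illustration, not values taken from the specification.

    import numpy as np

    def background_difference_mask(frame, background, threshold=30):
        # Evaluate the change in luminance from the reference background per pixel.
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        return diff > threshold   # True where a moving object (e.g., a person) is suspected

    background = np.zeros((8, 8), dtype=np.uint8)   # stored reference background
    frame = background.copy()
    frame[2:4, 2:4] = 200                           # a bright region entering the scene
    print(background_difference_mask(frame, background).sum())   # -> 4 changed pixels

As the following paragraphs point out, a global change in illumination would flip many pixels of such a mask at once, which is exactly the false-detection mode this kind of method suffers from.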
Japanese Patent No. 5099904
The inter-frame difference method is prone to errors under conditions that differ from the scene in which the reference background image was captured; that is, it has the weakness that a change in luminance of the background is erroneously detected as a person. In the related art, therefore, false detection has been prevented by selecting a background image for each scene.
However, since a scene changes depending on the state of the lighting equipment (on/off, illuminance, etc.) and the state of outside light (intensity, angle of incidence, etc.), preparing a background image for each scene consumes many resources. For example, storing background images for all scenes in a space with many lights (such as an office) inflates the required storage capacity, and the number of steps required for adjustment also becomes enormous. Storing backgrounds during lighting control for every combination of scenes is even more unrealistic in terms of storage capacity and adjustment man-hours. Furthermore, since outside light is affected by weather, season, and so on, an adjustment period on the order of years would be needed to store background images for all scenes. Thus, as long as detection relies on luminance change alone, it is difficult to detect moving objects without error.
Therefore, an object of the present invention is to provide an image sensor, a person detection method, a program, and a control system capable of preventing false detection due to a change in luminance.
According to an embodiment, the image sensor comprises an imaging unit, an extraction unit, an identification unit, and a detection unit. The imaging unit captures an image of a target space to obtain image data. The extraction unit extracts a motion feature amount from the image data. The identification unit identifies a motion type based on the motion feature amount. The detection unit detects a person in the target space based on the identification result.
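A non-authoritative sketch of how these four units could be wired together as a single processing pipeline is shown below; all class, method, and attribute names are hypothetical and are not taken from the specification.

    class ImageSensorPipeline:
        # Imaging -> extraction -> identification -> detection, mirroring the claimed units.

        def __init__(self, imaging_unit, extraction_unit, identification_unit, detection_unit):
            self.imaging_unit = imaging_unit                  # captures the target space
            self.extraction_unit = extraction_unit            # extracts motion feature amounts
            self.identification_unit = identification_unit    # assigns a motion type per feature
            self.detection_unit = detection_unit              # decides whether a person is present

        def step(self):
            image_data = self.imaging_unit.capture()
            features = self.extraction_unit.extract(image_data)
            motion_types = [self.identification_unit.identify(f) for f in features]
            return self.detection_unit.detect(motion_types)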
FIG. 1 is a schematic view showing an example of a building management system provided with an image sensor according to the embodiment.
FIG. 2 is a diagram illustrating the appearance in the floor of the building.
FIG. 3 is a diagram showing an example of a communication network in a building.
FIG. 4 is a block diagram showing an example of the image sensor according to the embodiment.
FIG. 5 is a diagram showing an example of the flow of data in the image sensor according to the first embodiment.
FIG. 6 is a diagram for explaining processing in the motion extraction unit 33a.
FIG. 7 is a diagram showing an example of the movement type.
FIG. 8 is a diagram for explaining the process in the motion identification unit 33b.
FIG. 9 is a flowchart showing an example of the processing procedure in the image sensor shown in FIG. 5.
FIG. 10 is a diagram showing an example of the flow of data in the image sensor according to the second embodiment.
FIG. 11 is a diagram showing an example of the flow of data in the image sensor according to the third embodiment.
FIG. 12 is a diagram showing an example of the flow of data in the image sensor according to the fourth embodiment.
Compared with a human presence sensor, a light sensor, an infrared sensor, and the like, the image sensor can acquire a greater variety of information. If a fisheye lens or an ultra-wide-angle lens is used, the area that can be photographed by one image sensor can be enlarged, and distortion of the image can be corrected by calculation processing. It is also possible to give the image sensor a learning function.
FIG. 1 is a schematic view showing an example of a building management system provided with an image sensor according to the embodiment. In FIG. 1, lighting devices 1, air conditioners 2, and image sensors 3 are provided on each floor of a building 100 and are communicably connected to a control device 40. The control device 40 on each floor is communicably connected, via an in-building network 500, to a building monitoring device 50 provided in, for example, a building management center. As a communication protocol of the in-building network 500, the Building Automation and Control Networking protocol (BACnet (registered trademark)) is representative.
The building monitoring device 50 can be connected to a cloud computing system (cloud) 200 via, for example, a TCP/IP (Transmission Control Protocol / Internet Protocol) based communication network 600. The cloud 200 includes a server 300 and a database 400, and provides services related to building management.
As shown in FIG. 2, the lighting devices 1, the outlets of the air conditioners 2, and the image sensors 3 are disposed, for example, on the ceiling of each floor. The image sensor 3 captures the scene within its field of view to acquire image data. This image data is processed in the image sensor 3 to generate environment information and/or person information. The lighting devices 1 and the air conditioners 2 can be controlled using these pieces of information.
The image sensor 3 processes the image data to obtain environment information and person information. The environment information is information on the environment of the space (zone) being imaged; for example, it indicates the illuminance and temperature of the office. The person information is information on the humans in the target space; for example, it indicates the presence or absence of a person (referred to as presence/absence), the number of people, people's behavior, people's activity amount, and the like.
Each of the small regions obtained by dividing a zone into a plurality of parts is called an area. For example, environment information and person information can be calculated for each area. In the embodiment, walking/staying is described as one kind of person information. Walking/staying is information indicating whether a person is walking or staying in one place.
FIG. 3 is a diagram showing an example of a communication network in the building 100. In FIG. 3, the lighting devices 1, the air conditioners 2, and the image sensors 3 are connected in a daisy chain via a signal line L. Of these, for example, one image sensor 3 is connected to the in-building network 500 via a gateway (GW) 7-1. All the lighting devices 1, air conditioners 2, and image sensors 3 are thereby connected to the building monitoring device 5 via the in-building network 500.
Each image sensor 3 is connected to the in-building network 500 via a LAN (Local Area Network) 10, a hub (Hub) 6, and a gateway (GW) 7-2. The image data, environment information, and person information acquired by the image sensors 3 are thereby transmitted, independently of the signal line L, to the control device 4, the display device 11, and the building monitoring device 5 via the in-building network 500.
Furthermore, the image sensors 3 can communicate with one another via the LAN 10.
The control device 4 generates control information for controlling the lighting devices 1 and the air conditioners 2 based on the environment information and the person information sent from the image sensors 3. This control information is sent to the lighting devices 1 and the air conditioners 2 via the gateway 7-1 and the signal line L.
The display device 11 visually displays the environment information and person information acquired from the image sensors 3, or various information acquired from the building monitoring device 5.
Furthermore, a wireless access point 8 is connected to, for example, the gateway 7-2. A notebook computer 9 or the like having a wireless communication function can thereby access the image sensors 3 via the gateway 7-2.
FIG. 4 is a block diagram showing an example of the image sensor 3 according to the embodiment. The image sensor 3 includes a camera unit 31 as an imaging unit, a memory 32, a processor 33, and a communication unit 34. These are connected to one another via an internal bus 35.
The camera unit 31 includes a fisheye lens 31a, an aperture mechanism 31b, an image sensor 31c, and a register 30. The fisheye lens 31a captures the space in the office floor (target space) in a view looking down from the ceiling and forms an image on the image sensor 31c. The amount of light from the fisheye lens 31a is adjusted by the aperture mechanism 31b. The image sensor 31c is, for example, a CMOS (complementary metal oxide semiconductor) sensor, and generates a video signal at a frame rate of, for example, 30 frames per second. This video signal is digitally encoded and output as image data.
The register 30 stores camera information 30a. The camera information 30a is, for example, information on the camera unit 31, such as the state of the auto gain control function, the gain value, and the exposure time, or information on the image sensor 3 itself.
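A hedged sketch, using OpenCV, of acquiring frames at the 30 fps rate mentioned above and reading back gain and exposure properties analogous to the camera information 30a; the device index is an assumption, and whether a particular camera driver actually exposes these properties varies.

    import cv2

    cap = cv2.VideoCapture(0)              # ceiling camera, device index assumed
    cap.set(cv2.CAP_PROP_FPS, 30)          # request 30 frames per second

    ok, frame = cap.read()                 # one digitally encoded image frame
    if ok:
        camera_info = {                    # rough analogue of the camera information 30a
            "fps": cap.get(cv2.CAP_PROP_FPS),
            "gain": cap.get(cv2.CAP_PROP_GAIN),
            "exposure": cap.get(cv2.CAP_PROP_EXPOSURE),
        }
        print(frame.shape, camera_info)
    cap.release()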
The memory 32 is a semiconductor memory such as an SDRAM (Synchronous Dynamic RAM) or a non-volatile memory such as an EPROM (Erasable Programmable ROM), and stores a program 32b for causing the processor 33 to execute various functions according to the embodiment, the image data 32a acquired by the camera unit 31, and the like. The memory 32 further stores mask setting data 32c and a motion dictionary 32d.
The mask setting data 32c is data used to distinguish, within the field of view captured by the camera unit 31, the region to be image-processed from the region not to be image-processed. The mask setting data 32c can be set in each image sensor 3 via the communication unit 34 from, for example, the notebook computer 9 (FIG. 3).
The motion dictionary 32d is data in a table format in which motion feature amounts and motion types are associated with each other, and can be generated by, for example, a technique such as machine learning.
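Since the motion dictionary 32d is described only as a table associating motion feature amounts with motion types, the following is one minimal way such a table could be represented and queried (a nearest-centroid lookup); the type labels and centroid values are invented for illustration.

    import numpy as np

    # Hypothetical dictionary: motion type -> representative motion feature vector.
    MOTION_DICTIONARY = {
        "person_walking":    np.array([0.8, 0.6, 0.1]),
        "person_officework": np.array([0.3, 0.2, 0.1]),
        "non_moving":        np.array([0.05, 0.05, 0.9]),
    }

    def lookup_motion_type(feature):
        # Return the motion type whose dictionary entry is closest to the observed feature.
        feature = np.asarray(feature, dtype=float)
        return min(MOTION_DICTIONARY,
                   key=lambda t: np.linalg.norm(MOTION_DICTIONARY[t] - feature))

    print(lookup_motion_type([0.75, 0.55, 0.2]))   # -> person_walking

In practice the specification says such a dictionary can be generated by machine learning, so the entries above stand in for whatever table or model that training would produce.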
The processor 33 loads and executes the program stored in the memory 32 to realize the various functions described in the embodiment. The processor 33 is, for example, an LSI (Large Scale Integration) that includes a multi-core CPU (Central Processing Unit) and is tuned to execute image processing at high speed. The processor 15 can also be configured by an FPGA (Field Programmable Gate Array) or the like. An MPU (Micro Processing Unit) is also one type of processor.
The communication unit 34 is connectable to the signal line L and the LAN 10, and mediates the exchange of data with communication partners including the building monitoring device 5, the notebook computer 9, and the other image sensors 3.
The processor 33 includes, as processing functions according to the embodiment, a motion extraction unit 33a, a motion identification unit 33b, a person detection unit 33c, a sensitivity setting unit 33d, and a camera information acquisition unit 33e. These units can be understood as processes generated when the program 32b stored in the memory 32 is loaded into the registers of the processor 33 and the processor 33 performs arithmetic processing as the program proceeds. That is, the program 32b includes a motion extraction program, a motion identification program, a person detection program, a sensitivity setting program, and a camera information acquisition program.
The motion extraction unit 33a performs image processing on the image data 32a stored in the memory 32 according to a predetermined algorithm to extract a motion feature amount. For example, the motion feature amount can be calculated by tracing the change in luminance of the frames included in the image data for each pixel and analyzing its time series. Known feature quantities include, for example, histograms of oriented gradients (HOG) features, contrast, resolution, S/N ratio, and color tone. Co-occurrence histograms of oriented gradients (Co-occurrence HOG: Co-HOG) features, Haar-like features, and the like are also known. The motion extraction unit 33a, in particular, extracts a motion feature amount that indicates the motion of an object within the field of view.
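The following is a minimal sketch, assuming grayscale frames stacked in a NumPy array, of tracing per-pixel luminance over time and summarizing it into two simple motion statistics; these statistics are illustrative stand-ins for the HOG-style feature quantities named above, not the patented features themselves.

    import numpy as np

    def extract_motion_features(frames):
        # frames: array of shape (T, H, W), a short time series of grayscale frames.
        frames = frames.astype(np.float32)
        temporal_diff = np.abs(np.diff(frames, axis=0))  # luminance change per pixel per step
        mean_change = temporal_diff.mean(axis=0)         # average motion energy per pixel
        var_change = temporal_diff.var(axis=0)           # regularity of that motion
        return mean_change, var_change

    rng = np.random.default_rng(0)
    frames = rng.integers(0, 10, size=(10, 16, 16)).astype(np.uint8)
    frames[::2, 4:8, 4:8] += 100                         # a flickering region
    mean_change, _ = extract_motion_features(frames)
    print(round(float(mean_change[5, 5]), 1), round(float(mean_change[0, 0]), 1))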
The motion identification unit 33b identifies the motion type of the target based on the extracted motion feature amount by, for example, rule-based identification processing or identification processing based on machine learning.
The person detection unit 33c detects a person in the target space based on the result of the motion identification.
The sensitivity setting unit 33d sets the sensitivity with which the motion identification unit 33b identifies motion types. The sensitivity setting value is input, for example, from the notebook computer 9 (FIG. 3) via the communication unit 34.
The camera information acquisition unit 33e acquires the camera information 30a from the register 30 of the camera unit 31. Next, several embodiments will be described based on the above configuration.
[First Embodiment]
FIG. 5 is a diagram showing an example of the flow of data in the image sensor according to the first embodiment. In FIG. 5, the image data 32a acquired by the camera unit 31 is temporarily stored in the memory 32 and then sent to the motion extraction unit 33a. The motion extraction unit 33a performs image processing on the image data 32a to extract, from the image data 32a, a motion feature amount representing the type of motion.
In general, motion feature amounts can be calculated for each image frame. Alternatively, as shown in FIG. 6(a), it is also possible to divide the image frame or the target space into a plurality of small regions (blocks) and calculate a motion feature amount for each block. It is also possible to calculate a motion feature amount for each medium region (area) that includes a plurality of blocks. Ultimately, a motion feature amount can be calculated for each pixel constituting the image frame.
Usually, the unit for extracting a motion feature amount and the calculation range used in the extraction are set to be the same, but the two may differ as shown in FIG. 6(b). For example, when the hatched block in FIG. 6(b) is the processing target, the region within the bold-line frame surrounding that block may be set as the calculation range for extracting the motion feature amount. Furthermore, a mask region may be set from, for example, the notebook computer 9 and excluded from the calculation target. Limiting the region from which motion feature amounts are extracted in this way reduces the amount of computation and shortens the processing time.
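A sketch of the block-wise computation described above, under assumed block and frame sizes: blocks listed in a mask set are skipped, and each remaining block's feature is computed over a calculation range one block larger than the block itself (the bold-line frame of FIG. 6(b)). The helper name and sizes are assumptions.

    import numpy as np

    def block_features(diff_image, block=8, masked_blocks=frozenset()):
        # diff_image: (H, W) per-pixel luminance-change image.
        # Returns {(by, bx): feature}, skipping blocks excluded by the mask setting.
        h, w = diff_image.shape
        features = {}
        for by in range(h // block):
            for bx in range(w // block):
                if (by, bx) in masked_blocks:          # mask region: excluded from calculation
                    continue
                y0, x0 = max(0, (by - 1) * block), max(0, (bx - 1) * block)
                y1, x1 = min(h, (by + 2) * block), min(w, (bx + 2) * block)
                features[(by, bx)] = float(diff_image[y0:y1, x0:x1].mean())
        return features

    diff = np.zeros((32, 32)); diff[8:16, 8:16] = 50.0
    print(len(block_features(diff, masked_blocks={(0, 0)})))   # -> 15 of 16 blocks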
Returning to FIG. 5, the motion feature amount extracted by the motion extraction unit 33a is passed to the motion identification unit 33b. The motion identification unit 33b identifies the motion type in the image data 32a based on the motion feature amount. For example, as shown in FIG. 7, motion types relating to a person can be roughly classified into (office work) and (walking), and, including (other), three motion types can be identified. Each of these exhibits its own characteristic motion feature amounts. In this way, the motion identification unit 33b can extract motion feature amounts for each of a plurality of motion types.
In FIG. 7, the motion types detected from luminance changes are roughly classified into moving objects and non-moving objects. Lighting control (turning lights on/off, dimming), changes in daylight, and the movement of a screen such as a display, projector, or television all exhibit motion feature amounts as non-moving objects. All of these items cause changes in the luminance of the image data and may therefore be detected as motion even though they do not themselves move. For example, a screen saver on a personal computer screen or the motion of the blades of an electric fan corresponds to this.
Moving objects are roughly divided into persons and non-persons. Non-persons (objects that are not persons) are distinguished, for example, into those whose motion has periodicity and those whose motion does not. A person's office work and walking each exhibit their own characteristic motion feature amounts, and motion with feature amounts corresponding to neither is classified as (other). Since an object detected in the target space thus exhibits its own characteristic motion feature amount, this fact can be used to improve the accuracy of person detection.
As shown in FIG. 8(a), suppose that a feature amount indicating (person walking) is detected in, for example, three blocks (hatched regions). The motion extraction unit 33a then passes the motion identification result (person walking) for these blocks to the person detection unit 33c (FIG. 5). The person detection unit 33c determines that a person has been detected for those blocks, as shown in FIG. 8(b). The communication unit 34 in FIG. 5 sends the person detection result from the person detection unit 33c, the processing results of the motion extraction unit 33a and the motion identification unit 33b, processing data, parameters, and the like to the signal line L or the LAN 10 serving as a communication network. These data and information can thereby be shared with the other image sensors 3, the building monitoring device 5, the notebook computer 9, and the like via the in-building network 500 and the like.
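A minimal sketch of the decision illustrated in FIG. 8: if the identified motion type of any block corresponds to a person (walking or office work), the detector reports a person for those blocks. The label strings are assumptions carried over from the earlier sketches.

    PERSON_TYPES = {"person_walking", "person_officework"}

    def detect_person(block_motion_types):
        # block_motion_types: {(by, bx): motion_type} from the identification step.
        person_blocks = [b for b, t in block_motion_types.items() if t in PERSON_TYPES]
        return len(person_blocks) > 0, person_blocks

    print(detect_person({(0, 0): "non_moving",
                         (0, 1): "person_walking",
                         (1, 1): "person_walking"}))   # -> (True, [(0, 1), (1, 1)])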
FIG. 9 is a flowchart showing an example of the processing procedure in the image sensor 3 shown in FIG. 5. The image sensor 3 acquires image data with the camera unit 31 (step S1) and stores the image data in the memory 32 (step S2).
Next, the image sensor 3 performs image processing on the image data 32a in the memory 32 to extract a motion feature amount (step S3). The extracted motion feature amount may be stored in the memory 32. Next, the image sensor 3 identifies the motion type based on the extracted motion feature amount (step S4). The image sensor 3 then detects a person in the target space based on the result of the motion identification (step S5). The person detection result obtained in this step is communicated to the other image sensors via the communication unit 34 (step S6) and shared.
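As a non-authoritative illustration, steps S1 to S6 can be read as the following loop; the unit objects are the hypothetical ones sketched earlier and the sharing step is reduced to a placeholder call.

    def sensor_loop(camera, memory, extractor, identifier, detector, comm, n_cycles=1):
        for _ in range(n_cycles):
            image_data = camera.capture()                     # S1: acquire image data
            memory.append(image_data)                         # S2: store it in the memory
            features = extractor.extract(memory[-10:])        # S3: extract motion features
            motion_types = {b: identifier.identify(f)         # S4: identify motion types
                            for b, f in features.items()}
            present, blocks = detector.detect(motion_types)   # S5: detect a person
            comm.share({"present": present, "blocks": blocks})  # S6: share the result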
 以上説明したように、第1の実施形態に係る画像センサは、例えば画像データの輝度変化に基づいて動き特徴量を抽出する動き抽出部33aと、抽出された動き特徴量に基づいて対象の動き種別を識別する動き識別部33bとを備える。そして、人の動きに対応する動き種別が検出された場合に、対象空間おける人物の存在を検知するようにした。 As described above, in the image sensor according to the first embodiment, for example, the motion extracting unit 33a that extracts the motion feature amount based on the luminance change of the image data, and the motion of the target based on the extracted motion feature amount And a motion identification unit 33b for identifying the type. Then, when a movement type corresponding to a movement of a person is detected, the presence of a person in the target space is detected.
 フレーム間差分法に頼った既存の技術では、シーンの変化に弱いので、基準となる背景画像を大量に用意せざるを得なかった。背景画像の蓄積が十分でないと、期待した精度は、例えば照明が1基しかなく、窓もなく外光が入らないという、極めて限定された環境でしか得られない。さらに、例えば、ホームにおいてはテレビ画面やカーテンの揺れなど、人物以外の動きも人の動きとして検知される怖れがある。 The existing technology that relies on the inter-frame difference method is vulnerable to changes in the scene, so it has to prepare a large number of reference background images. If the background image is not accumulated enough, the expected accuracy can only be obtained in a very limited environment, eg with only one illumination, no windows and no ambient light. Furthermore, for example, at home, there is a fear that movements other than the person, such as the shaking of a television screen or a curtain, may be detected as the movement of the person.
 In contrast, in the first embodiment, the results of motion identification are combined and it is determined whether the identified motion type corresponds to human motion. This prevents non-human targets, such as lighting control, luminance changes due to daylight, the paper feed of an office printer, screen savers, and electric fans, from being falsely detected as a person. Furthermore, installing the monitoring camera so that it looks straight down on the target space through a fisheye lens makes it easier to distinguish office work from walking.
 It is therefore possible to provide an image sensor, a person detection method, a program, and a control system capable of preventing false detection due to luminance changes.
 [Second Embodiment]
 FIG. 10 is a diagram showing an example of the data flow in the image sensor according to the second embodiment. The motion dictionary 32d of the image sensor shown in FIG. 10 includes a plurality of motion identification data sets (motion identification 1, motion identification 2, ...). Each motion identification data set is prepared in advance, for example for each of the motion types shown in FIG. 7.
 The sensitivity setting unit 33d sets the sensitivity with which the motion identification unit 33b identifies each motion type, on a per-motion-type basis. For example, by setting the threshold used for motion identification individually for each motion identification data set, the identification sensitivity can be varied for each motion type.
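 One minimal way to realize per-motion-type sensitivity, assuming identification reduces to comparing a per-type score against a per-type threshold, is sketched below; the type names and threshold values are illustrative only:

```python
# Assumed per-motion-type thresholds; lowering a threshold raises that type's sensitivity.
DEFAULT_THRESHOLDS = {
    "walking": 0.50,
    "office_work": 0.60,
    "lighting_control": 0.70,
    "daylight_change": 0.70,
}

def identify(scores: dict, thresholds: dict = DEFAULT_THRESHOLDS):
    """Return the motion types whose score clears their own threshold."""
    return [t for t, s in scores.items() if s >= thresholds.get(t, 0.5)]

# Lowering only the "walking" threshold raises its sensitivity without touching the others.
thresholds = dict(DEFAULT_THRESHOLDS, walking=0.35)
print(identify({"walking": 0.4, "daylight_change": 0.6}, thresholds))  # -> ['walking']
```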
 The sensitivity of motion type identification may be set for the entire region of the captured image as a unit. Alternatively, it may be variably set for each of a plurality of areas into which the target space is divided. That is, the identification sensitivity can be set for each of the mesh-shaped areas shown in FIG. 6. It can also be set at a finer granularity, per pixel or per block.
 The identification sensitivity may also be variably set based on the time series of the identification results for each motion type. That is, the sensitivity is raised at and around the position where a moving object was detected in the previous state, so that in the next identification the previously detected position and its surroundings are more readily identified as a "moving object". If no moving object was present, those locations are made easier to identify as "non-moving".
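 A hypothetical sketch of such history-based sensitivity, assuming a per-block threshold map in which a lower threshold means higher sensitivity, might look as follows; the boost factor and neighbourhood radius are assumptions:

```python
import numpy as np

def update_thresholds(thresholds: np.ndarray, last_detection: np.ndarray,
                      boost: float = 0.7, radius: int = 1) -> np.ndarray:
    """Lower the thresholds around blocks where a moving object was detected last time."""
    out = thresholds.copy()
    ys, xs = np.nonzero(last_detection)           # blocks flagged as "moving" in the previous state
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(out.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(out.shape[1], x + radius + 1)
        out[y0:y1, x0:x1] *= boost                # lower threshold = higher sensitivity
    return out
```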
 The identification sensitivity for each motion type can also be variably set according to the state of the target space. For example, an image sensor installed near a south-facing window tends to detect daylight changes during the daytime. For an image sensor in such an environment, the identification error can therefore be reduced by raising the sensitivity for identifying "daylight change" during daytime hours, for example via a setting made from the notebook computer 9.
 Furthermore, external information such as the weather, acquired via the communication unit 34, may be referred to. For example, for an image sensor installed near a south-facing window, the sensitivity for identifying "daylight change" is raised when it is daytime and the weather is sunny. In this way, the sensitivity can also be set to change with the environment and the time of day.
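 For illustration only, such context-dependent sensitivity could be sketched as below; the sensor placement flag, the daytime window, and the weather string are assumed to come from external configuration and are not defined in the embodiment:

```python
def daylight_change_threshold(hour: int, south_window: bool, weather: str,
                              base: float = 0.7, boosted: float = 0.4) -> float:
    """Return the 'daylight change' threshold; a lower value means higher sensitivity."""
    daytime = 8 <= hour <= 17
    if south_window and daytime and weather == "sunny":
        return boosted    # near a south-facing window, in daytime, on a sunny day
    return base

print(daylight_change_threshold(13, south_window=True, weather="sunny"))   # 0.4
print(daylight_change_threshold(21, south_window=True, weather="sunny"))   # 0.7
```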
 As described above, in the second embodiment the identification sensitivity can be variably set for each of a plurality of motion types. This improves the accuracy of motion identification and, in turn, further enhances the effect of preventing false detection due to luminance changes.
 [Third Embodiment]
 FIG. 11 is a diagram showing an example of the data flow in the image sensor according to the third embodiment. In FIG. 11, the motion identification unit 33b acquires camera information 30a from the register 30 of the camera unit 31. This allows motion type identification to be coordinated with functions of the camera unit 31 such as automatic gain control. That is, the motion identification unit 33b identifies the motion type based on the extracted motion feature amount and the camera information 30a.
 For example, when the gain or the exposure time changes, the sensitivity for identifying "lighting control" and/or "daylight change" is raised so that these motion types are more readily identified. This also improves the accuracy of motion identification and further enhances the effect of preventing false detection due to luminance changes.
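 A hedged sketch of this camera linkage, assuming the camera information is reduced to a gain value and an exposure time and that sensitivity is again expressed as a threshold, is shown below; the field and type names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CameraInfo:
    gain: float
    exposure_ms: float

def adjust_for_camera(thresholds: dict, prev: CameraInfo, curr: CameraInfo,
                      boost: float = 0.6) -> dict:
    """If gain or exposure changed, make the lighting-related types easier to identify."""
    out = dict(thresholds)
    if prev.gain != curr.gain or prev.exposure_ms != curr.exposure_ms:
        for t in ("lighting_control", "daylight_change"):
            out[t] = out.get(t, 0.7) * boost
    return out
```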
 [Fourth Embodiment]
 FIG. 12 is a diagram showing an example of the data flow in the image sensor according to the fourth embodiment. The memory 32 of the image sensor shown in FIG. 12 stores a motion dictionary 32d-1 containing a plurality of motion identification data sets (motion identification 1, motion identification 2, ...) and a countermeasure dictionary 32d-2 for dealing with missed detections and false detections. The countermeasure dictionary 32d-2 also contains a plurality of motion identification data sets (motion identification 1, motion identification 2, ...). Each motion identification data set is prepared in advance, for example for each motion type in FIG. 7.
 In FIG. 12, when a missed detection or a false detection occurs, the motion identification unit 33b changes the motion identification data (motion identification parameters, rules, dictionaries, and so on) in the memory 32 so that the scene will be identified as the appropriate motion type in subsequent identifications.
 For example, the motion identification data are updated by back-propagating corrections so that the motion of a person who went undetected is identified as "person", a falsely detected illumination change as "illumination change", a falsely detected printer paper feed or electric fan as "non-person", and a screen saver animation as "non-moving". This reduces the frequency of missed and false detections and, in turn, improves the accuracy of person detection.
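 One possible, purely hypothetical realization of the countermeasure dictionary is sketched below: feature vectors from missed or falsely detected scenes are stored together with the label they should have received, and a nearest-neighbour lookup against this store overrides the base identification. This illustrates the idea only and is not the update rule of the embodiment:

```python
import numpy as np

class CountermeasureDictionary:
    def __init__(self, max_distance: float = 0.5):
        self.features, self.labels = [], []
        self.max_distance = max_distance

    def add_correction(self, feature: np.ndarray, correct_label: str):
        """Register a scene that was missed or falsely detected, with its correct motion type."""
        self.features.append(np.asarray(feature, dtype=float))
        self.labels.append(correct_label)

    def lookup(self, feature: np.ndarray):
        """Return the corrected label if a similar scene was registered, otherwise None."""
        if not self.features:
            return None
        d = np.linalg.norm(np.stack(self.features) - np.asarray(feature, dtype=float), axis=1)
        i = int(np.argmin(d))
        return self.labels[i] if d[i] <= self.max_distance else None
```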
 Note that information on motion identification, such as parameters, rules, and dictionaries (motion identification data), can be copied between the plurality of image sensors 3 via the communication unit 34. For example, the adjustment effort can be reduced by copying the motion identification of a reference unit whose adjustment has already been completed at one site to other image sensors. The motion identification data can also be modified through online learning.
 [Other Embodiments]
 The sensitivity setting unit 33d can set, as fixed or variable, the number of image frames and/or the time span that the motion extraction unit 33a refers to during motion extraction. For example, when the frame rate of the camera unit 31 changes, the time span is kept fixed and the number of frames is varied. Also, for example, when the previous state was identified as walking, identification uses a small number of frames, and when the previous state was identified as office work, identification uses a larger number of frames. Furthermore, even if the number of reference frames is fixed by default, the parameter setting can be changed externally from the notebook computer 9 or the like.
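 By way of a sketch, the reference-frame selection described above might be expressed as follows, keeping the observation window at a fixed duration and adjusting it according to the previously identified motion type; the window length and the halving/doubling rule are assumptions:

```python
from typing import Optional

def frames_to_reference(frame_rate_hz: float, window_s: float = 1.0,
                        previous_type: Optional[str] = None) -> int:
    """Number of frames to reference: fixed time span, frame count follows the frame rate."""
    n = max(2, round(frame_rate_hz * window_s))
    if previous_type == "walking":
        n = max(2, n // 2)      # fast, large motion: fewer frames suffice
    elif previous_type == "office_work":
        n = n * 2               # subtle motion: observe over more frames
    return n

print(frames_to_reference(10, previous_type="walking"))      # 5
print(frames_to_reference(5, previous_type="office_work"))   # 10
```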
 The sensitivity setting unit 33d may also allow the type of image data that the motion extraction unit 33a refers to during motion extraction to be selected from an original image, an average image, an edge image, and so on. Only one type of image data may be referred to, or several types may be referred to in combination. Furthermore, even if the type of reference image is fixed by default, it can be changed externally from the notebook computer 9 or the like.
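 A minimal sketch of selecting the reference image type is given below; the particular preprocessing shown (temporal average, gradient-magnitude edges) is one common choice and only an assumption here:

```python
import numpy as np

def prepare_reference(frames, kind: str = "original"):
    """Return the image the motion extraction refers to: original, average, or edge image."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    if kind == "original":
        return stack[-1]
    if kind == "average":
        return stack.mean(axis=0)           # temporal average image
    if kind == "edge":
        gy, gx = np.gradient(stack[-1])     # simple gradient-magnitude edge image
        return np.hypot(gy, gx)
    raise ValueError(f"unknown image kind: {kind}")
```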
 As described above, according to each of the embodiments it is possible to provide an image sensor, a person detection method, a program, and a control system capable of preventing false detection due to luminance changes.
 The present invention is not limited to the above embodiments. For example, the person detection unit 33c may be given not only the result of motion identification by the motion identification unit 33b but also the results of processing such as inter-frame difference, background subtraction, and human shape recognition, and may detect a person by comprehensively combining these pieces of information.
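 As an illustration of such a combination, a simple weighted vote over the individual cues could look as follows; the weights and the decision threshold are assumptions, not the combination rule of the embodiment:

```python
def combined_person_score(motion_type_is_person: bool, frame_diff_hit: bool,
                          background_diff_hit: bool, shape_hit: bool) -> float:
    """Weighted vote over motion identification, frame difference, background subtraction, and shape recognition."""
    weights = {"motion": 0.4, "frame": 0.2, "background": 0.2, "shape": 0.2}
    return (weights["motion"] * motion_type_is_person
            + weights["frame"] * frame_diff_hit
            + weights["background"] * background_diff_hit
            + weights["shape"] * shape_hit)   # e.g. treat a score >= 0.5 as "person detected"
```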
 The unit of person detection may be the whole image or an area. Detection may also be performed for each individual person, as shown in FIG. 8(b). Furthermore, the coordinates of a person in the image may be obtained as a result of person detection.
 In the third embodiment, the motion identification unit 33b and the camera unit 31 are coordinated. Alternatively, the motion extraction unit 33a and the camera unit 31 may be coordinated. In this way, the motion feature amount can be extracted with the gain, exposure time, and the like taken into account, which can improve the accuracy of person detection.
 In the fourth embodiment, the number of motion type items may be increased to handle specific scenes and specific motion types. Alternatively, the number of motion dictionaries used for identification may itself be increased, for example by adding a motion dictionary for persons who went undetected or a dictionary for falsely detected illumination changes.
 While embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are included in the invention described in the claims and their equivalents.

Claims (13)

  1.  An image sensor comprising:
     an imaging unit that images a target space to acquire image data;
     an extraction unit that extracts a motion feature amount from the image data;
     an identification unit that identifies a motion type based on the motion feature amount; and
     a detection unit that detects a person in the target space based on a result of the identification.
  2.  The image sensor according to claim 1, wherein the identification unit is capable of extracting a motion feature amount for each of a plurality of motion types.
  3.  The image sensor according to claim 2, further comprising a setting unit configured to set an identification sensitivity for each of the plurality of motion types.
  4.  The image sensor according to claim 3, wherein the setting unit variably sets the sensitivity for each of a plurality of areas into which the target space is divided.
  5.  The image sensor according to claim 3, wherein the setting unit variably sets the sensitivity based on a result of identification for each of the motion types.
  6.  The image sensor according to claim 3, wherein the setting unit variably sets the sensitivity according to a state of the target space.
  7.  The image sensor according to claim 1, further comprising an acquisition unit configured to acquire information on the imaging unit,
     wherein the identification unit identifies the motion type based on the motion feature amount and the information on the imaging unit.
  8.  The image sensor according to claim 1, further comprising a communication unit that sends the result of the person detection to a communication network.
  9.  The image sensor according to claim 1, further comprising a setting unit for variably setting the number of frames of image data referred to by the extraction unit.
  10.  The image sensor according to claim 1, further comprising a setting unit for selecting a type of image data referred to by the extraction unit.
  11.  A person detection method executed by a computer that images a target space and acquires image data, the method comprising:
     extracting, by the computer, a motion feature amount from the image data;
     identifying, by the computer, a motion type based on the motion feature amount; and
     detecting, by the computer, a person in the target space based on a result of the identification.
  12.  A program for causing a computer of an image sensor that images a target space and acquires image data to execute:
     a process of extracting a motion feature amount from the image data;
     a process of identifying a motion type based on the motion feature amount; and
     a process of detecting a person in the target space based on a result of the identification.
  13.  A control system comprising:
     the image sensor according to any one of claims 1 to 10, which images a target space; and
     a control device that controls equipment provided in the target space based on a result of detection of a person in the target space by the image sensor.
PCT/JP2018/037693 2017-10-25 2018-10-10 Image sensor, person detection method, program, and control system WO2019082652A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-205805 2017-10-25
JP2017205805A JP7002912B2 (en) 2017-10-25 2017-10-25 Image sensors, person detection methods, programs and control systems

Publications (1)

Publication Number Publication Date
WO2019082652A1 true WO2019082652A1 (en) 2019-05-02

Family

ID=66246857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/037693 WO2019082652A1 (en) 2017-10-25 2018-10-10 Image sensor, person detection method, program, and control system

Country Status (2)

Country Link
JP (1) JP7002912B2 (en)
WO (1) WO2019082652A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7472964B2 (en) 2020-02-27 2024-04-23 コニカミノルタ株式会社 Person determination system and person determination program

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04192781A (en) * 1990-11-27 1992-07-10 Toshiba Corp Tracking device
JPH07271426A (en) * 1994-03-28 1995-10-20 Sharp Corp Equipment controller and control system for indoor equipment
JPH11203481A (en) * 1998-01-20 1999-07-30 Mitsubishi Heavy Ind Ltd Moving body identifying device
JP2001243475A (en) * 2000-02-25 2001-09-07 Secom Co Ltd Image sensor
JP2004236088A (en) * 2003-01-31 2004-08-19 Funai Electric Co Ltd Security system
JP2006311367A (en) * 2005-04-28 2006-11-09 Toshiba Corp Imaging apparatus and imaging method
JP2007180932A (en) * 2005-12-28 2007-07-12 Secom Co Ltd Image sensor
JP2010097265A (en) * 2008-10-14 2010-04-30 Nohmi Bosai Ltd Smoke detecting apparatus
JP2016170502A (en) * 2015-03-11 2016-09-23 株式会社東芝 Moving body detection apparatus, moving body detection method and computer program
WO2017047494A1 (en) * 2015-09-18 2017-03-23 株式会社日立国際電気 Image-processing device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4981610B2 (en) 2007-10-05 2012-07-25 三菱電機ビルテクノサービス株式会社 Air conditioning control system
SG185350A1 (en) 2011-05-13 2012-12-28 Toshiba Kk Energy management system
JP2016171526A (en) 2015-03-13 2016-09-23 株式会社東芝 Image sensor, person detection method, control system, control method, and computer program
JP6555617B2 (en) 2015-12-16 2019-08-07 パナソニックIpマネジメント株式会社 Human detection system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
S. RAHIMI ET AL.: "Human Detection and Tracking Using New Features Combination in Particle Filter Framework", 2013 8TH IRANIAN CONFERENCE ON MACHINE VISION AND IMAGE PROCESSING (MVIP, 10 September 2013 (2013-09-10), New Jersey, pages 349 - 354, XP032583126, DOI: doi:10.1109/IranianMVIP.2013.6780009 *

Also Published As

Publication number Publication date
JP7002912B2 (en) 2022-01-20
JP2019080177A (en) 2019-05-23

Similar Documents

Publication Publication Date Title
US10445590B2 (en) Image processing apparatus and method and monitoring system
US9295141B2 (en) Identification device, method and computer program product
JP2016171526A (en) Image sensor, person detection method, control system, control method, and computer program
US11089228B2 (en) Information processing apparatus, control method of information processing apparatus, storage medium, and imaging system
EP4035070B1 (en) Method and server for facilitating improved training of a supervised machine learning process
CN111259763B (en) Target detection method, target detection device, electronic equipment and readable storage medium
WO2019087742A1 (en) Image sensor, sensing method, control system and program
WO2019082652A1 (en) Image sensor, person detection method, program, and control system
WO2013114803A1 (en) Image processing device, image processing method therefor, computer program, and image processing system
EP3529788B1 (en) Presence detection system and method
JP2018084861A (en) Information processing apparatus, information processing method and information processing program
KR20200009530A (en) System and method for detecting abnormal object
JP7286747B2 (en) Image sensor, motion detection method, program and control system
KR102471441B1 (en) Vision inspection system for detecting failure based on deep learning
US11575841B2 (en) Information processing apparatus, imaging apparatus, method, and storage medium
JP7419095B2 (en) Image sensors, computing devices, and image sensor systems
JP2023096127A (en) Image sensor, moving body detection method, program, and control system
JP6784254B2 (en) Retained object detection system, retained object detection method and program
JP6301202B2 (en) Shooting condition setting device and shooting condition setting method
JP2021136671A (en) Information processing device, imaging device, method, program and storage medium
KR102670083B1 (en) Vision inspection system for applying environmental weight to output of plurality of artificial intelligence models
JP6991922B2 (en) Image sensors, identification methods, control systems and programs
KR102670082B1 (en) Vision inspection system for detecting failure based on ensemble method beween artificial intelligence models
JP2020182190A (en) Sensor system, image sensor, and sensing method
JP2022020297A (en) Image sensor, sensing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18870958

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18870958

Country of ref document: EP

Kind code of ref document: A1