WO2006011593A1 - Individual detector and co-entry detection device - Google Patents
Individual detector and co-entry detection device
- Publication number
- WO2006011593A1 (PCT/JP2005/013928; JP 2005013928 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- distance
- distance image
- detection stage
- area
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V8/00—Prospecting or detecting by optical means
- G01V8/10—Detecting, e.g. by using light barriers
Definitions
- the present invention relates to an individual detector for individually detecting one or more physical objects in a detection area, and to a co-entry detection device equipped with the individual detector.
- the state-of-the-art entry/exit management system uses biometric information to enable accurate identification. However, there is a simple way to get through even such high-tech security: when an individual authorized by authentication (for example, an employee or a resident) enters through the unlocked door, an intruder is allowed in by so-called "co-entry" (tailgating) while the door is open.
- the prior art system described in Japanese Patent Application Laid-Open No. 2004-124497 detects co-entry by counting the number of three-dimensional silhouettes of persons.
- the silhouette is virtually realized on a computer by the visual volume intersection method, based on the principle that a physical object exists inside the visual hull formed from two or more viewpoints. That is, the method uses two or more cameras and virtually back-projects the two-dimensional silhouette obtained from the output of each camera into real space to compose a three-dimensional silhouette corresponding to the shape of the entire circumference of the physical object.
- the above system requires the use of two or more cameras for the view volume intersection method.
- the system requires, for the visual volume intersection method, that a person's face be captured by one of the two cameras, so the detection area (and the one or more physical objects in it) must be within the field of view of each camera.
- a 3D silhouette therefore cannot be constructed unless the face or the front of the person is within the field of view, which makes it difficult to track the movement trajectories of one or more physical objects in the detection area.
- this problem can be solved by adding cameras, but that increases the footprint and cost of the system.
- the number of cameras increases as the number of doors increases.
- the visual volume intersection method is not a technique for separating overlapping physical objects, and therefore has another problem when a single three-dimensional silhouette is formed from overlapping physical objects.
- the prior art system can detect, using a reference size corresponding to one physical object, a state in which two or more physical objects overlap, but it cannot distinguish a state in which a person and a bag overlap from a state in which two or more people overlap. The former does not require an alarm, while the latter does.
- the prior art system removes noise by taking the difference between a pre-recorded background image and the current image; even if this removes stationary noise sources such as walls and plants (hereinafter "static noise"), it cannot remove moving objects such as luggage and carts (hereinafter "dynamic noise").
- a first object of the present invention is to individually detect one or more physical objects in a detection area without increasing the number of components used to detect them.
- a second object of the present invention is to distinguish a state in which a person and dynamic noise overlap each other from a state in which two or more persons overlap.
- the individual detector of the present invention includes a distance image sensor and an object detection stage.
- the distance image sensor is arranged facing the detection area and generates a distance image.
- each image element of the distance image includes a distance value to the one or more physical objects when they are in the area.
- the object detection stage individually detects one or more physical objects in the area based on the distance image generated by the sensor.
- the distance image sensor is arranged facing downward with respect to the lower detection area.
- the object detection stage individually detects one or more physical objects to be detected in the area based on data, obtained from the distance image, that identifies each physical object to be detected or its parts at each altitude.
- the object detection stage generates a foreground distance image based on the difference between a background distance image, which is a distance image obtained in advance from the sensor, and the current distance image obtained from the sensor, and, based on the foreground distance image, individually detects one or more persons as the one or more physical objects to be detected in the area. According to the present invention, since the foreground distance image does not include static noise, static noise can be removed.
- the object detection stage generates a foreground distance image by extracting specific image elements from each image element of the current distance image.
- the particular image element is extracted when the distance difference obtained by subtracting the image element of the current distance image from the corresponding image element of the background distance image is greater than a predetermined distance threshold.
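- The extraction step above maps naturally onto array operations. The sketch below is illustrative only, assuming the distance images are NumPy arrays of distances in metres and that NaN marks elements with no foreground object; the function name and the 0.10 m threshold are assumptions, not values from the patent.

```python
import numpy as np

def foreground_distance_image(background: np.ndarray,
                              current: np.ndarray,
                              distance_threshold: float) -> np.ndarray:
    """Keep an element of the current distance image only when it is closer
    to the sensor than the background by more than the threshold."""
    difference = background - current   # positive where something moved in front
    return np.where(difference > distance_threshold, current, np.nan)

# Usage (hypothetical arrays d0, d1):
# foreground = foreground_distance_image(d0, d1, distance_threshold=0.10)
```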
- the distance image sensor has a camera structure including an optical system and a two-dimensional photosensitive array disposed facing the detection area via the optical system.
- the object detection stage converts the camera coordinate system of the foreground distance image, which depends on the camera structure, into an orthogonal coordinate system based on pre-recorded camera calibration data for the distance image sensor, and generates an orthogonal coordinate transformation image showing each position where a physical object is present/absent.
- the object detection stage converts the orthogonal coordinate system of the orthogonal coordinate transformation image into a world coordinate system virtually set in real space, and generates a world coordinate transformation image showing each position where a physical object is present/absent, in actual position and actual size.
- the orthogonal coordinate system of the orthogonal coordinate transformation image is converted into the world coordinate system by rotation, translation, and the like, based on data such as the sensor position and depression angle.
- the data of one or more physical objects in the world coordinate transformation image can thus be handled in actual position and actual size (distance, size), as in the sketch below.
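- As a rough illustration of the two coordinate conversions, the sketch below back-projects each foreground element with a simple pinhole model and then applies the sensor pose (rotation derived from the depression angle, translation from the sensor position). Lens distortion correction is omitted, and treating the distance value as depth along the optical axis is a simplification; all names are assumptions.

```python
import numpy as np

def to_world(foreground: np.ndarray, focal_px: float,
             rotation: np.ndarray, sensor_pos: np.ndarray) -> np.ndarray:
    """Camera coords -> orthogonal coords -> world coords (N x 3 points)."""
    h, w = foreground.shape
    v, u = np.indices((h, w))
    z = foreground                          # depth along the optical axis
    x = (u - w / 2) * z / focal_px          # pinhole back-projection
    y = (v - h / 2) * z / focal_px
    pts = np.stack([x, y, z], axis=-1)[~np.isnan(z)]
    return pts @ rotation.T + sensor_pos    # rotation + translation into world
```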
- the object detection stage generates a parallel projection image by projecting the world coordinate transformation image onto a predetermined plane by parallel projection; the image is composed of the image elements of the world coordinate transformation image viewed from that plane.
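- A parallel projection onto a horizontal plane can be realized as a height map that keeps, per grid cell, the highest altitude among the world coordinate points falling into that cell (the view from the ceiling side). Cell size and grid shape below are illustrative assumptions.

```python
import numpy as np

def parallel_projection(points_world: np.ndarray, cell: float,
                        shape: tuple) -> np.ndarray:
    """Orthographic projection: per-cell maximum altitude of the 3D points."""
    heights = np.full(shape, -np.inf)
    ix = (points_world[:, 0] / cell).astype(int)
    iy = (points_world[:, 1] / cell).astype(int)
    ok = (ix >= 0) & (ix < shape[0]) & (iy >= 0) & (iy < shape[1])
    np.maximum.at(heights, (ix[ok], iy[ok]), points_world[ok, 2])
    heights[np.isinf(heights)] = np.nan     # cells with no object
    return heights
```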
- the object detection stage extracts sampling data corresponding to one or more physical object portions from the world coordinate transformation image, identifies whether the data corresponds to reference data recorded in advance based on human parts, and determines whether the physical object corresponding to the sampling data is a person.
- the reference data serves, in effect, as human characteristic data in the world coordinate transformation image from which static noise and dynamic noise (for example, luggage and carts) have been removed, so one or more people in the detection area can be detected individually.
- the object detection stage extracts sampling data corresponding to one or more portions of the physical object from the parallel projection image, identifies whether the data corresponds to pre-recorded reference data based on human parts, and determines whether the physical object corresponding to the sampling data is a person.
- the reference data of the human part serves, in effect, as human characteristic data in the parallel projection image from which static noise and dynamic noise (for example, luggage and carts) have been removed, so one or more people in the detection area can be detected individually.
- the sampling data is the volume of one or more physical object portions virtually represented in the world coordinate transformation image, or the ratio of width to depth to height.
- the reference data is pre-recorded based on one or more parts of a person and is a value or range of values for the volume or the ratio of width, depth and height of that part. According to the present invention, the number of people in the detection area can be detected.
- the sampling data is the area of one or more physical object portions virtually represented in a parallel projection image, or the ratio of width to depth.
- the reference data is pre-recorded based on one or more parts of a person and is a value or a range of values for the area of the part or the ratio of width to depth. According to the present invention, the number of people in the detection area can be detected.
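- A minimal sketch of the comparison against such reference values; the numeric ranges are invented placeholders, not values from the patent.

```python
def is_person_part(area: float, width: float, depth: float) -> bool:
    """Compare blob measurements with pre-recorded reference ranges."""
    AREA_RANGE = (0.05, 0.30)    # m^2, plausible head/shoulder footprint (assumed)
    RATIO_RANGE = (0.5, 2.0)     # width : depth of the circumscribed rectangle
    ratio = width / depth
    return (AREA_RANGE[0] <= area <= AREA_RANGE[1]
            and RATIO_RANGE[0] <= ratio <= RATIO_RANGE[1])
```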
- the sampling data is a three-dimensional pattern of one or more physical object portions virtually represented in the world coordinate transformation image.
- the reference data is at least one three-dimensional pattern pre-recorded based on one or more human parts.
- by selecting, as reference data, a three-dimensional pattern from a person's shoulders to the head, the number of people in the detection area can be detected, and the influence of a person's moving hands can also be eliminated.
- by selecting the 3D pattern of a person's head as the reference data, one or more persons can be detected individually regardless of each person's physique.
- the sampling data is a two-dimensional pattern of one or more physical object portions virtually represented in a parallel projection image.
- the reference data is at least one two-dimensional pattern pre-recorded based on one or more human parts.
- the number of persons in the detection area can be detected by selecting, as reference data, at least one two-dimensional contour pattern from a person's shoulders to the head; the influence of a person's moving hands can also be eliminated. By selecting the 2D contour pattern of a person's head as the reference data, one or more persons can be detected individually regardless of each person's physique.
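- Pattern matching of a 2D contour template against the parallel projection image could look like the brute-force sketch below (a real system would use FFT-based correlation and non-maximum suppression so overlapping hits are not double-counted); names and the threshold are assumptions.

```python
import numpy as np

def count_pattern_hits(image: np.ndarray, template: np.ndarray,
                       threshold: float = 0.8) -> int:
    """Count offsets whose normalized correlation with the template exceeds
    the threshold; each hit stands for one matched head/shoulder pattern."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = 0
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            win = image[i:i + th, j:j + tw]
            w = (win - win.mean()) / (win.std() + 1e-9)
            if float((w * t).mean()) > threshold:
                hits += 1
    return hits
```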
- the distance image sensor further includes a light source that emits intensity-modulated light to the detection area, and generates an intensity image in addition to the distance image, based on the received light intensity for each image element.
- the object detection stage extracts sampling data corresponding to one or more physical object parts based on the orthogonal coordinate transformation image, and determines, based on the intensity image, whether the intensity of the physical object part corresponding to the sampling data is lower than a predetermined intensity. With this configuration, a portion of a physical object whose intensity is lower than the predetermined intensity can be detected.
- the distance image sensor further includes a light source that emits intensity-modulated infrared light to the detection area, and generates an intensity image of infrared light in addition to the distance image, based on the infrared light from the area.
- the object detection stage extracts sampling data corresponding to one or more physical object parts based on the world coordinate transformation image, and determines, based on the intensity image, whether the average intensity of the infrared light from the physical object part corresponding to the sampling data is lower than a predetermined intensity, and hence whether that part is a human head.
- since the reflectance of human hair to infrared light is usually lower than that of the person's shoulders, the person's head can be detected.
- the object detection stage estimates the position of each portion of the physical object determined to be a person in the parallel projection image, using the number of physical objects determined to be persons as the initial number of clusters, and verifies that number based on the divided areas obtained by K-means clustering of the constituent image elements. With this configuration, the number of physical objects identified as persons can be verified, and the position of each person can be estimated.
- the object detection stage generates a foreground distance image by extracting specific image elements from each image element of the distance image and, based on the foreground distance image, individually detects one or more persons as the one or more physical objects to be detected in the area.
- the specific image element is extracted when the distance value of the image element of the distance image is smaller than a predetermined distance threshold value.
- a physical object between the position of the distance image sensor and a position in front of the sensor, at a distance corresponding to a predetermined distance threshold, can be detected.
- when the distance threshold is set to an appropriate value, a state in which a person and dynamic noise (e.g., luggage or a cart) overlap can be distinguished from a state in which two or more people overlap.
- the object detection stage identifies whether the distance image in the vicinity of the image element having the minimum value of the distance value distribution corresponds to a specific shape and size recorded in advance based on a human part, and determines whether the physical object corresponding to that vicinity of the distance image is a person.
- the object detection stage generates a distribution image from each distance value of the distance image and, based on the distribution image, detects one or more physical objects in the detection area. Detect individually.
- a distribution image includes one or more distribution regions when one or more physical objects are in the detection area.
- the distribution area is formed from each image element having a distance value smaller than a predetermined distance threshold in the distance image.
- the predetermined distance threshold value is obtained by adding a predetermined distance value to the minimum value of each distance value in the distance image.
- a state in which a person and dynamic noise overlap can thus be distinguished from a state in which two or more people overlap.
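- A sketch of the distribution-image thresholding described above, assuming a NumPy distance image; the offset delta stands for the predetermined distance value added to the minimum.

```python
import numpy as np

def distribution_image(distance: np.ndarray, delta: float) -> np.ndarray:
    """Binary image: True where an element is within delta of the closest
    point to the sensor (the distribution regions)."""
    threshold = np.nanmin(distance) + delta
    return distance < threshold
```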
- the co-entry detection device of the present invention includes the individual detector and a co-entry detection stage.
- the distance image sensor continuously generates distance images.
- the co-entry detection stage individually tracks the movement trajectories of one or more people detected by the object detection stage and, when two or more people move to/from the detection area in a predetermined direction, detects the occurrence of co-entry and outputs an alarm signal.
- an alarm signal is output when two or more people move to/from the detection area in a predetermined direction, so co-entry can be suppressed. Even if multiple people are detected, no alarm signal is output when two or more people do not move to/from the detection area in the predetermined direction, so false alarms can be prevented.
- another co-entry detection device of the present invention includes the individual detector and the co-entry detection stage.
- the distance image sensor continuously generates distance images.
- the co-entry detection stage monitors the entry and exit of one or more persons detected by the object detection stage and the directions of entry and exit; when two or more people move to/from the detection area in the predetermined direction within a specified time set for the co-entry warning, the occurrence of co-entry is detected and an alarm signal is output.
- an alarm signal is output when two or more people move to/from the detection area in the predetermined direction within the specified time, so co-entry can be suppressed. Even if multiple people are detected, no alarm signal is output when two or more people do not move to/from the detection area in the specified direction, so false alarms can be prevented.
- FIG. 1 shows a management system in which the co-entry detection device of the first embodiment according to the present invention is incorporated.
- FIG. 2 shows the vicinity of the door of the room to be managed by the management system of FIG. 1.
- FIG. 3 is a three-dimensional development view of each image element of a distance image or foreground distance image obtained from the distance image sensor of the co-entry detection device.
- FIG. 4A shows an example of the state of the detection area.
- FIG. 4B shows the distance image of FIG. 4A.
- FIG. 4C shows a foreground distance image generated from the distance image of FIG. 4B.
- FIG. 5 shows the orthogonal coordinate transformation image and parallel projection image generated from the foreground distance image.
- FIG. 6 shows each part extracted from the parallel projection image.
- FIG. 7A shows an example of the extracted part in FIG. 6.
- FIG. 7B shows an example of the extracted part in FIG. 6.
- FIG. 8A shows an example of the extracted part in FIG. 6.
- FIG. 8B shows an example of a pre-recorded pattern.
- FIG. 9 shows each horizontal cross-sectional image obtained from a 3D orthogonal coordinate transformation image or a 3D world coordinate transformation image.
- FIG. 10A shows the position of the head detected based on the cross-sectional area of the head and the hair.
- FIG. 10B shows the position of the head detected based on the cross-sectional area of the head and the hair.
- FIG. 12 is a flowchart executed by the CPU.
- FIG. 14 is an explanatory diagram of the operation of the target detection stage in the co-entry detection device of the third embodiment according to the present invention.
- FIG. 15 is an operation explanatory diagram of the target detection stage in the co-entrance detection device of the fourth embodiment according to the present invention.
- FIG. 16 is an operation explanatory diagram of the co-entry detection stage in the co-entry detection device of the fifth embodiment according to the present invention.
- FIG. 17 is a configuration diagram of a distance image sensor in the co-entry detection device according to the sixth embodiment of the present invention.
- FIG. 19A shows an area corresponding to one photosensitive portion in the distance image sensor of FIG.
- FIG. 19B shows a region corresponding to one photosensitive portion in the distance image sensor of FIG.
- FIG. 20 is an explanatory diagram of a charge extraction unit in the distance image sensor of FIG.
- FIG. 21 is an explanatory diagram of the operation of the distance image sensor in the co-entry detection device according to the seventh embodiment of the present invention.
- FIG. 22A is an operation explanatory diagram of the distance image sensor in FIG. 21.
- FIG. 22B is an operation explanatory diagram of the distance image sensor of FIG.
- FIG. 23A shows an alternative embodiment of the range image sensor of FIG.
- FIG. 23B shows an alternative embodiment of the range image sensor of FIG.
- FIG. 1 shows a management system equipped with the co-entry detection device of the first embodiment according to the present invention.
- the management system includes at least one co-entry detection device 1, a security device 2, and at least one input device 3 for each door 20 of the room to be managed, and a control device 4 that communicates with each co-entry detection device 1, each security device 2, and each input device 3.
- the management system of the present invention is not limited to the entry management system, and may be an entry / exit management system.
- the security device 2 is an electronic lock that has an automatic lock function and unlocks the door 20 in accordance with an unlock control signal from the control device 4. After locking the door 20, the electronic lock transmits a closing notification signal to the control device 4.
- the security device 2 is an open / close control device in an automatic door system.
- the opening / closing control device opens or closes the doors 20 according to the opening or closing control signal from the control device 4, and transmits the closing notification signal to the control device 4 after closing the door 20.
- the input device 3 is a card reader that is installed on the adjacent wall outside the door 20 and reads information from the ID card and transmits it to the control device 4. If the management system is an entry / exit management system, another input device 3, such as a card reader, is also installed on the wall of the room to be managed inside the door 20.
- the control device 4 includes a CPU and a storage device that stores pre-registered ID information, a program, and the like, and executes overall system control.
- when the ID information from the input device 3 matches the ID information recorded in the storage device, the device 4 transmits an unlock control signal to the corresponding security device 2 and an entry permission signal to the corresponding co-entry detection device 1. Further, when the device 4 receives the closing notification signal from the security device 2, it transmits an entry prohibition signal to the corresponding co-entry detection device 1.
- when the ID information from the input device 3 matches the ID information recorded in the storage device, the device 4 transmits an open control signal to the corresponding open/close control device and, after a predetermined time, transmits a close control signal to it. Further, when the device 4 receives the closing notification signal from the open/close control device, it transmits an entry prohibition signal to the corresponding co-entry detection device 1.
- when the device 4 receives an alarm signal from the co-entry detection device 1, it performs predetermined processing such as notifying the administrator and extending the operating time of a camera (not shown); after the alarm signal is received, if a predetermined release operation is performed or a predetermined time elapses, a release signal is transmitted to the corresponding co-entry detection device 1.
- the co-entry detection device 1 includes an individual detector composed of a distance image sensor 10 and an object detection stage 16, a co-entry detection stage 17, and an alarm stage 18.
- the object detection stage 16 and the co-entry detection stage 17 are configured by a CPU and a storage device that stores a program and the like.
- the distance image sensor 10 is disposed facing downward with respect to the lower detection area A1, and continuously generates distance images.
- each image element of the distance image contains a distance value to the one or more physical objects, as shown in FIG. 3.
- for example, when the detection area is in the state shown in FIG. 4A, a distance image D1 as shown in FIG. 4B is obtained.
- the sensor 10 includes a light source (not shown) that emits intensity-modulated infrared light to the area A1, and has a camera structure (not shown) composed of an optical system, such as a lens and an infrared transmission filter, and a two-dimensional photosensitive array disposed facing the area A1 via the optical system. The sensor 10 generates an intensity image of infrared light in addition to the distance image, based on the infrared light from the area A1.
- the object detection stage 16 individually detects one or more persons as the one or more physical objects to be detected in the area A1, based on data, obtained from the distance image generated by the sensor 10, that identifies each person to be detected or his/her parts at each altitude. For this purpose, the object detection stage 16 executes the following processes.
- in the first process, the object detection stage 16 generates the foreground distance image D2 based on the difference between a background distance image D0, which is a distance image obtained in advance from the sensor 10, and the current distance image D1 obtained from the sensor 10.
- the background distance image D0 is taken with the door 20 closed. Further, the background distance image may include distance values averaged in the time and space directions in order to suppress variation in the distance values.
- the foreground distance image is generated by extracting specific image elements from each image element of the current distance image.
- the specific image element is extracted when the distance difference obtained by subtracting the image element of the current distance image from the corresponding image element of the background distance image is larger than a predetermined distance threshold.
- static noise is removed.
- one or more physical objects behind a position in front of the background position by a distance corresponding to the predetermined distance threshold can be removed, so when the predetermined distance threshold is set to an appropriate value, the carriage C1 is removed as dynamic noise, as shown in FIG. 4C.
- the physical object behind the door 20 is also removed. Therefore, a state in which people and dynamic noise (such as the carriage C1 and the physical object behind the door 20) overlap can be distinguished from a state in which two or more people overlap.
- in the second process, the object detection stage 16 converts the camera coordinate system of the foreground distance image D2, which depends on the camera structure, into a three-dimensional orthogonal coordinate system (x, y, z) based on pre-recorded camera calibration data for the sensor 10 (for example, pixel pitch and lens distortion), and generates an orthogonal coordinate transformation image E1, as shown in FIG. 5. That is, each image element (xi, xj, xk) of the orthogonal coordinate transformation image E1 is represented by "TRUE" or "FALSE": "TRUE" indicates the presence of a physical object, and "FALSE" its absence.
- in the third process, the object detection stage 16 converts the orthogonal coordinate transformation image into a world coordinate transformation image by rotation and translation based on pre-recorded camera calibration data (for example, the position of the sensor 10, the depression angle, and the actual distance of the pixel pitch).
- in the fourth process, the object detection stage 16 projects the world coordinate transformation image onto a predetermined plane, such as a horizontal or vertical plane, by parallel projection, and generates a parallel projection image composed of the image elements of the world coordinate transformation image viewed from that plane.
- the parallel projection image F1 is composed of image elements viewed from a horizontal plane on the ceiling side, and each image element indicating a physical object to be detected is at the highest altitude position.
- in the fifth process, the object detection stage 16 extracts sampling data corresponding to one or more physical object portions (blobs) in the target extraction area A2 from the parallel projection image F1, as shown in FIG. 6, labels the data, and specifies the position of each sampling data item (for example, the position of its center of gravity).
- when sampling data overlaps the boundary of area A2, the data may be processed so that it belongs to the region, inside or outside area A2, in which the larger part of its area lies.
- sampling data corresponding to person B2 outside area A2 is excluded. In this case, only the portions of physical objects inside the target extraction area A2 are extracted, so dynamic noise caused, for example, by reflection on a glass door can be removed, and individual detection matched to the room to be managed is possible.
- the sixth process and the seventh process are executed in parallel.
- in the sixth process, the object detection stage 16 identifies whether the sampling data extracted in the fifth process corresponds to reference data recorded in advance based on one or more human parts, and determines whether the physical object corresponding to the sampling data is a person.
- the sampling data is the area S of one or more physical object portions virtually represented in the parallel projection image, and the ratio of width to depth. The ratio (W:D) is that of the width W and depth D of the circumscribed rectangle that contains the portion of the physical object.
- the reference data is pre-recorded based on one or more parts of a person and is a value or range of values for the area of that part or the ratio of width and depth. As a result, the number of persons in the target extraction area A2 in the detection area A1 can be detected.
- the sampling 'data is a two-dimensional pattern of one or more physical object portions that are virtually represented in a parallel projection image.
- the reference data is at least one two-dimensional pattern prerecorded based on one or more human sites, as shown in FIGS. 8B and 8C.
- for example, a pattern as shown in FIGS. 8B and 8C is used, and if the correlation value obtained by pattern matching is larger than a predetermined value, the number of people corresponding to the pattern is added.
- the number of persons in the detection area can thus be detected, and the influence of a person's moving hands can also be eliminated. By selecting the 2D contour pattern of a person's head as reference data, one or more persons can be detected individually regardless of each person's physique.
- in the seventh process, the object detection stage 16 generates a cross-sectional image by extracting each image element on a predetermined plane from the three-dimensional orthogonal coordinate transformation image or the three-dimensional world coordinate transformation image.
- for example, horizontal cross-sectional images G1-G5 are generated by extracting the image elements on a horizontal plane at each altitude step (for example, every 10 cm) upward from the altitude of the distance threshold used in the first process, as in the sketch below.
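- Generating the horizontal cross-sectional images amounts to slicing the world coordinate points into altitude slabs; a hedged sketch, with the 10 cm step from the example above as the default:

```python
import numpy as np

def horizontal_sections(points_world: np.ndarray, z_start: float,
                        n_sections: int, step: float = 0.10):
    """Yield the subset of points in each horizontal slab, moving upward
    from z_start; each subset corresponds to one image G1..Gn."""
    for k in range(n_sections):
        lo = z_start + k * step
        mask = (points_world[:, 2] >= lo) & (points_world[:, 2] < lo + step)
        yield points_world[mask]
```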
- in the eighth process, the object detection stage 16 extracts and records sampling data corresponding to one or more physical object portions from each horizontal cross-sectional image.
- in the ninth process, the object detection stage 16 identifies whether the sampling data extracted in the eighth process corresponds to reference data recorded in advance based on one or more human parts, and determines whether the physical object corresponding to the sampling data is a person.
- the sampling data is the cross-sectional area of one or more physical object portions virtually represented in a horizontal cross-sectional image.
- the reference data is a value or range of values for the cross-sectional area of one or more human heads.
- the object detection stage 16 identifies whether the sampling data is smaller than the reference data every time a horizontal cross-sectional image is generated. When the sampling data becomes smaller than the reference data (G4, G5), the sampling data at the maximum altitude is counted as data corresponding to a human head.
- in the tenth process, every time a horizontal cross-sectional image is generated after the altitude of the horizontal cross-sectional image reaches a predetermined altitude, the object detection stage 16 identifies, based on the intensity image generated by the sensor 10, whether the average intensity of infrared light from the physical object portion corresponding to the sampling data is lower than a predetermined intensity, and determines whether that portion is a human head. When it is, the sampling data is counted as data corresponding to a human head. Since the reflectance of human hair to infrared light is usually lower than that of the person's shoulders, the person's head can be detected when the predetermined intensity is set to an appropriate value.
- if the position B31 of the head of person B3 at the maximum altitude determined in the ninth process matches the position B32 determined in the tenth process, the object detection stage 16 determines that person B3 is upright and has hair. Otherwise, as shown in FIGS. 10A and 10B, if only the ninth process determines the head position B41 of person B4 at the maximum altitude, the stage determines that person B4 is upright and has no hair or is wearing a cap; if only the tenth process determines the head position B52 of person B5, the stage determines that person B5 has a tilted head and has hair. The object detection stage 16 then counts the number of people.
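- The decision logic combining the two head detectors can be summarized as below; positions are (x, y) tuples or None, the tolerance is an assumption, and the case where both detectors fire at clearly different positions is left undetected here, which the patent text does not spell out.

```python
def classify_person(m1, m2, tol: float = 0.05):
    """M1: head position from cross-sectional area (ninth process).
    M2: head position from low infrared reflectance (tenth process)."""
    if m1 and m2:
        if (m1[0] - m2[0]) ** 2 + (m1[1] - m2[1]) ** 2 <= tol ** 2:
            return "upright, with hair"
        return None                       # ambiguous: not handled in this sketch
    if m1:
        return "upright, no hair or wearing a cap"
    if m2:
        return "head tilted, with hair"
    return None                           # no person detected
```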
- the co-entry detection stage 17 in FIG. 1 detects whether co-entry has occurred based on the number of persons detected by the object detection stage 16 after receiving the entry permission signal from the control device 4.
- when the co-entry detection stage 17 detects that co-entry has occurred, it sends an alarm signal to the device 4 and the alarm stage 18 until a release signal is received from the device 4. If no alarm signal is transmitted to the device 4 and the alarm stage 18, the co-entry detection stage 17 shifts to the standby mode after receiving the entry prohibition signal from the control device 4.
- the alarm stage 18 issues a warning while receiving an alarm signal from the co-entry detection stage 17.
- the operation of the first embodiment will be described.
- the input device 3 reads the ID information from the ID card and transmits it to the control device 4.
- the device 4 authenticates whether the information matches the pre-recorded ID information and, when they match, sends an entry permission signal to the corresponding co-entry detection device 1 and an unlock control signal to the corresponding security device 2. This allows the ID card holder to open the door 20 and enter the room to be managed.
- the operation after the co-entry detection device 1 receives an entry permission signal from the control device 4 will be described.
- in the co-entry detection device 1, a distance image and an intensity image of infrared light are generated by the distance image sensor 10 (see S10 in FIG. 11).
- the object detection stage 16 generates a foreground distance image based on the distance image, the background distance image, and the distance threshold (S11), generates an orthogonal coordinate transformation image from the foreground distance image (S12), generates a world coordinate transformation image from the orthogonal coordinate transformation image (S13), and generates a parallel projection image from the world coordinate transformation image (S14).
- the stage 16 then extracts the data (sampling data) of the portion (contour) of the physical object from the parallel projection image (S15).
- in step S16, the object detection stage 16 determines, based on the reference data (a value or range of values for the area and ratio of the human reference part), whether the physical object corresponding to the sampling data (contour area and ratio) is a person. If any physical object is identified as a person ("YES" in S16), the number of persons (N1) in the target extraction area A2 is counted in step S17; if no physical object is identified as a person ("NO" in S16), zero is counted as N1 in step S18.
- in step S19, the object detection stage 16 determines, based on the reference data (the pattern of the human reference part), whether the physical object corresponding to the sampling data (contour pattern) is a person. If any physical object is identified as a person ("YES" in S19), the number of persons (N2) in the target extraction area A2 is counted in step S20; if no physical object is identified as a person ("NO" in S19), zero is counted as N2 in step S21.
- the co-entry detection stage 17 determines whether N1 and N2 match each other (S22). If they match ("YES" in S22), it detects in step S23 whether co-entry has occurred; otherwise ("NO" in S22), the process proceeds to step S30 in FIG. 12.
- when co-entry has occurred, the co-entry detection stage 17 sends an alarm signal to the control device 4 and the alarm stage 18 until a release signal is received from the device 4 (S24, S25). This causes the alarm stage 18 to issue an alarm.
- when the co-entry detection stage 17 receives the release signal from the device 4, the co-entry detection device 1 returns to the standby mode.
- in the co-entry detection device 1, if the co-entry detection stage 17 receives an entry prohibition signal from the control device 4 ("YES" in S26), the process returns to the standby mode; if not ("NO" in S26), the process returns to step S10.
- in step S30 of FIG. 12, the object detection stage 16 generates a horizontal cross-sectional image from the altitude of the distance threshold used in the first process.
- in step S31, the data (sampling data) of the physical object portion (cross-sectional contour) is extracted from the horizontal cross-sectional image.
- in step S32, the position (M1) of a person's head is detected by determining, based on the reference data (concerning the cross-sectional area of a human head), whether the portion of the physical object corresponding to the sampling data (the area of the contour) is a person's head.
- the stage 16 then proceeds to step S35 if all horizontal cross-sectional images have been generated ("YES" in S33), and returns to step S30 otherwise ("NO" in S33).
- in step S34, the object detection stage 16 detects the position (M2) of the human head based on the intensity image and the predetermined intensity, and then proceeds to step S35.
- in step S35, the object detection stage 16 compares M1 with M2. If they match ("YES" in S36), the stage detects, in step S37, a person who is upright and has hair. Otherwise ("NO" in S36), if only M1 is detected ("YES" in S38), the stage detects, in step S39, a person who is upright and has no hair. Otherwise ("NO" in S38), if only M2 is detected ("YES" in S40), the stage detects, in step S41, a person whose head is tilted and who has hair. Otherwise ("NO" in S40), the stage does not detect a person in step S42.
- the object detection stage 16 counts the number of people in step S43, and returns to step S23 in FIG. 11.
- the co-entry detection device 1 is provided outside the door 20.
- the control device 4 activates the co-entry detection device 1. If a co-entry state occurs outside the door 20, the co-entry detection device 1 transmits an alarm signal to the control device 4 and the alarm stage 18, and the control device 4, based on the alarm signal from the co-entry detection device 1, keeps the door 20 locked regardless of the ID information of the ID card. This can prevent co-entry. If the co-entry state has not occurred outside the door 20, the control device 4 transmits an unlock control signal to the security device 2. This allows the ID card holder to open the door 20 and enter the room to be managed.
- FIG. 13 is an explanatory diagram of the operation of the object detection stage in the co-entry detection device according to the second embodiment of the present invention.
- the object detection stage of the second embodiment executes the first to seventh processes in the same manner as those of the first embodiment.
- in addition, a K-means clustering process is executed.
- the object detection stage of the second embodiment uses the number of physical objects determined to be persons when determining the position of each portion of the physical object determined to be a person in the parallel projection image.
- the number of physical objects identified as persons is verified by the K-means clustering algorithm.
- the object detection stage obtains each divided area by the K-means algorithm and calculates its area. When the difference between the area of the divided area and the pre-recorded area of a person is equal to or smaller than a predetermined value, the divided area is counted as a human part; when the difference is larger, the stage increases or decreases the initial number of divisions and executes the K-means algorithm again. With this K-means procedure, each person's position can be estimated.
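- A hedged sketch of this verification loop, using scikit-learn's KMeans as one possible K-means implementation; the person-footprint area, tolerance, and adjustment rule are illustrative assumptions, not values from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def verify_count(points_xy: np.ndarray, n_init_people: int,
                 person_area: float, cell_area: float,
                 tol: float, max_tries: int = 5) -> int:
    """Cluster the projected person elements and adjust the cluster count
    until each divided area is close to the pre-recorded person area."""
    k = max(n_init_people, 1)
    for _ in range(max_tries):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(points_xy)
        areas = np.bincount(labels, minlength=k) * cell_area
        if np.all(np.abs(areas - person_area) <= tol):
            return k                      # every divided area matches one person
        k = max(k + (1 if areas.mean() > person_area else -1), 1)
    return k
```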
- FIG. 14 is an explanatory diagram of the operation of the object detection stage in the co-entry detection device according to the third embodiment of the present invention.
- the object detection stage of the third embodiment, in place of each process of the first embodiment, generates a foreground distance image D20 by extracting specific image elements from each image element of the distance image from the distance image sensor 10.
- the specific image element is extracted when the distance value of the image element of the distance image is smaller than a predetermined distance threshold value.
- the object detection stage then individually detects one or more persons as one or more physical objects to be detected in the detection area based on the foreground distance image D20.
- the black part is formed from image elements having distance values smaller than the predetermined distance threshold, and the white part from image elements having distance values larger than it.
- in the third embodiment, it is possible to detect a physical object between the position of the distance image sensor and a position in front of the sensor at a distance corresponding to the predetermined distance threshold.
- when the predetermined distance threshold is set to an appropriate value, a state in which a person and dynamic noise (e.g., luggage or a cart) overlap can be distinguished from a state in which two or more people overlap.
- the parts from the shoulders to the head of person B6 and the head of person B7 in the detection area can be detected individually.
- FIG. 15 is an explanatory diagram of the operation of the object detection stage in the co-entry detection device according to the fourth embodiment of the present invention.
- the object detection stage of the fourth embodiment, in place of each process of the first embodiment, generates a distribution image from each distance value of the distance image generated by the distance image sensor 10, identifies whether each of the one or more distribution regions in the distribution image corresponds to data recorded in advance based on a human part, and determines whether the physical object corresponding to each distribution region is a person.
- a distribution image includes one or more distribution regions when one or more physical objects are in the detection area.
- the distribution area is formed from each image element having a distance value smaller than a predetermined distance threshold in the distance image.
- the predetermined distance threshold is obtained by adding a predetermined distance value (for example, about half the average length of a face) to the minimum of the distance values of the distance image.
- the distribution image is a binary image: the black portion is a distribution region, and the white portion is formed from each image element having a distance value larger than the predetermined distance threshold.
- the pre-recorded data is the area or diameter of the contour of a human part; for example, the shape obtained from the contour of a human head is approximated by a circle.
- FIG. 16 is an operation explanatory diagram of the co-entry detection stage in the co-entry detection device according to the fifth embodiment of the present invention.
- the co-entry detection stage of the fifth embodiment individually tracks the movement trajectories of one or more people detected by the object detection stage and, when two or more people move to the detection area in the predetermined direction within the time set for the co-entry warning, detects the occurrence of co-entry and sends an alarm signal to the control device 4 and the alarm stage 18.
- the door 20 is an automatic door.
- the predetermined direction is set to a direction that moves to the detection area A1 across the boundary line of the detection area A1 on the door 20 side.
- for example, when the movement trajectory B1, B1, B1 of one person and the movement trajectory B2, B2 of another person both move to the detection area in the predetermined direction within the co-entry warning time (for example, 2 seconds), the occurrence of co-entry is detected.
- the specified time can be set to a time from when the automatic door 20 is opened until it is closed.
- an alarm signal is output, so co-entry can be detected immediately. Even if a plurality of people are detected, an alarm signal is not output unless two or more people move to the detection area in the predetermined direction, so false alarms can be prevented.
- the co-entry detection device 1 is provided outside the door 20.
- the predetermined direction is set to a direction moving from the detection area to the boundary line of the detection area on the door 20 side.
- FIG. 17 shows the distance image sensor 10 in the co-entry detection device of the sixth embodiment according to the present invention.
- the distance image sensor 10 of the sixth embodiment includes a light source 11, an optical system 12, a light detection element 13, a sensor control stage 14, and an image generation stage 15, and can be used in each of the above embodiments.
- the light source 11 is composed of, for example, an infrared LED array, or an infrared semiconductor laser and a diverging lens, arranged in one plane in order to ensure light intensity. As shown in FIG. 18, the intensity K1 of the infrared light is modulated so as to change periodically at a constant period, and the intensity-modulated infrared light is irradiated onto the detection area.
- the intensity waveform of the intensity-modulated infrared light is not limited to a sine wave, but may be a triangular wave or a sawtooth wave.
- the optical system 12 is a light receiving optical system composed of, for example, a lens and an infrared transmission filter, and collects infrared light from the detection area onto the light receiving surface (each photosensitive unit 131) of the light detection element 13.
- the optical system 12 is disposed, for example, so that its optical axis is orthogonal to the light receiving surface of the light detection element 13.
- the light detection element 13 is formed in a semiconductor device, and includes a plurality of photosensitive units 131, a plurality of sensitivity control units 132, a plurality of charge integration units 133, and a charge extraction unit 134.
- Each photosensitive unit 131, each sensitivity control unit 132, and each charge integration unit 133 constitute a two-dimensional photosensitive array as a light receiving surface disposed facing the detection area via the optical system 12.
- each photosensitive unit 131 is formed as one photosensitive element of, for example, a 100 × 100 two-dimensional photosensitive array, by a semiconductor layer 13a formed by adding impurities to a semiconductor substrate.
- with its photosensitivity controlled by the corresponding sensitivity control unit 132, each photosensitive unit 131 generates an amount of charge corresponding to the amount of infrared light from the detection area.
- the semiconductor layer 13a is n-type, and the generated charge is an electron.
- if the origin is set at the center of the optical system 12, each photosensitive unit 131 generates an amount of charge corresponding to the amount of light from the direction represented by an azimuth angle and an elevation angle.
- the infrared light emitted from the light source 11 is reflected by a physical object and received by the photosensitive unit 131; as shown in FIG. 18, the photosensitive unit 131 receives intensity-modulated infrared light delayed by a phase ψ corresponding to the round-trip distance to the physical object, and generates an amount of charge corresponding to its intensity K2.
- the received intensity-modulated infrared light is expressed as K2 = A sin(ωt − ψ) + B ... (Equation 1)
- where ω is the angular frequency, A is the amplitude, and B is the external light component.
- the sensitivity control unit 132 is formed by a plurality of control electrodes 13b stacked on the surface of the semiconductor layer 13a via an insulating film (oxide film) 13e, and follows the sensitivity control signal of the sensor control stage 14. Therefore, the sensitivity of the corresponding photosensitive unit 131 is controlled.
- the width of the control electrode 13b in the left-right direction is set to about 1 ⁇ m.
- the control electrode 13b and the insulating film 13e are formed of a material that is transparent to the infrared light of the light source 11.
- the sensitivity control unit 132 includes a plurality of (for example, five) control electrodes for the corresponding photosensitive unit 131. For example, when the generated charge is an electron, a voltage (+V, 0V) is applied to each control electrode 13b as a sensitivity control signal.
- the charge integration unit 133 is composed of a potential well (depletion layer) 13c that changes in response to a sensitivity control signal applied to each corresponding control electrode 13b.
- the charge integration unit 133 captures and accumulates electrons (e) generated in the vicinity of the potential well 13c. Electrons that are not accumulated in the charge integration unit 133 disappear by recombination with holes. Therefore, the photosensitivity of the light detection element 13 can be controlled by changing the size of the potential well 13c with the sensitivity control signal. For example, the sensitivity in the state of FIG. 19A is higher than in the state of FIG. 19B.
- the charge extraction unit 134 has a structure similar to that of a frame-transfer (FT) CCD image sensor.
- an imaging region L1, composed of the plurality of photosensitive units 131, and a light-shielded accumulation region L2 adjacent to L1 are formed in the semiconductor layer 13a, which is integrally continuous in the vertical direction and transfers charges vertically. The vertical direction corresponds to the left-right direction in FIGS. 19A and 19B.
- the charge extraction unit 134 includes an accumulation region L2, each transfer path, and a CCD horizontal transfer unit 13d that receives charges from one end of each transfer path and transfers the charges in the horizontal direction.
- the charge transfer from the imaging region L1 to the storage region L2 is executed once in the vertical blanking period. That is, after charge is accumulated in the potential well 13c, a voltage pattern different from the voltage pattern of the sensitivity control signal is applied to each control electrode 13b as a vertical transfer signal, and the charge accumulated in the potential well 13c is transferred in the vertical direction. Is done.
- a horizontal transfer signal is supplied to the horizontal transfer unit 13d, and charges for one horizontal line are transferred in one horizontal period.
- the horizontal transfer unit transfers charge in a direction normal to the plane of FIGS. 19A and 19B.
- the sensor control stage 14 is an operation timing control circuit that controls the operation timing of the light source 11, each sensitivity control unit 132, and the charge extraction unit 134. That is, since the propagation time of light over the above round-trip distance is very short, at the nanosecond level, the sensor control stage 14 supplies a modulation signal of a predetermined modulation frequency (for example, 20 MHz) to the light source 11 to control the intensity change timing of the intensity-modulated infrared light.
- the sensor control stage 14 applies a voltage (+V, 0V) as a sensitivity control signal to each control electrode 13b, and switches the sensitivity of the light detection element 13 between high and low.
- the sensor control stage 14 supplies a vertical transfer signal to each control electrode 13b in the vertical blanking period, and a horizontal transfer signal to the horizontal transfer unit 13d in one horizontal period.
- the image generation stage 15 is configured by, for example, a CPU, a storage device that stores a program, and the like, and generates a distance image and an intensity image based on a signal from the light detection element 13.
- the phase (phase difference) ψ in FIG. 18 corresponds to the round-trip distance between the light receiving surface of the light detection element 13 and the physical object in the detection area, so the distance to the physical object can be calculated by calculating the phase ψ.
- the phase ψ can be calculated from the time integral values (for example, the integral values Q0, Q1, Q2, and Q3 over the period Tw) of the curve expressed by (Equation 1) above.
- the time integration values (received light amounts) Q0, Q1, Q2, and Q3 start at phases of 0, 90, 180, and 270 degrees, respectively.
- when the instantaneous values of Q0, Q1, Q2, and Q3 are q0, q1, q2, and q3 respectively, the phase ψ is given by the following (Equation 2); the phase ψ can be obtained by (Equation 2) even in the case of the time integral values.
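- The body of (Equation 2) is not reproduced in this text; a standard reconstruction, consistent with (Equation 1) and with sampling at 0, 90, 180, and 270 degrees, is:

```latex
% With K2 = A sin(wt - psi) + B sampled at wt = 0, 90, 180, 270 degrees:
%   q0 = B - A sin(psi),  q1 = B + A cos(psi),
%   q2 = B + A sin(psi),  q3 = B - A cos(psi)
\psi = \tan^{-1}\!\left(\frac{q_2 - q_0}{q_1 - q_3}\right)
\qquad \text{(Equation 2, reconstructed)}
```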
- the sensor control stage 14 controls the sensitivity of the photodetecting element 13 so that a plurality of periods of the intensity-modulated infrared light can be obtained.
- while the charges generated by the photosensitive unit 131 are accumulated in the charge integration unit 133 over a plurality of periods of the intensity-modulated infrared light, the phase ψ and the reflectance of the physical object hardly change. Therefore, for example, when accumulating the charge corresponding to the time integration value Q0 in the charge integration unit 133, the sensitivity of the light detection element 13 is increased during the period corresponding to Q0 and decreased during the other periods.
- the photosensitive unit 131 generates a charge proportional to the amount of light received; when the charge integration unit 133 accumulates the charge of Q0, a charge proportional to αQ0 + β(Q1 + Q2 + Q3) + βQx is accumulated, where α is the (high) sensitivity during the period corresponding to Q0, β is the (low) sensitivity during the other periods, and Qx is the amount of light received outside the periods in which Q0, Q1, Q2, and Q3 are obtained.
- similarly, when the charge integration unit 133 accumulates the charge of Q2, a charge proportional to αQ2 + β(Q0 + Q1 + Q3) + βQx is accumulated.
- after a plurality of periods of the intensity-modulated infrared light, the sensor control stage 14, in order to take out the charges accumulated in the charge integration unit 133, supplies a vertical transfer signal to each control electrode 13b in the vertical blanking period and a horizontal transfer signal to the horizontal transfer unit 13d in one horizontal period.
- the image generation stage 15 can generate a distance image and an intensity image from Q0 to Q3.
- since both images are generated from the same Q0 to Q3, the distance value and the intensity value at the same position can be obtained.
- the image generation stage 15 calculates a distance value from Q0 to Q3 by (Equation 2), and generates a distance image from the distance values.
- alternatively, three-dimensional information of the detection area may be calculated from the distance values, and the distance image may be generated from that three-dimensional information. Since the intensity image uses the average value of Q0 to Q3 as the intensity value, the influence of the light from the light source 11 can be removed.
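A minimal per-pixel sketch of this image generation step (an assumed implementation, not the patented circuit), reusing the phase_and_distance helper sketched above:

```python
def generate_images(Q0, Q1, Q2, Q3):
    """Q0..Q3 are 2-D numpy arrays of per-pixel integral values.
    Returns a distance image and an intensity image of the same shape,
    so a distance value and an intensity value share each pixel position."""
    _, distance_image = phase_and_distance(Q0, Q1, Q2, Q3)  # element-wise stand-in for (Equation 2)
    intensity_image = (Q0 + Q1 + Q2 + Q3) / 4.0             # average of Q0 to Q3 as the intensity value
    return distance_image, intensity_image
```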
- FIG. 21 is an explanatory diagram of the operation of the distance image sensor in the co-incidence detection device according to the seventh embodiment of the present invention.
- the distance image sensor of the seventh embodiment is different from the distance image sensor of the sixth embodiment in that two photosensitive units are used as one pixel, and two kinds of charges corresponding to two of Q0 to Q3 are generated within one period of the modulation signal.
- in the seventh embodiment, in order to solve the problem described above, two photosensitive units are used as one pixel, as shown in FIGS. 22A and 22B.
- in the sixth embodiment, the two control electrodes on both sides in FIGS. 19A and 19B play the role of forming potential barriers that prevent the charge from flowing out to the adjacent photosensitive unit 131 while charge is being generated by the photosensitive unit 131.
- in the seventh embodiment, such a barrier is formed by the electrodes of one of the two photosensitive units 131; therefore, three control electrodes are provided for each photosensitive unit, and six control electrodes are provided for one pixel.
- in FIG. 22A, a voltage of +V (a predetermined positive voltage) is applied to each of the control electrodes 13b-1, 13b-2, 13b-3, and 13b-5, and a voltage of 0V is applied to each of the control electrodes 13b-4 and 13b-6.
- in FIG. 22B, a voltage of +V is applied to each of the control electrodes 13b-2, 13b-4, 13b-5, and 13b-6, and a voltage of 0V is applied to each of the control electrodes 13b-1 and 13b-3.
- these voltage patterns are switched alternately every time the phase of the modulation signal changes to the opposite phase (every 180 degrees).
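For reference, the two patterns can be encoded as follows (an illustrative sketch, not from the original; which half-period uses which pattern is an assumption):

```python
# Electrode voltages read off FIGS. 22A/22B ("+V" = predetermined positive voltage).
PATTERN_22A = {"13b-1": "+V", "13b-2": "+V", "13b-3": "+V",
               "13b-4": "0V", "13b-5": "+V", "13b-6": "0V"}
PATTERN_22B = {"13b-1": "0V", "13b-2": "+V", "13b-3": "0V",
               "13b-4": "+V", "13b-5": "+V", "13b-6": "+V"}

def pattern_at(modulation_phase_deg):
    """Alternate between the two patterns every half period (180 degrees)
    of the modulation signal; starting with FIG. 22A is assumed."""
    return PATTERN_22A if (modulation_phase_deg % 360.0) < 180.0 else PATTERN_22B
```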
- during periods other than the charge generation period, a voltage of +V is applied to each of the control electrodes 13b-2 and 13b-5, and a voltage of 0V is applied to the remaining control electrodes.
- the light detection element can generate charges corresponding to Q0 with the voltage pattern of FIG. 22A, and charges corresponding to Q2 with the voltage pattern of FIG. 22B.
- since the +V voltage is always applied to each of the control electrodes 13b-2 and 13b-5, the charge corresponding to Q0 and the charge corresponding to Q2 are integrated and held.
- if both voltage patterns in FIGS. 22A and 22B are used with the timing at which they are applied shifted by 90 degrees, the charge corresponding to Q1 and the charge corresponding to Q3 can be generated and held.
- charges are transferred from the imaging region L1 to the storage region L2 between the period for generating the charges corresponding to Q0 and Q2 and the period for generating the charges corresponding to Q1 and Q3. That is, when the charge corresponding to Q0 has been accumulated in the potential well 13c corresponding to the control electrodes 13b-1, 13b-2, and 13b-3, and the charge corresponding to Q2 has been accumulated in the potential well 13c corresponding to the control electrodes 13b-4, 13b-5, and 13b-6, the charges corresponding to Q0 and Q2 are extracted.
- similarly, when the charge corresponding to Q1 has been accumulated in the potential well 13c corresponding to the control electrodes 13b-1, 13b-2, and 13b-3, and the charge corresponding to Q3 has been accumulated in the potential well 13c corresponding to the control electrodes 13b-4, 13b-5, and 13b-6, the charges corresponding to Q1 and Q3 are extracted.
- in this way, the charges corresponding to Q0 to Q3 can be taken out by two reading operations, and the phase ψ can be obtained using the extracted charges. For example, when an image of 30 frames per second is required, the total of the period for generating the charges corresponding to Q0 and Q2 and the period for generating the charges corresponding to Q1 and Q3 is made shorter than 1/60 second.
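As a driver-level illustration of the two-readout sequence (hypothetical; the sensor object and its method names are invented for this sketch):

```python
def capture_four_integrals(sensor):
    """Control-flow sketch of the two-readout frame capture of the
    seventh embodiment; 'sensor' and its methods are illustrative only.
    For 30 frames/s, the two charge-generation periods together are
    kept shorter than 1/60 second, as stated above."""
    # First generation period: voltage patterns of FIGS. 22A/22B -> charges for Q0 and Q2.
    q0, q2 = sensor.generate_and_read(phase_offset_deg=0)
    # Second generation period: the same patterns shifted by 90 degrees -> Q1 and Q3.
    q1, q3 = sensor.generate_and_read(phase_offset_deg=90)
    return q0, q1, q2, q3
```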
- a voltage of +V is applied to the control electrode 13b.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Life Sciences & Earth Sciences (AREA)
- Geophysics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Geophysics And Detection Of Objects (AREA)
- Image Processing (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/658,869 US8330814B2 (en) | 2004-07-30 | 2005-07-29 | Individual detector and a tailgate detection device |
EP05767175A EP1772752A4 (en) | 2004-07-30 | 2005-07-29 | Single detector and additional detector |
CN2005800139668A CN1950722B (zh) | 2004-07-30 | 2005-07-29 | 个体检测器和共入检测设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004224485 | 2004-07-30 | ||
JP2004-224485 | 2004-07-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006011593A1 true WO2006011593A1 (ja) | 2006-02-02 |
Family
ID=35786339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/013928 WO2006011593A1 (ja) | 2004-07-30 | 2005-07-29 | 個体検出器及び共入り検出装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8330814B2 (ja) |
EP (1) | EP1772752A4 (ja) |
JP (1) | JP4400527B2 (ja) |
KR (1) | KR101072950B1 (ja) |
CN (1) | CN1950722B (ja) |
WO (1) | WO2006011593A1 (ja) |
Families Citing this family (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5000928B2 (ja) * | 2006-05-26 | 2012-08-15 | 綜合警備保障株式会社 | 物体検知装置および方法 |
JP4857974B2 (ja) * | 2006-07-13 | 2012-01-18 | トヨタ自動車株式会社 | 車両周辺監視装置 |
JP5118335B2 (ja) * | 2006-11-27 | 2013-01-16 | パナソニック株式会社 | 通過管理システム |
JP5016299B2 (ja) * | 2006-11-27 | 2012-09-05 | パナソニック株式会社 | 通過管理システム |
JP5170731B2 (ja) * | 2007-02-01 | 2013-03-27 | 株式会社メガチップス | 通行監視システム |
JP5065744B2 (ja) * | 2007-04-20 | 2012-11-07 | パナソニック株式会社 | 個体検出器 |
JP5133614B2 (ja) * | 2007-06-22 | 2013-01-30 | 株式会社ブリヂストン | 3次元形状測定システム |
JP5014241B2 (ja) * | 2007-08-10 | 2012-08-29 | キヤノン株式会社 | 撮像装置、およびその制御方法 |
US9131140B2 (en) | 2007-08-10 | 2015-09-08 | Canon Kabushiki Kaisha | Image pickup apparatus and image pickup method |
DE102008016516B3 (de) * | 2008-01-24 | 2009-05-20 | Kaba Gallenschütz GmbH | Zugangskontrollvorrichtung |
DE102009009047A1 (de) * | 2009-02-16 | 2010-08-19 | Daimler Ag | Verfahren zur Objektdetektion |
CN101937563B (zh) * | 2009-07-03 | 2012-05-30 | 深圳泰山在线科技有限公司 | 一种目标检测方法和设备及其使用的图像采集装置 |
JP2011185664A (ja) * | 2010-03-05 | 2011-09-22 | Panasonic Electric Works Co Ltd | 対象物検出装置 |
DE102010011225B3 (de) * | 2010-03-12 | 2011-02-24 | Mühlbauer Ag | Personendurchgangskontrolle mit Kamerasystem |
JP5369036B2 (ja) * | 2010-03-26 | 2013-12-18 | パナソニック株式会社 | 通過者検出装置、通過者検出方法 |
US9355556B2 (en) | 2010-04-15 | 2016-05-31 | Iee International Electronics & Engineering S.A. | Configurable access control sensing device |
EP2395451A1 (en) * | 2010-06-09 | 2011-12-14 | Iee International Electronics & Engineering S.A. | Configurable access control sensing device |
WO2011139734A2 (en) * | 2010-04-27 | 2011-11-10 | Sanjay Nichani | Method for moving object detection using an image sensor and structured light |
WO2012023639A1 (ko) * | 2010-08-17 | 2012-02-23 | 엘지전자 주식회사 | 다수의 센서를 이용하는 객체 계수 방법 및 장치 |
JP5845582B2 (ja) * | 2011-01-19 | 2016-01-20 | セイコーエプソン株式会社 | 位置検出システム、表示システム及び情報処理システム |
JP5845581B2 (ja) * | 2011-01-19 | 2016-01-20 | セイコーエプソン株式会社 | 投写型表示装置 |
JP5830876B2 (ja) * | 2011-02-18 | 2015-12-09 | 富士通株式会社 | 距離算出プログラム、距離算出方法及び距離算出装置 |
JP5177461B2 (ja) | 2011-07-11 | 2013-04-03 | オプテックス株式会社 | 通行監視装置 |
TW201322048A (zh) * | 2011-11-25 | 2013-06-01 | Cheng-Xuan Wang | 景深變化偵測系統、接收裝置、景深變化偵測及連動系統 |
CN103164894A (zh) * | 2011-12-08 | 2013-06-19 | 鸿富锦精密工业(深圳)有限公司 | 票闸控制装置及方法 |
EP2704107A3 (en) * | 2012-08-27 | 2017-08-23 | Accenture Global Services Limited | Virtual Access Control |
EP2893521A1 (en) * | 2012-09-07 | 2015-07-15 | Siemens Schweiz AG | Methods and apparatus for establishing exit/entry criteria for a secure location |
TWI448990B (zh) * | 2012-09-07 | 2014-08-11 | Univ Nat Chiao Tung | 以分層掃描法實現即時人數計數 |
JP2014092998A (ja) * | 2012-11-05 | 2014-05-19 | Nippon Signal Co Ltd:The | 乗降客数カウントシステム |
US20140133753A1 (en) * | 2012-11-09 | 2014-05-15 | Ge Aviation Systems Llc | Spectral scene simplification through background subtraction |
CN103268654A (zh) * | 2013-05-30 | 2013-08-28 | 苏州福丰科技有限公司 | 一种基于三维面部识别的电子锁 |
CN103345792B (zh) * | 2013-07-04 | 2016-03-02 | 南京理工大学 | 基于传感器景深图像的客流统计装置及其方法 |
US9199576B2 (en) | 2013-08-23 | 2015-12-01 | Ford Global Technologies, Llc | Tailgate position detection |
JP6134641B2 (ja) * | 2013-12-24 | 2017-05-24 | 株式会社日立製作所 | 画像認識機能を備えたエレベータ |
JP6300571B2 (ja) * | 2014-02-27 | 2018-03-28 | 日鉄住金テクノロジー株式会社 | 収穫補助装置 |
US9823350B2 (en) * | 2014-07-31 | 2017-11-21 | Raytheon Company | Linear mode computational sensing LADAR |
US9311802B1 (en) | 2014-10-16 | 2016-04-12 | Elwha Llc | Systems and methods for avoiding collisions with mobile hazards |
US9582976B2 (en) | 2014-10-16 | 2017-02-28 | Elwha Llc | Systems and methods for detecting and reporting hazards on a pathway |
CN104809794B (zh) * | 2015-05-18 | 2017-04-12 | 苏州科达科技股份有限公司 | 门禁控制方法及系统 |
JP6481537B2 (ja) * | 2015-07-14 | 2019-03-13 | コニカミノルタ株式会社 | 被監視者監視装置および被監視者監視方法 |
CN105054936B (zh) * | 2015-07-16 | 2017-07-14 | 河海大学常州校区 | 基于Kinect景深图像的快速身高和体重测量方法 |
JP6512034B2 (ja) * | 2015-08-26 | 2019-05-15 | 富士通株式会社 | 測定装置、測定方法及び測定プログラム |
ES2751364T3 (es) * | 2016-02-04 | 2020-03-31 | Holding Assessoria I Lideratge S L Hal Sl | Detección de acceso fraudulento en puertas de acceso controlado |
CN106157412A (zh) * | 2016-07-07 | 2016-11-23 | 浪潮电子信息产业股份有限公司 | 一种人员准入系统及方法 |
EP3598175B1 (en) * | 2017-03-14 | 2023-06-21 | Konica Minolta, Inc. | Object detection system |
JP6713619B2 (ja) * | 2017-03-30 | 2020-06-24 | 株式会社エクォス・リサーチ | 身体向推定装置および身体向推定プログラム |
CN108280802A (zh) * | 2018-01-12 | 2018-07-13 | 盎锐(上海)信息科技有限公司 | 基于3d成像的图像获取方法及装置 |
CN108184108A (zh) * | 2018-01-12 | 2018-06-19 | 盎锐(上海)信息科技有限公司 | 基于3d成像的图像生成方法及装置 |
CN108268842A (zh) * | 2018-01-12 | 2018-07-10 | 盎锐(上海)信息科技有限公司 | 图像识别方法及装置 |
CN108089773B (zh) * | 2018-01-23 | 2021-04-30 | 歌尔科技有限公司 | 一种基于景深投影的触控识别方法、装置及投影部件 |
WO2019161562A1 (en) * | 2018-02-26 | 2019-08-29 | Intel Corporation | Object detection with image background subtracted |
CN110008802B (zh) | 2018-12-04 | 2023-08-29 | 创新先进技术有限公司 | 从多个脸部中选择目标脸部及脸部识别比对方法、装置 |
EP3680814A1 (de) * | 2019-01-14 | 2020-07-15 | Kaba Gallenschütz GmbH | Verfahren zur erkennung von bewegungsabläufen und passiererkennungssystem |
CN109867186B (zh) * | 2019-03-18 | 2020-11-10 | 浙江新再灵科技股份有限公司 | 一种基于智能视频分析技术的电梯困人检测方法及系统 |
JP7311299B2 (ja) * | 2019-04-10 | 2023-07-19 | 株式会社国際電気通信基礎技術研究所 | 人認識システムおよび人認識プログラム |
US10850709B1 (en) * | 2019-08-27 | 2020-12-01 | Toyota Motor Engineering & Manufacturing North America, Inc. | Facial recognition and object detection for vehicle unlocking scenarios |
WO2021050753A1 (en) * | 2019-09-10 | 2021-03-18 | Orion Entrance Control, Inc. | Method and system for providing access control |
CA3181167A1 (en) * | 2020-04-24 | 2021-10-28 | Alarm.Com Incorporated | Enhanced property access with video analytics |
CN112050944B (zh) * | 2020-08-31 | 2023-12-08 | 深圳数联天下智能科技有限公司 | 门口位置确定方法及相关装置 |
CN117916781A (zh) * | 2021-08-31 | 2024-04-19 | 亚萨合莱自动门系统有限公司 | 用于操作人员分离装置的方法以及人员分离装置 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1057626C (zh) | 1994-01-07 | 2000-10-18 | 中国大恒公司 | 区域移动物体监视控制管理系统 |
JP3123587B2 (ja) | 1994-03-09 | 2001-01-15 | 日本電信電話株式会社 | 背景差分による動物体領域抽出方法 |
JP3233584B2 (ja) * | 1996-09-04 | 2001-11-26 | 松下電器産業株式会社 | 通過人数検知装置 |
AU2003221893A1 (en) | 2002-04-08 | 2003-10-27 | Newton Security Inc. | Tailgating and reverse entry detection, alarm, recording and prevention using machine vision |
US7203356B2 (en) * | 2002-04-11 | 2007-04-10 | Canesta, Inc. | Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications |
JP2004062980A (ja) * | 2002-07-29 | 2004-02-26 | Toyota Gakuen | 磁性合金、磁気記録媒体、および磁気記録再生装置 |
US20040260513A1 (en) * | 2003-02-26 | 2004-12-23 | Fitzpatrick Kerien W. | Real-time prediction and management of food product demand |
US7623674B2 (en) * | 2003-11-05 | 2009-11-24 | Cognex Technology And Investment Corporation | Method and system for enhanced portal security through stereoscopy |
WO2006011593A1 (ja) * | 2004-07-30 | 2006-02-02 | Matsushita Electric Works, Ltd. | 個体検出器及び共入り検出装置 |
JP4122384B2 (ja) * | 2005-01-31 | 2008-07-23 | オプテックス株式会社 | 通行監視装置 |
2005
- 2005-07-29 WO PCT/JP2005/013928 patent/WO2006011593A1/ja active Application Filing
- 2005-07-29 JP JP2005221822A patent/JP4400527B2/ja not_active Expired - Fee Related
- 2005-07-29 CN CN2005800139668A patent/CN1950722B/zh not_active Expired - Fee Related
- 2005-07-29 KR KR1020087010556A patent/KR101072950B1/ko not_active IP Right Cessation
- 2005-07-29 EP EP05767175A patent/EP1772752A4/en not_active Withdrawn
- 2005-07-29 US US11/658,869 patent/US8330814B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000230809A (ja) * | 1998-12-09 | 2000-08-22 | Matsushita Electric Ind Co Ltd | 距離データの補間方法,カラー画像階層化方法およびカラー画像階層化装置 |
JP2002277239A (ja) * | 2001-03-19 | 2002-09-25 | Matsushita Electric Works Ltd | 距離測定装置 |
JP2003057007A (ja) | 2001-08-10 | 2003-02-26 | Matsushita Electric Works Ltd | 距離画像を用いた人体検知方法 |
JP2003196656A (ja) * | 2001-12-28 | 2003-07-11 | Matsushita Electric Works Ltd | 距離画像処理装置 |
JP2004124497A (ja) * | 2002-10-02 | 2004-04-22 | Tokai Riken Kk | 本人確認と連れ込み防止の機能を備えた入退室管理システム |
Non-Patent Citations (1)
Title |
---|
See also references of EP1772752A4 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006064695A (ja) * | 2004-07-30 | 2006-03-09 | Matsushita Electric Works Ltd | 個体検出器及び共入り検出装置 |
JP2009294755A (ja) * | 2008-06-03 | 2009-12-17 | Nippon Telegr & Teleph Corp <Ntt> | 混雑度計測装置、混雑度計測方法、混雑度計測プログラムおよびそのプログラムを記録した記録媒体 |
JP2017539114A (ja) * | 2014-10-30 | 2017-12-28 | 日本電気株式会社 | 監視システム、監視方法およびプログラム |
US10735693B2 (en) | 2014-10-30 | 2020-08-04 | Nec Corporation | Sensor actuation based on sensor data and coverage information relating to imaging range of each sensor |
US10893240B2 (en) | 2014-10-30 | 2021-01-12 | Nec Corporation | Camera listing based on comparison of imaging range coverage information to event-related data generated based on captured image |
US11800063B2 (en) | 2014-10-30 | 2023-10-24 | Nec Corporation | Camera listing based on comparison of imaging range coverage information to event-related data generated based on captured image |
Also Published As
Publication number | Publication date |
---|---|
US8330814B2 (en) | 2012-12-11 |
KR20080047485A (ko) | 2008-05-28 |
JP2006064695A (ja) | 2006-03-09 |
JP4400527B2 (ja) | 2010-01-20 |
CN1950722B (zh) | 2010-05-05 |
EP1772752A1 (en) | 2007-04-11 |
US20090167857A1 (en) | 2009-07-02 |
EP1772752A4 (en) | 2009-07-08 |
CN1950722A (zh) | 2007-04-18 |
KR101072950B1 (ko) | 2011-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4400527B2 (ja) | 共入り検出装置 | |
JP6764547B1 (ja) | カメラを用いた追跡及び許可延長システム | |
US7397929B2 (en) | Method and apparatus for monitoring a passageway using 3D images | |
US7400744B2 (en) | Stereo door sensor | |
US20210272086A1 (en) | Automated vending case with an integrated credential reader | |
CN106144795B (zh) | 通过识别用户操作用于乘客运输控制和安全的系统和方法 | |
CN106144797B (zh) | 用于乘客运输的通行列表产生 | |
CN106144862B (zh) | 用于乘客运输门控制的基于深度传感器的乘客感测 | |
Ko | A survey on behavior analysis in video surveillance for homeland security applications | |
JP5155553B2 (ja) | 入退室管理装置 | |
US20050249382A1 (en) | System and Method for Restricting Access through a Mantrap Portal | |
CN101552910B (zh) | 基于全方位计算机视觉的遗留物检测装置 | |
CN102833478B (zh) | 容错背景模型化 | |
MX2007016406A (es) | Deteccion y rastreo de objetivo a partir de flujos de video aereos. | |
WO2011139734A2 (en) | Method for moving object detection using an image sensor and structured light | |
EP1771749A1 (en) | Image processing device | |
JP2013250856A (ja) | 監視システム | |
CN108701211A (zh) | 用于实时地检测、跟踪、估计和识别占用的基于深度感测的系统 | |
KR20070031896A (ko) | 개체 검출기 및 동반입장 검출 디바이스 | |
Zou et al. | Occupancy detection in elevator car by fusing analysis of dual videos | |
WO2007138025A1 (en) | Electro-optical device for counting persons, or other, based on processing three-dimensional images, and relative method | |
WO2019156599A1 (en) | System and method of people's movement control | |
US20230410545A1 (en) | Lidar-based Alert System | |
Bianchi et al. | Evaluation of a foreground segmentation algorithm for 3d camera sensors | |
Jia et al. | Anti-Tailing AB-Door Detection Based on Motion Template Algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005767175 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067022873 Country of ref document: KR Ref document number: 200580013966.8 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 1020067022873 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2005767175 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11658869 Country of ref document: US |