WO2004012142A1 - Image Processing Apparatus (画像処理装置) - Google Patents
- Publication number
- WO2004012142A1 (PCT/JP2002/007632)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- distance
- area
- unit
- image capturing
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Definitions
- the present invention relates to an image processing apparatus capable of automatically recognizing the state of the passengers of an elevator or the like, specifically, whether or not a passenger in a wheelchair is among them.
- in the wheelchair operation mode, the traveling speed of the car is lower than in the normal operation mode, and when the car stops at a floor it is landed level with the hall floor more precisely than in the normal operation mode.
- in the normal operation mode, by contrast, the traveling speed is not lowered and no extra landing accuracy is required, so the operating efficiency of the elevator can be kept high.
- conventionally, the switching of the operation mode of the elevator system for the boarding and alighting of wheelchair passengers was performed by the wheelchair passenger operating a push button dedicated to wheelchair passengers, either when calling the car at the hall or after entering the car.
- it is desirable, for the convenience of wheelchair passengers, that the switching of the operation mode of the elevator system for such passengers be performed by detecting wheelchair passengers automatically. Moreover, young children entering the elevator may press the button dedicated to wheelchair passengers accidentally or without understanding it, which reduces the operating efficiency of the elevator. From this point of view as well, it is desirable to detect wheelchair passengers automatically and switch the operation mode of the elevator automatically.
- therefore, an image processing device is required that automatically recognizes whether or not a passenger about to enter the elevator is in a wheelchair.
- as a conventional technique relating to image recognition of elevator passengers, although not an image processing device for recognizing wheelchair passengers, Japanese Patent Application Laid-Open No. 6-92563 discloses an elevator person-counting device.
- FIG. 14 is a block diagram showing the configuration of this conventional elevator person-counting device.
- the elevator person-counting device includes an imaging means 901, an image processing means 902, a neural network 903, and a number-of-people determination means 904.
- the imaging means 901 is installed in the ceiling of the elevator car or of the elevator hall, above the passengers' heads, so as to image vertically downward.
- the state of the elevator car or of the elevator hall is imaged by the imaging means 901 and input to the image processing means 902.
- FIG. 15 shows an image captured by the imaging means 901, in which three people 906 to 908 are captured.
- the image processing means 902 generates a difference image by binarizing the absolute value of the difference between each pixel value of the captured image and a background image of the elevator car or elevator hall taken with no humans present, thereby making the human regions explicit. Further, the image processing means 902 divides the binarized difference image into a plurality of blocks of different areas, calculates the ratio of human region in each block, and feeds the calculated results as input data into the neural network 903. The input data is processed by the previously trained neural network 903, and the output signal is input to the number-of-people determination means 904, which determines the number of people from the output signal of the neural network 903 and transmits the result to another control device or the like according to the purpose.
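The block-ratio features described here are simple to compute. A minimal sketch follows; the grid size and names are our assumptions, since the prior art specifies only "a plurality of blocks having different areas", simplified here to a uniform grid:

```python
import numpy as np

def block_occupancy_features(binary_diff: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Compute the human-region ratio for each block of a binarized
    difference image, as input features for the people-counting network.

    binary_diff -- 2-D array of 0/1 values (1 = changed/human pixel)
    grid        -- number of blocks vertically and horizontally (assumed)
    """
    rows, cols = grid
    h, w = binary_diff.shape
    feats = np.empty(rows * cols)
    for i in range(rows):
        for j in range(cols):
            block = binary_diff[i * h // rows:(i + 1) * h // rows,
                                j * w // cols:(j + 1) * w // cols]
            feats[i * cols + j] = block.mean()   # fraction of human pixels
    return feats
```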
- when this conventional elevator person-counting device is applied to the above-described task of recognizing a wheelchair passenger, there is a problem: in an image of a person taken from directly above, the person appears extremely small compared with an image taken from the side or obliquely from above, so it cannot be determined stably whether the imaged person is a standing pedestrian or a person in a wheelchair.
- it is an object of the present invention to provide an image processing device that can separate each person even when a plurality of people are present adjacent to each other, and can determine from a captured image whether a person is a standing pedestrian or a person in a wheelchair.
Disclosure of the Invention
- An image processing apparatus according to the present invention includes: image capturing means that captures an image of a target area to be monitored; face area extracting means that extracts a human face area from the image captured by the image capturing means; distance measuring means that measures the distance from the image capturing means for each position of the image captured by the image capturing means; distance distribution calculating means that sets a distance measurement area a predetermined distance below the extracted human face area and calculates, using the distances measured by the distance measuring means, the distribution of distances from the image capturing means within that area; and wheelchair presence determining means that compares the measured distance of the face area from the image capturing means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- According to this invention, the image of the target area to be monitored is captured by the image capturing means, and a human face area is extracted from the captured image by the face area extracting means. The distance from the image capturing means is measured for each position of the captured image by the distance measuring means. The distance distribution calculating means sets a distance measurement area a predetermined distance below the extracted face area and, using the distances measured by the distance measuring means, calculates the distribution of distances from the image capturing means within that area. The wheelchair presence determining means then compares the distance of the face area from the image capturing means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- An image processing apparatus according to the next invention is, in the above invention, further provided with change area detecting means that generates a difference image from the image captured by the image capturing means and a background image in which no object is present in the target area, and extracts the changed areas based on the generated difference image; the face area extracting means extracts the face area only within the changed areas extracted by the change area detecting means.
- According to this invention, a difference image is generated by the change area detecting means from the image captured by the image capturing means and a background image in which no object is present in the target area, and the changed areas are extracted based on the generated difference image. The face area is then extracted by the face area extracting means only within the changed areas extracted by the change area detecting means.
- An image processing apparatus according to the next invention is characterized in that, in the above invention, the distance measuring means is a scanning laser range finder.
- According to this invention, a scanning laser range finder is used as the distance measuring means.
- An image processing apparatus according to the next invention is, in the above invention, further provided with change area detecting means that generates a difference image from the image captured by the image capturing means and a background image in which no object is present in the target area, and extracts the changed areas based on the generated difference image; the face area extracting means extracts the face area only within the changed areas extracted by the change area detecting means.
- According to this invention, a difference image is generated by the change area detecting means from the image captured by the image capturing means and a background image in which no object is present in the target area, and the changed areas are extracted based on the generated difference image. The face area is then extracted by the face area extracting means only within the changed areas extracted by the change area detecting means.
- An image processing apparatus according to the next invention includes: first image capturing means that captures an image of a target area to be monitored; second and third image capturing means, arranged apart from each other in the horizontal direction, that capture images of the target area; face area extracting means that extracts a human face area from the image captured by the first image capturing means; stereo calculation means that measures, by the stereo method using the two images captured by the second and third image capturing means, the distance from the first image capturing means for each position of the image captured by the first image capturing means; distance distribution calculating means that sets a distance measurement area a predetermined distance below the extracted human face area and calculates, using the distances measured by the stereo calculation means, the distribution of distances from the first image capturing means within that area; and wheelchair presence determining means that compares the distance of the face area from the first image capturing means measured by the stereo calculation means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- According to this invention, the image of the target area to be monitored is captured by the first image capturing means, and a human face area is extracted from that image by the face area extracting means. Images of the target area are also captured by the second and third image capturing means, which are arranged apart from each other in the horizontal direction, and the stereo calculation means measures, by the stereo method using these two images, the distance from the first image capturing means for each position of the image captured by the first image capturing means. The distance distribution calculating means sets a distance measurement area a predetermined distance below the extracted human face area and, using the distances measured by the stereo calculation means, calculates the distribution of distances from the first image capturing means within that area. The wheelchair presence determining means then compares the distance of the face area from the first image capturing means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- An image processing apparatus according to the next invention is, in the above invention, further provided with change area detecting means that generates a difference image from the image captured by the first image capturing means and a background image in which no object in the target area exists, and extracts the changed areas based on the generated difference image; the face area extracting means extracts the face area only within the changed areas extracted by the change area detecting means.
- According to this invention, a difference image is generated by the change area detecting means from the image captured by the first image capturing means and a background image in which no object in the target area exists, and the changed areas are extracted based on the generated difference image. The face area is then extracted by the face area extracting means only within the changed areas extracted by the change area detecting means.
- An image processing apparatus according to the next invention comprises: first image capturing means that captures an image of a target area to be monitored; second image capturing means, arranged horizontally apart from the first image capturing means, that captures an image of the target area; face area extracting means that extracts a human face area from the image captured by the first image capturing means; stereo calculation means that measures, by the stereo method using the two images captured by the first and second image capturing means, the distance from the first image capturing means for each position of the image captured by the first image capturing means; distance distribution calculating means that sets a distance measurement area a predetermined distance below the extracted human face area and calculates, using the distances measured by the stereo calculation means, the distribution of distances from the first image capturing means within that area; and wheelchair presence determining means that compares the distance of the face area from the first image capturing means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- According to this invention, the image of the target area to be monitored is captured by the first image capturing means and by the second image capturing means, which is arranged at a distance from the first image capturing means in the horizontal direction. The face area extracting means extracts a human face area from the image captured by the first image capturing means, and the stereo calculation means measures, by the stereo method using the two images captured by the first and second image capturing means, the distance from the first image capturing means for each position of the image captured by the first image capturing means. The distance distribution calculating means sets a distance measurement area a predetermined distance below the extracted human face area and, using the distances measured by the stereo calculation means, calculates the distribution of distances from the first image capturing means within that area. The wheelchair presence determining means then compares the distance of the face area from the first image capturing means with the distance distribution in the distance measurement area to determine whether or not the person having the face area is in a wheelchair.
- An image processing apparatus according to the next invention is, in the above invention, further provided with change area detecting means that generates a difference image from the image captured by the first image capturing means and a background image in which no object in the target area exists, and extracts the changed areas based on the generated difference image; the face area extracting means extracts the face area only within the changed areas extracted by the change area detecting means.
- According to this invention, a difference image is generated by the change area detecting means from the image captured by the first image capturing means and a background image in which no object in the target area exists, and the changed areas are extracted based on the generated difference image. The face area extracting means then extracts the face area only within the changed areas extracted by the change area detecting means.
Brief Description of the Drawings
- FIG. 1 is a block diagram showing Embodiment 1 of the image processing apparatus of the present invention;
- FIG. 2 is a diagram showing a method for roughly calculating the height of a person in the target area;
- FIG. 3 is a diagram for explaining a method of determining whether a person in the target area is in a wheelchair: FIG. 3(a) schematically shows a passenger in a standing position, and FIG. 3(b) is a front view showing the distance measurement area set for the passenger of (a);
- FIG. 4 is a similar diagram: FIG. 4(a) schematically shows a passenger in a wheelchair, and FIG. 4(b) is a front view showing the distance measurement area set for the passenger of (a);
- FIG. 5 is a schematic diagram for explaining the method of distance measurement within the distance measurement area;
- FIG. 6(a) is a distribution map of the sections of the distance measurement area when the passenger is in a standing position, created with the horizontal position of each section in the distance measurement area and the horizontal distance of each section from the image capturing unit as parameters, and FIG. 6(b) is a diagram in which the horizontal distance of a representative point of the face area from the image capturing unit is plotted in the distribution map of (a);
- FIG. 7(a) is the corresponding distribution map of the sections when the passenger is in a wheelchair, and FIG. 7(b) is a diagram in which the horizontal distance of the representative point of the face area from the image capturing unit is plotted in the distribution map of (a);
- FIG. 8 is a flowchart showing the processing procedure of the image processing apparatus shown in FIG. 1;
- FIG. 9 is a block diagram showing Embodiment 2 of the image processing apparatus of the present invention;
- FIG. 10 is a diagram for explaining the principle of the twin-lens stereo method;
- FIG. 11 is a flowchart showing the processing procedure of the image processing apparatus shown in FIG. 9;
- FIG. 12 is a block diagram showing Embodiment 3 of the image processing apparatus of the present invention;
- FIG. 13 is a flowchart showing the processing procedure of the image processing apparatus shown in FIG. 12;
- FIG. 14 is a block diagram of a conventional image processing device; and
- FIG. 15 is a diagram showing an image captured by the imaging means of the device shown in FIG. 14.
Best Mode for Carrying Out the Invention
- FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to a first embodiment of the present invention.
- as shown in FIG. 1, the image processing apparatus includes: an image capturing unit 1 that captures an image of the target area to be monitored; a distance measurement unit 2 that measures the distance at each position in the field of view; a face area extraction unit 3 that extracts a human face area from the image captured by the image capturing unit 1; a distance measurement area calculation unit 4 that sets the distance measurement area in which the distance distribution is to be obtained; a distance distribution calculation unit 5 that calculates the distance distribution within the distance measurement area; and a wheelchair presence determination unit 6 that determines whether a person is in a wheelchair based on the distance distribution.
- the distance distribution calculation means in the claims corresponds to the distance measurement area calculation unit 4 and the distance distribution calculation unit 5.
- the image capturing unit 1 is a device realized by a CCD camera or the like, installed near the ceiling of the elevator car or of the elevator hall so that the faces of the people to be imaged can be captured, and it images the target area to be monitored. That is, in the present invention the image capturing unit 1 is installed so as to image the target area not from vertically above but from obliquely above or from the side.
- the image capturing unit 1 is connected to the face area extraction unit 3 and outputs its image data to the face area extraction unit 3.
- the distance measurement unit 2 is implemented by a scanning laser range finder or the like that irradiates the target area with a pulsed laser while scanning it, and measures the distance to an object from the laser light reflected by objects present in the target area.
- the laser light generation unit 21 is a device realized by a semiconductor laser or the like that emits a pulsed laser, and is mounted at substantially the same position as the image capturing unit 1. By changing the way the laser light blinks, it gives the laser light an emission pattern for associating the emitted laser light with the points of reflected laser light imaged by the laser light detection unit 23, and the pulsed laser is switched on and off according to this emission pattern.
- the laser light scanning unit 22 controls the scanning speed and scanning range so that the pulsed laser emitted from the laser light generation unit 21 according to the emission pattern is scanned in synchronization with the image capturing timing of the laser light detection unit 23.
- the laser light detection unit 23 is a device realized by a CCD camera or the like; it captures, as a laser light reflection image, the state in which the pulsed laser emitted from the laser light generation unit 21 is reflected by objects, in synchronization with the scanning of the laser light scanning unit 22. The image storage unit 24 stores the laser light reflection images captured by the laser light detection unit 23 together with time information.
- the distance calculation unit 25 reads the laser light reflection images stored in the image storage unit 24 and calculates, by image processing, the distance from the laser light generation unit 21 to the position where the laser light was reflected. Since the time at which each image was captured is associated with the scanning angle (and emission pattern) of the laser light generation unit 21 at that time, the three-dimensional position of the point on the object irradiated by the pulsed laser, and hence its distance from the laser light generation unit 21 (image capturing unit 1), is obtained from the scanning angle and the baseline distance between the laser light generation unit 21 and the laser light detection unit 23, based on the principle of triangulation. The distance data storage unit 26 stores the measured distance data.
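As an illustration of this triangulation step, the following sketch (function and variable names are ours; the patent gives no explicit formulas for this unit) recovers the generator-to-target range from the scanning angle, the detected bearing, and the generator-detector baseline:

```python
import math

def triangulate(scan_angle_deg: float, detect_angle_deg: float, baseline_m: float) -> float:
    """Estimate the distance to a laser reflection point by triangulation.

    scan_angle_deg   -- angle of the emitted beam, measured from the baseline
                        at the laser generator
    detect_angle_deg -- bearing of the reflected spot seen by the detector,
                        measured from the baseline at the detector
    baseline_m       -- distance between laser generator and detector

    The three angles are the interior angles of the triangle formed by
    generator, detector, and target point.
    """
    a = math.radians(scan_angle_deg)
    b = math.radians(detect_angle_deg)
    c = math.pi - a - b                      # angle at the target point
    if c <= 0:
        raise ValueError("rays do not converge")
    # Law of sines: the side opposite the detector angle is the
    # generator-to-target range.
    return baseline_m * math.sin(b) / math.sin(c)

# Example: 60 deg emit angle, 80 deg detect bearing, 0.3 m baseline
print(f"{triangulate(60.0, 80.0, 0.3):.3f} m")
```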
- the face area extraction unit 3 extracts a human face area from the image input from the image capturing unit 1 using a face area extraction algorithm, and calculates the position of the extracted face area, for example the position of its center of gravity. The face area is then stored together with its position.
- as the human face area extraction algorithm, known methods are used: for example, a method of extracting the face area based on skin-color regions in a color image; template matching, in which a typical human face image is prepared as a template and the portion of the captured image that best matches the template is extracted as the face area; or a method of extracting the face area by detecting facial parts such as the eyes, nose, mouth, and ears in the captured image.
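As one concrete possibility for the skin-color approach (the patent prescribes no specific implementation; the HSV thresholds below are illustrative assumptions), a sketch using OpenCV:

```python
import cv2
import numpy as np

def extract_face_region(bgr_image: np.ndarray):
    """Return the bounding box and centroid of the largest skin-colored blob.

    A stand-in for the patent's 'face area extraction algorithm'; the HSV
    skin thresholds below are rough, illustrative values.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # assumed skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    face = max(contours, key=cv2.contourArea)               # largest blob as face
    x, y, w, h = cv2.boundingRect(face)
    m = cv2.moments(face)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])   # center of gravity
    return (x, y, w, h), centroid
```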
- the distance measurement area calculation unit 4 sets the distance measurement area based on the stored position of the extracted face area.
- the distance measurement area is a partial region of the target area set in order to determine whether a passenger of the elevator is a standing passenger or a wheelchair passenger. It is set, with reference to the position of the human face, as the region around the knees of a passenger sitting in a wheelchair (the region below the knees).
- FIG. 2 is a diagram showing the method of setting the distance measurement area by the distance measurement area calculation unit 4.
- the distance measurement area is set using statistical data indicating that the wheelchair position of a wheelchair passenger (the position of the knees when sitting in the wheelchair) lies a predetermined distance below the face of the wheelchair passenger 100. Specifically, the distance measurement area calculation unit 4 holds reference data on the width of a typical human face, and by comparing the width of the face area 111 extracted by the face area extraction unit 3 with this reference data, it calculates the distance L from the image capturing unit 1 to the person having the face area 111.
- the installation depression angle of the image capturing unit 1, that is, the angle θ that the optical axis of its lens makes with the horizontal, and its installation height H above the floor are known when the image capturing unit 1 is installed. Using the distance L, the installation depression angle θ, and the installation height H, the approximate height h of the passenger 100 above the floor can be measured.
- the distance measurement area 112 is then set at a position a predetermined distance s below the center of the face of the target passenger 100. The distance measurement area 112 is given a certain extent so that the region below the knees can be extracted even if the build of the passenger 100 differs.
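The patent does not spell out the geometry, but a minimal sketch of the height estimate and area placement, assuming a pinhole camera model and a face centroid near the optical axis, could be:

```python
import math

def passenger_geometry(face_width_px: float, ref_face_width_px_at_1m: float,
                       depression_deg: float, camera_height_m: float,
                       offset_s_m: float):
    """Rough height estimate and distance-measurement-area placement.

    face_width_px            -- measured width of the extracted face area
    ref_face_width_px_at_1m  -- assumed width of an average face imaged at 1 m
    depression_deg           -- installation depression angle of the camera
    camera_height_m          -- installation height H of the camera
    offset_s_m               -- predetermined offset s below the face center

    Assumes a pinhole model, so apparent face width scales as 1/distance,
    and that the face centroid lies close to the optical axis.
    """
    # Distance L from camera to face: width inversely proportional to range.
    L = ref_face_width_px_at_1m / face_width_px
    # Approximate face height above floor: H minus the drop along the axis.
    h = camera_height_m - L * math.sin(math.radians(depression_deg))
    # The distance measurement area sits a fixed distance s below the face.
    area_center_height = h - offset_s_m
    return L, h, area_center_height

L, h, knees = passenger_geometry(40, 60, 20, 2.3, 0.9)
print(f"L={L:.2f} m, face height={h:.2f} m, area center height={knees:.2f} m")
```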
- the distance distribution calculation unit 5 extracts, from the distance data measured by the distance measurement unit 2, only the distance data within the distance measurement area set by the distance measurement area calculation unit 4, and calculates the distance distribution. The result is output to the wheelchair presence determination unit 6. Because the distance distribution calculation unit 5 calculates the distribution only for the distance data in the set distance measurement area, the time that would be spent computing the distribution over other areas is saved.
- the wheelchair presence determination unit 6 determines whether or not the target person is in a wheelchair, and outputs the determination result to the outside.
- FIG. 3 (a) schematically shows a state in which the passenger is in a standing position
- FIG. 3 (b) is a front view showing a distance measurement area set for the passenger in (a).
- Fig. 4 (a) schematically shows a passenger in a wheelchair
- FIG. 4(b) is a front view showing the distance measurement area set for the passenger in (a).
- FIG. 5 is a schematic diagram for explaining a method of distance measurement in the distance measurement area.
- FIG. 6(a) is a distribution map, for all the sections composing the distance measurement area, created with the horizontal position of each section in the distance measurement area and the horizontal distance of each section from the image capturing unit as parameters, when the passenger is in a standing position; FIG. 6(b) is a diagram in which the horizontal distance of the representative point of the passenger's face area from the image capturing unit is plotted in the distribution map of (a).
- FIG. 7(a) is the corresponding distribution map, for all the sections composing the distance measurement area, when the passenger is in a wheelchair; FIG. 7(b) is a diagram in which the horizontal distance of the representative point of the face area from the image capturing unit is plotted in the distribution map of (a).
- in FIGS. 3 to 5, P1 denotes a representative point of the face area, for example its center of gravity (the two-dimensional center of gravity of the face area seen from the front); P2 denotes a position within the distance measurement area 112 set a predetermined distance below the representative point P1; T1 and T2 denote the depression angles from the lens center of the image capturing unit 1 to the positions P1 and P2, respectively; and L1 and L2 denote the distances from the lens center of the image capturing unit 1 to the positions P1 and P2, respectively.
- it is assumed that no moving object other than the target passenger 100 is included in the distance measurement area 112 set in FIGS. 3(b) and 4(b), and that the background is sufficiently far away compared with the passenger's horizontal distance from the image capturing unit.
- the distance distribution calculation unit 5 calculates the horizontal distance L2·cos(T2) from the image capturing unit 1 of each position P2 within the distance measurement area 112 set as shown in FIGS. 3(b) and 4(b).
- specifically, the distance measurement area 112 is treated as shown in FIG. 5: the upper right end of the distance measurement area 112 is taken as the origin, the x axis is taken horizontally rightward from this origin, and the y axis is taken vertically downward through the origin, perpendicular to the x axis. The area is divided equally in the x direction into m columns (m being a natural number), labeled x1, x2, ..., xm in order from the side closer to the origin, and each column is further divided into sections.
- then, for all the sections composing the distance measurement area 112, a distribution map of the sections is created using the position of each section on the x axis of the distance measurement area 112 and its horizontal distance from the image capturing unit 1 as parameters. In FIGS. 6(a) and 7(a), the x axis is the "position of the section on the x axis of the distance measurement area", the same axis as the x axis of the distance measurement area 112 shown in FIG. 5; the y axis is the "horizontal distance from the image capturing unit", that is, the horizontal distance from the image capturing unit 1 calculated for each section; and the z axis is the "number of sections at the same position", indicating how many sections overlap at the point of the x-y plane given by the combination (position on the x axis, horizontal distance from the image capturing unit).
- for a passenger in a wheelchair, a distribution map as shown in FIG. 7(a) is obtained. The distribution drawn at horizontal distance R1 from the image capturing unit 1 represents the region near the feet (knees) of the passenger 100 present within the distance measurement area 112 of FIG. 4(b); the distribution drawn at horizontal distance R2 represents the region near the abdomen included in the upper part of the distance measurement area 112; and the distribution drawn at horizontal distance R3 is the background portion of the distance measurement area 112 where no part of the passenger 100 is present.
- in this case, the horizontal distance L1·cos(T1) of the center of gravity P1 of the face area from the image capturing unit 1 is longer than the horizontal distance L2·cos(T2) of the foot position P2 in the distance measurement area 112 from the image capturing unit 1. Hence, when the horizontal distance of the center of gravity P1 of the face area from the image capturing unit 1 is plotted in FIG. 7(a), its position, as shown in FIG. 7(b), is equal to the horizontal distance R2 from the image capturing unit 1.
- the wheelchair presence determination unit 6 compares the position of the center of gravity P1 of the face area measured by the distance measurement unit 2 with the distance distribution within the distance measurement area 112 calculated by the distance distribution calculation unit 5. If the two lie on a straight vertical line, the target passenger 100 is determined to be in a standing position; if they do not lie on a vertical line, the target passenger 100 is determined to be in a wheelchair. The result is output to an external device such as the elevator control device.
- in the above, the case was described in which the wheelchair presence determination unit 6 determines whether the passenger 100 is standing or in a wheelchair by comparing the position of the center of gravity P1 of the face area with the distance distribution within the distance measurement area 112; however, it is also possible to compare the distance distribution of the entire face area, rather than a single representative point, with the distance distribution within the distance measurement area 112. In that case, the entire face area is divided into a plurality of sections, a distribution map is created for all the sections with the horizontal position of each section within the face area and its horizontal distance from the image capturing unit 1 as parameters, and by comparing it with FIGS. 6(a) and 7(a) it can be determined whether the passenger 100 is standing or in a wheelchair.
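A compact sketch of this comparison follows; the structure, histogram binning, and tolerance are our assumptions, since the patent describes the test only in terms of the distribution maps of FIGS. 6 and 7:

```python
import math
import numpy as np

def is_wheelchair(face_range_m, face_angle_deg,
                  area_ranges_m, area_angles_deg, tol_m=0.15):
    """Decide standing vs. wheelchair from horizontal distances.

    face_range_m, face_angle_deg   -- distance L1 and depression angle T1 of
                                      the face centroid P1 from the camera
    area_ranges_m, area_angles_deg -- distances L2 and depression angles T2
                                      of the sections of the distance
                                      measurement area
    tol_m                          -- assumed alignment tolerance

    A standing passenger's legs lie on the same vertical line as the face,
    so the nearest significant cluster of the area matches the face's
    horizontal distance; a wheelchair passenger's knees protrude toward
    the camera, so the face is clearly farther than that cluster.
    """
    face_h = face_range_m * math.cos(math.radians(face_angle_deg))
    area_h = np.asarray(area_ranges_m) * np.cos(np.radians(area_angles_deg))
    hist, edges = np.histogram(area_h, bins=20)
    significant = np.nonzero(hist >= max(3, 0.05 * hist.sum()))[0]
    if significant.size == 0:
        return False                         # no usable sections measured
    nearest = 0.5 * (edges[significant[0]] + edges[significant[0] + 1])
    return face_h - nearest > tol_m          # face farther than knees -> wheelchair
```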
- next, the operation will be described with reference to the flowchart of FIG. 8. First, a target area such as the elevator car or the elevator hall is imaged by the image capturing unit 1, and the image is input to the face area extraction unit 3 (step S1).
- the face area extraction unit 3 extracts a target human face area from the input image and calculates the position of the center of gravity (step S2).
- next, the distance measurement area calculation unit 4 calculates the distance of the target person from the image capturing unit 1 from the extracted face area, and sets the distance measurement area a predetermined distance below the center of gravity of the face area (step S3).
- next, the laser light generation unit 21 and the laser light scanning unit 22 scan the target area while irradiating it with the pulsed laser, and in synchronization with the laser light scanning unit 22 the laser light detection unit 23 captures images of the laser light reflected by objects in the target area; the captured laser light reflection images are accumulated in the image storage unit 24.
- the distance calculation unit 25 calculates the distance from the laser light generation unit 21 (image capturing unit 1) to the object from the accumulated laser light reflection images and the scanning angle of the laser light scanning unit 22, and the obtained distance data is stored in the distance data storage unit 26 (step S4).
- next, the distance distribution calculation unit 5 extracts the distance data within the distance measurement area from the distance data storage unit 26 and calculates the distance distribution (step S5). The wheelchair presence determination unit 6 then compares the position of the center of gravity of the face area with the calculated distance distribution of the distance data within the distance measurement area: if the center of gravity of the face area and the distribution of the distance data within the distance measurement area are judged to lie on a vertical line, the target person is determined to be in a standing position, and if they are judged not to lie on a vertical line, the target person is determined to be in a wheelchair (step S6). The determination result is output to the outside (step S7), and the process ends.
- the result output by the wheelchair presence determination unit 6 is used, for example, to change the operation mode of the elevator by the elevator control unit.
- when the wheelchair presence determination unit 6 determines that a wheelchair passenger is present in the target area, it sets the wheelchair operation mode in the elevator control unit, in which, for example, the traveling speed is lower than usual and, when stopping at a floor, the car is landed level with the hall floor more precisely than usual.
- when the wheelchair presence determination unit 6 determines that only standing passengers are present in the target area, it sets the normal operation mode in the elevator control unit, in which the traveling speed is not lowered and no extra landing accuracy is required.
- in the above description, the image capturing unit 1 and the laser light detection unit 23 of the distance measurement unit 2 are installed at different positions, but the image capturing unit 1 and the laser light detection unit 23 may also be installed at substantially the same position.
- in that case, the laser light generation unit 21 must be mounted at a position different from the image capturing unit 1. It is also possible to use, as the laser light generation unit 21, a pulsed laser deformed by a cylindrical lens into a slit shape elongated in the vertical (longitudinal) direction. In this case the distance distribution of one vertical line can be measured with a single image capture by the laser light detection unit 23, so there is the advantage that scanning is needed only in the horizontal direction.
- as described above, according to Embodiment 1, the position of the center of gravity of the human face area extracted from the image captured by the image capturing unit 1 is compared with the distance distribution in the region a predetermined distance below the face area, specifically the region below the knees, to determine whether or not the imaged person is in a wheelchair. This has the effect that the presence of a passenger in a wheelchair can be recognized with high probability.
- FIG. 9 is a block diagram showing a configuration of Embodiment 2 of the image processing apparatus according to the present invention.
- the image processing apparatus according to Embodiment 2 differs from Embodiment 1 in that the distance measurement unit 2 of the apparatus shown in FIG. 1 is replaced by a second image capturing unit 31, a third image capturing unit 32, and a stereo calculation unit 33.
- Other configurations are the same as those of the first embodiment, and the same components are denoted by the same reference numerals and description thereof will be omitted.
- the first image capturing unit 1 of the second embodiment corresponds to the image capturing unit 1 of FIG.
- the distance distribution calculation means in the claims corresponds to the distance measurement area calculation unit 4 and the distance distribution calculation unit 5.
- the second image capturing section 31 and the third image capturing section 32 are configured by devices realized by a CCD camera or the like, and are arranged at a distance from each other in the horizontal direction.
- the imaging regions of the second image capturing unit 31 and the third image capturing unit 32 are designed so that the distance from the first image capturing unit 1 to a target present in the target area imaged by the first image capturing unit 1 can be calculated.
- the second image capturing unit 31 and the third image capturing unit 32 are arranged at substantially the same position as the position of the first image capturing unit 1.
- the stereo calculation unit 33 obtains the parallax between the pixels of the two images input from the second image capturing unit 31 and the third image capturing unit 32, whose geometrical positional relationship is known in advance, and detects the distance to an object by converting the parallax into a distance; it has a distance calculation unit 34 and a distance data storage unit 35.
- the principle of the twin-lens stereo method is as follows. Two image capturing units 31 and 32, realized by CCD cameras or the like having the same focal length f and mutually parallel optical axes 201A and 201B, are arranged at a predetermined interval D in the horizontal direction.
- the horizontal axis along which the two image capturing units 31 and 32 are arranged is defined as the X axis, the axis orthogonal to the X axis and extending in the height direction as the Y axis, and the axis orthogonal to both the X axis and the Y axis as the Z axis.
- the image captured by the (second) image capturing unit 31 is referred to as image A, and the image captured by the (third) image capturing unit 32 is referred to as image B.
- when the same object point appears at horizontal position XA in image A and XB in image B, S = XA − XB is the parallax. This parallax S is the difference between the positions at which the object is imaged by the two image capturing units 31 and 32, whose optical axes 201A and 201B are parallel and which are set the predetermined distance D apart, and the distance Z to the object follows from the standard relation Z = f·D/S. In this manner, the distance to an object on the image can be obtained from the two images A and B captured by the second image capturing unit 31 and the third image capturing unit 32.
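A minimal sketch of this disparity-to-distance conversion, using the standard parallel-axis relation Z = f·D/S (variable names are ours):

```python
import numpy as np

def depth_from_disparity(x_a_px: np.ndarray, x_b_px: np.ndarray,
                         focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert per-point disparities into distances (twin-lens stereo).

    x_a_px, x_b_px -- horizontal coordinates of matched points in images A and B
    focal_px       -- common focal length f of the two cameras, in pixels
    baseline_m     -- horizontal interval D between the two cameras

    Uses the parallel-axis relation Z = f * D / S with S = xA - xB.
    """
    disparity = x_a_px - x_b_px
    z = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0                      # zero disparity = point at infinity
    z[valid] = focal_px * baseline_m / disparity[valid]
    return z

# A point 15 px apart in the two images, f = 700 px, D = 0.12 m -> Z = 5.6 m
print(depth_from_disparity(np.array([320.0]), np.array([305.0]), 700.0, 0.12))
```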
- the distance calculation unit 34 of the stereo calculation unit 33 calculates, from the two images A and B input from the second image capturing unit 31 and the third image capturing unit 32, the distance from the first image capturing unit 1 to the target. The distance data storage unit 35 stores the distance data calculated by the distance calculation unit 34.
- next, the second image capturing unit 31 captures an image of the target area, and the captured image is stored in the stereo calculation unit 33 (step S14).
- then, the third image capturing unit 32 captures an image of the target area, and the captured image is stored in the stereo calculation unit 33 (step S15). In steps S14 and S15, the imaging of the target area by the second image capturing unit 31 and the third image capturing unit 32 may be performed simultaneously or at a predetermined time interval.
- the distance calculation unit 34 of the stereo calculation unit 33 performs stereo image processing on the two stored images to calculate the distance from the first image capturing unit 1 for each position in the image, and the calculated distance data is stored in the distance data storage unit 35 (step S16).
- thereafter, the same processing as in steps S5 to S7 described in FIG. 8 of Embodiment 1 is performed to determine whether or not the person having the extracted face area is in a wheelchair (steps S17 to S19). That is, the distance distribution of the distance data within the distance measurement area set a predetermined distance below the center of gravity of the face area in the image captured by the first image capturing unit 1 is calculated, and the center of gravity of the face area is compared with this distance distribution. If, as a result of the comparison, the center of gravity of the face area and the distribution of the distance data within the distance measurement area lie on a vertical line, the person having the face area is determined to be a standing passenger; if they do not lie on a vertical line, the person having the face area is determined to be a passenger in a wheelchair. The determination result is output to the outside, and the process ends.
- the determination result by the wheelchair presence determination unit 6 can be used, for example, to change the operation mode of the elevator by the elevator control unit, as in the first embodiment.
- in the above description, three image capturing units are installed: the first image capturing unit 1 for extracting the face area, and the second image capturing unit 31 and third image capturing unit 32 for performing the stereo image processing. However, the first image capturing unit 1 may double as one of the stereo pair, so that only two image capturing units are used. That is, the first image capturing unit 1 may capture an image of the target area that serves both for extracting the face area and as one of the two images for the stereo image processing, while the other image for the stereo image processing is obtained by the remaining image capturing unit. This is possible because, in Embodiment 2, the two images required for the stereo image processing need not be captured simultaneously by the second image capturing unit 31 and the third image capturing unit 32.
- also, the case was described in which the wheelchair presence determination unit 6 determines whether the passenger is standing or in a wheelchair by comparing the position of the center of gravity of the face area with the distance distribution within the distance measurement area; however, as in Embodiment 1, the distance distribution of the entire face area, rather than a representative point, may be compared with the distance distribution within the distance measurement area.
- as described above, according to Embodiment 2, stereo image processing is used as the method of measuring the distance from the first image capturing unit 1 to the object, which has the effect that the device configuration for measuring distances can be made more compact.
- FIG. 12 is a block diagram showing a configuration of an image processing apparatus according to a third embodiment of the present invention.
- in Embodiment 3, the image processing device described in Embodiment 1 is further provided with a change area detection unit 41 that extracts only the changed areas from the image captured by the image capturing unit 1.
- Other configurations are the same as those of the first embodiment, and the same components are denoted by the same reference characters and description thereof is omitted.
- the change area detection unit 41 holds a background image, captured in a state in which no object such as a human is present in the target area imaged by the image capturing unit 1. When an image is captured by the image capturing unit 1, it generates a difference image by binarizing, with a predetermined threshold, the absolute value of the difference between each pixel value of the captured image and the background image. Changed portions that do not exist in the background image are recorded as candidates for the object, and the position information of the changed portions in the difference image is output to the face area extraction unit 3.
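A minimal sketch of this difference-image step (the threshold and minimum blob size are assumptions; the patent specifies only "a predetermined threshold"):

```python
import cv2
import numpy as np

def changed_regions(frame_gray: np.ndarray, background_gray: np.ndarray,
                    threshold: int = 30, min_area_px: int = 200):
    """Binarize |frame - background| and return bounding boxes of changes.

    threshold and min_area_px are illustrative values.
    """
    diff = cv2.absdiff(frame_gray, background_gray)          # |I - B| per pixel
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be object candidates.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area_px]
```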
- the distance distribution calculation means in the claims corresponds to the distance measurement area calculation section 4 and the distance distribution calculation section 5.
- when the position information is input from the change area detection unit 41, the face area extraction unit 3 uses it to extract the human face area, according to the face area extraction algorithm, from the image input from the image capturing unit 1, and calculates the position of the face area, for example its center of gravity. The face area is then stored together with its position.
- next, the operation will be described. First, the inside of the elevator car or the elevator hall is imaged by the image capturing unit 1, and the captured image is input to the face area extraction unit 3 and the change area detection unit 41 (step S21).
- the change area detection unit 41 generates a difference image from the input image and the background image, detects a changed area from the obtained difference image, and outputs the position information to the face area extraction unit 3 (step S22).
- next, based on the position information of the changed areas input from the change area detection unit 41, the face area extraction unit 3 extracts the target human face area from among the changed areas in the image input from the image capturing unit 1 and calculates its center of gravity (step S23).
- thereafter, the same processing as in steps S3 to S7 described in FIG. 8 of Embodiment 1 is performed, it is determined whether or not a wheelchair passenger is present in the target area, and the result is output to the outside (steps S24 to S28). That is, when an image is captured, the distance to the target at each position in the image is measured, a distance measurement area is set for the person having the extracted face area, and the distance distribution is calculated from the distance data measured within the distance measurement area. The position of the center of gravity of the face area is then compared with the calculated distance distribution of the distance data within the distance measurement area: if they lie on a vertical line, the person having the face area is determined to be a standing passenger; if they do not lie on a vertical line, the person having the face area is determined to be a passenger in a wheelchair. The determination result is output to the outside, and the process ends.
- the determination result by the wheelchair presence determination unit 6 can be used, for example, to change the operation mode of the elevator by the elevator control unit, as in the first embodiment.
- in the above description, the distance measurement unit 2 is a device realized by a scanning laser range finder, as in Embodiment 1; however, stereo image processing by the twin-lens stereo method described in Embodiment 2 may be used instead. That is, instead of the distance measurement unit 2 in FIG. 12, the apparatus may comprise the second image capturing unit 31, the third image capturing unit 32, and the stereo calculation unit 33, or the second image capturing unit 31 and the stereo calculation unit 33.
- also, the case was described in which the wheelchair presence determination unit 6 determines whether the passenger is standing or in a wheelchair by comparing the position of the center of gravity of the face area with the distance distribution within the distance measurement area; however, the distance distribution from the image capturing unit 1 of the entire face area, rather than its center of gravity, may be compared with the distance distribution within the distance measurement area.
- as described above, according to Embodiment 3, the change area detection unit 41 extracts the changed areas by generating a difference image from the captured image of the target area and the background image prepared in advance, and the face area extraction unit 3 extracts the face area only within the changed areas, so the time required to extract the face area can be reduced. In addition, erroneous determinations in which an image pattern resembling a human face present in the target area is judged to be a human face can be suppressed, enabling more accurate extraction of the face area.
- as described above, according to the present invention, a face area is extracted from the image obtained by capturing the target area, and the distance distribution from the image capturing means within the distance measurement area set a predetermined distance below the face area is compared with the distance of the face area from the image capturing means to determine whether the person having the face area is in a wheelchair; this has the effect that the presence of a wheelchair passenger can be recognized with high probability.
- according to the next invention, a difference image is generated from the image captured by the image capturing means and a background image in which no object is present in the target area, and the apparatus further comprises change area detecting means that extracts the changed areas based on the generated difference image, so the time required for the face area extracting means to extract the face area can be reduced. In addition, erroneous determinations in which an image pattern resembling a human face present in the target area is judged to be a human face can be suppressed, enabling more accurate extraction of the face area.
- according to the next invention, a scanning laser range finder is used as the distance measuring means, so the distance from the image capturing means to a target present in the target area can be measured accurately.
- according to the next invention, a difference image is generated from the image captured by the image capturing means and a background image in which no object is present in the target area, and the apparatus further comprises change area detecting means that extracts the changed areas based on the generated difference image, so the time required for the face area extracting means to extract the face area can be reduced. In addition, erroneous determinations in which an image pattern resembling a human face present in the target area is judged to be a human face can be suppressed, enabling more accurate extraction of the face area.
- according to the next invention, the stereo calculation means uses the two images of the target area captured by the second and third image capturing means to measure, for each position of the image captured by the first image capturing means, the distance from the first image capturing means to the object present at that position, so the distance from the first image capturing means to the target can be measured with a simple device configuration.
- according to the next invention, a difference image is generated from the image captured by the first image capturing means and a background image in which no object in the target area exists, and the apparatus further comprises change area detecting means that extracts the changed areas based on the generated difference image, so the time required for the face area extracting means to extract the face area can be reduced. In addition, erroneous determinations in which an image pattern resembling a human face present in the target area is judged to be a human face can be suppressed, enabling more accurate extraction of the face area.
- according to the next invention, the stereo calculation means uses the image of the target area captured by the first image capturing means together with the image captured by the second image capturing means arranged at a distance from the first image capturing means in the horizontal direction; the image captured by the first image capturing means thus serves both for extracting the face area in the target area and for measuring, by the stereo method, the distance between the first image capturing means and targets in the target area, so the distance can be measured with a simple device configuration.
- according to the next invention, a difference image is generated from the image captured by the first image capturing means and a background image in which no object in the target area exists, and the apparatus further comprises change area detecting means that extracts the changed areas based on the generated difference image, so the time required for the face area extracting means to extract the face area can be reduced. In addition, erroneous determinations in which an image pattern resembling a human face present in the target area is judged to be a human face can be suppressed, enabling more accurate extraction of the face area.
Industrial Applicability
- As described above, the image processing apparatus according to the present invention is suitable for elevator passenger detection systems that automatically switch the operation mode of the elevator when a passenger in a wheelchair is present in the elevator car, the elevator hall, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
- Emergency Alarm Devices (AREA)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2002/007632 WO2004012142A1 (ja) | 2002-07-26 | 2002-07-26 | 画像処理装置 |
JP2004524082A JP4127545B2 (ja) | 2002-07-26 | 2002-07-26 | 画像処理装置 |
EP02751738A EP1526477B1 (en) | 2002-07-26 | 2002-07-26 | Image processing apparatus |
US10/493,434 US7142694B2 (en) | 2002-07-26 | 2002-07-26 | Image processing apparatus |
DE60236461T DE60236461D1 (de) | 2002-07-26 | 2002-07-26 | Bildverarbeitungsvorrichtung |
CNB028251091A CN1321394C (zh) | 2002-07-26 | 2002-07-26 | 图像处理装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2002/007632 WO2004012142A1 (ja) | 2002-07-26 | 2002-07-26 | 画像処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004012142A1 true WO2004012142A1 (ja) | 2004-02-05 |
Family
ID=30795867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2002/007632 WO2004012142A1 (ja) | 2002-07-26 | 2002-07-26 | 画像処理装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US7142694B2 (ja) |
EP (1) | EP1526477B1 (ja) |
JP (1) | JP4127545B2 (ja) |
CN (1) | CN1321394C (ja) |
DE (1) | DE60236461D1 (ja) |
WO (1) | WO2004012142A1 (ja) |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4533824B2 (ja) * | 2005-08-30 | 2010-09-01 | 株式会社日立製作所 | 画像入力装置及び校正方法 |
US8259995B1 (en) | 2006-01-26 | 2012-09-04 | Adobe Systems Incorporated | Designating a tag icon |
US7813526B1 (en) | 2006-01-26 | 2010-10-12 | Adobe Systems Incorporated | Normalizing detected objects |
US7636450B1 (en) | 2006-01-26 | 2009-12-22 | Adobe Systems Incorporated | Displaying detected objects to indicate grouping |
US7716157B1 (en) | 2006-01-26 | 2010-05-11 | Adobe Systems Incorporated | Searching images with extracted objects |
US7706577B1 (en) * | 2006-01-26 | 2010-04-27 | Adobe Systems Incorporated | Exporting extracted faces |
US7813557B1 (en) | 2006-01-26 | 2010-10-12 | Adobe Systems Incorporated | Tagging detected objects |
US7720258B1 (en) | 2006-01-26 | 2010-05-18 | Adobe Systems Incorporated | Structured comparison of objects from similar images |
US7694885B1 (en) | 2006-01-26 | 2010-04-13 | Adobe Systems Incorporated | Indicating a tag with visual data |
US7978936B1 (en) | 2006-01-26 | 2011-07-12 | Adobe Systems Incorporated | Indicating a correspondence between an image and an object |
JP5140256B2 (ja) * | 2006-09-12 | 2013-02-06 | 株式会社ユニバーサルエンターテインメント | サンド装置、入金残高精算機及び遊技システム |
JP5140257B2 (ja) * | 2006-09-12 | 2013-02-06 | 株式会社ユニバーサルエンターテインメント | サンド装置、入金残高精算機、管理サーバ及び遊技システム |
EP2196425A1 (de) * | 2008-12-11 | 2010-06-16 | Inventio Ag | Verfahren zur Benachteiligungsgerechten Benutzung einer Aufzugsanlage |
US20120175192A1 (en) * | 2011-01-11 | 2012-07-12 | Utechzone Co., Ltd. | Elevator Control System |
KR101046677B1 (ko) * | 2011-03-15 | 2011-07-06 | 동국대학교 산학협력단 | 눈 위치 추적방법 및 이를 이용한 의료용 헤드램프 |
JP5733614B2 (ja) * | 2011-03-29 | 2015-06-10 | リコーイメージング株式会社 | 撮影情報管理方法、及び撮影情報管理装置 |
EP3227828B1 (en) * | 2014-12-03 | 2023-10-04 | Inventio Ag | System and method for alternatively interacting with elevators |
US10254402B2 (en) * | 2016-02-04 | 2019-04-09 | Goodrich Corporation | Stereo range with lidar correction |
JP6513594B2 (ja) * | 2016-03-30 | 2019-05-15 | 株式会社日立製作所 | エレベータ装置及びエレベータ制御方法 |
KR101774692B1 (ko) * | 2016-05-16 | 2017-09-04 | 현대자동차주식회사 | 에어백 제어 장치 및 방법 |
TWI719409B (zh) * | 2019-02-23 | 2021-02-21 | 和碩聯合科技股份有限公司 | 追蹤系統及其方法 |
CN109948494B (zh) * | 2019-03-11 | 2020-12-29 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN110338800A (zh) * | 2019-07-17 | 2019-10-18 | 成都泰盟软件有限公司 | 自动测量身高体重的躺椅 |
CN111932596B (zh) * | 2020-09-27 | 2021-01-22 | 深圳佑驾创新科技有限公司 | 摄像头遮挡区域的检测方法、装置、设备和存储介质 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05147835A (ja) * | 1991-11-25 | 1993-06-15 | Hitachi Ltd | 身体障害者対応エレベータの乗場呼び登録装置 |
US5323470A (en) * | 1992-05-08 | 1994-06-21 | Atsushi Kara | Method and apparatus for automatically tracking an object |
JPH061546A (ja) * | 1992-06-17 | 1994-01-11 | Hitachi Ltd | 車椅子運転仕様付エレベータの制御方法 |
JPH0692563A (ja) | 1992-09-17 | 1994-04-05 | Hitachi Ltd | エレベータの人数計測装置 |
US6027138A (en) * | 1996-09-19 | 2000-02-22 | Fuji Electric Co., Ltd. | Control method for inflating air bag for an automobile |
JPH11268879A (ja) * | 1998-03-20 | 1999-10-05 | Mitsubishi Electric Corp | エレベータの運転制御装置 |
JP4377472B2 (ja) * | 1999-03-08 | 2009-12-02 | 株式会社東芝 | 顔画像処理装置 |
JP2001302121A (ja) * | 2000-04-19 | 2001-10-31 | Mitsubishi Electric Corp | エレベータ装置 |
US7079669B2 (en) * | 2000-12-27 | 2006-07-18 | Mitsubishi Denki Kabushiki Kaisha | Image processing device and elevator mounting it thereon |
2002
- 2002-07-26 JP JP2004524082A patent/JP4127545B2/ja not_active Expired - Fee Related
- 2002-07-26 DE DE60236461T patent/DE60236461D1/de not_active Expired - Lifetime
- 2002-07-26 US US10/493,434 patent/US7142694B2/en not_active Expired - Fee Related
- 2002-07-26 EP EP02751738A patent/EP1526477B1/en not_active Expired - Fee Related
- 2002-07-26 CN CNB028251091A patent/CN1321394C/zh not_active Expired - Fee Related
- 2002-07-26 WO PCT/JP2002/007632 patent/WO2004012142A1/ja active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04286596A (ja) * | 1991-03-13 | 1992-10-12 | Toshiba Corp | エスカレータ |
JPH09145362A (ja) * | 1995-11-29 | 1997-06-06 | Ikegami Tsushinki Co Ltd | ステレオ画像による物体の高さ測定方法 |
JPH10232985A (ja) * | 1997-02-19 | 1998-09-02 | Risou Kagaku Kenkyusho:Kk | 室内監視装置 |
JP2001351190A (ja) * | 2000-06-05 | 2001-12-21 | Oki Electric Ind Co Ltd | 歩行支援システム |
JP2002197463A (ja) * | 2000-12-26 | 2002-07-12 | Matsushita Electric Ind Co Ltd | 挙動検出装置および挙動検出システム |
Non-Patent Citations (1)
Title |
---|
See also references of EP1526477A4 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007020033A (ja) * | 2005-07-11 | 2007-01-25 | Nikon Corp | 電子カメラおよび画像処理プログラム |
JP2007272474A (ja) * | 2006-03-30 | 2007-10-18 | National Institute Of Advanced Industrial & Technology | ステレオカメラを用いた車椅子使用者検出システム |
WO2007125794A1 (ja) * | 2006-04-27 | 2007-11-08 | Konica Minolta Holdings, Inc. | データ測定装置及びデータ測定方法 |
JP2008102781A (ja) * | 2006-10-19 | 2008-05-01 | Fuji Electric Fa Components & Systems Co Ltd | 情報案内システムおよび情報案内プログラム |
JP2013073459A (ja) * | 2011-09-28 | 2013-04-22 | Oki Electric Ind Co Ltd | 画像処理装置、画像処理方法、プログラム、および画像処理システム |
JP2013245096A (ja) * | 2012-05-29 | 2013-12-09 | Toshiba Elevator Co Ltd | エレベータの呼び登録装置 |
DE112015007051T5 (de) | 2015-10-21 | 2018-06-28 | Mitsubishi Electric Corporation | Drahtlose informationsverteilungsvorrichtung, steuerverfahren für eine drahtlose informationsverteilungsvorrichtung und steuerprogramm |
US10244487B2 (en) | 2015-10-21 | 2019-03-26 | Mitsubishi Electric Corporation | Wireless information distribution apparatus, control method for wireless information distribution apparatus, and non-transitory computer readable medium storing control program |
JP7367174B1 (ja) | 2022-12-20 | 2023-10-23 | 東芝エレベータ株式会社 | エレベータシステム |
Also Published As
Publication number | Publication date |
---|---|
US20050013488A1 (en) | 2005-01-20 |
EP1526477A4 (en) | 2008-09-10 |
CN1605086A (zh) | 2005-04-06 |
CN1321394C (zh) | 2007-06-13 |
JP4127545B2 (ja) | 2008-07-30 |
EP1526477B1 (en) | 2010-05-19 |
EP1526477A1 (en) | 2005-04-27 |
US7142694B2 (en) | 2006-11-28 |
DE60236461D1 (de) | 2010-07-01 |
JPWO2004012142A1 (ja) | 2005-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004012142A1 (ja) | 画像処理装置 | |
US7079669B2 (en) | Image processing device and elevator mounting it thereon | |
CN1324529C (zh) | 对场景中的物体进行分类的系统及其方法 | |
US8929608B2 (en) | Device and method for recognizing three-dimensional position and orientation of article | |
US20190156557A1 (en) | 3d geometric modeling and 3d video content creation | |
US6445814B2 (en) | Three-dimensional information processing apparatus and method | |
JPH10334207A (ja) | 人流計測装置 | |
CN108592886B (zh) | 图像采集设备和图像采集方法 | |
JP5300694B2 (ja) | 検出装置 | |
JP4431749B2 (ja) | 顔姿勢検出方法 | |
JP2002122416A (ja) | 三次元形状測定装置 | |
JP5336325B2 (ja) | 画像処理方法 | |
KR101733657B1 (ko) | 거리영상 기반 카메라를 이용한 객체 카운터 시스템 및 그 카운트 방법 | |
JP4476546B2 (ja) | 画像処理装置及びそれを搭載したエレベータ | |
JP4238042B2 (ja) | 監視装置および監視方法 | |
JP2004110804A (ja) | 3次元画像撮影装置及び方法 | |
JP2012181757A (ja) | 光学情報読み取り装置 | |
KR102458582B1 (ko) | Tof 센서를 이용한 동물 사람 구별 시스템 | |
JP2001241927A (ja) | 形状測定装置 | |
Grosse et al. | Space-time multiplexing in a stereo-photogrammetry setup | |
JP5279459B2 (ja) | 混雑検知装置 | |
JP2002013918A (ja) | 3次元画像生成装置および3次元画像生成方法 | |
JP6837880B2 (ja) | 画像処理装置、画像処理システム、画像処理方法、およびプログラム | |
JP2002015306A (ja) | 3次元画像生成装置および3次元画像生成方法 | |
JP2004101215A (ja) | 軌道間隔測定方法および軌道間隔測定装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 2004524082; Country of ref document: JP |
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP US |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 10493434; Country of ref document: US |
WWE | Wipo information: entry into national phase | Ref document number: 2002751738; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 20028251091; Country of ref document: CN |
WWP | Wipo information: published in national office | Ref document number: 2002751738; Country of ref document: EP |