CN112441497B - User detection system for elevator - Google Patents

User detection system for elevator

Info

Publication number
CN112441497B
CN112441497B (application CN202010428003.0A)
Authority
CN
China
Prior art keywords
detection
image
user
car
sensitivity
Prior art date
Legal status
Active
Application number
CN202010428003.0A
Other languages
Chinese (zh)
Other versions
CN112441497A (en)
Inventor
渡边雄太 (Yuta Watanabe)
Current Assignee
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd filed Critical Toshiba Elevator Co Ltd
Publication of CN112441497A
Application granted
Publication of CN112441497B
Status: Active


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 13/00 Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B 13/02 Door or gate operation
    • B66B 13/14 Control systems or devices
    • B66B 5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B 5/0006 Monitoring devices or performance analysers
    • B66B 5/0012 Devices monitoring the users of the elevator system
    • B66B 5/0037 Performance analysers

Abstract

A user detection system for an elevator, which correctly detects users from images captured by a camera having an ultra-wide-angle lens. The user detection system according to one embodiment includes an imaging unit, a detection unit, and a sensitivity changing unit. The imaging unit has an ultra-wide-angle lens and captures a wide-range image covering the car interior and the hall. The detection unit detects a user or an object present in the car or the hall using the image captured by the imaging unit. The sensitivity changing unit changes the detection sensitivity with which the detection unit detects the user or the object, at least between a central portion and a peripheral portion of the image.

Description

User detection system for elevator
The present application is based on, and claims priority from, Japanese patent application 2019-155740 (filed August 28, 2019), the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to a user detection system for an elevator.
Background
In recent years, systems have been known in which a camera is installed in the car of an elevator, a user is detected from the images captured by the camera, and the result is reflected in the door opening/closing control. In such a system, a fisheye lens is sometimes used on the camera in order to detect users over a wider range. A fisheye lens is an ultra-wide-angle convex lens with a field angle of 180 degrees or more. With such a lens, users can be detected not only near the door but over a wide range covering both the hall and the car.
Disclosure of Invention
However, the resolution of the peripheral portion (outer peripheral portion) of an image taken through a fisheye lens is poor, and the light quantity there is attenuated. Furthermore, because distortion correction stretches and interpolates each pixel of the peripheral portion from the original image, a luminance change caused by noise or the like spreads over a wide area. Therefore, when the presence or absence of a user is judged from luminance changes in the image, the possibility of erroneous detection increases.
The invention provides an elevator user detection system that can accurately detect a user using images captured by a camera having an ultra-wide-angle lens.
A user detection system for an elevator according to one embodiment includes an imaging unit, a detection unit, and a sensitivity changing unit.
The imaging unit has an ultra-wide-angle lens and captures a wide-range image covering the car interior and the hall. The detection unit detects a user or an object present in the car or the hall using the image captured by the imaging unit. The sensitivity changing unit changes the detection sensitivity with which the detection unit detects the user or the object, at least between a central portion and a peripheral portion of the image.
According to the elevator user detection system configured as described above, a user can be accurately detected using images captured by a camera having an ultra-wide-angle lens.
Drawings
Fig. 1 is a diagram showing a configuration of an elevator user detection system according to embodiment 1.
Fig. 2 is a diagram showing a configuration of a portion around an entrance in the car in this embodiment.
Fig. 3 is a diagram showing an example of an image captured by the camera in the present embodiment.
Fig. 4 is a diagram for explaining a plurality of detection regions set in the captured image.
Fig. 5 is a diagram for explaining a coordinate system in real space in this embodiment.
Fig. 6 is a flowchart showing the flow of the overall processing of the user detection system in this embodiment.
Fig. 7 is a diagram for explaining the relationship between the distortion correction of the image and the detection sensitivity in this embodiment.
Fig. 8 is a diagram showing an example of a luminance change of an arbitrary portion on an image in the embodiment.
Fig. 9 is a diagram for explaining the relationship between the distortion correction of the image and the detection sensitivity in the case where the region in which the detection sensitivity is changed is divided into 3 or more as a modification of the embodiment.
Fig. 10 is a diagram showing an example of a luminance change of an arbitrary portion on an image in the above-described modification.
Fig. 11 is a flowchart showing the flow of the overall processing in the user detection system according to embodiment 2.
Fig. 12 is a diagram for explaining the relationship between the brightness of an image and the detection sensitivity of each region in this embodiment.
Fig. 13 is a diagram for explaining the relationship between the brightness of an image and the detection sensitivity of each region in the case where the region in which the detection sensitivity is changed is divided into 3 or more as a modification of this embodiment.
Detailed Description
Hereinafter, embodiments will be described with reference to the drawings.
The disclosure is merely an example, and the invention is not limited to the contents described in the following embodiments. Variations that can readily be envisioned by one skilled in the art are, of course, within the scope of this disclosure. To make the description clearer, the drawings may show the dimensions, shapes, and the like of respective portions schematically, modified from the actual embodiment. In the drawings, corresponding elements are denoted by the same reference numerals, and detailed description of them may be omitted.
(embodiment 1)
Fig. 1 is a diagram showing a configuration of an elevator user detection system according to embodiment 1. Although a single car is described here as an example, a plurality of cars can be configured in the same way.
A camera 12 is provided at an upper portion of the doorway of the car 11. Specifically, the camera 12 is installed in the lintel plate 11a that covers the upper part of the doorway of the car 11, with its lens portion facing straight down. The camera 12 has an ultra-wide-angle lens such as a fisheye lens and photographs subjects in the car 11 over a wide range at a field angle of 180 degrees or more. The camera 12 captures images continuously at a rate of several frames per second (for example, 30 frames/second).
The camera 12 need not be installed above the doorway of the car 11, as long as it is near the car doors 13. For example, it may be installed on the ceiling surface near the doorway, or at any other place from which the entire car room, including the whole floor surface in the car 11, and the hall 15 near the doorway when the doors are open, can be photographed.
In the hall 15 of each floor, a hall door 14 is provided at the arrival entrance of the car 11 so as to open and close freely. When the car 11 arrives, the hall door 14 engages with the car door 13 and opens and closes together with it. The power source (door motor) is on the car 11 side, and the hall doors 14 merely follow the car doors 13. In the following description, the hall doors 14 are assumed to be open whenever the car doors 13 are open, and closed whenever the car doors 13 are closed.
The image processing device 20 analyzes, in real time, the images (video) captured continuously by the camera 12. Note that although fig. 1 shows the image processing device 20 outside the car 11 for convenience, it is actually housed in the lintel plate 11a together with the camera 12.
The image processing device 20 includes a storage unit 21 and a detection unit 22. The storage unit 21 sequentially holds the images captured by the camera 12 and has a buffer area for temporarily holding the data needed by the detection unit 22. The storage unit 21 may also hold images that have undergone preprocessing such as distortion correction, scaling, and partial cropping.
The detection unit 22 detects a user located in the car 11 or the hall 15 using the images captured by the camera 12. Functionally, the detection unit 22 is divided into a detection region setting unit 22a, a detection processing unit 22b, and a sensitivity changing unit 22c. These may be realized in software, in hardware such as an IC (Integrated Circuit), or in a combination of both.
The detection region setting unit 22a sets at least two detection areas for detecting a user (a person using the elevator) or an object on the image captured by the camera 12. The "object" here includes moving bodies such as a user's clothes, luggage, and wheelchair, and also equipment related to the elevator installation, such as the operation buttons, lamps, and display devices in the car. The method of setting the detection areas is described in detail later with reference to figs. 3 and 4.
The detection processing unit 22b performs, for each detection area set by the detection region setting unit 22a, a detection process related to the operation of the car 11. This "detection process related to the operation of the car 11" detects a user or an object according to the operating state of the car 11, covering at least one of: the door opening or door closing operation, travel up or down the hoistway, and a stop in operation.
The sensitivity changing unit 22c changes the detection sensitivity at least between the central portion and the peripheral portion of the image, in consideration of the lens characteristics of the camera 12. The "detection sensitivity" is the sensitivity with which a user or an object is detected on the image; specifically, it is a threshold for the luminance change of the image (the difference between the luminance values of images compared in predetermined units). Details are described later with reference to fig. 8.
In addition, the elevator control device 30 may have a part or all of the functions of the image processing device 20.
The elevator control device 30 is a computer with a CPU, ROM, and RAM, and controls the operation of various devices provided in the car 11 (destination floor buttons, lighting, and the like). The elevator control device 30 includes an operation control unit 31, a door opening/closing control unit 32, and a notification unit 33. The operation control unit 31 controls the operation of the car 11. The door opening/closing control unit 32 controls the opening and closing of the car doors 13 when the car 11 arrives at the hall 15: it opens the car doors 13 on arrival and closes them after a predetermined time has elapsed.
Here, for example, when the detection processing unit 22b detects a user or an object during the door opening operation of the car doors 13, the door opening/closing control unit 32 performs door control to avoid a door accident (the user being pulled into the door box). Specifically, it temporarily stops the door opening operation of the car doors 13, moves the doors in the opposite (closing) direction, or slows down the door opening speed, as sketched below. The notification unit 33 alerts the users in the car 11 based on the detection result of the detection processing unit 22b.
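The patent names these avoidance responses but no concrete interface; the following is a hypothetical sketch of mapping a detection event during door opening to one of them. DoorAction and on_detection_during_opening are illustrative names, not from the patent.

```python
from enum import Enum, auto

class DoorAction(Enum):
    CONTINUE = auto()       # no user detected, keep opening normally
    PAUSE_OPENING = auto()  # temporarily stop the door opening operation
    REVERSE = auto()        # move the doors in the opposite (closing) direction
    SLOW_DOWN = auto()      # reduce the door opening speed

def on_detection_during_opening(user_detected_near_pillar: bool) -> DoorAction:
    """Choose an avoidance response when a user is detected while the doors
    open; any of the three non-CONTINUE actions satisfies the described
    behavior, so which one to return is an implementation decision."""
    if user_detected_near_pillar:
        return DoorAction.PAUSE_OPENING
    return DoorAction.CONTINUE
```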
Fig. 2 is a diagram showing a configuration of a portion around an entrance in the car 11.
A car door 13 is provided at the doorway of the car 11 so as to open and close freely. The example of fig. 2 shows a two-panel center-opening car door 13: the two door panels 13a and 13b constituting the car door 13 open and close in mutually opposite directions along the width direction (horizontal direction). "Width" here refers to the direction along the doorway of the car 11.
Front pillars 41a and 41b are provided on both sides of the doorway of the car 11 and, together with the lintel plate 11a, surround the doorway. A "front pillar" is also called an entrance pillar or entrance frame, and a door box for housing the car door 13 is usually provided on its back side. In the example of fig. 2, when the car door 13 opens, one door panel 13a is housed in the door box 42a on the back side of the front pillar 41a, and the other door panel 13b is housed in the door box 42b on the back side of the front pillar 41b.
One or both of the front pillars 41a and 41b carry a display 43, an operation panel 45 on which destination floor buttons 44 and the like are arranged, and a speaker 46. In the example of fig. 2, the speaker 46 is provided on the front pillar 41a, and the display 43 and operation panel 45 on the front pillar 41b.
Here, a camera 12 having an ultra-wide angle lens such as a fisheye lens is provided at a central portion of a door lintel plate 11a at an upper portion of an entrance of the car 11.
Fig. 3 is a diagram showing an example of an image captured by the camera 12. It shows the entire car room and the hall 15 near the doorway, photographed from above the doorway of the car 11 at a field angle of 180 degrees or more, with the car doors 13 (door panels 13a and 13b) and the hall doors 14 (door panels 14a and 14b) fully open. The upper part of the image is the hall 15, the lower part the interior of the car 11.
In the hall 15, door pockets 17a and 17b are provided on both sides of the arrival entrance of the car 11, and a belt-shaped hall sill 18 of predetermined width is laid on the floor surface 16 between the door pockets 17a and 17b, along the opening/closing direction of the hall doors 14. A belt-shaped car sill 47 of predetermined width is likewise laid on the doorway side of the floor surface 19 of the car 11, along the opening/closing direction of the car doors 13.
Here, detection areas E1 to E5 for detecting a user or an object are set over the car 11 and the hall 15 as they appear in the captured image.
The detection area E1 is an area (boarding state detection area) for detecting the boarding state in the car 11 (the positions of users, the number of passengers, and the like), and is set to cover at least the entire floor surface 19. The detection area E1 may also include the front pillars 41a and 41b, the side surfaces 48a and 48b, and the back surface 49 surrounding the car room.
Specifically, as shown in fig. 4, the detection area E1 is set to match the lateral width W1 and the longitudinal width W2 of the floor surface 19. On the front pillars 41a and 41b, the side surfaces 48a and 48b, and the back surface 49, the detection area E1 extends up to an arbitrary height h1 from the floor surface 19.
The detection areas E2-1 and E2-2 are areas for detecting in advance, during the door opening operation, a user about to be pulled in by the doors (door pull-in detection areas), and are set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
Specifically, as shown in fig. 4, the detection areas E2-1 and E2-2 are band-shaped, with predetermined widths D1 and D2 along the width direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b. The widths D1 and D2 are set, for example, equal to or slightly smaller than the lateral (short-side) widths of the inner side surfaces 41a-1 and 41b-1, and may be the same value or different values. The detection areas E2-1 and E2-2 extend up to a height h2 from the floor surface 19; h2 is arbitrary and may be equal to or different from h1.
The detection area E3 is an area (hall state detection area) for detecting the state of the hall 15 (the waiting positions of users, the number of people waiting, and the like), and is set in the vicinity of the doorway of the car 11.
Specifically, as shown in fig. 4, the detection area E3 extends a predetermined distance L1 from the doorway of the car 11 toward the hall 15. W0 in the figure is the lateral width of the doorway. The detection area E3 may be a rectangle whose lateral (X-direction) dimension is equal to or larger than W0, or a trapezoid that excludes the blind spots behind the door pockets 17a and 17b. The longitudinal (Y-direction) and lateral (X-direction) dimensions of the detection area E3 may be fixed, or may be changed dynamically in accordance with the opening/closing operation of the car doors 13.
The detection area E4 is an area (approach detection area) for detecting a user or an object approaching the car 11 from the hall 15, and is set in the hall 15 near the doorway of the car 11.
Specifically, as shown in fig. 4, the detection area E4 extends a predetermined distance L2 (L1 > L2) from the doorway of the car 11 toward the hall 15. Like E3, the detection area E4 may be a rectangle whose lateral (X-direction) dimension is equal to or larger than W0, or a trapezoid that excludes the blind spots behind the door pockets 17a and 17b. The detection area E4 may be included in the detection area E3 and changed dynamically together with E3 in accordance with the opening/closing operation of the car doors 13.
The detection area E5 is set along the hall sill 18 and the car sill 47. When the car doors 13 are of the center-opening type, it serves to detect in advance, during the door closing operation, anything about to be caught between the doors; when the car doors 13 are of the side-opening type, it serves to detect in advance anything about to be struck by the doors.
Specifically, as shown in fig. 4, the detection area E5 extends a length L3 from the car-side end of the car sill 47 toward the hall-side end of the hall sill 18. Its lateral width (X direction) is the same as W0.
Fig. 3 shows an example in which five detection areas E1 to E5 are set in the captured image, but detection areas may be set more finely. For example, a detection area may be set on the operation panel 45 in the car 11 shown in fig. 2 to detect the state of the various buttons on the panel.
The detection region setting unit 22a computes three-dimensional coordinates for the captured image from the design values of each component of the car 11 and the parameters specific to the camera 12, determines what appears where on the captured image, and sets a detection area at each location to be monitored.
As shown in fig. 5, the three-dimensional coordinate system takes the direction parallel to the car doors 13 as the X axis, the direction from the center of the car doors 13 toward the hall 15 (perpendicular to the car doors 13) as the Y axis, and the height direction of the car 11 as the Z axis.
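As an illustration of this mapping from car coordinates to image coordinates, the sketch below projects a point under an ideal equidistant fisheye model (r = f·θ), with the camera at the lintel looking straight down. The patent does not specify the projection model; the camera height, focal length, and principal point here are assumed values.

```python
import numpy as np

CAM_HEIGHT = 2.2          # camera height above the car floor [m] (assumed)
FOCAL_PX = 320.0          # focal length in pixels (assumed)
IMG_CENTER = (640, 640)   # principal point of a 1280x1280 image (assumed)

def project_point(x, y, z):
    """Project a point in car coordinates (X: along the doors, Y: toward the
    hall, Z: height) onto the fisheye image; the camera looks straight down."""
    dx, dy, dz = x, y, z - CAM_HEIGHT
    # Angle from the optical axis (straight down), then equidistant mapping.
    theta = np.arctan2(np.hypot(dx, dy), -dz)
    r = FOCAL_PX * theta
    phi = np.arctan2(dy, dx)  # azimuth around the optical axis
    return IMG_CENTER[0] + r * np.cos(phi), IMG_CENTER[1] + r * np.sin(phi)

# Example: a floor point 1.5 m along the doors and 0.5 m toward the hall.
print(project_point(1.5, 0.5, 0.0))
```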
Next, the operation of the present system will be described in detail.
Fig. 6 is a flowchart showing the overall processing flow in the present system.
First, as an initial setting, the detection region setting unit 22a of the detection unit 22 in the image processing device 20 executes the detection region setting process (step S11). This process is executed, for example, when the camera 12 is installed or when its installation position is adjusted, as follows.
That is, the detection region setting unit 22a sets the plurality of detection regions E1 to E5 shown in fig. 3 on the image captured by the camera 12. The detection regions E1 to E5 may be set after the image captured by the camera 12 is subjected to distortion correction.
The detection area E1 is used as the "boarding state detection area"; it is set to cover at least the entire floor surface 19 and may include the front pillars 41a and 41b, the side surfaces 48a and 48b, and the back surface 49 surrounding the car room.
The detection areas E2-1 and E2-2 are used as "door pull-in detection areas" and are set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b. The detection area E3 is used as the "hall state detection area" and is set extending from the doorway of the car 11 toward the hall 15. The detection area E4 is used as the "approach detection area" and is set near the doorway of the car 11. The detection area E5 is used as the "door catch detection area" or "door collision detection area" and is set on the sills.
The areas where the floor surface 19, the front pillars 41a and 41b, the hall 15, and the like appear on the captured image are calculated from the following design values of the components of the car 11 and intrinsic values of the camera 12:
• Face width (lateral width of the doorway of the car)
• Door height
• Front pillar width
• Door type (center-opening, or side-opening to the left or right)
• Floor and wall areas
• Relative position of the camera with respect to the face width (three-dimensional)
• Camera angles (3 axes)
• Camera angle of view (focal length)
The detection region setting unit 22a calculates, from these values, the area in which each detection target appears on the captured image. For example, for the front pillars 41a and 41b, the detection region setting unit 22a computes their three-dimensional coordinates from the relative position, angles, and angle of view of the camera 12 with respect to the face width, on the assumption that the front pillars stand vertically at both ends of the face width (doorway).
Alternatively, as shown in fig. 4, marks m1 and m2 may be placed at the two car-side ends of the car sill 47 and the three-dimensional coordinates of the front pillars 41a and 41b obtained from the positions of the marks m1 and m2. Or the positions of the two car-side ends of the car sill 47 may be found by image processing and the three-dimensional coordinates of the front pillars 41a and 41b derived from them.
The detection region setting unit 22a projects the three-dimensional coordinates of the front pillars 41a and 41b onto two-dimensional coordinates on the captured image, obtains the areas that the front pillars 41a and 41b occupy on the image, and sets the detection areas E2-1 and E2-2 there. Specifically, it sets the detection areas E2-1 and E2-2 with the predetermined widths D1 and D2 along the longitudinal direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
The setting process for the detection areas E2-1 and E2-2 may be performed with the car doors 13 open or with them closed. With the car doors 13 closed, the hall 15 does not appear in the image captured by the camera 12, which makes the detection areas E2-1 and E2-2 easier to set.
In addition, the lateral (short-side) width of the car sill 47 is generally larger than the thickness of the car doors 13. Therefore, even when the car doors 13 are fully closed, the car-interior side of the car sill 47 appears in the captured image, so the positions of the front pillars 41a and 41b can be determined from that edge and the detection areas E2-1 and E2-2 set accordingly.
Similarly, for the other detection areas E1, E3, E4, and E5, the areas in which each detection target appears on the captured image are determined from the design values of the components of the car 11 and the intrinsic values of the camera 12, and the detection areas are set there.
Next, the operation of the system while the car 11 is in service will be described.
As shown in fig. 6, when the car 11 arrives at the hall 15 of any floor (yes in step S12), the elevator control device 30 opens the car doors 13 (step S13).
At this time (during the door opening operation of the car doors 13), the camera 12 with its ultra-wide-angle lens photographs the inside of the car 11 and the hall 15 at a predetermined frame rate (for example, 30 frames/second). The image processing device 20 acquires the images captured by the camera 12 in time series, stores them sequentially in the storage unit 21 (step S14), and executes the detection process described below in real time.
In the present embodiment, distortion correction is applied to each captured image as preprocessing, and the corrected images are stored sequentially in the storage unit 21 (step S15).
Fig. 7 is a diagram for explaining the relationship between distortion correction of an image and detection sensitivity. In the figure, 50 denotes an image before distortion correction, and 60 denotes an image after distortion correction.
The camera 12 installed in the car 11 uses an ultra-wide-angle lens such as a fisheye lens, so the image 50 it captures is curved into a circular shape, and the outer peripheral portion 52 surrounding the central portion 51 is distorted. Therefore, the image 50 obtained from the camera 12 is generally corrected for distortion by a predetermined method, and the corrected image 60 is used for the user detection process. A generally known method is used for the distortion correction, so a detailed description is omitted.
Through the distortion correction, each pixel of the peripheral portion 52 of the original image 50 is stretched outward so that the image is shaped into a rectangle, and the stretched parts are interpolated from the information of the original pixel and its neighbors. Therefore, when the original image 50 contains a cause of erroneous detection such as noise or a shadow, the luminance change of the offending pixels appears over a wide area of the corrected image 60. In particular, each pixel in the peripheral portion 62 of the image 60 is stretched over a large range from the original image 50, so the luminance change of the offending portion covers a larger area than in the central portion 61. Consequently, when the presence or absence of a user is detected from the luminance change of the image, the possibility of erroneous detection in the peripheral portion 62 is high.
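As a concrete reference for this step, the sketch below applies fisheye distortion correction with OpenCV's fisheye module. The intrinsic matrix K and distortion coefficients D would come from calibrating the actual camera 12; the values here are placeholders.

```python
import cv2
import numpy as np

# Placeholder calibration values, not taken from the patent.
K = np.array([[320.0,   0.0, 640.0],
              [  0.0, 320.0, 640.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[0.1], [-0.05], [0.01], [0.0]])  # fisheye coefficients k1..k4

def undistort(frame):
    h, w = frame.shape[:2]
    # In practice the maps would be computed once and reused for every frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    # remap stretches and interpolates peripheral pixels outward, which is
    # why luminance noise there ends up covering a wider area.
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```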
To prevent such erroneous detection, the present embodiment divides the distortion-corrected image 60 into two regions, the central portion 61 and the peripheral portion 62, and performs the detection process with a different detection sensitivity for each.
When the corrected image 60 is acquired, the sensitivity changing unit 22c of the detection unit 22 changes the detection sensitivity between the central portion 61 of the image and the peripheral portion 62 outside it (step S16). Specifically, the sensitivity changing unit 22c makes the detection sensitivity for the central portion 61 of the image 60 equal to or higher than a reference value, and makes the detection sensitivity for the peripheral portion 62 lower than the reference value. Making the detection sensitivity "higher than the reference value" means lowering the threshold for the luminance change of the image below the reference; making it "lower than the reference value" means raising that threshold above the reference. Fig. 8 shows the relationship between the luminance change and the thresholds.
Fig. 8 is a diagram showing an example of a luminance change in an arbitrary portion of an image.
Normally, the threshold TH0 serves as the reference value. That is, when the amount of change (luminance difference) between the luminance values of the images obtained in time series from the camera 12, compared in predetermined units, is equal to or greater than the threshold TH0, a user or an object is determined to be present.
The central portion 61 of the image 60 is little affected by the distortion correction, so a user or an object can be detected there even from a small luminance change. Therefore, for the central portion 61 of the image 60, a threshold TH1 lower than the threshold TH0 is set as the detection sensitivity α (TH1 < TH0). The detection sensitivity α may also be set to the same value as the reference threshold TH0.
On the other hand, as described above, the peripheral portion 62 of the image 60 is interpolated by stretching the information of each pixel of the original image 50 outward during the distortion correction, so a luminance change caused by noise or the like spreads over a large area: even when the true luminance change is small, the measured amount of change is large, and the possibility of erroneous detection is high. Therefore, for the peripheral portion 62 of the image 60, a threshold TH2 higher than the threshold TH0 is set as the detection sensitivity β (TH2 > TH0). That is, in the peripheral portion 62 of the image 60, the threshold is raised to TH2 to make detection harder, taking the influence of the distortion correction into account.
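Put as code, the two-region scheme amounts to a per-pixel threshold on the frame difference. This is a minimal sketch: the threshold values and the radius separating the central and peripheral portions are assumptions, since the patent only requires TH1 < TH0 < TH2.

```python
import numpy as np

TH1, TH2 = 20, 60     # central / peripheral thresholds (assumed values)
CENTER_RATIO = 0.6    # fraction of the half-extent treated as central (assumed)

def detect_change(prev_gray, cur_gray):
    """Return a boolean mask of pixels whose luminance change exceeds the
    threshold of the region (central or peripheral) they belong to."""
    h, w = cur_gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2, yy - h / 2)
    central = r < CENTER_RATIO * min(h, w) / 2
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    # Lower threshold (higher sensitivity) in the center, higher threshold
    # (lower sensitivity) in the periphery.
    thresh = np.where(central, TH1, TH2)
    return diff >= thresh
```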
Returning to fig. 6, the detection processing unit 22b extracts images in the detection regions E1 to E5 from a plurality of captured images obtained in time series by the camera 12, analyzes the images, and performs detection processing corresponding to the detection regions E1 to E5 (step S17).
Here, the detection processing unit 22b executes the detection process using the detection sensitivities set in step S16. The detection sensitivity is not switched per detection area; rather, within each detection area it is switched between the part corresponding to the central portion 61 of the image 60 and the part corresponding to the peripheral portion 62.
For example, for the detection area E1, the detection processing unit 22b analyzes the image 60 within E1 to detect a user or an object in the car 11. In doing so, the detection processing unit 22b applies the detection sensitivity α (threshold TH1) to the part of the detection area E1 corresponding to the central portion 61 of the image 60 (the floor surface 19 of the car 11 in the example of fig. 3), and the detection sensitivity β (threshold TH2) to the parts of the detection area E1 corresponding to the peripheral portion 62 (the side surfaces 48a and 48b and the back surface 49 of the car 11 in the example of fig. 3).
The detection processing unit 22b handles the other detection areas E2 to E5 in the same way, applying the detection sensitivity α (threshold TH1) to the parts corresponding to the central portion 61 of the image 60 and the detection sensitivity β (threshold TH2) to the parts corresponding to the peripheral portion 62.
After the detection process has been performed for each detection area in this way, the results are output from the image processing device 20 to the elevator control device 30 (step S18). The elevator control device 30 receives the detection result for each detection area and executes the process corresponding to it (step S19).
The handling executed in step S19 differs for each detection area. For the detection area E1, for example, it is a process responding to the riding situation in the car 11: when users are concentrated in front of the car doors 13, the elevator control device 30 prompts them, through the notification unit 33, to move further into the car; when many users are on board and the car is crowded, the elevator control device 30 performs operation control through the operation control unit 31, such as suppressing the assignment of hall calls to the car 11.
The flowchart of fig. 6 describes the per-area detection process during the door opening operation of the car 11, but the same applies to the door closing operation: the detection process is performed for each of the detection areas E1 to E5, and the process corresponding to each detection result is executed. Thus, for example, when a user is detected in the detection area E3 while the car doors 13 are closing, the car doors 13 can be reopened to let the user board.
The same scheme can also be applied while the car 11 travels up or down the hoistway. In this case, the detection areas E1 and E2 set inside the car 11 are the detection targets; the detection process is performed on each of them while the car 11 is moving, and the process corresponding to the detection results is executed.
It can further be applied when the operation of the car 11 has stopped for some reason, again with the detection areas E1 and E2 inside the car 11 as the detection targets. For example, if the car 11 makes an emergency stop with its doors closed because of an earthquake, the number of passengers can be detected from the image of the detection area E1 set on the floor surface 19 of the car 11 and reported to a monitoring center (not shown), allowing a quick response to passengers trapped in the car.
As described above, according to embodiment 1, in a system that detects users with a camera having an ultra-wide-angle lens, changing the detection sensitivity between the central portion of the captured image and the peripheral portion outside it, in consideration of the lens characteristics, prevents erroneous detection, particularly in the peripheral portion where image quality degrades, and enables accurate detection of users.
(modification example)
In embodiment 1, the image is divided into two regions of different detection sensitivity, but it may be divided into three or more regions, as shown in fig. 9, for example.
In the example of fig. 9, the image 50 is divided into three areas: the central portion 51, the 1st peripheral portion 52a, and the 2nd peripheral portion 52b. The 2nd peripheral portion 52b is the outermost periphery of the image 50 and has the worst image quality. When the distortion of the image 50 is corrected and the corrected image 60 is used for user detection, the detection process is performed, as shown in fig. 10, with the detection sensitivity α for the central portion 61 of the image 60, β for the 1st peripheral portion 62a, and γ for the outermost 2nd peripheral portion 62b.
The threshold of the detection sensitivity α with respect to the luminance change is TH1, set lower than the reference threshold TH0 (TH1 < TH0). The threshold of the detection sensitivity β is TH2, set higher than TH0 (TH2 > TH0). The threshold of the detection sensitivity γ is TH3, set higher than TH2 (TH3 > TH2).
Therefore, in the central portion 61 of the image 60, a user or an object is determined to be present when a luminance change of TH1 or more is detected within a detection area. In the 1st peripheral portion 62a, the determination requires a luminance change of TH2 or more; in the 2nd peripheral portion 62b, TH3 or more. The example of fig. 9 uses three sensitivity regions, but they may be subdivided further.
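The three-region variant generalizes the same idea to a stepped per-pixel threshold map. In the sketch below, the radii separating the regions and the threshold values are assumptions; only the ordering TH1 < TH2 < TH3 is fixed by the text.

```python
import numpy as np

TH1, TH2, TH3 = 20, 60, 90  # assumed threshold values, TH1 < TH2 < TH3

def threshold_map(h, w, r1=0.5, r2=0.8):
    """Build a stepped threshold map; r1 and r2 are the (assumed) fractional
    radii separating the central, 1st peripheral, and 2nd peripheral portions."""
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(xx - w / 2, yy - h / 2) / (min(h, w) / 2)
    th = np.full((h, w), TH3, dtype=np.int16)  # outermost region by default
    th[r < r2] = TH2                           # 1st peripheral portion
    th[r < r1] = TH1                           # central portion
    return th
```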
Dividing the image into three or more regions of different detection sensitivity in this way prevents erroneous detection in the peripheral portions with degraded image quality and allows the user to be detected accurately, just as in embodiment 1.
(embodiment 2)
Next, embodiment 2 will be explained.
The brightness of the captured image is not constant; it varies with, for example, the lighting environment of the hall on each floor. When the captured image is dark, luminance changes are hard to detect, which affects the accuracy of user detection. In particular, with an image captured through an ultra-wide-angle lens such as a fisheye lens, the image quality of the peripheral portion is degraded, so a dark image easily leads to erroneous detection. Embodiment 2 therefore changes the detection sensitivity in consideration of the brightness of the captured image.
Fig. 11 is a flowchart showing the flow of the overall processing in the user detection system according to embodiment 2. The processing in steps S21 to S25 is the same as the processing in steps S11 to S15 in fig. 6 in embodiment 1.
That is, first, as the initial setting, the detection areas E1 to E5 shown in fig. 3 are set on the image captured by the camera 12 (step S21). When the car 11 arrives at any floor and opens its doors (yes in step S22), the camera 12 photographs the inside of the car 11 and the hall 15 over a wide range, and the captured images are stored sequentially in the storage unit 21 in chronological order (steps S23 to S24). The distortion of each captured image is corrected by the distortion correction process described with fig. 7 (step S25).
In embodiment 2, before the sensitivity changing unit 22c changes the detection sensitivity, the brightness of the captured image is detected (step S26). A generally known method is used to detect the brightness of a captured image, so a detailed description is omitted here. The sensitivity changing unit 22c then changes the detection sensitivity based on the detected brightness (step S27).
The detection sensitivity changing process in step S27 will be described in detail with reference to the distortion-corrected image 60 shown in fig. 7 as an example.
The sensitivity changing unit 22c distinguishes the central portion 61 of the image 60 from the peripheral portion 62 outside it and sets a different detection sensitivity for each. As shown in fig. 12, when the brightness of the image 60 is equal to or greater than a fixed value X, the sensitivity changing unit 22c sets the detection sensitivity α for the central portion 61 of the image 60 and the detection sensitivity β for the peripheral portion 62.
As explained with fig. 8, the threshold of the detection sensitivity α with respect to the luminance change is TH1, set lower than the reference threshold TH0 (TH1 < TH0). The threshold of the detection sensitivity β is TH2, set higher than the reference threshold TH0 (TH2 > TH0).
On the other hand, when the brightness of the image 60 is below the fixed value X, the sensitivity changing unit 22c still sets the detection sensitivity α for the central portion 61 of the image 60 but invalidates the detection sensitivity for the peripheral portion 62. "Invalidating the detection sensitivity" means not performing the detection process there. When the brightness of the image 60 does not reach the fixed value, the possibility of erroneous detection rises, particularly in the peripheral portion 62 with its poor image quality, so it is preferable not to perform detection there. The fixed value X is determined, for example, from the average brightness of the images captured in the hall 15 of each floor.
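A sketch of this gating logic, assuming the mean pixel luminance stands in for the detected brightness and an arbitrary cutoff X:

```python
import numpy as np

X = 80  # fixed brightness value; an assumed placeholder (the patent derives
        # it from the average brightness of hall images per floor)

def active_regions(gray):
    """Decide which portions of the corrected image run detection, using the
    mean luminance as a simple proxy for the image brightness."""
    if float(np.mean(gray)) >= X:
        return {"central": True, "peripheral": True}
    # Dark image: keep the central portion active, invalidate the periphery.
    return {"central": True, "peripheral": False}
```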
Returning to fig. 11, the detection processing unit 22b extracts images in the detection regions E1 to E5 from a plurality of captured images obtained in time series by the camera 12, analyzes the images, and performs detection processing corresponding to the detection regions E1 to E5 (step S28).
At this time, the detection processing unit 22b executes the detection process using the detection sensitivities set in step S27 according to the brightness of the captured image. As described above, when the brightness of the image 60 is equal to or greater than the fixed value X, the detection sensitivity α is set for the central portion 61 of the image 60 and β for the peripheral portion 62. For the detection area E1, for example, the detection processing unit 22b therefore applies the detection sensitivity α (threshold TH1) to the part of the detection area E1 corresponding to the central portion 61 of the image 60 (the floor surface 19 of the car 11 in the example of fig. 3), and the detection sensitivity β (threshold TH2) to the parts corresponding to the peripheral portion 62 (the side surfaces 48a and 48b and the back surface 49 of the car 11 in the example of fig. 3).
On the other hand, when the brightness of the image 60 is below the fixed value X, the detection sensitivity α is set for the central portion 61 of the image 60 but the detection sensitivity is invalidated for the peripheral portion 62. For the detection area E1, for example, the detection processing unit 22b then performs the detection process with the detection sensitivity α (threshold TH1) on the part corresponding to the central portion 61 of the image 60 but performs no detection on the parts corresponding to the peripheral portion 62.
As in embodiment 1, once the detection process has been executed for each detection area, the results are output from the image processing device 20 to the elevator control device 30 (step S29). The elevator control device 30 receives the detection result for each detection area and executes the process corresponding to it (step S30).
As described above, according to embodiment 2, when the brightness of the captured image differs with, for example, the lighting environment of the hall on each floor, the detection process can change or invalidate the detection sensitivity for the central and peripheral portions of the image according to the brightness at that moment. This prevents erroneous detection, particularly in the peripheral portion with its degraded image quality, when the captured image is dark because of the hall lighting environment or the like.
(modification example)
Embodiment 2 divided the image into two regions of different detection sensitivity, but it may be divided into three or more regions with the detection sensitivity changed in stages. In that case, as shown in fig. 13, the detection sensitivity of each region is changed or invalidated in stages according to the relationship between the brightness of the image and that region.
The description will be given by taking an image 60 shown in fig. 9 as an example.
Assume now that detection sensitivities are set for the three regions into which the image 60 is divided: the central portion 61, the 1st peripheral portion 62a, and the 2nd peripheral portion 62b. When the brightness of the image is equal to or greater than the fixed value X, the detection sensitivity α is set for the central portion 61 of the image 60, β for the 1st peripheral portion 62a, and γ for the 2nd peripheral portion 62b (thresholds TH1 < TH2 < TH3).
When the brightness of the image is below the fixed value X but equal to or greater than a fixed value Y (X > Y), the detection sensitivity α is set for the central portion 61 of the image 60 and β for the 1st peripheral portion 62a, but the detection sensitivity is invalidated for the 2nd peripheral portion 62b. When the brightness of the image is below the fixed value Y, the detection sensitivity is invalidated for both the 1st peripheral portion 62a and the 2nd peripheral portion 62b, as in the staged sketch below.
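Extending the earlier gate to three regions with the two cutoffs X > Y (both assumed values) gives the staged invalidation:

```python
X, Y = 80, 40  # assumed brightness cutoffs with X > Y

def active_regions_3(mean_luma: float) -> dict:
    """Staged invalidation: drop the outermost region first, then both
    peripheral regions, as the image gets darker."""
    if mean_luma >= X:
        return {"central": True, "periph1": True, "periph2": True}
    if mean_luma >= Y:
        return {"central": True, "periph1": True, "periph2": False}
    return {"central": True, "periph1": False, "periph2": False}
```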
The above embodiments were described, as in fig. 7 and fig. 9, on the assumption that user detection uses the distortion-corrected image 60, but the invention can also be applied when user detection uses the uncorrected image 50. The image quality of the peripheral portion 52 of the image 50 is poor, so running the detection process there with the same detection sensitivity as in the other regions easily causes erroneous detection. Dividing the image 50 into the central portion 51 and the peripheral portion 52 and changing the detection sensitivity between them therefore prevents erroneous detection in the low-quality peripheral portion 52 and allows the user to be detected accurately.
The embodiments were also described with a plurality of detection areas E1 to E5 set on the captured image and a user or an object detected in each, but a configuration that sets at least one detection area and detects the presence or absence of a user from the image within it is equally possible. The invention can further be applied when the presence or absence of a user is detected over the entire captured image without using detection areas.
According to at least one embodiment described above, it is possible to provide a user detection system for an elevator, which can accurately detect a user using an image captured by a camera having an ultra-wide-angle lens such as a fisheye lens.
Although several embodiments of the present invention have been described, they are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. Such embodiments and their modifications fall within the scope and gist of the invention, and within the invention described in the claims and its equivalents.

Claims (8)

1. A user detection system for an elevator, comprising:
an imaging unit that has an ultra-wide-angle lens and captures a wide-range image of a car interior and a hall;
a detection unit that detects a user or an object present in the car or the hall using the image captured by the imaging unit; and
a sensitivity changing unit that changes a detection sensitivity with which the detection unit detects the user or the object, at least between a central portion and a peripheral portion of the image,
wherein the detection sensitivity comprises a threshold for a luminance change of the image, and
the sensitivity changing unit makes the detection sensitivity for the central portion of the image equal to or higher than a reference value, and makes the detection sensitivity for the peripheral portion of the image lower than the reference value.
2. The user detection system of an elevator according to claim 1,
wherein the sensitivity changing unit sets the detection sensitivities for the central portion and the peripheral portion of the image when a brightness of the image is equal to or greater than a fixed value, and invalidates the detection sensitivity for the peripheral portion of the image when the brightness of the image is below the fixed value.
3. The user detection system of an elevator according to claim 1,
wherein the sensitivity changing unit divides the image into 3 or more regions from the central portion toward the peripheral portion, and changes the detection sensitivity stepwise for each of the regions.
4. The user detection system of an elevator according to claim 3,
the sensitivity changing unit changes or invalidates the detection sensitivity for the 3 or more regions in a stepwise manner in accordance with a relationship between the brightness of the image and the 3 or more regions.
5. The user detection system of an elevator according to claim 1,
wherein the detection unit detects the user or the object for each of a plurality of detection areas set on the image, and
the sensitivity changing unit changes, within each of the detection areas, the detection sensitivity between a part corresponding to the central portion of the image and a part corresponding to the peripheral portion of the image.
6. The user detection system of an elevator according to claim 1,
wherein the image is subjected to distortion correction as preprocessing.
7. The user detection system of an elevator according to claim 1,
wherein the imaging unit is provided at an upper part of a doorway of the car.
8. The user detection system of an elevator according to claim 1,
further comprising a door opening/closing control unit that controls opening and closing of doors of the car based on a detection result of the detection unit.
CN202010428003.0A 2019-08-28 2020-05-20 User detection system for elevator Active CN112441497B (en)

Applications Claiming Priority (2)

Application Number: JP2019-155740 / JP2019155740A (published as JP6871324B2)
Priority Date: 2019-08-28
Filing Date: 2019-08-28
Title: Elevator user detection system

Publications (2)

Publication Number Publication Date
CN112441497A 2021-03-05
CN112441497B (en) 2023-01-10

Family ID: 74675197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010428003.0A Active CN112441497B (en) 2019-08-28 2020-05-20 User detection system for elevator

Country Status (2)

Country Link
JP (1) JP6871324B2 (en)
CN (1) CN112441497B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3490466B2 (en) * 1992-02-21 2004-01-26 株式会社東芝 Image monitoring device and elevator control device using the image monitoring device
JPH06282793A (en) * 1993-03-30 1994-10-07 Isuzu Motors Ltd Lane deviation alarm device
JP6377797B1 (en) * 2017-03-24 2018-08-22 東芝エレベータ株式会社 Elevator boarding detection system
JP6377796B1 (en) * 2017-03-24 2018-08-22 東芝エレベータ株式会社 Elevator boarding detection system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101444098A (en) * 2006-05-16 2009-05-27 Opt株式会社 Image processing device, camera device and image processing method
JP2009077129A (en) * 2007-09-20 2009-04-09 Denso Corp Image pick-up device
JP5274386B2 (en) * 2009-06-10 2013-08-28 株式会社日立製作所 Elevator equipment
CN103456171A (en) * 2013-09-04 2013-12-18 北京英泰智软件技术发展有限公司 Vehicle flow detection system and method based on fish-eye lens and image correction method
CN104822010A (en) * 2014-01-31 2015-08-05 日立产业控制解决方案有限公司 Imaging apparatus
JP2018090351A (en) * 2016-11-30 2018-06-14 東芝エレベータ株式会社 Elevator system
JP2019006535A (en) * 2017-06-22 2019-01-17 株式会社日立ビルシステム Elevator and escalator

Also Published As

Publication number Publication date
JP6871324B2 (en) 2021-05-12
CN112441497A (en) 2021-03-05
JP2021031272A (en) 2021-03-01

Similar Documents

Publication Publication Date Title
JP7230114B2 (en) Elevator user detection system
CN113428752B (en) User detection system for elevator
CN111704012A (en) User detection system of elevator
CN112429609B (en) User detection system for elevator
CN111847159B (en) User detection system of elevator
CN112441497B (en) User detection system for elevator
CN112441490B (en) User detection system for elevator
CN113428750B (en) User detection system for elevator
CN112340560B (en) User detection system for elevator
CN115703609A (en) Elevator user detection system
CN113911868B (en) Elevator user detection system
CN112551292B (en) User detection system for elevator
CN112456287B (en) User detection system for elevator
JP7305849B1 (en) elevator system
JP7282952B1 (en) elevator system
JP7135144B1 (en) Elevator user detection system
CN111704013A (en) User detection system of elevator
JP2024032246A (en) Elevator user detection system
CN112520525A (en) User detection system for elevator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant