CN112520525A - User detection system for elevator

User detection system for elevator

Info

Publication number
CN112520525A
Authority
CN
China
Prior art keywords
door
detection
car
user
detects
Prior art date
Legal status
Pending
Application number
CN202010434190.3A
Other languages
Chinese (zh)
Inventor
横井谦太朗 (Kentaro Yokoi)
野田周平 (Shuhei Noda)
木村纱由美 (Sayumi Kimura)
田村聪 (Satoshi Tamura)
Current Assignee
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd filed Critical Toshiba Elevator Co Ltd
Publication of CN112520525A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B 5/0006 Monitoring devices or performance analysers
    • B66B 5/0012 Devices monitoring the users of the elevator system
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 11/00 Main component parts of lifts in, or associated with, buildings or other structures
    • B66B 11/02 Cages, i.e. cars
    • B66B 11/0226 Constructional features, e.g. walls assembly, decorative panels, comfort equipment, thermal or sound insulation
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 13/00 Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B 13/02 Door or gate operation
    • B66B 13/14 Control systems or devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 13/00 Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B 13/02 Door or gate operation
    • B66B 13/14 Control systems or devices
    • B66B 13/16 Door or gate locking devices controlled or primarily controlled by condition of cage, e.g. movement or position
    • B66B 13/18 Door or gate locking devices controlled or primarily controlled by condition of cage, e.g. movement or position without manually-operable devices for completing locking or unlocking of doors
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B 5/0006 Monitoring devices or performance analysers
    • B66B 5/0037 Performance analysers

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Civil Engineering (AREA)
  • Mechanical Engineering (AREA)
  • Structural Engineering (AREA)
  • Elevator Door Apparatuses (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)

Abstract

The invention provides a user detection system for an elevator that can accurately detect a user or an object in accordance with the open/closed state of the doors. A user detection system for an elevator according to one embodiment includes an imaging unit, a detection area setting unit, a detection processing unit, and a detection area changing unit. The imaging unit captures a predetermined range extending from inside the car, through the doors, to the hall. The detection area setting unit sets a detection area that includes the hall on the captured image obtained by the imaging unit. The detection processing unit detects a user or an object using the image within the detection area set by the detection area setting unit. The detection area changing unit detects the position of the tip portion of the door on the captured image when the door moves in the door closing or door opening direction, and changes the range of the detection area based on the detected door tip position.

Description

User detection system for elevator
This application is based on Japanese patent application 2019-169307 (filing date: September 18, 2019) and claims priority from that application. That application is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present invention relate to a user detection system for an elevator.
Background
In general, when an elevator car arrives at a hall and its doors open, the doors close after a predetermined time has elapsed and the car then departs. Because users of the elevator do not know when the doors will close, a user entering the car from the hall may be struck by the doors while they are closing.
To avoid such collisions with the doors during boarding, there are systems that detect a user about to board the car using images captured by a camera and reflect the detection result in the door opening/closing control.
Disclosure of Invention
The above system assumes that the presence or absence of a user is detected within a predetermined detection area while the doors are fully open; it does not assume situations in which, for example, the doors are in the process of closing, are about to finish closing, or are re-opening from mid-close back to the fully open position. Consequently, when the doors start to close, the entire detection area is usually narrowed, with a generous margin, so that the doors themselves, which move inward across the frontage, do not enter the detection area.
The invention provides a user detection system of an elevator, which can accurately detect a user or an object according to the opening and closing state of a door.
A user detection system for an elevator according to one embodiment includes an imaging unit, a detection area setting unit, a detection processing unit, and a detection area changing unit.
The imaging unit captures a predetermined range extending from inside the car, through the doors, to the hall. The detection area setting unit sets a detection area that includes the hall on the captured image obtained by the imaging unit. The detection processing unit detects a user or an object using the image within the detection area set by the detection area setting unit. The detection area changing unit detects the position of the tip portion of the door on the captured image when the door moves in the door closing or door opening direction, and changes the range of the detection area based on the detected door tip position.
According to the user detection system of an elevator configured as described above, a user or an object can be accurately detected in accordance with the open/closed state of the doors.
Drawings
Fig. 1 is a diagram showing a configuration of an elevator user detection system according to an embodiment.
Fig. 2 is a diagram showing a configuration of a portion around an entrance in the car in this embodiment.
Fig. 3 is a diagram showing an example of an image captured by the camera in the present embodiment.
Fig. 4 is a flowchart showing the user detection processing performed at door opening by the user detection system according to this embodiment.
Fig. 5 is a diagram for explaining a coordinate system in real space in this embodiment.
Fig. 6 is a diagram showing a state in which the captured image in this embodiment is divided in units of blocks.
Fig. 7 is a diagram showing an example of the detection area set when the doors of the car are fully opened in the embodiment.
Fig. 8 is a diagram showing an example of a detection area when the doors of the car in the embodiment move in the door closing direction from the state shown in fig. 7.
Fig. 9 is a diagram showing an example of the detection region when the doors of the car in this embodiment are moved further in the door closing direction from the state shown in fig. 8.
Fig. 10A is a diagram showing an image coordinate system in this embodiment.
Fig. 10B is a diagram showing a world coordinate system in the present embodiment.
Fig. 11 is a diagram for explaining the door area on a captured image in this embodiment.
Fig. 12 is a diagram showing an example of the door opening/closing mechanism in the present embodiment.
Fig. 13 is a flowchart showing the detection area changing process (1) using the rotation amount of the door motor in the embodiment.
Fig. 14 is a flowchart showing the detection region changing process (2) using edge detection in this embodiment.
Fig. 15 is a flowchart showing the detection region changing process (3) using machine learning in the embodiment.
Detailed Description
Hereinafter, embodiments will be described with reference to the drawings.
The disclosure is merely an example, and the invention is not limited to the contents described in the following embodiments. Variations that can be readily envisioned by one skilled in the art are, of course, within the scope of this disclosure. In the drawings, the dimensions, shapes, and the like of the respective portions may be shown schematically, differing from the actual implementation, in order to make the description clearer. In the drawings, corresponding elements are denoted by the same reference numerals, and detailed description thereof may be omitted.
Fig. 1 is a diagram showing the configuration of an elevator user detection system according to an embodiment. Although a single car is described here as an example, the same configuration applies to installations with a plurality of cars.
A camera 12 is provided at the upper part of the entrance of the car 11. Specifically, the camera 12 is installed in the lintel plate 11a covering the upper part of the doorway of the car 11, with its lens pointing straight down or inclined at a predetermined angle toward the hall 15 side or toward the interior of the car 11.
The camera 12 is a small monitoring camera such as an in-vehicle camera, has a wide-angle or fisheye lens, and can continuously capture several frames per second (for example, 30 frames/second). The camera 12 is activated when the car 11 arrives at the hall 15 of each floor and captures images of the area including the vicinity of the car doors 13.
The imaging range at this time is adjusted to L1 + L2 (L1 >> L2). L1 is the imaging range on the hall side and extends a predetermined distance from the car doors 13 toward the hall 15. L2 is the imaging range on the car side and extends a predetermined distance from the car doors 13 toward the car rear. L1 and L2 are ranges in the depth direction; the range in the width direction (the direction orthogonal to the depth direction) is at least larger than the lateral width of the car 11.
At the hall 15 of each floor, a hall door 14 is provided at the arrival gate of the car 11 so as to be openable and closable. The hall doors 14 engage with the car doors 13 when the car 11 arrives and open and close together with them. The power source (door motor) is on the car 11 side, and the hall doors 14 merely follow the car doors 13 when opening and closing. In the following description, the hall doors 14 are assumed to be open whenever the car doors 13 are open, and closed whenever the car doors 13 are closed.
The image processing device 20 analyzes in real time each image (video frame) continuously captured by the camera 12. Although fig. 1 shows the image processing device 20 drawn outside the car 11 for convenience, it is actually housed in the lintel plate 11a together with the camera 12.
The image processing device 20 includes a storage unit 21 and a detection unit 22. The storage unit 21 sequentially stores the images captured by the camera 12 and has a buffer area for temporarily holding the data necessary for the processing by the detection unit 22. The storage unit 21 may also store images that have undergone preprocessing such as distortion correction, scaling, and partial cropping.
The detection unit 22 detects a user located near the car doors 13 using the images captured by the camera 12. Divided functionally, the detection unit 22 consists of a detection area setting unit 22a, a detection processing unit 22b, and a detection area changing unit 22c. These may be realized by software, by hardware such as an IC (Integrated Circuit), or by a combination of both.
The detection area setting unit 22a sets a detection area E1 that includes the hall 15 on the image captured by the camera 12. In detail, as described later, the detection area setting unit 22a sets the detection area E1 (see fig. 3) extending a predetermined distance L3 from the entrance of the car 11 toward the hall 15, including the hall sill 18 and the car sill 47.
The detection processing unit 22b detects a user or an object using the image within the detection area E1 set by the detection area setting unit 22a. The term "object" as used here includes, for example, a user's clothing and luggage, as well as moving objects such as wheelchairs.
The detection area changing unit 22c dynamically changes the range of the detection area E1 in accordance with the opening/closing operation of the car doors 13. Specifically, the detection area changing unit 22c detects the position of the tip portion of the car doors 13 on the image when the car doors 13 move, and changes the dimension of the detection area E1 in the X-axis direction (the door opening/closing direction) based on the detected door tip position. The car control device 30 may provide part or all of the functions of the image processing device 20.
The car control device 30 is a computer having a CPU, ROM, RAM, and the like, and controls the operation of various devices provided in the car 11 (destination floor buttons, lighting, and so on). The car control device 30 has a door opening/closing control unit 31, which controls the opening and closing of the car doors 13 when the car 11 arrives at the hall 15. Specifically, the door opening/closing control unit 31 opens the car doors 13 when the car 11 arrives at the hall 15 and closes them after a predetermined time has elapsed. However, when the detection processing unit 22b detects a user during the door closing operation, the door opening/closing control unit 31 prohibits the closing of the car doors 13, re-opens them to the fully open position, and maintains the door-open state.
Fig. 2 is a diagram showing a configuration of a portion around an entrance in the car 11.
A car door 13 is provided at the entrance of the car 11 so as to be openable and closable. The example of fig. 2 shows a center-opening car door 13; its two door panels 13a and 13b open and close in mutually opposite directions along the frontage (horizontal direction). The "frontage" is the width of the doorway of the car 11.
Front pillars 41a and 41b are provided on both sides of the doorway of the car 11 and, together with the lintel plate 11a, surround the doorway. The "front pillar" is also called an entrance pillar or entrance frame, and a door box for housing the car door 13 is usually provided on its back side. In the example of fig. 2, when the car door 13 opens, one door panel 13a is housed in the door box 42a provided on the back side of the front pillar 41a, and the other door panel 13b is housed in the door box 42b provided on the back side of the front pillar 41b.
One or both of the front pillars 41a and 41b are provided with a display 43, an operation panel 45 on which destination floor buttons 44 and the like are arranged, and a speaker 46. In the example of fig. 2, the speaker 46 is provided on the front pillar 41a, and the display 43 and the operation panel 45 are provided on the front pillar 41b. The camera 12, with its wide-angle lens, is provided at the center of the lintel plate 11a at the upper part of the doorway of the car 11.
Fig. 3 is a diagram showing an example of an image captured by the camera 12. The upper side is the hall 15, and the lower side is the interior of the car 11. E1 in the figure indicates the detection area.
The car door 13 has two door panels 13a, 13b that move in opposite directions on a car threshold 47. Similarly, the hall door 14 includes two door panels 14a and 14b that move in opposite directions on the hall sills 18. The door panels 14a, 14b of the hall doors 14 move in the door opening and closing direction together with the door panels 13a, 13b of the car doors 13.
The camera 12 is installed at the upper part of the entrance of the car 11. Therefore, when the doors of the car 11 open at the hall 15, a predetermined range on the hall side (L1) and a predetermined range inside the car (L2) are captured, as shown in fig. 1. Within the predetermined range on the hall side (L1), a detection area E1 for detecting a user about to board the car 11 is set.
In real space, the detection area E1 extends a distance L3 from the center of the doorway (frontage) toward the hall (L3 is equal to or less than the hall-side imaging range L1). The lateral width W1 of the detection area E1 at full open is set to a distance equal to or greater than the lateral width W0 of the doorway (frontage). As indicated by hatching in fig. 3, the detection area E1 is set so as to include the hall sill 18 and the car sill 47 and to exclude the blind spots of the door pockets 17a and 17b. As described later, the lateral (X-axis) dimension of the detection area E1 is changed in accordance with the opening/closing operation of the car doors 13.
Hereinafter, the operation of the present system will be described in terms of (a) user detection processing and (b) detection area change processing.
(a) User detection process
Fig. 4 is a flowchart showing user detection processing at the time of door opening in the present system.
First, as initial setting, the detection area setting unit 22a of the detection unit 22 in the image processing device 20 executes the detection area setting process (step S10). This process is executed, for example, as follows when the camera 12 is installed or when its installation position is adjusted.
That is, with the doors of the car 11 fully open, the detection area setting unit 22a sets the detection area E1 extending a distance L3 from the entrance toward the hall 15. As shown in fig. 3, the detection area E1 is set so as to include the hall sill 18 and the car sill 47 and to exclude the blind spots of the door pockets 17a and 17b. With the doors fully open, the lateral (X-axis) dimension W1 of the detection area E1 is set to a distance equal to or greater than the lateral width W0 of the doorway (frontage).
Here, when the car 11 arrives at the waiting hall 15 at an arbitrary floor (yes in step S11), the car control device 30 opens the car door 13 and waits for a user to get into the car 11 (step S12).
At this time, the camera 12 provided at the upper part of the doorway of the car 11 captures the predetermined range on the hall side (L1) and the predetermined range inside the car (L2) at a predetermined frame rate (e.g., 30 frames/second). The image processing device 20 acquires the images captured by the camera 12 in time series, sequentially stores them in the storage unit 21 (step S13), and executes the user detection processing described below in real time (step S14). Preprocessing of the captured images, such as distortion correction, scaling, and partial cropping, may also be performed, as sketched below.
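As an illustration of this preprocessing step, the following is a minimal Python/OpenCV sketch. The camera matrix K and distortion coefficients are hypothetical placeholders, not values from the specification; in practice they would come from the calibration process described later.

```python
import cv2
import numpy as np

# Hypothetical calibration values for a wide-angle camera (placeholders).
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # radial/tangential coefficients

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Undistort, downscale, and crop a captured frame before detection."""
    undistorted = cv2.undistort(frame, K, DIST)
    resized = cv2.resize(undistorted, None, fx=0.5, fy=0.5)  # enlargement/reduction
    h, w = resized.shape[:2]
    return resized[: int(0.9 * h), :]  # example crop discarding rows outside L1 + L2
```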
The user detection processing is executed by the detection processing unit 22b of the detection unit 22 provided in the image processing apparatus 20. The detection processing unit 22b extracts images in the detection area E1 from a plurality of captured images obtained in time series by the camera 12, and detects the presence or absence of a user or an object from these images.
Specifically, as shown in fig. 5, the camera 12 captures images in a coordinate system in which the X axis is the direction parallel to the car doors 13 at the doorway of the car 11, the Y axis is the direction from the center of the car doors 13 toward the hall 15 (the direction perpendicular to the car doors 13), and the Z axis is the height direction of the car 11. In each image captured by the camera 12, the movement of the user's feet in the Y-axis direction, that is, from the center of the car doors 13 toward the hall 15, is detected by comparing the contents of the detection area E1 block by block.
Fig. 6 shows an example in which a captured image is divided into a matrix of predetermined blocks. Each cell obtained by dividing the original image into a grid with sides of Wblock pixels is called a "block". In the example of fig. 6, the blocks have the same vertical and horizontal length, but the two lengths may differ. The blocks may be of uniform size over the entire image, or non-uniform, for example vertically shorter (in the Y-axis direction) toward the top of the image.
The detection processing unit 22b reads out the images stored in the storage unit 21 one by one in time-series order and calculates the average luminance value of each block. The per-block average luminance values calculated for the first input image are held, as initial values, in a first buffer area (not shown) in the storage unit 21.
For the second and subsequent images, the detection processing unit 22b compares the average luminance value of each block of the current image with that of the corresponding block of the previous image held in the first buffer area. When the current image contains a block whose luminance difference is equal to or greater than a preset threshold value, the detection processing unit 22b determines that block to be a motion block. After judging the current image, the detection processing unit 22b stores its per-block average luminance values in the first buffer area for comparison with the next image. In this way, the detection processing unit 22b repeatedly judges the presence or absence of motion while comparing the luminance values of successive images block by block in time series.
The detection processing unit 22b checks whether or not there is a moving block in the image in the detection region E1. As a result, if there is a moving block in the image in the detection region E1, the detection processing unit 22b determines that there is a person or an object in the detection region E1.
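The block-by-block comparison described above can be summarized in the following sketch, assuming grayscale frames. The block size and threshold are example values, not figures from the specification.

```python
import numpy as np

BLOCK = 16        # block side length in pixels (example value)
THRESHOLD = 10.0  # per-block luminance-difference threshold (example value)

def block_means(gray: np.ndarray) -> np.ndarray:
    """Average luminance of each BLOCK x BLOCK cell of a grayscale image."""
    h, w = gray.shape
    h, w = h - h % BLOCK, w - w % BLOCK  # drop partial blocks at the edges
    cells = gray[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
    return cells.mean(axis=(1, 3))

def motion_blocks(prev_means: np.ndarray, gray: np.ndarray):
    """Return (boolean mask of motion blocks, current means kept for the next frame)."""
    cur_means = block_means(gray)
    moving = np.abs(cur_means - prev_means) >= THRESHOLD
    return moving, cur_means

def user_detected(moving: np.ndarray, e1_mask: np.ndarray) -> bool:
    """Presence in E1: any motion block whose cell lies inside the detection area.
    e1_mask is a boolean array over the same block grid marking blocks inside E1."""
    return bool(np.logical_and(moving, e1_mask).any())
```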
In this way, when the car door 13 is opened, if the presence of a user or an object in the detection zone E1 is detected (yes in step S15), a user detection signal is output from the image processing apparatus 20 to the car control apparatus 30. The door opening/closing control unit 31 of the car control device 30 prohibits the door closing operation of the car doors 13 by receiving the user detection signal, and maintains the door open state (step S16).
Specifically, when the car doors 13 reach the fully open state, the door opening/closing control unit 31 starts counting the door-open time and closes the doors once a predetermined time T (for example, one minute) has been counted. If a user is detected during this period and a user detection signal is received, the door opening/closing control unit 31 stops the counting operation and clears the count value. The car doors 13 thus remain open for another period of T.
If a new user is detected during that period, the count value is cleared again and the door-open state of the car doors 13 is maintained for a further period of T. However, if users keep arriving within the time T, the car doors 13 would never be able to close, so it is preferable to set an allowable time Tx (for example, 3 minutes) and forcibly close the car doors 13 once Tx has elapsed.
When the counting of the time T is completed, the door opening/closing control unit 31 closes the car doors 13 and starts the car 11 toward the destination floor (step S17).
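The timing behavior just described (restart the countdown on each detection, force closing after the allowable time) could look like the following sketch. The constants and the user_detected hook are illustrative assumptions, not part of the specification.

```python
import time

T = 60.0    # door-open time before closing (seconds; "for example, 1 minute")
TX = 180.0  # allowable time before a forced close ("for example, 3 minutes")

def door_open_loop(user_detected) -> None:
    """Hold the doors open, restarting the T countdown on each user detection,
    but force the doors to close once the allowable time TX has elapsed."""
    opened_at = time.monotonic()
    count_start = opened_at
    while True:
        now = time.monotonic()
        if now - opened_at >= TX:   # allowable time exceeded: forced close
            break
        if user_detected():         # detection signal clears the count value
            count_start = now
            continue
        if now - count_start >= T:  # T elapsed with no detection: close
            break
        time.sleep(0.05)
    # here the door opening/closing control unit would issue the close command
```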
The flowchart of fig. 4 described detection of a user at door opening; similarly, during door closing, the door closing operation is temporarily interrupted when a user or an object is detected in the detection area E1 between the start of closing and full closing. At this time, as described in the detection area changing process of (b) below, the range of the detection area E1 is changed as the car doors 13 move from the fully open state in the door closing direction.
(b) Probe area changing process
The detection area changing process dynamically changes the range of the detection area E1 in accordance with the opening/closing operation of the car doors 13. "Dynamically changing the range of the detection area E1" means reducing or enlarging the dimension of the preset detection area E1 in the X-axis direction (door opening/closing direction) on the captured image, following the tip portion of the car doors 13 as they move in the door closing or door opening direction.
The detection area changing process is executed from the time the car doors 13 are fully open until they are fully closed (including the case where they re-open in the door opening direction during closing). It may also be executed from the time the car doors 13 are fully closed until they are fully open (including the case where they close again in the door closing direction during opening). Figs. 7 to 9 show an example of how the detection area E1 changes as the car doors 13 move from the fully open state in the door closing direction.
The probe area changing process will be described in detail below.
Fig. 10A and 10B are diagrams for perspective projection conversion from the world coordinate system to the image coordinate system. Fig. 10A shows an image coordinate system, and fig. 10B shows a world coordinate system.
One method of converting the coordinates of the three-dimensional space into the coordinates of the two-dimensional image is perspective projection conversion. The perspective projection transformation is a transformation in which a viewpoint is placed on the z-axis and projected onto a plane perpendicular to the z-axis. The relationship between the camera coordinate system and the image coordinate system and the relationship between the world coordinate system and the camera coordinate system are represented by the following schematic expressions.
[image coordinate system] = [intrinsic parameters] · [camera coordinate system] … (1)
[camera coordinate system] = [extrinsic parameters] · [world coordinate system] … (2)
From expressions (1) and (2), the perspective projection transformation is expressed schematically as follows.
[image coordinate system] = [intrinsic parameters] · [extrinsic parameters] · [world coordinate system] … (3)
The intrinsic parameters refer to values used for transformation from the camera coordinate system to the image coordinate system. The extrinsic parameters are values for transformation from the world coordinate system to the camera coordinate system. These parameters are calculated in advance by the calibration process. According to the above equation (3), the coordinate points of the image coordinate system can be obtained from the coordinate points of the world coordinate system. Further, if the inverse transformation is performed, coordinate points of the world coordinate system can be obtained from the coordinate points of the image coordinate system.
As shown in fig. 10A and 10B, an arbitrary point a in the preset detection region E1 is (Xa, Ya, Za) in the three-dimensional world coordinate system and is indicated by (ua, va) in the two-dimensional image coordinate system. The transformation from the coordinates (Xa, Ya, Za) of the world coordinate system to the coordinates (ua, va) of the image coordinate system can be performed using the perspective projection transformation expression shown in the above expression (3).
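As a concrete illustration of equation (3), the following sketch projects a world-coordinate point into image coordinates. The intrinsic matrix K and the extrinsic rotation R and translation t are hypothetical placeholder values standing in for the results of the calibration process.

```python
import numpy as np

# Hypothetical calibration results (placeholders, not values from the patent).
K = np.array([[400.0, 0.0, 320.0],      # intrinsic parameters (3x3)
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # extrinsic rotation (world -> camera)
t = np.array([0.0, 0.0, 2.3])            # extrinsic translation, e.g. camera height

def world_to_image(p_world: np.ndarray) -> tuple[float, float]:
    """Perspective projection: [image] = [intrinsic]·[extrinsic]·[world] (eq. 3)."""
    p_cam = R @ p_world + t               # eq. (2): world -> camera coordinates
    uvw = K @ p_cam                       # eq. (1): camera -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Point A = (Xa, Ya, Za) maps to (ua, va) on the image.
ua, va = world_to_image(np.array([0.5, 1.0, 0.0]))
```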
In the detection area E1 set on the hall 15 side at full open, there are portions occluded by elevator structures such as the door pockets 17a and 17b (see fig. 3). These portions appear in the captured image but are not the floor surface 16 of the hall 15 behind them, so they must be excluded from the detection area E1. The image coordinates corresponding to the world coordinates of each point of such an excluded portion are therefore calculated, and the portion is removed from the detection area E1. The three-dimensional spatial extent occupied by excluded structures such as the door pockets 17a and 17b is given in advance as elevator design values.
In the present embodiment, when the car doors 13 move from the fully open state in the door closing direction, the range of the detection area E1 is changed to follow the movement of the car doors 13. At this time, the position of the tip portion of the car doors 13 (the door tip position) is detected by one of the three methods described below, and the range of the detection area E1 is dynamically changed according to the door tip position.
Method a: rotation amount of the door motor
Method b: edge detection
Method c: machine learning
As shown in fig. 11, the "tip portion of the car door 13" specifically means the tip portions 13a-1 and 13b-1 of the door panels 13a and 13b, which appear on the captured image as inclined edges extending obliquely outward from the car sill 47. The "position of the tip portion of the car door 13" means the position of the lower ends 51a and 51b where the tip portions 13a-1 and 13b-1 of the door panels 13a and 13b meet the car sill 47; it is detected on the captured image as an edge crossing the car sill 47 in the Y-axis direction (the direction orthogonal to the door opening/closing direction), hereinafter called a longitudinal edge. The term "edge" covers not only straight and curved lines in an image but also boundary lines between regions differing in characteristics such as color, luminance, and pattern.
In the figure, 14a-1 and 14b-1 are the tip portions of the two door panels 14a and 14b constituting the hall door 14, and 52a and 52b are the lower ends of the tip portions 14a-1 and 14b-1. In practice, the range of the detection area E1 is changed by detecting the door area, in which the doors including the hall doors 14 appear in the captured image, and removing that door area from the detection area E1 set at full open. The hatched portion is the door area. An arbitrary point D in the door area is represented by (Xd, Yd, Zd) in world coordinates and (ud, vd) in image coordinates.
The detection area changing process using each of methods a, b, and c is described in detail below.
[ method a ]
Fig. 12 is a diagram showing an example of the door opening/closing mechanism.
The car doors 13 are driven by a door motor 61 and opened and closed via a door opening/closing mechanism 62. In the center-opening type, the two door panels 13a and 13b constituting the car door 13 open and close in mutually opposite directions; in the side-opening type, they open and close in the same direction.
The hall doors 14 are of the same door type as the car doors 13 and move together with the opening/closing operation of the car doors 13. The door opening/closing mechanism 62 includes, for example, a plurality of pulleys 63 and a belt 64 wound around them, and converts the rotational force of the door motor 61 into opening/closing force transmitted to the car doors 13.
Since the distance the car doors 13 have moved can be calculated from the rotation amount of the door motor 61, the door tip position can be determined from that movement distance. Instead of the rotation amount of the door motor 61, the movement amount of the belt 64 may be detected optically or mechanically by a sensor 65, and the door tip position determined from that movement amount.
Fig. 13 is a flowchart showing the detection area changing process (1) using the method a (the amount of rotation of the door motor).
Assume now that the car doors 13 start moving from the fully open state in the door closing direction. The detection area changing unit 22c of the detection unit 22 acquires the rotation amount of the door motor 61 from the car control device 30 (step S21) and calculates, from that rotation amount, the distance the car doors 13 have moved from the fully open position in the door closing direction (step S22).
The detection area changing unit 22c then determines the door tip position from the movement distance of the car doors 13 (step S23). The coordinates of the door tip position obtained here are in the world coordinate system of three-dimensional space. The detection area changing unit 22c therefore converts the door tip position into the image coordinate system by the perspective projection transformation (step S24) and determines the moved door area from the converted door tip position (step S25).
Specifically, suppose for example that the car doors 13 have moved a distance xm in the X-axis direction (door opening/closing direction) according to the rotation amount of the door motor 61 (for simplicity, the opening/closing direction of the car doors 13 is taken as the X-axis direction of the world coordinates). In this case, the world coordinates (Xd, Yd, Zd) of an arbitrary point D in the door area move to (Xd + xm, Yd, Zd).
The image coordinates (um, vm) are obtained from the world coordinates (Xd + xm, Yd, Zd) by the perspective projection conversion processing. From this, it is known that the door area moves to (um, vm) on the image. By performing such processing for all points in the door area, the door area after the movement can be detected.
When the moved door area has been determined, the detection area changing unit 22c removes that door area from the detection area E1 set at full open, thereby changing the range of the detection area E1 (step S26). As shown in figs. 8 and 9, the detection area E1 is thus narrowed in the X-axis direction in accordance with the opening width of the car doors 13.
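Steps S22 to S26 might be sketched as follows. DOOR_STROKE_PER_REV is an assumed conversion from motor revolutions to door travel (the real value depends on the pulley and belt geometry of the door opening/closing mechanism 62), and world_to_image is the projection sketch given earlier.

```python
import numpy as np

# Assumed door travel per motor revolution (placeholder; mechanism-dependent).
DOOR_STROKE_PER_REV = 0.05  # meters per revolution

def door_travel(motor_revolutions: float) -> float:
    """Distance xm the car doors have moved from the fully open position (step S22)."""
    return DOOR_STROKE_PER_REV * motor_revolutions

def moved_door_region(door_region_world, xm: float):
    """Shift each world point (X, Y, Z) of the fully-open door area by xm along
    the X axis and project it to image coordinates (steps S23 to S25).
    world_to_image is the perspective projection sketch shown earlier."""
    return [world_to_image(np.array([X + xm, Y, Z]))
            for (X, Y, Z) in door_region_world]

# Step S26 would then remove these projected points (the moved door area)
# from the detection area E1 that was set at full open.
```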
[ method b ]
As described with reference to fig. 11, the door tip position is detected as a longitudinal edge (image coordinates) on the car sill 47. The movement distance of the car doors 13 can therefore be obtained from the longitudinal edge, the door area (image coordinates) determined from that movement distance, and the range of the detection area E1 changed accordingly.
Fig. 14 is a flowchart showing the probe region changing process (2) using the method b (edge detection).
Assume now that the car doors 13 start moving from the fully open state in the door closing direction. The detection area changing unit 22c detects the longitudinal edge of the moving car doors 13 on the sill area of the captured image as the position of the door tip portion (step S31).
For the edge detection, common image processing such as a Sobel filter, a Laplacian filter, or a Canny filter can be used. Adding a Hough transform, edge tracking processing, and the like yields more stable detection results. Alternatively, frame differencing or optical flow processing may be used to detect only the moving edge of the door tip portion. With these processes, even when part of the door tip portion is hidden by a person entering, its position can be detected accurately from the edges of the remaining parts.
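A minimal sketch of this step, using OpenCV's Canny and Sobel operators on the sill band of a grayscale frame, is shown below. The thresholds and the way the strongest gradient column is chosen as the tip are illustrative assumptions.

```python
import cv2
import numpy as np

def door_tip_u(gray: np.ndarray, sill_rows: slice):
    """Detect the longitudinal edge of the moving door tip within the sill band.

    sill_rows selects the image rows covering the car sill; the column with the
    strongest vertical-gradient response on detected edges is taken as the
    door tip position u (example heuristic).
    """
    band = gray[sill_rows, :]
    edges = cv2.Canny(band, 50, 150)                        # example thresholds
    grad_x = np.abs(cv2.Sobel(band, cv2.CV_32F, 1, 0, ksize=3))
    column_strength = (grad_x * (edges > 0)).sum(axis=0)    # per-column edge energy
    u = int(column_strength.argmax())
    return u if column_strength[u] > 0 else None
```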
The door tip position detected here is obtained in the two-dimensional image coordinate system. The detection area changing unit 22c therefore obtains the world coordinates of the door tip position by applying the inverse of the perspective projection transformation (step S32). For example, when the door tip position is (um, vm) in the two-dimensional image coordinate system, it is determined as (Xd + xm, Yd, Zd = 0) in the world coordinate system, from which it is known that the car doors 13 have moved a distance xm in the X-axis direction in three-dimensional space.
In general, in the inverse transformation from image coordinates (u, v) to world coordinates (X, Y, Z), the values of (X, Y, Z) cannot be determined uniquely because one degree of freedom remains. In the present embodiment, however, the door tip position is defined as the position of the lower ends 51a and 51b in contact with the car sill 47, which imposes the constraint that the point lies on the floor surface (Z = 0). The world coordinates (X, Y, 0) can therefore be determined uniquely.
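The floor constraint makes the inversion a ray-plane intersection: a pixel defines a ray in world space, and intersecting that ray with the plane Z = 0 yields a unique point. A sketch under that assumption, reusing the placeholder K, R, t from the projection sketch above:

```python
import numpy as np

def image_to_floor(u: float, v: float) -> np.ndarray:
    """Invert the projection under the constraint Z = 0 (point on the floor).

    Reuses K, R, t from the projection sketch above (assumed calibration).
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    ray_world = R.T @ ray_cam                            # rotate into the world frame
    origin = -R.T @ t                                    # camera center in world frame
    s = -origin[2] / ray_world[2]                        # solve origin_z + s*dir_z = 0
    return origin + s * ray_world                        # unique (X, Y, 0)
```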
Once the distance moved by the car doors 13 has been obtained in this way (step S33), the detection area changing unit 22c determines the door area (image coordinates) from that movement distance by the same procedure as in method a, and changes the range of the detection area E1 by excluding the door area from it (steps S34 to S37).
(other methods utilizing edge detection)
The above description detected the longitudinal edge of the car doors 13 as the door tip position and derived the door area from it; as an alternative, the door tip position may be detected from the inclined edges of the car doors 13.
That is, as described with reference to fig. 11, the tip portions 13a-1 and 13b-1 of the car doors 13 appear on the captured image as inclined edges (image coordinates) extending obliquely from the car sill 47. The lower end of an inclined edge can therefore be detected as the door tip position (image coordinates), and the region on the outer side of the inclined edge can be detected as the door area (image coordinates) with the door tip position as reference.
[ method c ]
The door tip position may also be detected using machine learning. Specifically, a semantic segmentation method based on deep learning, such as U-Net (document 1) or SegNet (document 2), can be used.
(document 1)
"U-Net:Convolutional Networks for Biomedical Image Segmentation",Olaf Ronneberger,Philipp Fischer,Thomas Brox,Medical Image Computing and Computer-Assisted Intervention(MICCAI),Springer,LNCS,Vol.9351:234--241,2015.
(document 2)
"SegNet:A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.",Vijay Badrinarayanan,Alex Kendall and Roberto Cipolla,PAMI,2017.。
Fig. 15 is a flowchart showing the detection region changing process (3) using the method c (machine learning).
Using a number of images captured in advance by the camera 12, at least the elevator structures related to door opening and closing are learned by the deep learning described above, and the learning result is stored, for example, in the storage unit 21 of the image processing device 20 (step S41). The "elevator structures related to door opening and closing" include the car doors 13, the hall doors 14, and the car sill 47 and hall sill 18 on their travel path.
When an image captured by the camera 12 during door closing is input to the image processing device 20 (step S42), the detection area changing unit 22c identifies the elevator structures appearing in the captured image based on the learning result and displays them classified by type (step S43). For example, the structures can be classified into "door portion", "sill portion", "floor portion", and the like, and displayed in different colors. Such color-coded display of deep-learning segmentation results is widely known.
From the classified result, the detection area changing unit 22c detects the boundary between the "door portion" and the "sill portion" as the door tip position (um, vm) (step S44). Thereafter, as in method a, the detection area changing unit 22c determines the door area from the door tip position (um, vm) (step S45) and changes the range of the detection area E1 by excluding the door area from it (step S46).
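Given a per-pixel class mask from a segmentation network such as U-Net or SegNet (assumed already trained on the structures named in step S41), finding the door/sill boundary could look like the following sketch. The class IDs and the choice of boundary pixel are illustrative assumptions.

```python
import numpy as np

DOOR, SILL = 1, 2  # example class IDs assigned by the segmentation model

def door_tip_from_mask(seg_mask: np.ndarray):
    """Find the door/sill boundary in a per-pixel class mask as (um, vm).

    Boundary pixels are taken where door and sill classes are vertically
    adjacent (the adjacency direction depends on the camera view).
    """
    door = seg_mask == DOOR
    sill = seg_mask == SILL
    boundary = (door[:-1, :] & sill[1:, :]) | (sill[:-1, :] & door[1:, :])
    vs, us = np.nonzero(boundary)
    if us.size == 0:
        return None
    # example choice: for a panel closing toward increasing u, the largest
    # u along the boundary is taken as the door tip position
    i = us.argmax()
    return int(us[i]), int(vs[i])
```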
By detecting the tip position of the car doors 13 with any of methods a, b, and c above, the range of the detection area E1 can be changed dynamically to follow the movement of the car doors 13.
A common alternative is to estimate the tip position of the car doors 13 from the time elapsed since door closing started. With such a method, however, if the car doors 13 fail to move normally for some reason, an error arises between the door tip position estimated from the elapsed time and the actual door tip position. If the range of the detection area E1 is changed according to a door tip position containing such an error, the detection area E1 no longer matches the actual opening state of the car doors 13, and, for example, a user or an object outside the actual door opening may be detected erroneously.
In the present embodiment, by contrast, unlike such time-based changing, the range of the detection area E1 can be changed accurately in accordance with the actual opening state of the car doors 13. In method a, the actual movement distance of the car doors 13 is obtained from the motor rotation amount, so the door tip position can be detected accurately. The same holds for methods b and c: in method b the actual tip position of the car doors 13 is detected correctly from the door edge information (longitudinal or inclined edges), and in method c it is detected accurately from the result of learning the elevator structures by deep learning.
If the range of the detection area E1 is changed according to the door tip position detected by these methods, the detection area E1 can be kept consistent with the actual opening state of the car doors 13. Therefore, the door tip moving inward across the frontage during door closing is not erroneously detected, and while a detection area E1 matched to the door tip position is maintained, the approach of a user or an object within the detection area E1 can be accurately detected and reflected in the door opening/closing control.
The above embodiment has been described for the case where the car doors 13 close, but the area may be changed in the same way at door opening. In that case, the detection area E1 may be changed as described above while the car doors 13 move from the fully closed state in the door opening direction to the fully open state (including the case where they close again in the door closing direction during opening).
According to at least one embodiment described above, it is possible to provide a user detection system for an elevator, which can accurately detect a user or an object according to the open/close state of a door.
Although several embodiments of the present invention have been described, these embodiments are provided merely as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and in the invention described in the claims and its equivalents.

Claims (11)

1. A user detection system for an elevator, comprising:
an imaging unit that images a predetermined range including doors from a car to a waiting hall;
a detection area setting unit that sets a detection area including the hall on the captured image obtained by the imaging unit;
a detection processing unit that detects a user or an object using the image in the detection area set by the detection area setting unit; and
a detection area changing unit that detects a position of a distal end portion of the door on the captured image when the door moves in a door closing direction or a door opening direction, and changes a range of the detection area based on the detected position of the distal end portion of the door.
2. The user detection system of an elevator according to claim 1,
the detection area changing unit obtains a moving distance of the door from a rotation amount of a motor for driving the door, and detects a position of a distal end portion of the door from the moving distance of the door.
3. The user detection system of an elevator according to claim 1,
the detection area changing unit acquires a moving distance of the door from a sensor provided in an opening/closing mechanism for moving the door in an opening/closing direction, and detects a position of a distal end portion of the door based on the moving distance of the door.
4. The user detection system of an elevator according to claim 1,
the detection region changing unit detects a longitudinal edge in a direction orthogonal to the opening and closing of the door when the door moves on the threshold from the captured image, and detects a position of the longitudinal edge as a position of a top end portion of the door.
5. The user detection system of an elevator according to claim 1,
the detection region changing unit detects, from the captured image, an inclined edge extending radially outward from a threshold when the door moves on the threshold, and detects a lower end of the inclined edge as a position of a top end portion of the door.
6. The user detection system of an elevator according to claim 1,
the detection region changing unit detects a position of a distal end portion of the door when the door moves from the captured image based on a result of machine learning performed on an elevator structure related to opening and closing of the door in advance.
7. The user detection system of an elevator according to claim 6,
the elevator structure comprises at least the door and a door sill arranged in the path of movement of the door,
the detection region changing unit detects a boundary between the door and the threshold as a position of a distal end portion of the door.
8. The user detection system of an elevator according to claim 6,
deep learning is used as the machine learning.
9. The user detection system of an elevator according to claim 1,
the detection area changing unit detects, from the position of the distal end portion of the door, the area in which the door appears, and excludes the area in which the door appears from the detection area set when the door is fully open.
10. The user detection system of an elevator according to claim 1,
the imaging part is arranged at the upper part of the doorway of the passenger car.
11. The user detection system of an elevator according to claim 1,
further comprising a door opening/closing control unit that controls an opening/closing operation of the door based on a detection result of the detection processing unit.
CN202010434190.3A 2019-09-18 2020-05-21 User detection system for elevator Pending CN112520525A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019169307A JP6828112B1 (en) 2019-09-18 2019-09-18 Elevator user detection system
JP2019-169307 2019-09-18

Publications (1)

Publication Number Publication Date
CN112520525A true CN112520525A (en) 2021-03-19

Family

ID=74529702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010434190.3A Pending CN112520525A (en) 2019-09-18 2020-05-21 User detection system for elevator

Country Status (2)

Country Link
JP (1) JP6828112B1 (en)
CN (1) CN112520525A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5001557A (en) * 1988-06-03 1991-03-19 Inventio Ag Method of, and apparatus for, controlling the position of an automatically operated door
US5284225A (en) * 1991-09-23 1994-02-08 Memco Limited Lift door apparatus
CN101723226A (en) * 2009-12-24 2010-06-09 杭州优迈科技有限公司 System and method of machine vision three-dimensional detection elevator light curtain
CN106966276A (en) * 2016-01-13 2017-07-21 东芝电梯株式会社 The seating detecting system of elevator
CN108116956A (en) * 2016-11-30 2018-06-05 东芝电梯株式会社 Elevator device
CN108622751A (en) * 2017-03-24 2018-10-09 东芝电梯株式会社 The boarding detection system of elevator
CN108622777A (en) * 2017-03-24 2018-10-09 东芝电梯株式会社 The boarding detection system of elevator
CN109928290A (en) * 2017-12-15 2019-06-25 东芝电梯株式会社 User's detection system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5387768A (en) * 1993-09-27 1995-02-07 Otis Elevator Company Elevator passenger detector and door control system which masks portions of a hall image to determine motion and count passengers
JP5690504B2 (en) * 2010-05-14 2015-03-25 株式会社日立製作所 Safety elevator
JP5969148B1 (en) * 2016-01-13 2016-08-17 東芝エレベータ株式会社 Elevator system
JP6139729B1 (en) * 2016-03-16 2017-05-31 東芝エレベータ株式会社 Image processing device
JP6317004B1 (en) * 2017-03-24 2018-04-25 東芝エレベータ株式会社 Elevator system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5001557A (en) * 1988-06-03 1991-03-19 Inventio Ag Method of, and apparatus for, controlling the position of an automatically operated door
US5284225A (en) * 1991-09-23 1994-02-08 Memco Limited Lift door apparatus
CN101723226A (en) * 2009-12-24 2010-06-09 杭州优迈科技有限公司 System and method of machine vision three-dimensional detection elevator light curtain
CN106966276A (en) * 2016-01-13 2017-07-21 东芝电梯株式会社 The seating detecting system of elevator
CN108116956A (en) * 2016-11-30 2018-06-05 东芝电梯株式会社 Elevator device
CN108622751A (en) * 2017-03-24 2018-10-09 东芝电梯株式会社 The boarding detection system of elevator
CN108622777A (en) * 2017-03-24 2018-10-09 东芝电梯株式会社 The boarding detection system of elevator
CN109928290A (en) * 2017-12-15 2019-06-25 东芝电梯株式会社 User's detection system

Also Published As

Publication number Publication date
JP6828112B1 (en) 2021-02-10
JP2021046282A (en) 2021-03-25

Similar Documents

Publication Publication Date Title
US10196241B2 (en) Elevator system
CN108622777B (en) Elevator riding detection system
CN113428752B (en) User detection system for elevator
CN110294391B (en) User detection system
CN112340577B (en) User detection system for elevator
CN113023518B (en) Elevator user detection system
CN112429609B (en) User detection system for elevator
JP7322250B1 (en) elevator system
CN112520525A (en) User detection system for elevator
JP7187629B1 (en) Elevator user detection system
CN113428750B (en) User detection system for elevator
CN112441490B (en) User detection system for elevator
CN112340560B (en) User detection system for elevator
CN113911868B (en) Elevator user detection system
CN113428751A (en) User detection system of elevator
CN112456287B (en) User detection system for elevator
CN112441497B (en) User detection system for elevator
CN111453588B (en) Elevator system
CN115108425B (en) Elevator user detection system
CN112551292B (en) User detection system for elevator
JP2021091556A (en) Elevator user detection system
CN118145448A (en) Elevator system
JP2020186124A (en) Elevator user detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-03-19)