EP3730443A1 - Elevator user detection system - Google Patents


Info

Publication number
EP3730443A1
Authority
EP
European Patent Office
Prior art keywords
car
detection
door
image
detection area
Prior art date
Legal status
Pending
Application number
EP20170824.5A
Other languages
German (de)
French (fr)
Inventor
Shuhei Noda
Kentaro Yokoi
Sayumi Kimura
Satoshi Tamura
Current Assignee
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd filed Critical Toshiba Elevator Co Ltd
Publication of EP3730443A1 publication Critical patent/EP3730443A1/en


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66B ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B1/00 Control systems of elevators in general
    • B66B1/02 Control systems without regulation, i.e. without retroactive action
    • B66B1/34 Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B1/3415 Control system configuration and the data transmission or communication within the control system
    • B66B1/3476 Load weighing or car passenger counting devices
    • B66B5/00 Applications of checking, fault-correcting, or safety devices in elevators
    • B66B5/0006 Monitoring devices or performance analysers
    • B66B5/0012 Devices monitoring the users of the elevator system
    • B66B5/0018 Devices monitoring the operating condition of the elevator system
    • B66B5/0037 Performance analysers
    • B66B11/00 Main component parts of lifts in, or associated with, buildings or other structures
    • B66B11/02 Cages, i.e. cars
    • B66B11/0226 Constructional features, e.g. walls assembly, decorative panels, comfort equipment, thermal or sound insulation
    • B66B13/00 Doors, gates, or other apparatus controlling access to, or exit from, cages or lift well landings
    • B66B13/02 Door or gate operation
    • B66B13/14 Control systems or devices
    • B66B13/24 Safety devices in passenger lifts, not otherwise provided for, for preventing trapping of passengers
    • B66B13/26 Safety devices in passenger lifts, not otherwise provided for, for preventing trapping of passengers between closing doors
    • B66B13/30 Constructional features of doors or gates
    • B66B13/301 Details of door sills

Definitions

  • Embodiments described herein relate generally to an elevator user detection system.
  • In conventional systems, a single detection function is realized by the images captured by one camera, which means that a separate camera is required per sensing-target area in order to realize different detection functions.
  • The present application presents an elevator user detection system which can realize multiple detection functions with one camera.
  • an elevator user detection system includes a camera disposed in a car and including a super-wide-angle lens, configured to capture at least an image of the entirety of the inside of the car.
  • the system includes a detection area setting unit configured to set at least two detection areas on an image captured by the camera; and a detection processing unit configured to execute a detection process related to a drive operation of the car for each of the at least two detection areas set by the detection area setting unit.
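  • As an illustrative sketch only (the area names, coordinates, and detector functions below are hypothetical, not taken from the embodiment), the relation between the detection area setting unit and the detection processing unit can be expressed as: set two or more named areas on one captured image, crop each area, and run that area's own detection process.

```python
# Illustrative sketch: one camera image, two or more named detection
# areas, and a per-area detection process. All names and coordinates
# are hypothetical.

def set_detection_areas():
    # Each area is an axis-aligned region (x0, y0, x1, y1) on the
    # captured image; a real system would derive these from the car
    # geometry and the camera parameters.
    return {
        "E1_car_riding":  (0, 240, 640, 480),   # car floor
        "E2_door_pocket": (0, 0, 40, 240),      # front pillar band
        "E3_hall":        (160, 0, 480, 120),   # hall near doorway
        "E4_approach":    (120, 0, 520, 200),   # approach area
    }

def run_detection(image, areas, detectors):
    # Crop each area from the image and apply that area's detector.
    results = {}
    for name, (x0, y0, x1, y1) in areas.items():
        crop = [row[x0:x1] for row in image[y0:y1]]
        results[name] = detectors[name](crop)
    return results
```

The point of this structure is that every detection function shares the single wide-angle image; adding a detection function adds an entry to the two dictionaries, not a camera.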
  • Each figure is a schematic view of the corresponding embodiment for better understanding; the shapes, sizes, and ratios in the figures may differ from those of an actual embodiment and may be changed arbitrarily in view of the following description.
  • The same elements are referred to by the same reference numbers throughout the figures, and detailed descriptions thereof may be omitted.
  • FIG. 1 is a diagram illustrating the structure of an elevator user detection system of an embodiment. Note that, in this example, a case where one car is used will be discussed; however, the same applies to a case where several cars are used.
  • A camera 12 is disposed in the upper part of the doorway of a car 11. Specifically, the camera 12 is disposed inside a modesty panel 11a covering the upper part of the doorway of the car 11, with its lens portion directed directly downward.
  • The camera 12 includes, for example, a super-wide-angle lens such as a fisheye lens to widely capture an image of a target including the inside of the car 11 at an angle of view of 180 degrees or more.
  • The camera 12 can capture images consecutively at a certain frame rate (for example, 30 fps).
  • the position of the camera 12 is not limited to the upper part of the doorway of the car 11 but may be any part near a car door 13.
  • The camera 12 may be placed at a position where images of both the entire car room, including the entirety of the floor surface of the car 11, and the hall 15 near the doorway of the car 11 during the door opening time can be captured; for example, on the ceiling surface near the doorway of the car 11.
  • a hall door 14 is disposed to be opened/closed freely at the doorway of the car 11.
  • the hall door 14 opens/closes in accordance with the car door 13 at the time when the car 11 arrives.
  • The power source (a door motor) is provided on the car door 13 side; the hall door 14 simply opens/closes following the car door 13.
  • Images (video) captured consecutively by the camera 12 are analyzed in real time by an image processor 20.
  • The image processor 20 is illustrated outside the car 11 for easier understanding; however, it is actually housed in the modesty panel 11a together with the camera 12.
  • the image processor 20 includes a storage unit 21 and a detection unit 22.
  • The storage unit 21 stores images captured by the camera 12 one after another, and includes a buffer area to temporarily store data necessary for the processing of the detection unit 22.
  • The storage unit 21 may store images after a preliminary process such as distortion correction, enlargement/reduction, or partial cutting.
  • the detection unit 22 detects a user in the car 11 or in the hall 15 using the images captured by the camera 12.
  • the detection unit 22 includes, as categorized by functions, a detection area setting unit 22a and a detection processing unit 22b. Note that these components may be realized by software, or by hardware such as an integrated circuit (IC), or combination of software and hardware.
  • the detection area setting unit 22a sets two or more detection areas to detect a user (person who rides the elevator) or an object on the images captured by the camera 12.
  • The object includes, for example, clothes and baggage of the user, and mobile objects such as wheelchairs. Furthermore, the object includes, for example, devices related to the elevator such as operation buttons, lamps, and displays. Note that a setting method of the detection areas will be explained later with reference to FIGS. 3 and 4.
  • the detection processing unit 22b executes a detection process related to a drive operation of the car 11 per detection area set by the detection area setting unit 22a.
  • The detection process related to the drive operation of the car 11 is a process to detect a user or an object in accordance with drive conditions of the car 11, such as door opening/closing. The drive conditions include one or more conditions such as door opening/closing of the car 11, upward/downward movement of the car 11, and drive stop of the car 11.
  • the detection process will be explained later with reference to FIGS. 3 and 4 .
  • An elevator controller 30 may include some or all of the functions of the image processor 20.
  • the elevator controller 30 is a computer including a CPU, ROM, and RAM, for example.
  • The elevator controller 30 controls operations of various devices disposed in the car 11 (floor select buttons and illuminations, for example). The elevator controller 30 also executes a response process, which may differ from one detection area to another, based on the detection results of the detection processing unit 22b.
  • the response process includes at least the drive control of the car 11 and the open/close control of the door 13.
  • the elevator controller 30 includes a drive controller 31, door opening/closing control unit 32, and notification unit 33.
  • the drive controller 31 controls the drive of the car 11.
  • the door opening/closing control unit 32 controls opening/closing of the car door 13 when the car 11 arrives at a hall 15.
  • the door opening/closing control unit 32 opens the car door 13 when the car 11 arrives at a hall 15, and closes the car door 13 after a certain period of time passes.
  • The door opening/closing control unit 32 also performs a door opening/closing control to avoid a door accident (an accident where someone is caught by the door). Specifically, the door opening/closing control unit 32 temporarily stops the opening operation of the car door 13, moves the car door 13 in the opposite direction (closing direction), or slows the opening speed of the car door 13.
  • the notification unit 33 sends warnings to the users in the car 11 based on a detection result by the detection processing unit 22b.
  • FIG. 2 is a diagram illustrating the structure near the doorway of the car 11.
  • the car door 13 is disposed at the doorway of the car 11 to be opened/closed freely.
  • The car door 13 is a two-leaved center open type; its two door panels 13a and 13b open/close horizontally in opposite directions along the doorway width. Here, the doorway width direction means the width direction of the doorway of the car 11.
  • Front pillars 41a and 41b surround the doorway of the car 11 together with the modesty panel 11a.
  • The front pillars may be referred to as doorway pillars or doorway jambs, and in general, include a door pocket to accommodate the car door 13 on the back side.
  • As shown in FIG. 2, when the car door 13 opens, one door panel 13a is accommodated in a door pocket 42a provided on the back side of the front pillar 41a, and the other door panel 13b is accommodated in a door pocket 42b provided on the back side of the front pillar 41b.
  • A display 43, a control panel 45 including destination floor buttons 44, and a speaker 46, for example, are disposed on one of or both the front pillars 41a and 41b.
  • the speaker 46 is disposed on the front pillar 41a
  • the display 43 and the control panel 45 are disposed on the front pillar 41b.
  • the camera 12 including a super-wide-angle lens such as a fisheye lens is disposed in the center of the modesty panel 11a in the upper part of the doorway of the car 11.
  • FIG. 3 is a diagram illustrating an example of an image captured by the camera 12.
  • The image of FIG. 3 covers the entirety of the car room and the hall 15 near the doorway, captured at an angle of view of 180 degrees or more from the upper part of the doorway of the car 11 while the car door 13 (door panels 13a and 13b) and the hall door 14 (door panels 14a and 14b) are fully opened.
  • the upper side is the hall 15 and the lower side is the inside of the car 11.
  • Jambs 17a and 17b are provided on both sides of the arrival gate of the car 11, and a band-like hall sill 18 having a certain width is disposed on a floor surface 16 between the jambs 17a and 17b along the opening/closing direction of the hall door 14. Furthermore, a band-like car sill 47 having a certain width is disposed on the doorway side of the floor surface 19 of the car 11 along the opening/closing direction of the car door 13.
  • Detection areas E1 to E4 to detect a user or an object are set with respect to the inside of the car 11 and the hall 15 illustrated in the captured image.
  • The detection area E1 is an area (car riding detection area) to detect a user riding condition of the car 11 (in-car positions of users, the number of users in the car, and the like), and is set to at least the entirety of the floor surface 19.
  • the detection area E1 may include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 surrounding the car room.
  • the detection area E1 is set to conform to a horizontal width W1 and a vertical width W2 of the floor surface 19. Furthermore, the detection area E1 is set to a height h1 from the floor surface 19 with respect to the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49. The height h1 is set optionally.
  • The detection areas E2-1 and E2-2 are areas to predict a user getting caught by the door during the door opening operation, and are set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
  • The detection areas E2-1 and E2-2 are set in band-like shapes with certain widths D1 and D2 in the width direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
  • The widths D1 and D2 are set, for example, the same as or slightly shorter than the horizontal widths (widths in the transverse direction) of the inner side surfaces 41a-1 and 41b-1.
  • the widths D1 and D2 may be the same or may be different.
  • The detection areas E2-1 and E2-2 may be set up to the position of a height h2 from the floor surface 19.
  • the height h2 is set optionally, and may be the same as or different from the height h1.
  • the detection area E3 is an area to detect a condition (waiting positions of users, the number of users waiting, and the like) of the hall 15 (hall condition detection area), and is set to near the doorway of the car 11.
  • The detection area E3 is set with a certain distance L1 from the gate of the car 11 toward the hall 15.
  • W0 in the figure is a horizontal width of the doorway.
  • The detection area E3 may have a rectangular shape with a horizontal width (X direction) the same as or greater than W0, or may be a trapezoid excluding blind spots of the jambs 17a and 17b.
  • the vertical direction (Y direction) and the horizontal direction (X direction) of the detection area E3 may be fixed, or may be changed actively in accordance with the opening/closing operation of the car door 13.
  • the detection area E4 is an area to detect a user or an object approaching the car 11 (approach detection area) and is set near the doorway of the car 11.
  • the detection area E4 is set to have a certain gap L2 between the doorway of the car 11 and the hall 15 (L1 > L2).
  • The detection area E4 may have a rectangular shape with a horizontal width (X direction) the same as or greater than W0, or may be a trapezoid excluding blind spots of the jambs 17a and 17b.
  • the detection area E4 includes the detection area E3 and may be changed actively in accordance with the opening/closing operation of the car door 13 in synchronization with the detection area E3.
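  • The dynamic adjustment of the hall-side areas in accordance with the door opening/closing operation can be sketched as follows; the coordinates and the opening ratio are hypothetical illustrative values, not taken from the embodiment, and assume a center open type door:

```python
# Hypothetical sketch: scale the horizontal extent of a hall-side
# detection area (E3 or E4) with the door opening ratio
# (0.0 = fully closed, 1.0 = fully open). The vertical extent
# toward the hall is kept unchanged.

def adjust_area(full_area, open_ratio):
    x0, y0, x1, y1 = full_area
    cx = (x0 + x1) / 2.0                  # doorway centre (center open door)
    half = (x1 - x0) / 2.0 * open_ratio   # visible half-width grows with ratio
    return (cx - half, y0, cx + half, y1)
```

With a one side opening door the area would instead grow from the door-pocket side; the scaling idea is the same.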
  • In this example, the detection areas E1 to E4 are set in the captured image; however, more detection areas may be set.
  • For example, detection areas may be set along the car sill 47. Such detection areas are used, in a case where the car door 13 is a two-leaved center open type, as areas to predict a user being caught by the door during the door opening operation.
  • a detection area may be set in the control panel 45 of the car 11 shown in FIG. 2 to detect a condition of various buttons on the control panel 45.
  • The detection area setting unit 22a calculates three-dimensional coordinates for the captured image based on setting values of each component of the car 11 and unique parameter values of the camera 12, determines what is shown where on the captured image, and sets a detection area at each position to be a target of detection.
  • The three-dimensional coordinates are, as shown in FIG. 5, calculated where a direction parallel to the car door 13 is given as axis X, a direction from the center of the car door 13 to the hall 15 (direction orthogonal to the car door 13) is given as axis Y, and the height direction of the car 11 is given as axis Z.
  • FIG. 6 is a flowchart illustrating a whole process of the system.
  • a detection area setting process is performed by the detection area setting unit 22a of the detection unit 22 of the image processor 20 (step S11).
  • the detection area setting process is executed when, for example, the camera 12 is set, or the setting position of the camera 12 is adjusted in the following manner.
  • the detection area setting unit 22a sets, on an image captured by the camera 12, several detection areas E1 to E4 as shown in FIG. 3 .
  • The detection area E1 is used as a user riding condition detection area, and is set to at least the entirety of the floor surface 19, or may be set to include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 which surround the car room.
  • The detection areas E2-1 and E2-2 are used as user-caught-by-the-door detection areas, and are set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
  • the detection area E3 is used as a hall condition detection area, and is set from the doorway of the car 11 to the hall 15.
  • the detection area E4 is used as an approach detection area, and is set near the doorway of the car 11.
  • Areas including the floor surface 19, front pillars 41a and 41b, and hall 15 on the captured image are calculated based on setting values of each component of the car 11 and unique values of the camera 12.
  • The setting values of the car 11 include: the width of the door (transverse width of the doorway of the car), the door height, the width of the pillars, the door type (center open type / right or left side open type), and the area of the floor and walls. The unique values of the camera 12 include: the three-dimensional relative position of the camera with respect to the door, the camera angle (three axes), and the camera angle of view (focal distance).
  • The detection area setting unit 22a calculates an area in which the detection target is shown on the captured image based on the above values. For example, with respect to the front pillars 41a and 41b, the detection area setting unit 22a estimates that the front pillars 41a and 41b are standing vertically from both ends of the door (doorway), and calculates the three-dimensional coordinates of the front pillars 41a and 41b based on the relative position/angle/angle of view of the camera 12 with respect to the door.
  • markers m1 and m2 may be arranged at both ends of the car sill 47 inside the car, and the three-dimensional coordinates of the front pillars 41a and 41b may be calculated using the positions of the markers m1 and m2 as the reference.
  • the positions of both ends of the car sill 47 inside the car may be calculated by image processing, and the three-dimensional coordinates of the front pillars 41a and 41b may be calculated using the positions as the reference.
  • the detection area setting unit 22a projects the three-dimensional coordinates of the front pillars 41a and 41b on the two-dimensional coordinates on the captured image to acquire an area including the front pillars 41a and 41b on the captured image, and sets the detection areas E2-1 and E2-2 within the area. Specifically, the detection area setting unit 22a sets the detection areas E2-1 and E2-2 having certain widths D1 and D2 along the longitudinal direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
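  • The projection of three-dimensional car coordinates onto the two-dimensional captured image can be sketched as follows. This sketch assumes an ideal equidistant fisheye model (r = f * theta) with the camera looking straight down; the focal constant f, the camera position, and the image centre (u0, v0) are hypothetical calibration values, not taken from the embodiment:

```python
import math

# Sketch: project a 3D point in the car coordinate system of FIG. 5
# (X parallel to the door, Y toward the hall, Z height) into a fisheye
# image, assuming the ideal equidistant model r = f * theta with the
# optical axis pointing straight down from the ceiling.

def project_fisheye(point, cam_pos, f=150.0, u0=320.0, v0=240.0):
    X, Y, Z = point
    cx, cy, ch = cam_pos                         # camera at height ch
    dx, dy, dz = X - cx, Y - cy, ch - Z          # dz: depth below the camera
    theta = math.atan2(math.hypot(dx, dy), dz)   # angle off the optical axis
    phi = math.atan2(dy, dx)                     # azimuth around the axis
    r = f * theta                                # equidistant fisheye mapping
    return (u0 + r * math.cos(phi), v0 + r * math.sin(phi))
```

Projecting the estimated 3D corner points of the front pillars this way yields the 2D region on the captured image within which the detection areas E2-1 and E2-2 can be drawn.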
  • The setting process of the detection areas E2-1 and E2-2 may be executed while the car door 13 is open, or while the car door 13 is closed. Since the hall 15 is not shown in the image captured by the camera 12 when the car door 13 is closed, the detection areas E2-1 and E2-2 are easier to set in that condition.
  • The width of the car sill 47 (width in the transverse direction) is wider than the thickness of the car door 13. Thus, even when the car door 13 is fully closed, one end of the car sill 47 is shown in the captured image.
  • the positions of the front pillars 41a and 41b can be specified using the position of the one end, and the detection areas E2-1 and E2-2 can be set accordingly.
  • Similarly, an area including each detection target is acquired on the captured image based on the setting values of each component of the car 11 and the unique values of the camera 12, and the detection areas E1, E3, and E4 are set within the corresponding areas.
  • When the car 11 arrives at the hall 15 of any floor (Yes in step S12), the elevator controller 30 opens the car door 13 (step S13).
  • the camera 12 having a super-wide-angle lens captures an image of the car 11 and the hall 15 at a certain FPS (for example, 30 FPS).
  • The image processor 20 acquires the images captured by the camera 12 chronologically, stores the images in the storage unit 21 consecutively (step S14), and executes the following detection process in real time (step S15). Note that, as preprocessing of the captured images, distortion correction, enlargement/reduction, and partial cutting may be performed.
  • FIG. 7 is a flowchart related to the detection process executed in step S15.
  • the detection process is executed per detection area by the detection processing unit 22b of the detection unit 22 of the image processor 20. That is, the detection processing unit 22b extracts an image of each of the detection areas E1 to E4 from a plurality of images captured chronologically by the camera 12, and analyzes the images to execute the detection process corresponding to each of the detection areas E1 to E4.
  • The detection processing unit 22b analyzes an image in the detection area E1 set on the floor surface 19 of the car 11 and detects a user riding condition including riding positions of the users and the number of users in the car 11 (step S21).
  • P1 to P5 are schematic images of users.
  • The positions of users may not be precise positions of each user but may be, for example, a position where users are concentrated on the floor surface 19 of the car 11.
  • The number of users may not be a precise number but may be expressed, for example, as an occupation rate of users with respect to the floor surface 19 of the car 11, that is, a degree of crowding.
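  • The degree of crowding mentioned above can be sketched as a simple occupancy ratio. The grid representation is a hypothetical illustration: each cell marks whether a user or an object was detected at that position of the floor surface within the detection area E1:

```python
# Illustrative sketch: express the riding condition of detection area
# E1 as a degree of crowding, i.e. the fraction of the car floor
# surface occupied by detected users, rather than an exact head count.

def crowding_degree(occupancy):
    # occupancy: 2-D grid of booleans, True where a user or an object
    # was detected on the floor surface within E1.
    cells = [c for row in occupancy for c in row]
    return sum(cells) / len(cells) if cells else 0.0
```

A controller could then compare this ratio against a threshold to decide, for example, whether to stop allocating further hall calls to the car.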
  • Based on the detection result, drive control of the car 11, including control of allocation of hall calls to the car 11, may be carried out.
  • When the detection area E1 is set to include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 surrounding the car room, a user riding condition in the car 11, for example, a user contacting the front pillars 41a and 41b, can be detected in more detail.
  • The method of detecting a user or an object by image analysis may be, for example, a difference method comparing a basic image with a captured image, or a movement detection method following the movement of the user or the object per block.
  • The existence of a user or an object may also be determined by recognizing objects other than the elevator components in the image within the detection area. Any conventional object recognition method can be used; for example, deep learning, support vector machines, or random forests.
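  • A minimal sketch of the difference method mentioned above: compare a basic (reference) image with a captured image block by block and mark blocks whose mean absolute difference exceeds a threshold. Images here are plain 2-D lists of grayscale values; the block size and threshold are hypothetical:

```python
# Difference method sketch: mark image blocks that differ from the
# basic (reference) image by more than a threshold. A real system
# would run this on the cropped detection-area image.

def changed_blocks(base, frame, block=8, threshold=20.0):
    hits = []
    h, w = len(base), len(base[0])
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diffs = [abs(frame[y][x] - base[y][x])
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            if sum(diffs) / len(diffs) > threshold:
                hits.append((bx, by))   # top-left corner of changed block
    return hits
```

The per-block granularity matches the block-wise movement detection described for the detection area E4.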
  • The detection processing unit 22b analyzes images within the detection areas E2-1 and E2-2 set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b, and executes detection of a user being caught by the car door 13 during the door opening operation (step S22).
  • The detection of a user caught by the car door 13 means predicting that a user will be caught by the door pockets 42a and 42b of the two-leaved center open type car door 13 during the door opening operation. That is, as in FIG. 9, when a user puts a hand on the inner side surface 41a-1 of the front pillar 41a during the door opening operation, the hand is detected before it is caught in the door pocket 42a.
  • FIG. 10 is a diagram illustrating the structure near the doorway of the car in which a two-door one side opening type car door is used.
  • the two-door one side opening type car door 13 is disposed at the doorway of the car 11 to be opened/closed freely.
  • The car door 13 includes, as in FIG. 11, two door panels 13a and 13b, and they open/close in the same direction along the doorway width.
  • The door pocket 42a is disposed on only one side of the doorway. In this example, the door pocket 42a is provided on the left side of the doorway, and the two door panels 13a and 13b are accommodated in the door pocket 42a, overlapping with each other, during the door opening operation.
  • Since the camera 12 with the super-wide-angle lens can capture an image of a wide range, the camera 12 is not required to be disposed close to the door pocket 42a side, and may be disposed in the central part of the doorway.
  • In this case, the detection area E2-1 is set with respect to the front pillar 41a on the door pocket 42a side on the image captured by the camera 12. Thereby, if a hand of a user is close to the door pocket 42a, such a condition can be detected from the image in the detection area E2-1.
  • the detection processing unit 22b analyzes the image in the detection area E3 set near the hall 15 to detect a hall condition including positions of users waiting in the hall 15 and the number of waiting users (step S23).
  • P1 to P4 are schematic images of users.
  • The waiting positions may not be precise positions of each user but may be a position where users are concentrated in the hall 15.
  • The number of waiting users may not be a precise number but may be expressed as an occupation rate of users with respect to the detection area E3, that is, a degree of crowding of the hall 15.
  • Since the hall condition is detected using the image of the detection area E3 as above, if many users are waiting in the hall 15, the users in the car 11 can be prompted to step further into the car 11 to accommodate as many users as possible.
  • The detection processing unit 22b analyzes the image of the detection area E4 set near the doorway of the car 11 to detect a user or an object approaching the car 11 from the hall 15 during the door opening operation (step S24).
  • The detection processing unit 22b compares, per block, the images in the detection area E4 among the images chronologically captured by the camera 12 to detect movement of the feet of users along the direction from the center of the car door 13 to the hall 15, that is, along axis Y.
  • FIG. 13 illustrates this situation.
  • FIG. 13 is a diagram illustrating a movement detection using image comparison.
  • FIG. 13(a) schematically illustrates a part of an image captured at time t_n.
  • FIG. 13(b) schematically illustrates a part of an image captured at time t_(n+1).
  • P1 and P2 in the figure are partial images of users detected as moving objects on the captured image; actually, they are aggregations of blocks detected as moving through the image comparison.
  • A moving block Bx closest to the car door 13 is extracted from the image parts P1 and P2, and the Y coordinate of the block Bx is followed to determine whether or not the target has an intention to ride the car.
  • When equidistant lines (shown as dotted lines in the figure; horizontal lines parallel to the car door 13) are drawn along axis Y, the distance between the block Bx and the car door 13 in the axis Y direction can be acquired.
  • In FIG. 13, the position of the moving block Bx closest to the car door 13 changes from y_n to y_(n-1); thus, it can be understood that a user who has an intention to ride the car is approaching the car door 13.
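  • The ride-intention decision described above can be sketched as follows. The moving-block input format and the margin are hypothetical; the idea is simply that the Y coordinate (distance from the car door) of the nearest moving block decreases frame by frame for an approaching user:

```python
# Sketch of the approach detection in area E4: among the blocks
# detected as moving, track the Y coordinate (distance from the car
# door along axis Y) of the block closest to the door over consecutive
# frames. A steadily decreasing Y suggests an intention to ride.

def closest_y(moving_blocks):
    # moving_blocks: list of (x, y) block positions; smaller y = nearer door
    return min(y for _, y in moving_blocks)

def approaching(frames, margin=1):
    # frames: per-frame lists of moving blocks, in chronological order
    ys = [closest_y(blocks) for blocks in frames]
    return all(a - b >= margin for a, b in zip(ys, ys[1:]))
```

A user walking past the doorway parallel to the door would keep a roughly constant Y and so would not be judged as approaching.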
  • Note that a process to detect the movement track of a user or a process to detect attributes of users may be added in each detection area.
  • When the detection process has been executed in each detection area as in FIG. 6, a result of the detection process is output from the image processor 20 to the elevator controller 30 (step S16).
  • the elevator controller 30 receives a detection result from each detection area and executes a response process corresponding to the detection result (step S17).
  • The response processes executed in step S17 differ among the detection areas; for example, in the detection area E1, a process corresponding to the riding condition of the car 11 is executed. Specifically, if users are concentrated in front of the car door 13, the elevator controller 30 may guide the users in the car 11 to step further in through the notification unit 33. Furthermore, if many users are in the car in a crowded state, the elevator controller 30 carries out drive control, including control of allocation of hall calls to the car 11, via the drive controller 31.
  • The door opening/closing control unit 32 of the elevator controller 30 temporarily stops the door opening operation of the car door 13, and after a few seconds, resumes the door opening operation from the stop position (step S32).
  • At that time, the door opening speed of the car door 13 may be set slower than the ordinary speed, or the door opening operation may be resumed after the car door 13 is slightly moved in the opposite direction (door closing direction).
  • The notification unit 33 of the elevator controller 30 makes a voice announcement through the speaker 46 of the car 11 to warn the users to stay away from the door pockets 42a and 42b (step S33).
  • The method of notification is not limited to the voice announcement: a message such as "Stay away from the door pockets" may be displayed on the display device, or both the voice announcement and the message may be presented. Furthermore, an alarm sound may be used.
  • If a user or an object is not detected in the detection area E2-1 or E2-2 (No in step S31), the elevator controller 30 performs the door closing operation after the car door 13 is fully opened, and after the door is closed, moves the car 11 to a target floor (step S34).
  • The elevator controller 30 maintains the door opening until the user rides the car 11, or sets the door opening speed of the car door 13 slower than the normal speed. That is, the elevator controller 30 executes the response process corresponding to the detection result obtained in each of the detection areas E1 to E4.
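Steps S31 to S34 above can be condensed into a small dispatch table. The following Python sketch is illustrative only; the area names follow the embodiment, but the action strings and the function signature are assumptions.

```python
# Illustrative dispatch of per-area detection results to controller responses,
# mirroring steps S31-S34 of the embodiment (action names are assumptions).

def respond(area, detected, door_state):
    """Map one per-area detection result to a controller action."""
    if area in ("E2-1", "E2-2") and detected and door_state == "opening":
        # S32/S33: pause the opening and warn users away from the door pockets.
        return "pause_opening_and_announce"
    if area == "E3" and detected and door_state == "closing":
        # Reopen the door to accept a user detected in the hall.
        return "reopen"
    if area == "E4" and detected and door_state == "closing":
        # Keep the door open or slow the closing for an approaching user.
        return "keep_open_or_slow_close"
    # S34: nothing detected; proceed with normal door operation.
    return "normal_operation"

print(respond("E2-1", True, "opening"))  # pause_opening_and_announce
print(respond("E1", False, "opening"))   # normal_operation
```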
  • In the above description, the detection process is performed per area during the door opening operation of the car 11; however, the same applies during the door closing operation. That is, the detection process is performed in each of the detection areas E1 to E4 during the door closing operation, and the response process corresponding to the detection result is executed. Thereby, if a user is detected in the detection area E3 while the car door 13 is closing, the car door 13 is reopened to accept the user.
  • The same can be applied to the upward/downward movement operation (moving) of the car 11.
  • In this case, the detection areas E1 and E2 set in the car 11 are the targets of detection, the detection process is performed per area during the upward/downward movement operation (moving), and then the response process corresponding to the detection result is executed.
  • The same applies while the car 11 is stopped: the detection areas E1 and E2 set in the car 11 are the targets of detection. For example, if the car 11 makes an emergency stop due to an earthquake while the door is closed, the number of users in the car 11 is detected from the image of the detection area E1 set on the floor surface 19 of the car 11 and reported to a surveillance center (not shown) in order to respond rapidly to a lock-in accident.
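The drive-condition-dependent selection of detection areas described above can be sketched as a lookup table. The condition keys and the exact area sets below are illustrative assumptions drawn from this description, not a definitive mapping.

```python
# Which detection areas are active for each drive condition of the car,
# per the embodiment: all areas during door opening/closing, only the
# in-car areas E1 and E2 while the car is moving or stopped.

ACTIVE_AREAS = {
    "door_opening": ["E1", "E2-1", "E2-2", "E3", "E4"],
    "door_closing": ["E1", "E2-1", "E2-2", "E3", "E4"],
    "moving":       ["E1", "E2-1", "E2-2"],
    "stopped":      ["E1", "E2-1", "E2-2"],
}

def areas_for(drive_condition):
    """Return the detection areas to process for a drive condition."""
    return ACTIVE_AREAS[drive_condition]

print(areas_for("moving"))  # ['E1', 'E2-1', 'E2-2']
```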
  • As explained above, in the present embodiment, a camera having a super-wide-angle lens is used, several detection areas are set on the image captured by the camera, and a detection process is performed per detection area.
  • The above embodiment has been described using one car as an example; however, the embodiment can also be applied to an elevator group management system including several cars.
  • FIG. 15 is a diagram illustrating the structure of the elevator group management system, in which several cars (four cars of elevators A to D) are managed as a group.
  • The cars 51a to 51d include, as in the car 11 of FIG. 1, cameras 12a to 12d, each having a super-wide-angle lens such as a fisheye lens.
  • For example, the camera 12a captures an image of the inside of the car 51a and an image of the hall near the doorway of the car 51a during the door opening operation, and sends the captured images to the image processor 20a.
  • The image processors 20a to 20d have, as with the image processor 20 of FIG. 1, a function to set several detection areas on the captured image and to execute a detection process per detection area.
  • The door opening/closing control is performed based on the detection result of each area obtained from the image processors 20a to 20d; in addition, for example, the following controls may be performed by the group management controller 50, which is a higher-rank controller.
  • A hall call is allocated to a relatively less crowded car among the cars 51a to 51d based on the number of users on board (crowded degree) output from the image processors 20a to 20d as detection results.
  • A car with no users on board is kept waiting at a certain floor.
  • The certain floor includes, for example, a reference floor where users ride the car more frequently.
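The allocation control described above can be sketched as choosing the car with the lowest crowded degree. The car ids and the 0-to-1 occupation rates below are illustrative assumptions.

```python
# Group-management sketch: allocate a hall call to the least crowded car,
# using the occupation rates (crowded degrees) that the image processors
# 20a-20d report as detection results. The values here are invented.

def allocate_hall_call(crowded_degree):
    """Return the id of the car with the lowest crowded degree."""
    return min(crowded_degree, key=crowded_degree.get)

degrees = {"A": 0.8, "B": 0.3, "C": 0.55, "D": 0.9}
print(allocate_hall_call(degrees))  # B
```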
  • As described above, according to the present embodiment, an elevator user detection system which can realize several detection functions with only one camera can be provided.


Abstract

According to one embodiment, an elevator user detection system includes a camera (12) disposed in a car and including a super-wide-angle lens, configured to capture at least an image of the entirety of the inside of the car. The system includes a detection area setting unit (22a) configured to set at least two detection areas on an image captured by the camera (12), and a detection processing unit (22b) configured to execute a detection process related to a drive operation of the car for each of the at least two detection areas set by the detection area setting unit (22a).

Description

    FIELD
  • Embodiments described herein relate generally to an elevator user detection system.
  • BACKGROUND
  • Conventionally, there are systems in which cameras are disposed in a car and a hall of an elevator, and the drive of the elevator is controlled by detecting an entering action of users and the number of waiting people in the hall based on images captured by the cameras.
  • In these systems, a single detection function is realized by the images captured by one camera, which means that a separate camera is required for each sensing-target area in order to realize different detection functions.
  • The present application presents an elevator user detection system which can realize multiple detection functions with one camera.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • FIG. 1 is a diagram illustrating the structure of an elevator user detection system of an embodiment.
    • FIG. 2 is a diagram illustrating the structure near the doorway of the car of the embodiment.
    • FIG. 3 is a diagram illustrating an example of an image captured by a camera of the embodiment.
    • FIG. 4 is a diagram illustrating a plurality of detection areas set on the captured image.
    • FIG. 5 is a diagram illustrating a coordinate system in a real space of the embodiment.
    • FIG. 6 is a flowchart illustrating a whole process of the user detection system of the embodiment.
    • FIG. 7 is a flowchart illustrating an operation of a detection process executed in step S15 of FIG. 6.
    • FIG. 8 is a diagram illustrating a detection process corresponding to a detection area E1 of the embodiment.
    • FIG. 9 is a diagram illustrating a detection process corresponding to a detection area E2 of the embodiment.
    • FIG. 10 is a diagram illustrating the structure near the doorway of the car in which a one-side opening type car door is used of the embodiment.
    • FIG. 11 is a diagram illustrating an opening/closing operation of the one-side opening type car door.
    • FIG. 12 is a diagram illustrating a detection process corresponding to the detection area E3 of the embodiment.
    • FIG. 13 is a diagram illustrating a detection process corresponding to the detection area E4 of the embodiment.
    • FIG. 14 is a flowchart illustrating an operation of response processes executed in step S17 of FIG. 6.
    • FIG. 15 is a diagram of the structure of a group management system of elevators as an application example.
    DETAILED DESCRIPTION
  • In general, according to one embodiment, an elevator user detection system includes a camera disposed in a car and including a super-wide-angle lens, configured to capture at least an image of the entirety of the inside of the car. The system includes a detection area setting unit configured to set at least two detection areas on an image captured by the camera; and a detection processing unit configured to execute a detection process related to a drive operation of the car for each of the at least two detection areas set by the detection area setting unit.
  • Hereinafter, embodiments will be explained with reference to the accompanying drawings.
  • Each figure is a schematic view of the corresponding embodiment to aid understanding; shapes, sizes, and ratios in a figure may differ from those of the actual embodiment and may be arbitrarily changed in light of the following description. The same elements are referred to by the same reference numbers throughout the figures, and detailed descriptions thereof may be omitted.
  • FIG. 1 is a diagram illustrating the structure of an elevator user detection system of an embodiment. Note that, in this example, a case where one car is used will be discussed; however, the same applies to a case where several cars are used.
  • A camera 12 is disposed in the upper part of the doorway of a car 11. Specifically, the camera 12 is disposed inside a modesty panel 11a covering the upper part of the doorway of the car 11, with its lens portion directed directly below. The camera 12 includes, for example, a super-wide-angle lens such as a fisheye lens to widely capture an image of a target including the inside of the car 11 at an angle of view of 180 degrees or more. The camera 12 can consecutively capture images at a certain frame rate (for example, 30 FPS).
  • Note that the position of the camera 12 is not limited to the upper part of the doorway of the car 11 but may be any position near the car door 13. For example, the camera 12 may be placed at a position where images of both the entire car room, including the entirety of the floor surface of the car 11, and the hall 15 near the doorway of the car 11 during door opening can be captured, that is, for example, on the ceiling surface near the doorway of the car 11.
  • At the elevator hall 15 of each floor, a hall door 14 is disposed at the arrival gate of the car 11 to be opened/closed freely. The hall door 14 opens/closes in accordance with the car door 13 when the car 11 arrives. Note that the power source (door motor) is on the car 11 side, and the hall door 14 simply opens/closes following the car door 13. In the following description, it is assumed that when the car door 13 opens, the hall door 14 opens, and when the car door 13 closes, the hall door 14 closes.
  • Images (video) captured consecutively by the camera 12 are analyzed in real time by an image processor 20. Note that, in FIG. 1, the image processor 20 is illustrated outside the car 11 for easier understanding; however, the image processor 20 is actually housed in the modesty panel 11a together with the camera 12.
  • The image processor 20 includes a storage unit 21 and a detection unit 22. The storage unit 21 stores images captured by the camera 12 one after another, and includes a buffer area to temporarily store data necessary for the processing of the detection unit 22. Note that the storage unit 21 may store images after preliminary processing such as distortion correction, enlargement/reduction, or partial cutting.
  • The detection unit 22 detects a user in the car 11 or in the hall 15 using the images captured by the camera 12. The detection unit 22 includes, as categorized by functions, a detection area setting unit 22a and a detection processing unit 22b. Note that these components may be realized by software, or by hardware such as an integrated circuit (IC), or combination of software and hardware.
  • The detection area setting unit 22a sets two or more detection areas to detect a user (person who rides the elevator) or an object on the images captured by the camera 12. The object includes, for example, a cloth and a baggage of the user, and a mobile object such as a wheelchair. Furthermore, the object includes, for example, devices related to the elevator such as operation buttons, lamps, and displays. Note that a setting method of the detection areas will be explained later with reference to FIGS. 3 and 4.
  • The detection processing unit 22b executes a detection process related to a drive operation of the car 11 per detection area set by the detection area setting unit 22a. The detection process related to the drive operation of the car 11 is a process to detect a user and an object based on drive conditions of the car 11 such as door opening/closing, and the drive conditions include one or more conditions such as the door opening/closing of the car 11, upward/downward movement of the car 11, and drive stop of the car 11. The detection process will be explained later with reference to FIGS. 3 and 4.
  • Note that an elevator controller 30 may include a part of or the entire functions of the image processor 20.
  • The elevator controller 30 is a computer including, for example, a CPU, ROM, and RAM. The elevator controller 30 controls operations of various devices disposed in the car 11 (floor select buttons and illuminations, for example). The elevator controller 30 also executes a response process, which may differ from one detection area to another, based on the detection results of the detection processing unit 22b. The response process includes at least the drive control of the car 11 and the open/close control of the car door 13.
  • Specifically, the elevator controller 30 includes a drive controller 31, door opening/closing control unit 32, and notification unit 33. The drive controller 31 controls the drive of the car 11. The door opening/closing control unit 32 controls opening/closing of the car door 13 when the car 11 arrives at a hall 15. Specifically, the door opening/closing control unit 32 opens the car door 13 when the car 11 arrives at a hall 15, and closes the car door 13 after a certain period of time passes.
  • For example, if the detection processing unit 22b detects a user or an object during the opening operation of the car door 13, the door opening/closing control unit 32 performs a door opening/closing control to avoid a door accident (accident where someone is caught by the door). Specifically, the door opening/closing control unit 32 temporarily stops the opening operation of the car door 13, or moves the car door 13 in the opposite direction (closing direction), or slows the opening speed of the car door 13. The notification unit 33 sends warnings to the users in the car 11 based on a detection result by the detection processing unit 22b.
  • FIG. 2 is a diagram illustrating the structure near the doorway of the car 11.
  • The car door 13 is disposed at the doorway of the car 11 to be opened/closed freely. In the example of FIG. 2, the car door 13 is a two-leaved center open type, and its two door panels 13a and 13b open/close in opposite directions along the doorway width direction (horizontally). Note that the doorway width direction means the width direction of the doorway of the car 11.
  • At both sides of the doorway of the car 11, there are front pillars 41a and 41b, which surround the doorway of the car 11 together with the modesty panel 11a. The front pillars may also be referred to as doorway pillars or doorway jambs, and in general, a door pocket to accommodate the car door 13 is provided on their back side. In the example of FIG. 2, when the car door 13 opens, one door panel 13a is accommodated in the door pocket 42a provided on the back side of the front pillar 41a, and the other door panel 13b is accommodated in the door pocket 42b provided on the back side of the front pillar 41b.
  • A control panel 45 including, for example, a display 43 and destination floor buttons 44, and a speaker 46 are disposed on one of or both the front pillars 41a and 41b. In the example of FIG. 2, the speaker 46 is disposed on the front pillar 41a, and the display 43 and the control panel 45 are disposed on the front pillar 41b.
  • The camera 12 including a super-wide-angle lens such as a fisheye lens is disposed in the center of the modesty panel 11a in the upper part of the doorway of the car 11.
  • FIG. 3 is a diagram illustrating an example of an image captured by the camera 12. The image of FIG. 3 includes the entirety of the car room and the hall 15 near the doorway, captured at an angle of view of 180 degrees or more from the upper part of the doorway of the car 11 while the car door 13 (door panels 13a and 13b) and the hall door 14 (door panels 14a and 14b) are fully opened. The upper side of the image is the hall 15 and the lower side is the inside of the car 11.
  • In the hall 15, jambs 17a and 17b are provided on both sides of the arrival gate of the car 11, and a band-like hall sill 18 having a certain width is disposed on a floor surface 16 between the jambs 17a and 17b along the opening/closing direction of the hall door 14. Furthermore, a band-like car sill 47 having a certain width is disposed on the doorway side of the floor surface 19 of the car 11 along the opening/closing direction of the car door 13.
  • Detection areas E1 to E4 to detect a user or an object are set with respect to the inside of the car 11 and the hall 15 illustrated in the captured image.
  • The detection area E1 is an area to detect a user riding condition (in-car position of users, the number of users in the car, and the like) of the car 11 (car riding detection area), and is at least set to the entirety of the floor surface 19. Note that the detection area E1 may include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 surrounding the car room.
  • Specifically, as in FIG. 4, the detection area E1 is set to conform to a horizontal width W1 and a vertical width W2 of the floor surface 19. Furthermore, the detection area E1 is set to a height h1 from the floor surface 19 with respect to the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49. The height h1 is set optionally.
  • The detection areas E2-1 and E2-2 are areas to predict a user getting caught by the door during the door opening operation, and are set in the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
  • Specifically, as in FIG. 4, the detection areas E2-1 and E2-2 are set in a band-like shape with certain widths D1 and D2 in the width direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b. The widths D1 and D2 are set to, for example, the same as or slightly shorter than the horizontal widths (width in the transverse direction) of the inner side surfaces 41a-1 and 41b-1. The widths D1 and D2 may be the same or may be different. Furthermore, the detection areas E2-1 and E2-2 may be set to the position of a height h2 from the floor surface 19. The height h2 is set optionally, and may be the same as or different from the height h1.
  • The detection area E3 is an area to detect a condition (waiting positions of users, the number of users waiting, and the like) of the hall 15 (hall condition detection area), and is set to near the doorway of the car 11.
  • Specifically, as in FIG. 4, the detection area E3 is set to extend over a certain distance L1 from the gate of the car 11 toward the hall 15. W0 in the figure is the horizontal width of the doorway. Note that the detection area E3 may have a rectangular shape with a width equal to or greater than W0 in the horizontal direction (X direction), or may be a trapezoid excluding the blind spots of the jambs 17a and 17b. Furthermore, the vertical (Y direction) and horizontal (X direction) extents of the detection area E3 may be fixed, or may be changed dynamically in accordance with the opening/closing operation of the car door 13.
  • The detection area E4 is an area to detect a user or an object approaching the car 11 (approach detection area) and is set near the doorway of the car 11.
  • Specifically, as in FIG. 4, the detection area E4 is set to extend over a certain distance L2 from the doorway of the car 11 toward the hall 15 (L1 > L2). The detection area E4 may have a rectangular shape with a width equal to or greater than W0 in the horizontal direction (X direction), or may be a trapezoid excluding the blind spots of the jambs 17a and 17b. The detection area E4 includes the detection area E3 and may be changed dynamically in accordance with the opening/closing operation of the car door 13 in synchronization with the detection area E3.
  • Note that, in the example of FIG. 3, four detection areas E1 to E4 are set in the captured image; however, more detection areas may be set. For example, a detection area may be set along the car sill 47. Such a detection area is used, in a case where the car door 13 is a two-leaved center open type, as an area to predict a user being caught by the door during the door opening operation. Furthermore, for example, a detection area may be set on the control panel 45 of the car 11 shown in FIG. 2 to detect the condition of various buttons on the control panel 45.
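Once set, each detection area is simply a region of the captured image against which detected blocks are tested. The sketch below represents areas as axis-aligned rectangles in pixel coordinates; the coordinate values are invented for illustration and do not come from the patent.

```python
# Illustrative point-in-area test: each detection area is an axis-aligned
# rectangle (x_min, y_min, x_max, y_max) on the captured image.

AREAS = {
    "E1": (100, 300, 540, 620),  # in-car floor region (coordinates invented)
    "E3": (180, 40, 460, 140),   # hall region near the doorway
}

def areas_containing(point, areas=AREAS):
    """Names of all detection areas that contain the given pixel."""
    px, py = point
    return [name for name, (x0, y0, x1, y1) in areas.items()
            if x0 <= px <= x1 and y0 <= py <= y1]

print(areas_containing((200, 100)))  # ['E3']
print(areas_containing((0, 0)))      # []
```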
  • The detection area setting unit 22a calculates three-dimensional coordinates for the captured image based on the setting values of each component of the car 11 and the unique parameter values of the camera 12, determines what is shown where on the captured image, and sets a detection area at each position to be a target of detection.
  • The three-dimensional coordinates are, as shown in FIG. 5, calculated in a coordinate system in which the direction parallel to the car door 13 is the X axis, the direction from the center of the car door 13 toward the hall 15 (orthogonal to the car door 13) is the Y axis, and the height direction of the car 11 is the Z axis.
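Projecting this car-coordinate system onto the fisheye image can be sketched as below. The patent does not specify a lens model; the common equidistant fisheye model (r = f·θ) is assumed here, and the camera height, focal length, and principal point are invented example values.

```python
# Hedged sketch: project a 3D point in the car's coordinate system (X parallel
# to the door, Y toward the hall, Z up) onto a downward-looking fisheye image,
# assuming the equidistant model r = f * theta (an assumption, not the patent's).

import math

def project_fisheye(point, cam_height, f, cx, cy):
    """Project a 3D point (x, y, z) in car coordinates to a pixel (u, v).

    The camera looks straight down from (0, 0, cam_height)."""
    x, y, z = point
    dx, dy, dz = x, y, cam_height - z           # ray from camera to point
    theta = math.atan2(math.hypot(dx, dy), dz)  # angle off the optical axis
    r = f * theta                               # equidistant fisheye mapping
    phi = math.atan2(dy, dx)                    # azimuth around the axis
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))

# A point on the car floor, 1 m along the door and 2 m toward the hall:
u, v = project_fisheye((1.0, 2.0, 0.0), cam_height=2.4, f=300, cx=640, cy=480)
print(round(u), round(v))  # 741 681
```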
  • Now, the operation of the system will be explained.
  • FIG. 6 is a flowchart illustrating a whole process of the system.
  • As an initial setting, a detection area setting process is performed by the detection area setting unit 22a of the detection unit 22 of the image processor 20 (step S11). The detection area setting process is executed when, for example, the camera 12 is set, or the setting position of the camera 12 is adjusted in the following manner.
  • That is, the detection area setting unit 22a sets, on an image captured by the camera 12, several detection areas E1 to E4 as shown in FIG. 3. As described above, the detection area E1 is used as a user riding condition detection area, and is at least set to the entirety of the floor surface 19, or may be set to include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 which surround the car room.
  • The detection areas E2-1 and E2-2 are used as user-caught-by-the-door detection area, and are set to the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b. The detection area E3 is used as a hall condition detection area, and is set from the doorway of the car 11 to the hall 15. The detection area E4 is used as an approach detection area, and is set near the doorway of the car 11.
  • Areas including the floor surface 19, front pillars 41a and 41b, and hall 15 on the captured image are calculated based on the setting values of each component of the car 11 and the unique values of the camera 12, such as the following:
    • Width of the door (transverse width of the doorway of the car)
    • Door height
    • Width of the pillars
    • Door type (center open type/right or left one-side open type)
    • Area of the floor and walls
    • Relative position of the camera with respect to the door (three-dimensional)
    • Camera angle (three axes)
    • Camera angle of view (focal length)
  • The detection area setting unit 22a calculates an area in which each detection target is shown on the captured image based on the above values. For example, with respect to the front pillars 41a and 41b, the detection area setting unit 22a estimates that the front pillars 41a and 41b stand vertically from both ends of the door (doorway), and calculates the three-dimensional coordinates of the front pillars 41a and 41b based on the relative position, angle, and angle of view of the camera 12 with respect to the door.
  • Note that, as in FIG. 4, for example, markers m1 and m2 may be arranged at both ends of the car sill 47 inside the car, and the three-dimensional coordinates of the front pillars 41a and 41b may be calculated using the positions of the markers m1 and m2 as the reference. Or, the positions of both ends of the car sill 47 inside the car may be calculated by image processing, and the three-dimensional coordinates of the front pillars 41a and 41b may be calculated using the positions as the reference.
  • The detection area setting unit 22a projects the three-dimensional coordinates of the front pillars 41a and 41b on the two-dimensional coordinates on the captured image to acquire an area including the front pillars 41a and 41b on the captured image, and sets the detection areas E2-1 and E2-2 within the area. Specifically, the detection area setting unit 22a sets the detection areas E2-1 and E2-2 having certain widths D1 and D2 along the longitudinal direction of the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b.
  • The setting process of the detection areas E2-1 and E2-2 may be executed while the car door 13 is opened, or while the car door 13 is closed. Since the hall 15 is not shown in the image captured by the camera 12 when the car door 13 is closed, the detection areas E2-1 and E2-2 are easier to set in that condition.
  • Note that, in general, the width of the car sill 47 (width in the transverse direction) is wider than the thickness of the car door 13. Thus, even if the car door 13 is in the fully closed condition, one end of the car sill 47 is shown in the captured image. Therefore, the positions of the front pillars 41a and 41b can be specified using the position of that end, and the detection areas E2-1 and E2-2 can be set accordingly.
  • The same applies to the other detection areas E1, E3, and E4. Each area including a detection target is acquired on the captured image based on the setting values of each component of the car 11 and the unique values of the camera 12, and the detection areas E1, E3, and E4 will be set within the area.
  • Now, the operation of the car 11 in the drive will be explained.
  • As in FIG. 6, when the car 11 arrives at a hall 15 of any floor (Yes in step S12), the elevator controller 30 opens the car door 13 (step S13).
  • At that time (during the car door 13 opening), the camera 12 having a super-wide-angle lens captures an image of the car 11 and the hall 15 at a certain FPS (for example, 30 FPS). The image processor 20 acquires the images captured by the camera 12 chronologically, stores the images in the storage unit 21 consecutively (step S14), and executes the following detection process in real time (step S15). Note that, as a preliminary treatment to the captured images, distortion correction, enlargement/reduction, and cut of images may be performed.
  • FIG. 7 is a flowchart related to the detection process executed in step S15.
  • The detection process is executed per detection area by the detection processing unit 22b of the detection unit 22 of the image processor 20. That is, the detection processing unit 22b extracts an image of each of the detection areas E1 to E4 from a plurality of images captured chronologically by the camera 12, and analyzes the images to execute the detection process corresponding to each of the detection areas E1 to E4.
  • (a) Detection process corresponding to detection area E1
  • As in FIG. 8, the detection processing unit 22b analyzes an image in the detection area E1 set on the floor surface 19 of the car 11 and detects the user riding condition, including the riding positions of the users and the number of users in the car 11 (step S21).
  • In FIG. 8, P1 to P5 are schematic images of users. Note that the positions of the users may not be the precise position of each user but may be, for example, positions where users are concentrated on the floor surface 19 of the car 11. Furthermore, the number of users may not be a precise count but may be, for example, an occupation rate of users with respect to the floor surface 19 of the car 11, as a crowded degree.
  • By detecting the user riding condition using the image of the detection area E1, if, for example, users are concentrated in front of the car door 13, the users can be guided to step further into the car 11. Furthermore, if many users are in the car in a crowded state, the elevator controller 30 may carry out drive control including control of allocation of a hall call with respect to the car 11.
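The occupation-rate idea above can be sketched as follows; the thresholds for guiding users inward are invented for illustration and are not taken from the patent.

```python
# Crowded degree as an occupation rate: the fraction of floor-area blocks in
# the detection area E1 occupied by detected users (thresholds are assumed).

def crowded_degree(occupied_blocks, total_blocks):
    """Occupation rate of users within the detection area, 0.0 to 1.0."""
    return occupied_blocks / total_blocks

def should_guide_inward(front_rate, overall_rate):
    """Guide users to step further in when the region in front of the door
    is crowded but the car as a whole still has room."""
    return front_rate > 0.6 and overall_rate < 0.5

print(crowded_degree(45, 60))         # 0.75
print(should_guide_inward(0.7, 0.3))  # True
```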
  • Furthermore, as in FIG. 3, as the detection area E1 is set to include the front pillars 41a and 41b, side surfaces 48a and 48b, and rear surface 49 surrounding the car room, a user riding condition in the car 11 where, for example, a user is contacting the front pillars 41a and 41b can be detected with more details.
  • Note that a method of detecting a user or an object by image analysis may be, for example, a difference method comparing a basic image and a captured image, or a movement detection method following the movement of the user or the object per block. Alternatively, the existence of a user or an object may be determined by recognizing objects other than the elevator components in the image within the detection area. Any conventional object recognition method can be used; for example, deep learning, support vector machines, or random forests may be used.
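The block-wise difference method mentioned above can be sketched in pure Python. The block size, the threshold, and the grayscale-frame representation as nested lists are illustrative assumptions.

```python
# Illustrative block-wise difference method: compare the current frame against
# a reference frame in fixed-size blocks and flag blocks whose mean absolute
# difference exceeds a threshold.

def moving_blocks(ref, cur, block=2, thresh=10.0):
    """Return (row, col) indices of blocks that changed between ref and cur."""
    h, w = len(ref), len(ref[0])
    changed = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = sum(abs(cur[y][x] - ref[y][x])
                       for y in range(by, min(by + block, h))
                       for x in range(bx, min(bx + block, w)))
            if diff / (block * block) > thresh:
                changed.append((by // block, bx // block))
    return changed

ref = [[0] * 4 for _ in range(4)]
cur = [[0] * 4 for _ in range(4)]
cur[0][0] = cur[0][1] = cur[1][0] = cur[1][1] = 200  # a user enters one block
print(moving_blocks(ref, cur))  # [(0, 0)]
```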
  • (b) Detection process corresponding to detection area E2
  • The detection processing unit 22b analyzes images within the detection areas E2-1 and E2-2 set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b, and executes detection of a user being caught by the car door 13 during the door opening operation (step S22). The detection of a user caught by the car door 13 means predicting a user being drawn into the door pockets 42a and 42b of the two-leaved center open type car door 13 during the door opening operation. That is, as in FIG. 9, when a user puts a hand on the inner side surface 41a-1 of the front pillar 41a during the door opening operation, the hand is detected before it is caught in the door pocket 42a.
  • Note that the same applies to a one-side opening type as in FIG. 10.
  • FIG. 10 is a diagram illustrating the structure near the doorway of a car in which a two-door one-side opening type car door is used. In this example, the two-door one-side opening type car door 13 is disposed at the doorway of the car 11 to be opened/closed freely. The car door 13 includes, as in FIG. 11, two door panels 13a and 13b, which open/close in the same direction along the doorway width direction.
  • If the car door 13 is the one-side opening type, the door pocket 42a is disposed on only one side of the doorway. In the example of FIG. 10, the door pocket 42a is provided on the left side of the doorway, and the two door panels 13a and 13b are accommodated in the door pocket 42a, overlapping with each other, in the door opening operation.
  • In that case, the camera 12 with the super-wide-angle lens can capture an image of a wide range, and thus, the camera 12 is not required to be disposed closer to the door pocket 42a side, and may be disposed in the central part of the doorway. The detection area E2-1 is set with respect to the front pillar 41a in the door pocket 42a side on the image captured by the camera 12. Thereby, if a hand of a user is close to the door pocket 42a, such a condition can be detected from the image in the detection area E2-1.
  • Note that, in FIG. 10, if the detection area E2-2 is also set with respect to the other front pillar 41b, an accident where a side end of the car door 13 hits a user during the door closing operation (a door hit accident) can be prevented.
  • (c) Detection process corresponding to detection area E3
  • As in FIG. 12, the detection processing unit 22b analyzes the image in the detection area E3 set in the hall 15 to detect a hall condition including the positions of users waiting in the hall 15 and the number of waiting users (step S23).
  • In FIG. 12, P1 to P4 are schematic images of users. Note that the waiting position need not be the precise position of each user, and may instead be a position where users are concentrated. Furthermore, the number of waiting users need not be a precise number, and may instead be expressed as the occupation rate of users with respect to the floor surface within the detection area E3, as a degree of crowding of the hall 15.
  • Since the hall condition is detected using the image of the detection area E3 as above, if many users are waiting in the hall 15, the users in the car 11 can be prompted to step further into the car 11 so that as many users as possible can board.
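As a hedged sketch of the coarse crowding estimate described above (the block grid, thresholds, and level names are assumptions, not taken from the patent):

```python
def crowding_degree(block_flags):
    """Degree of crowding from per-block occupancy decisions.

    block_flags: one boolean per image block inside the detection area,
    True when the block is judged occupied by a user.
    Returns (occupation_rate, coarse_level).
    """
    rate = sum(block_flags) / len(block_flags)
    # Hypothetical thresholds; a deployment would tune these per installation.
    if rate < 0.3:
        level = "low"
    elif rate < 0.7:
        level = "medium"
    else:
        level = "high"
    return rate, level
```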
  • (d) Detection process corresponding to detection area E4
  • The detection processing unit 22b analyzes the image of the detection area E4 set near the doorway of the car 11 to detect a user or an object approaching the car 11 from the hall 15 during the door opening operation (step S24).
  • Specifically, the detection processing unit 22b compares, per block, the images in the detection area E4 among the images chronologically captured by the camera 12 to detect the movement of the feet of users along the direction from the center of the car door 13 toward the hall 15, that is, along the Y axis.
  • FIG. 13 illustrates this situation.
  • FIG. 13 is a diagram illustrating movement detection using image comparison. FIG. 13(a) schematically illustrates a part of an image captured at time tn, and FIG. 13(b) schematically illustrates a part of an image captured at time tn+1. P1 and P2 in the figure are partial images of users detected as moving objects on the captured images; in practice, they are aggregations of blocks detected as moving objects through the image comparison.
  • The moving block Bx closest to the car door 13 is extracted from the image parts P1 and P2, and the Y coordinate of the block Bx is tracked to determine whether or not the target has an intention to ride the car. In that case, by drawing equidistant lines in the Y-axis direction as shown by the dotted lines (equidistant horizontal lines parallel to the car door 13), the gap between the block Bx and the car door 13 in the Y-axis direction can be acquired.
  • In the example of FIG. 13, the position of the moving block Bx closest to the car door 13 changes from yn to yn-1, and thus, it can be understood that a user who has an intention to ride the car is approaching the car door 13.
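The Y-coordinate tracking of the moving block Bx can be sketched as follows; the function name, the coordinate convention, and the monotonic-approach criterion are assumptions made for illustration:

```python
def has_riding_intention(door_y, bx_y_history, min_step=1):
    """Judge riding intention from the Y coordinates of the moving block Bx
    closest to the car door over successive frames.

    door_y: Y coordinate of the car door line.
    bx_y_history: Bx's Y coordinate at times t0, t1, ... on the same axis.
    Returns True when the gap to the door shrinks by at least min_step
    between every pair of consecutive frames.
    """
    gaps = [abs(y - door_y) for y in bx_y_history]
    return all(prev - nxt >= min_step for prev, nxt in zip(gaps, gaps[1:]))
```

For example, a block whose gap shrinks frame by frame (yn to yn-1 in FIG. 13) is judged to be approaching, while a block that loiters at the same distance is not.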
  • In addition, a process to detect the movement track of a user or a process to detect attributes of users (a wheelchair, a stroller, an elderly person, and the like) may be added in each detection area.
  • Now, the detection process is executed in each detection area as in FIG. 6, and then a result of the detection process is output from the image processor 20 to the elevator controller 30 (step S16). The elevator controller 30 receives the detection result of each detection area and executes a response process corresponding to the detection result (step S17).
  • The response process executed in step S17 differs for each detection area; for example, for the detection area E1, a process corresponding to the riding condition of the car 11 is executed. Specifically, if users are concentrated in front of the car door 13, the elevator controller 30 may guide the users in the car 11 to step further in through the notification unit 33. Furthermore, if many users are in the car in a crowded state, the elevator controller 30 carries out drive control, including control of the allocation of hall calls with respect to the car 11, via the drive controller 31.
  • As an example, a response process corresponding to a case where a user or an object is detected in the detection areas E2-1 and E2-2 will be explained with reference to FIG. 14.
  • If a user or an object is detected in the detection area E2-1 or E2-2 set on the inner side surfaces 41a-1 and 41b-1 of the front pillars 41a and 41b during the door opening operation of the car door 13 (Yes in step S31), the door opening/closing control unit 32 of the elevator controller 30 temporarily stops the door opening operation of the car door 13 and, after a few seconds, resumes the door opening operation from the stop position (step S32). Note that the door opening speed of the car door 13 may be set slower than the ordinary speed, or the door opening operation may be resumed after the car door 13 is slightly moved in the opposite direction (the door closing direction).
  • Furthermore, the notification unit 33 of the elevator controller 30 makes a voice announcement through the speaker 46 of the car 11 to urge the users to stay away from the door pockets 42a and 42b (step S33). Note that the method of notification is not limited to a voice announcement; a message such as "Stay away from the door pockets" may be displayed on the display device, or both the voice announcement and the message may be presented. Furthermore, an alarm sound may be used.
  • While a user or an object is being detected in the detection area E2-1 or E2-2, the above process is repeated. Thus, for example, if a hand of a user is close to the door pocket 42a, the hand can be prevented from being caught by the door pocket 42a.
  • If no user or object is detected in the detection area E2-1 or E2-2 (No in step S31), the elevator controller 30 performs the door closing operation after the car door 13 is fully opened, and after the door is closed, moves the car 11 to the target floor (step S34).
  • Note that, even if no user or object is detected in the detection area E2-1 or E2-2, if, for example, a user approaching the car 11 is detected in the detection area E4, the elevator controller 30 keeps the door open until the user rides the car 11, or sets the door opening speed of the car door 13 slower than the normal speed. That is, the elevator controller 30 executes the response process corresponding to the detection result obtained in each of the detection areas E1 to E4.
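One way to picture this per-area response dispatch is the sketch below; the area keys, action names, and priority ordering are hypothetical and not taken from the patent:

```python
def respond(detections):
    """Map per-area detection results to control actions, in priority order.

    detections: dict with hypothetical keys such as
      "E2_pillar"      - user/object near a door pocket (bool)
      "E4_approaching" - user approaching the doorway (bool)
      "E1_crowded"     - crowded riding condition in the car (bool)
    """
    actions = []
    if detections.get("E2_pillar"):
        # Safety first: pause the door and warn before anything else.
        actions += ["pause_door_opening",
                    "announce_keep_away_from_door_pockets"]
    if detections.get("E4_approaching"):
        actions.append("hold_door_open")
    if detections.get("E1_crowded"):
        actions.append("guide_passengers_inward")
    return actions
```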
  • Furthermore, when the car 11 stops at the hall 15 of each floor, the movement of users is traced in the detection areas E3 and E4 to detect the number of users riding the car 11 at each floor (the number of boarding users) and the number of users leaving the car 11 (the number of alighting users), and the detection result can be reflected in the drive control of the car 11.
  • Note that the flowchart of FIG. 6 illustrates a case where the detection process is performed per area during the door opening operation of the car 11; however, the same applies during the door closing operation of the car 11. That is, the detection process is performed in each of the detection areas E1 to E4 during the door closing operation, and the response process corresponding to the detection result is executed. Thereby, if a user is detected in the detection area E3 while the car door 13 is closing, the car door 13 is reopened to accept the user.
  • Furthermore, the same can be applied to the upward/downward movement (traveling) of the car 11. In that case, the detection areas E1 and E2 set in the car 11 are the targets of detection; the detection process is performed per area during the upward/downward movement (traveling), and then the response process corresponding to the detection result is executed.
  • Furthermore, the same applies while the car 11 is stopped for some reason. In that case, the detection areas E1 and E2 set in the car 11 are the targets of detection. For example, if the car 11 makes an emergency stop due to an earthquake with the doors closed, the number of users in the car 11 is detected from the image of the detection area E1 set on the floor surface 19 of the car 11 and reported to a surveillance center (not shown) in order to respond rapidly to the lock-in accident.
  • With the embodiment, a camera having a super-wide-angle lens is used, several detection areas are set on the image captured by the camera, and a detection process is performed per detection area. Thus, several detection functions can be realized with only one camera instead of several cameras, and the security and convenience of the elevator can be increased with a simple and cost-effective structure.
  • (Application example)
  • The above embodiment uses one car as an example. However, the embodiment can also be applied to an elevator group management system including several cars.
  • FIG. 15 is a diagram illustrating the structure of the elevator group management system, in which several cars (four cars of elevators A to D) are managed as a group.
  • The cars 51a to 51d include, as with the car 11 of FIG. 1, cameras 12a to 12d each having a super-wide-angle lens such as a fisheye lens. The camera 12a captures an image of the inside of the car 51a and an image of the hall near the doorway of the car 51a during the door opening operation, and sends the captured images to the image processor 20a. The same applies to the cameras 12b to 12d, which capture an image of each of the cars 51b to 51d and an image of the hall near the doorway of each of the cars 51b to 51d during the door opening operation, and send the captured images to the image processors 20b to 20d, respectively.
  • The image processors 20a to 20d have, as with the image processor 20 of FIG. 1, a function to set several detection areas on the captured image and to execute a detection process per detection area.
  • Here, in the elevator controllers 30a to 30d of the respective cars, the door opening/closing control is performed based on the detection result of each area obtained from the image processors 20a to 20d; in addition, for example, the following controls may be performed in the group management controller 50, which is a higher-level controller.
  • (1) Allocation control in response to hall call
  • If a new hall call is made, the hall call is allocated to a relatively less crowded car among the cars 51a to 51d based on the number of users on board (the degree of crowding) output from the image processors 20a to 20d as detection results.
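The allocation rule can be sketched as picking the car whose reported degree of crowding is smallest; the car identifiers and the 0-to-1 crowding scale below are assumptions:

```python
def allocate_hall_call(crowding_by_car):
    """Return the id of the least crowded car for a new hall call.

    crowding_by_car: dict mapping car id to the degree of crowding
    (e.g. in [0, 1]) reported by that car's image processor.
    """
    # min() over the dict keys, ordered by each car's crowding value.
    return min(crowding_by_car, key=crowding_by_car.get)
```

A production group controller would combine crowding with travel direction and estimated arrival time, but the crowding term is the one supplied by the image processors here.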
  • (2) Car stop
  • Based on the conditions of the cars output from the image processors 20a to 20d as the detection results, a car with some kind of error is stopped.
  • (3) Car waiting at certain floor
  • Based on the presence or absence of users output from the image processors 20a to 20d as the detection results, a car including no user is kept waiting at a certain floor. The certain floor includes, for example, a reference floor where users ride the car more frequently.
  • According to at least one of the above embodiments, an elevator user detection system which can realize several detection functions with only one camera can be provided.
  • While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (11)

  1. An elevator user detection system comprising a camera (12) disposed in a car and including a super-wide-angle lens, configured to capture at least an image of the entirety of the inside of the car, the system characterized by comprising:
    a detection area setting unit (22a) configured to set at least two detection areas on an image captured by the camera (12); and
    a detection processing unit (22b) configured to execute a detection process related to a drive operation of the car for each of the at least two detection areas set by the detection area setting unit (22a).
  2. The elevator user detection system of claim 1, characterized in that the camera (12) is configured to capture an image of the entirety of the inside of the car and an image of the hall near a doorway of the car.
  3. The elevator user detection system of claim 1, characterized in that the detection process related to the drive operation of the car includes a detection process of a user or an object during a door opening/closing operation of the car.
  4. The elevator user detection system of claim 1, characterized in that the detection process related to the drive operation of the car includes a detection process of a user or an object during an upward/downward movement of the car.
  5. The elevator user detection system of claim 1, characterized in that the detection process related to the drive operation of the car includes a detection process of a user or an object while the car is stopped.
  6. The elevator user detection system of claim 1, characterized in that
    the detection area setting unit (22a) is configured to set at least a first detection area on the floor surface of the car and a second detection area near the door of the car, and
    the detection processing unit (22b) is configured to detect a riding condition of users in the car based on an image of the first detection area, and to predict a user getting caught by the door based on an image of the second detection area.
  7. The elevator user detection system of claim 2 or 3, characterized in that
    the detection area setting unit (22a) is configured to set at least a third detection area near the doorway of the car and a fourth detection area near the doorway of the car in the hall, and
    the detection processing unit (22b) is configured to detect a waiting condition of users in the hall based on an image of the third detection area, and to detect users approaching the doorway of the car based on an image of the fourth detection area.
  8. The elevator user detection system of claim 1, characterized in that the camera (12) is disposed in the upper part of the doorway of the car.
  9. The elevator user detection system of claim 1, characterized by further comprising a drive controller (31) configured to control the drive of the car based on a detection result of the detection processing unit (22b).
  10. The elevator user detection system of claim 1, characterized by further comprising a door opening/closing control unit (32) configured to control opening/closing of the door of the car based on a detection result of the detection processing unit (22b).
  11. The elevator user detection system of claim 1, characterized by further comprising a notification unit (33) configured to give warnings to users in the car based on a detection result of the detection processing unit (22b).
EP20170824.5A 2019-04-26 2020-04-22 Elevator user detection system Pending EP3730443A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2019086118A JP7009411B2 (en) 2019-04-26 2019-04-26 Elevator user detection system

Publications (1)

Publication Number Publication Date
EP3730443A1 (en) 2020-10-28





Also Published As

Publication number Publication date
CN111847159B (en) 2022-03-15
SG10202003723XA (en) 2020-11-27
JP7009411B2 (en) 2022-01-25
JP2020179995A (en) 2020-11-05
CN111847159A (en) 2020-10-30


Legal Events

- PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
- STAA: Status: request for examination was made
- 17P: Request for examination filed (effective date: 20200422)
- AK: Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
- AX: Request for extension of the European patent; extension states: BA ME
- STAA: Status: examination is in progress
- 17Q: First examination report despatched (effective date: 20220817)