CN110294371B - User detection system


Info

Publication number
CN110294371B
Authority
CN
China
Prior art keywords
car
user
predetermined
hall
determination unit
Prior art date
Legal status
Active
Application number
CN201811562581.2A
Other languages
Chinese (zh)
Other versions
CN110294371A (en)
Inventor
郑国龙
Current Assignee
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd
Publication of CN110294371A
Application granted
Publication of CN110294371B

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B66 - HOISTING; LIFTING; HAULING
    • B66B - ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00 - Control systems of elevators in general
    • B66B 1/02 - Control systems without regulation, i.e. without retroactive action
    • B66B 1/06 - Control systems without regulation, i.e. without retroactive action, electric
    • B66B 1/34 - Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B 1/3415 - Control system configuration and the data transmission or communication within the control system
    • B66B 1/3446 - Data transmission or communication within the control system
    • B66B 2201/00 - Aspects of control systems of elevators
    • B66B 2201/20 - Details of the evaluation method for the allocation of a call to an elevator car
    • B66B 2201/211 - Waiting time, i.e. response time

Abstract

A user detection system according to an embodiment includes: a 1st determination unit that detects the position of a user on the waiting-hall side using a plurality of time-series images captured by an imaging unit, and determines the presence or absence of a user who cannot board a predetermined car based on the time-series change in the detected position; a 2nd determination unit that detects the position of a user on the car side using the plurality of images, and determines whether the occupancy of persons and objects relative to the loading capacity of the predetermined car is high or low based on the time-series change in the detected position; and a control unit that determines, based on the determination results of the 1st and 2nd determination units, whether or not to register a next call for a car other than the predetermined car among the plurality of cars group-managed by a group management control device, and registers the next call according to that determination.

Description

User detection system
The present application is based upon and claims the benefit of priority from Japanese Patent Application No. 2018-054948 (filed March 22, 2018), the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to a user detection system.
Background
In general, during time periods when many users ride elevators (peak times), such as the morning commute and lunch breaks, an elevator arriving at a given floor fills up immediately, and not all of the users waiting at that floor may be able to board at once. In that case, a user who could not board must press the hall call button provided in the hall again to call another elevator to the floor.
However, to call another elevator to the floor, the hall call button can only be pressed after the full elevator has departed from the floor. This is undesirable for the user who could not board, since it increases the waiting time.
Disclosure of Invention
An object of an embodiment of the present invention is to provide a user detection system capable of preventing an increase in the waiting time of a user who could not board the elevator.
According to one embodiment, a user detection system includes a group management control device that performs group management of the operation of a plurality of cars. The user detection system includes: an imaging unit capable of imaging a predetermined range near the door of a predetermined car when the predetermined car arrives at a waiting hall; a 1st determination unit that detects the position of a user on the hall side using a plurality of the captured images that are continuous in time series, and determines the presence or absence of a user who cannot board the predetermined car based on the time-series change in the detected position; a 2nd determination unit that detects the position of a user on the car side using the plurality of captured images, and determines whether the occupancy of persons and objects relative to the loading capacity of the predetermined car is high or low based on the time-series change in the detected position; and a control unit that determines, based on the determination results of the 1st and 2nd determination units, whether or not to register a next call for a car other than the predetermined car among the plurality of cars group-managed by the group management control device, and registers the next call according to that determination.
According to the user detection system configured as described above, it is possible to prevent an increase in the waiting time of a user who could not board the elevator.
Drawings
Fig. 1 is a diagram showing a schematic configuration example of a user detection system according to an embodiment.
Fig. 2 is a diagram for explaining a user detection area in real space.
Fig. 3 is a diagram for explaining a coordinate system in a real space.
Fig. 4 is a diagram showing the position of the user detected at time t.
Fig. 5 is a diagram showing the position of the user detected at time t + 1.
Fig. 6 is a flowchart showing an example of the overall operation of the user detection system according to this embodiment.
Fig. 7 is a flowchart showing an example of the procedure of the motion detection processing.
Fig. 8 is a flowchart showing an example of the procedure of the position estimation process.
Fig. 9 is a flowchart showing an example of the procedure of the remaining person estimation processing.
Fig. 10 is a flowchart showing an example of the procedure of the in-car occupancy rate estimation processing.
Fig. 11 is a flowchart showing an example of the procedure of the next call registration determination process.
Fig. 12 is a diagram for supplementary explanation of characteristics of a change pattern of the position of the user.
Fig. 13 is a flowchart showing an example of the procedure of the next call registration determination process in the modification of the embodiment.
Detailed Description
The following describes embodiments with reference to the drawings. The disclosure is merely an example, and the invention is not limited to the contents described in the following embodiments. Variations that can readily be envisioned by one skilled in the art naturally fall within the scope of the disclosure. For clearer explanation, the drawings may show the size, shape, and the like of each part changed from the actual implementation. In the drawings, corresponding elements are denoted by the same reference numerals, and detailed description of them may be omitted.
Hereinafter, a single elevator is referred to as a "car". Where the plurality of elevators must be distinguished, they are referred to as Machine A (car 11A), Machine B (car 11B), and Machine C (car 11C).
[ schematic constitution ]
Fig. 1 is a diagram showing a schematic configuration example of a user detection system according to an embodiment. Fig. 1 illustrates a case where three cars, Machines A to C, are group-managed by a group management control device described later; however, the number of group-managed cars is not limited to this and may be any number of two or more. In fig. 1, apart from the group management control device, only the devices corresponding to Machine A are shown. Hereinafter, only the devices corresponding to Machine A are described in detail; the devices corresponding to Machines B and C have the same configuration, so their detailed description is omitted. Accordingly, the suffix "A", which would otherwise be appended to reference numerals to indicate correspondence with Machine A, is omitted below as appropriate.
In the user detection system of the present embodiment, a camera 12 is provided above the doorway of the car 11. Specifically, the camera 12 is installed in a lintel plate 11a that covers the upper portion of the doorway of the car 11, with its lens directed toward the hall side. The camera 12 is positioned so that its imaging range covers not only the waiting hall 15 but also the entire sill (threshold) of the car 11. The camera 12 is a small monitoring camera such as an in-vehicle camera; it has a wide-angle lens and can capture images continuously at a rate of several frames per second (for example, 30 frames/second).
More specifically, the imaging range is adjusted to L1 + L2 (L1 > L2), as shown in fig. 1. L1 is the hall-side imaging range and extends, for example, 3 m from the car door 13 toward the hall 15. L2 is the car-side imaging range and extends, for example, 50 cm from the car door 13 toward the rear of the car. L1 and L2 are ranges in the depth direction; the range in the width direction (the direction orthogonal to the depth direction) is set at least larger than the lateral width of the car 11.
The camera 12 is activated when the moving speed of the car 11 is less than a predetermined value.
For example, the camera 12 starts imaging when the car 11 begins decelerating to stop at a predetermined floor and its moving speed falls below the predetermined value. That is, imaging by the camera 12 continues, including the period while the car 11 is stopped at the floor, from the time the car 11 begins decelerating and its speed falls below the predetermined value until the car 11 accelerates away from the floor toward another floor and its speed reaches or exceeds the predetermined value.
Alternatively, the camera 12 may, for example, be activated to start imaging during the period from when the car 11 stops at the predetermined floor until the car door 13 and a hall door 14 (described later) open. In this case, imaging by the camera 12 continues from when the car door 13 and the hall door 14 open until they close.
In the hall 15 of each floor, a hall door 14 is installed at the arrival gate of the car 11 so that it can open and close. The hall door 14 engages with the car door 13 when the car 11 arrives, and opens and closes together with it. The power source (door motor) is on the car side; the hall door 14 merely follows the car door 13 as it opens and closes. In the following description, the hall door 14 is assumed to be open when the car door 13 is open, and closed when the car door 13 is closed.
A load sensor 16 for measuring the load inside the car room (also referred to as the "in-car load") is provided at the bottom of the car 11. Load information indicating the value of the load measured by the load sensor 16 is transmitted to an elevator control device 30 described later.
Each image (video frame) continuously captured by the camera 12 is analyzed and processed in real time by an image processing device 20 (user detection device). Although fig. 1 shows the image processing device 20 drawn outside the car 11 for convenience, in reality it is housed in the lintel plate 11a together with the camera 12. Further, although fig. 1 illustrates the image processing device 20 as separate from the elevator control device 30, this is not a limitation; the various functions of the image processing device 20 described later may be implemented in the elevator control device 30.
As shown in fig. 1, the image processing apparatus 20 includes a storage unit 21 and a user detection unit 22. The storage unit 21 has a buffer area (not shown) for sequentially storing the images captured by the camera 12 and temporarily storing data necessary for the processing by the user detection unit 22. The storage unit 21 may store an image subjected to a process such as distortion correction, enlargement and reduction, and partial cropping as a preprocessing for the image.
The user detection unit 22 focuses on, and detects, the motion of the person or object on the hall side nearest to the car door 13 in each of the plurality of time-series images stored in the buffer area. Based on the detected hall-side motion, the user detection unit 22 determines the presence or absence of a user who could not board because the car 11 was full (hereinafter, a "remaining person"). The user detection unit 22 outputs a 1st result signal notifying the presence or absence of a remaining person to the elevator control device 30.
The user detection unit 22 detects the motion of the person or object nearest to the car door 13 on the car side for each of a plurality of time-series continuous images stored in the buffer area, and outputs motion detection result information indicating the detection result to the elevator control device 30.
As shown in fig. 1, the elevator control device 30 includes an in-car occupancy estimation unit 31 and a call control unit 32. Based on the load information from the load sensor 16, the in-car occupancy estimation unit 31 detects (estimates) whether the car 11 is full, in other words, whether the in-car occupancy described below is high.
The in-car occupancy estimation unit 31 also detects (estimates), based on the motion detection result information from the image processing device 20, whether the occupancy of persons and objects relative to the loading capacity of the car 11 (hereinafter, the "in-car occupancy") is high or low. The in-car occupancy estimation unit 31 outputs a 2nd result signal notifying whether the in-car occupancy is high or low to the call control unit 32.
The call control unit 32 includes a next call registration determination unit 32a. Based on the 1st result signal output from the image processing device 20 and the 2nd result signal output from the in-car occupancy estimation unit 31, the next call registration determination unit 32a determines whether it is necessary to register a next hall call (next call) for one of the machines group-managed by the group management control device 40 described later (in the case of fig. 1, Machine B or Machine C).
The group management control device 40 performs group management control of the operation of the machines 11A to 11C. The elevator control devices 30 corresponding to the machines 11A to 11C are connected to the group management control device 40, which controls their operation to realize the group management. As noted above, fig. 1 shows only the devices corresponding to Machine A apart from the group management control device 40, so the elevator control devices 30 corresponding to Machines B and C are omitted from fig. 1.
The group management control device 40 includes a car assignment control unit 41. When an assignment instruction signal is input from an elevator control device 30, the car assignment control unit 41 selects a car suitable for the requested hall call as the assigned car, based on the operating states of the cars other than Machine A, that is, Machine B and Machine C. The group management control device 40 then outputs a response instruction signal to the elevator control device 30 corresponding to the assigned car selected by the car assignment control unit 41.
[ user detection region ]
Here, with reference to fig. 2, a user detection area set for the image processing device 20 to detect users at the lobby side and the car side using images will be described. Fig. 2 is a diagram for explaining a user detection area in real space.
As shown in fig. 2, a hall position estimation area E1 is set as the hall-side user detection area in order to detect users in the hall 15, and an in-car position estimation area E2 is set as the car-side user detection area in order to detect users in the car 11. The user detection areas are defined in units of grid-like blocks of a predetermined size into which the image (original image) captured by the camera 12 is divided. The vertical and horizontal dimensions of a block may be equal or different. Furthermore, the blocks need not be uniform across the image; for example, the vertical length of a block (the Y-axis direction described later) may be made shorter toward the upper part of the image.
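As a rough illustration of this block division, the following Python sketch (not part of the patent; block sizes and function names are assumptions) computes the average luminance of each grid block of a grayscale frame, with edge blocks simply allowed to be smaller:

```python
import numpy as np

def block_average_luminance(gray: np.ndarray, bh: int = 16, bw: int = 16) -> np.ndarray:
    """Average luminance of each grid block; edge blocks may simply be smaller."""
    h, w = gray.shape
    rows = (h + bh - 1) // bh  # ceiling division
    cols = (w + bw - 1) // bw
    means = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            means[r, c] = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].mean()
    return means
```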
The hall position estimation area E1 is an area for estimating (detecting) the position of a part of the body of a user on the hall side, specifically the foot of the hall-side user nearest to the car door 13. The hall position estimation area E1 extends a distance L3 from the center of the car door 13 toward the hall (L3 does not exceed the hall-side imaging range L1). The lateral width W1 of the hall position estimation area E1 (at its widest) is set to a distance equal to or greater than the lateral width W0 of the car door 13.
The in-car position estimation area E2 is an area for estimating (detecting) the position of a part of the body of a user on the car side, specifically the foot of the car-side user nearest to the car door 13. The in-car position estimation area E2 extends a distance L4 from the center of the car door 13 into the car (L4 does not exceed the car-side imaging range L2). The lateral width W2 of the in-car position estimation area E2 is set to substantially the same distance as the lateral width W0 of the car door 13, or alternatively to a value larger than W0. Although fig. 2 illustrates the in-car position estimation area E2 as rectangular, this is not a limitation; for example, E2 may be trapezoidal, excluding the blind spot of the operation panel P provided in the car 11.
Fig. 3 is a diagram for explaining a coordinate system in a real space. As shown in fig. 3, the camera 12 captures an image with the direction horizontal to the car doors 13 provided at the doorway of the car 11 being the X axis, the direction from the center of the car doors 13 to the lobby 15 (the direction perpendicular to the car doors 13) being the Y axis, and the height direction of the car 11 being the Z axis. In each image captured by the camera 12, the movement of the foot position of the user closest to the car door 13 on the hall side is detected by comparing the blocks in which the hall position estimation area E1 shown in fig. 2 is set. Similarly, in each image captured by the camera 12, the movement of the foot position of the user closest to the car door 13 on the car side is detected by comparing the blocks in which the in-car position estimation region E2 shown in fig. 2 is set.
Fig. 4 and 5 are diagrams for explaining the movement of the foot position of the user detected by comparing the blocks in which the hall position estimation area E1 is set. Fig. 4 schematically shows a portion of an image taken at time t, and fig. 5 schematically shows a portion of an image taken at time t + 1.
P1 and P2 in the figures are image portions detected as having motion, and are in fact aggregates of blocks judged to have motion by image comparison. The user detection unit 22 extracts the moving block Bx nearest to the car door 13 within the image portions P1 and P2, and tracks the trajectory of the Y coordinate value of the block Bx to determine the presence or absence of a remaining person, as described in detail later. The trajectory of the Y coordinate value is tracked using the equidistant lines shown by broken lines in figs. 4 and 5 (equally spaced horizontal lines parallel to the car door 13 along the Y axis), by which the distance between the block Bx and the car door 13 in the Y-axis direction can be grasped.
In the example of figs. 4 and 5, the detected position (Y coordinate value) of the moving block Bx nearest to the car door 13 changes from Yn to Yn-1, showing that the hall-side user is approaching the car door 13. Although the hall position estimation area E1 has been used as the example here, the motion of the foot position of a car-side user is detected in the same manner in the in-car position estimation area E2.
[ operation of user detection System ]
Next, an example of the operation of the present system will be described in detail with reference to the flowchart of fig. 6. Fig. 6 is a flowchart showing the flow of the overall process in the present system.
When the car 11 arrives at the waiting hall 15 at any floor (yes in step S1), the elevator control device 30 opens the car door 13 and waits for a user to get in the car 11 (step S2).
At this time, the camera 12 provided at the upper part of the doorway of the car 11 captures an image of a predetermined range (L1) on the lobby side and a predetermined range (L2) on the car side at a predetermined frame rate (e.g., 30 frames/second). The image processing device 20 acquires images captured by the camera 12 in time series, sequentially stores the images in the storage unit 21 (step S3), and executes user detection processing as follows in real time (step S4).
The user detection process is executed by the user detection unit 22 provided in the image processing device 20, and is divided into motion detection processing (step S4a), position estimation processing (step S4b), and remaining person estimation processing (step S4c). Each process is described in detail below.
(a) Motion detection processing
Fig. 7 is a flowchart showing an example of the procedure of the motion detection processing in step S4a. This processing is executed by the motion detection unit 22a, one of the components of the user detection unit 22.
First, the motion detection unit 22a reads the images stored in the storage unit 21 one by one and calculates the average luminance value of each block in which the hall position estimation area E1 is set (step Sa1). The average luminance values per block calculated when the first image is input are held, as initial values, in the buffer area of the storage unit 21 (step Sa2).
For the second and subsequent images, the motion detection unit 22a compares the average luminance value of each block of the current image with the average luminance value of the corresponding block of the preceding image held in the buffer area (step Sa3). If the current image contains a block whose luminance difference is equal to or greater than a preset value, the motion detection unit 22a judges that block to be a moving block (step Sa4).
Once the presence or absence of motion has been determined for the current image, the motion detection unit 22a holds that image's per-block average luminance values in the buffer area for comparison with the next image (step Sa5).
The motion detection unit 22a thus repeats, in time-series order, the operation of comparing the luminance values of the images captured by the camera 12 block by block and determining the presence or absence of motion.
Note that, although the hall position estimation area E1 is described as an example, the presence or absence of movement is determined in the same manner in the car position estimation area E2.
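The comparison of steps Sa3 and Sa4 reduces to a per-block threshold test. A minimal sketch, assuming per-block means as produced by the earlier `block_average_luminance` helper and an assumed luminance threshold:

```python
import numpy as np

def moving_blocks(prev_means: np.ndarray, curr_means: np.ndarray,
                  threshold: float = 10.0) -> np.ndarray:
    """Steps Sa3-Sa4: a block 'has motion' when its average luminance differs
    from the previous frame's by at least a preset value (threshold)."""
    return np.abs(curr_means.astype(float) - prev_means.astype(float)) >= threshold
```

The current frame's per-block means would then overwrite the buffered ones (step Sa5), so each comparison always runs against the immediately preceding image.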
(b) Position estimation processing
Fig. 8 is a flowchart showing an example of the procedure of the position estimation processing in step S4b. This processing is executed by the position estimation unit 22b, one of the components of the user detection unit 22.
The position estimation unit 22b checks the moving blocks in the current image based on the detection result of the motion detection unit 22a (step Sb1), and extracts, from among them, the block nearest to the car door 13 (step Sb2).
As described above, the camera 12 is installed above the doorway of the car 11 with its lens directed toward the hall. Therefore, when a user walks from the hall 15 toward the car door 13, the user's right or left foot (toe) is highly likely to appear in the captured image nearest to the car door 13. The position estimation unit 22b therefore takes the Y coordinate value of the moving block nearest to the car door 13 as data on the user's foot position, and holds it in the buffer area of the storage unit 21 (step Sb3).
In the same way, the position estimation unit 22b obtains, for each image, the Y coordinate value of the moving block nearest to the car door 13 as foot-position data and holds it in the buffer area.
Note that, although the hall position estimation area E1 is described as an example, the Y coordinate value of the block having motion closest to the car door 13 is obtained as the data of the foot position of the user in the car position estimation area E2 in the same manner. In this case, the obtained data of the foot position of the user is output to the elevator control device 30 as motion detection result information as appropriate.
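Steps Sb1 to Sb3 reduce the motion mask to a single foot-position value per frame. A minimal sketch, assuming each block's real-space Y coordinate (its distance from the car door along the equidistant lines of figs. 4 and 5) has been precomputed:

```python
import numpy as np

def nearest_moving_y(motion_mask: np.ndarray, block_y: np.ndarray,
                     area_mask: np.ndarray) -> float | None:
    """Steps Sb1-Sb3: Y coordinate (distance from the car door) of the moving
    block nearest the door within the estimation area, or None if none moves."""
    ys = block_y[motion_mask & area_mask]
    return float(ys.min()) if ys.size else None
```

With `area_mask` selecting the blocks of E1 or E2, the same function serves both the hall side and the car side.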
(c) Remaining person estimation processing
Fig. 9 is a flowchart showing an example of the procedure of the remaining person estimation processing in step S4c. This processing is executed by the remaining person estimation unit 22c, one of the components of the user detection unit 22. It is executed, for example, once the number of foot-position data items held in the buffer area reaches a predetermined number.
The remaining person estimation unit 22c smooths the foot-position data of the images held in the buffer area (step Sc1). Any known smoothing method, such as a mean filter or a Kalman filter, may be used; a detailed description is omitted here.
After smoothing the foot-position data, if there is a data item whose change is equal to or larger than a predetermined value (yes at step Sc2), the remaining person estimation unit 22c removes it as an outlier (step Sc3). The predetermined value is determined from, for example, the standard walking speed of a user and the frame rate of the images. Alternatively, outliers may be found and removed before the foot-position data is smoothed.
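One possible form of steps Sc1 to Sc3, using a simple mean filter (window size and step bound are assumptions; the patent leaves the method open):

```python
import numpy as np

def smooth_and_reject(y: np.ndarray, window: int = 5,
                      max_step: float = 0.15) -> np.ndarray:
    """Sc1: mean-filter the foot-position series.  Sc2-Sc3: drop samples whose
    frame-to-frame change exceeds max_step, a bound derived from a standard
    walking speed and the frame rate (values here are assumptions)."""
    smoothed = np.convolve(y, np.ones(window) / window, mode="same")
    keep = np.ones(smoothed.size, dtype=bool)
    keep[1:] = np.abs(np.diff(smoothed)) <= max_step
    return smoothed[keep]
```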
Next, the remaining person estimation unit 22c examines the time-series change (pattern) of the hall-side Y coordinate value, that is, the Y coordinate value of the moving block nearest to the car door 13 on the hall side, leading up to its current value (step Sc4).
The remaining person estimation unit 22c then determines whether the pattern of the hall-side Y coordinate value matches a 1st pattern, in which the value gradually decreases, then stops changing, and is currently in a stopped state (step Sc5).
If the pattern of the hall-side Y coordinate value matches the 1st pattern (yes at step Sc5), the remaining person estimation unit 22c calculates the elapsed time since the hall-side Y coordinate value entered the stopped state, and determines whether the calculated elapsed time is equal to or longer than a predetermined time (step Sc6). If the calculated elapsed time is less than the predetermined time (no at step Sc6), the processing returns to step Sc4 and is executed again so as to include the hall-side Y coordinate values detected from at least one image captured from this point onward.
If the calculated elapsed time is equal to or longer than the predetermined time (yes at step Sc6), the remaining person estimation unit 22c determines that a remaining person is present (step Sc7), outputs a 1st result signal notifying the presence of a remaining person to the elevator control device 30, and ends the processing.
On the other hand, if the pattern of the hall-side Y coordinate value does not match the 1st pattern (no at step Sc5), the remaining person estimation unit 22c determines whether the current hall-side Y coordinate value has changed so as to increase sharply relative to the preceding value (that is, whether the user has moved backwards) (step Sc8).
If the current hall-side Y coordinate value has increased sharply (yes at step Sc8), the processing returns to step Sc4 and is executed again so as to include the hall-side Y coordinate values detected from at least one image captured from this point onward.
If the hall-side Y coordinate value has not increased sharply, that is, it has continued to decrease (no at step Sc8), the remaining person estimation unit 22c determines that there is no remaining person (step Sc9), outputs a 1st result signal notifying the absence of a remaining person to the elevator control device 30, and ends the processing.
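Steps Sc4 to Sc9 amount to a small state check over the recent hall-side Y values. A hedged sketch, with the stop tolerance and dwell length as assumed parameters:

```python
def remaining_person(y_hist: list[float], stop_eps: float = 0.05,
                     dwell: int = 90) -> bool | None:
    """Sketch of steps Sc4-Sc9 over hall-side Y values (metres from the door).
    True: remaining person (Sc7).  False: none (Sc9).  None: keep observing."""
    if len(y_hist) < 2:
        return None
    tail = y_hist[-dwell:]
    stopped_long = len(tail) == dwell and max(tail) - min(tail) < stop_eps
    approached = y_hist[0] - y_hist[-1] > stop_eps  # net movement toward door
    if stopped_long and approached:      # 1st pattern held long enough (Sc5-Sc6)
        return True
    if y_hist[-1] - y_hist[-2] > stop_eps:
        return None                      # sudden retreat: re-observe (Sc8 yes)
    if y_hist[-1] < y_hist[-2]:
        return False                     # still approaching: boarding (Sc8 no)
    return None                          # stopped, but dwell time not yet met
```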
Returning again to the description of the flowchart of fig. 6. When the image processing device 20 executes the processing of steps S4a to S4c to estimate the presence or absence of a remaining person, the elevator control device 30 then executes the in-car occupancy estimation processing to estimate the in-car occupancy (step S5). Next, the car occupancy estimation process will be described in detail with reference to fig. 10.
(d) In-car occupancy estimation processing
Fig. 10 is a flowchart showing an example of the procedure of the in-car occupancy estimation processing in step S5. The in-car occupancy rate estimation process is executed by the in-car occupancy rate estimation unit 31.
First, the in-car occupancy estimation unit 31 acquires, from the load sensor 16, load information indicating the current load in the car 11 (step Sd1).
The in-car occupancy estimation unit 31 then determines whether the car 11 is full, based on, for example, whether the current load indicated by the acquired load information is equal to or greater than a preset value (for example, n% of the rated load mass, where n is an arbitrary positive integer) (step Sd2).
If the current load is equal to or greater than the preset value and the car 11 is judged to be full (yes at step Sd2), the in-car occupancy estimation unit 31 executes the processing of step Sd8 described later.
If the current load is less than the preset value and the car 11 is judged not to be full (no at step Sd2), the in-car occupancy estimation unit 31 acquires the foot-position data (motion detection result information) for users in the in-car position estimation area E2 from the image processing device 20 (step Sd3).
In the present embodiment, the in-car occupancy estimation unit 31 acquires from the image processing device 20 the foot-position data for the in-car position estimation area E2 that the image processing device 20 has already computed. However, the in-car occupancy estimation unit 31 may instead compute this data itself: it may acquire the images captured so far from the image processing device 20 and perform, for users in the in-car position estimation area E2, the same processing as steps S4a and S4b described above.
Next, the in-car occupancy estimation unit 31 determines whether the foot position of a user in the in-car position estimation area E2 (i.e., the car-side Y coordinate value, the Y coordinate value of the moving block nearest to the car door 13 on the car side) has been extracted from the current (latest) image by the processing of step Sd3, in other words, whether a user is currently present in the in-car position estimation area E2 (step Sd4).
If it is determined that there is no user currently in the in-car position estimation region E2 as a result of the determination processing at step Sd4 (no at step Sd4), the in-car occupancy estimation unit 31 executes the processing at step Sd10 described later.
On the other hand, if it is determined that a user is currently present in the in-car position estimation area E2 (yes at step Sd4), the in-car occupancy estimation unit 31 examines the time-series change (pattern) of the car-side Y coordinate value leading up to its current value (step Sd5).
The in-car occupancy estimation unit 31 then determines whether the pattern of the car-side Y coordinate value matches a 2nd pattern, in which the value gradually increases, then stops changing, and is currently in a stopped state (step Sd6).
If the pattern of the car-side Y coordinate value matches the 2nd pattern (yes at step Sd6), the in-car occupancy estimation unit 31 calculates the elapsed time since the car-side Y coordinate value entered the stopped state, and determines whether the calculated elapsed time is equal to or longer than a preset time (step Sd7).
If the calculated elapsed time has not reached the preset time (no at step Sd7), the processing returns to step Sd3 and is executed again so as to include at least one image captured from this point onward.
If the calculated elapsed time is equal to or longer than the preset time (yes at step Sd7), the in-car occupancy estimation unit 31 determines that the in-car occupancy is high (step Sd8), outputs a 2nd result signal notifying that the in-car occupancy is high to the call control unit 32, and ends the processing.
If the pattern of the car-side Y coordinate value does not match the 2nd pattern (no at step Sd6), the in-car occupancy estimation unit 31 determines whether the current car-side Y coordinate value lies within a region near the sill inside the in-car position estimation area E2 (step Sd9). The region near the sill is set to extend a distance L5 (L5 ≤ L4) from the center of the car door 13 into the car, with a lateral width of W2.
If the current car-side Y coordinate value lies within the region near the sill (yes at step Sd9), the processing returns to step Sd3 and is executed again so as to include at least one image captured from this point onward.
If the current car-side Y coordinate value does not lie within the region near the sill (no at step Sd9), the in-car occupancy estimation unit 31 determines that the in-car occupancy is low (step Sd10), outputs a 2nd result signal notifying that the in-car occupancy is low to the call control unit 32, and ends the processing.
Of the processes included in the in-car occupancy estimation processing, the determination of whether the car-side Y coordinate value matches the 2nd pattern and the determination of whether the current car-side Y coordinate value lies within the region near the sill may instead be performed by the remaining person estimation unit 22c. In that case, the in-car occupancy estimation unit 31 may acquire those determination results from the remaining person estimation unit 22c and then judge whether the in-car occupancy is high or low based on them.
Processes corresponding to steps Sc1 to Sc3 described above may additionally be performed between steps Sd3 and Sd4. However, since the in-car position estimation area E2 is narrower than the hall position estimation area E1 and less susceptible to outliers, such processes may be omitted, and they are omitted from the flowchart of fig. 10.
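Putting steps Sd1 to Sd10 together, a hedged sketch follows; the load threshold, stop tolerance, dwell length, and the sill depth L5 are assumed parameters, and car_y follows the convention of fig. 12 described later (the value grows as the nearest passenger is pushed toward the door, which sits at L4):

```python
def occupancy_high(car_y: list[float], load: float, full_load: float,
                   L4: float = 0.5, L5: float = 0.3, stop_eps: float = 0.05,
                   dwell: int = 90) -> bool | None:
    """Sketch of steps Sd1-Sd10.  True: occupancy high.  False: low.
    None: keep observing.  Every threshold here is an assumption."""
    if load >= full_load:                    # Sd1-Sd2: load sensor says full
        return True
    if not car_y:                            # Sd4: nobody in area E2
        return False
    tail = car_y[-dwell:]
    stopped_long = len(tail) == dwell and max(tail) - min(tail) < stop_eps
    drifted = car_y[-1] - car_y[0] > stop_eps    # 2nd pattern: rose, then flat
    if drifted and stopped_long:             # Sd6 yes, Sd7 yes -> Sd8
        return True
    if drifted:                              # Sd7 no: flat spell still too short
        return None
    if car_y[-1] >= L4 - L5:                 # Sd9: still inside the sill region
        return None
    return False                             # Sd10: occupancy low
```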
Returning to the description of the flowchart of fig. 6. When the in-car occupancy estimation unit 31 has executed the processing of step S5 and output the 2nd result signal indicating the in-car occupancy, the call control unit 32 executes the next call registration determination processing (step S6) to determine whether to call a car (Machine B or C) other than the car (Machine A) corresponding to this elevator control device 30 to the floor at which Machine A is currently stopped (hereinafter, the "stop floor"), that is, whether to register a next call. The next call registration determination processing is described in detail below with reference to fig. 11.
(e) Next call registration determination processing
Fig. 11 is a flowchart showing an example of the procedure of the next call registration determination processing in step S6. This processing is executed by the next call registration determination unit 32a, a component of the call control unit 32.
First, the next call registration determination unit 32a determines whether there is a remaining person, based on the 1st result signal output from the image processing device 20 (step Se1). If it is determined that there is no remaining person (no at step Se1), the next call registration determination unit 32a executes the processing of step Se4 described later.
If it is determined that there is a remaining person (yes at step Se1), the next call registration determination unit 32a determines whether the in-car occupancy is high or low, based on the 2nd result signal output from the in-car occupancy estimation unit 31 (step Se2).
If the in-car occupancy is determined to be high (high at step Se2), the next call registration determination unit 32a outputs to the group management control device 40 an assignment instruction signal instructing that a hall call for the stop floor be assigned to a car other than the corresponding car (step Se3), and ends the processing.
If the in-car occupancy is determined to be low (low at step Se2), the next call registration determination unit 32a judges that registration of a next call is unnecessary and does not register a next call (step Se4), ending the processing.
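The decision table of fig. 11 is small enough to state directly; a sketch combining the two result signals (function name and return labels are illustrative, not from the patent):

```python
def next_call_action(has_remaining: bool, occupancy_is_high: bool) -> str:
    """Decision table of fig. 11 (return values are illustrative labels)."""
    if not has_remaining:              # Se1 no
        return "no_registration"       # Se4
    if occupancy_is_high:              # Se2 high
        return "assign_other_car"      # Se3: assignment instruction to device 40
    return "no_registration"           # Se2 low -> Se4
```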
Returning to the description of the flowchart of fig. 6. When the next call registration determination unit 32a has executed the processing of step S6 and an assignment instruction signal has been output to the group management control device 40 (yes at step S7), the car assignment control unit 41 in the group management control device 40 selects, in accordance with the assignment instruction signal, a suitable assigned car from among the cars other than the one corresponding to the elevator control device 30 that output the signal, and outputs a response instruction signal to the elevator control device 30 corresponding to the selected assigned car.
When the assignment instruction signal has been output to the group management control device 40 as described above, the elevator control device 30 notifies the remaining user that another car will arrive (step S8), and executes the processing of step S10 described later. The notification may be made by voice using a speaker provided in the hall 15, or by display using an indicator provided in the hall 15.
On the other hand, if the assignment instruction signal is not output to the group management controller 40 as a result of the processing at step S6 (no at step S7), the elevator control device 30 determines whether or not a predetermined time (for example, 1 minute) has elapsed since the car door 13 is fully opened (step S9). When it is determined that the predetermined time has not elapsed as a result of the determination processing in step S9 (no in step S9), the process returns to the processing in step S3, and the same processing is executed again.
Thereafter, if it is determined that the predetermined time has elapsed as a result of the determination processing at step S9 (yes at step S9), the elevator control device 30 closes the car door 13 (step S10) and starts the elevator to the destination floor.
Further, as described above, when another car has been assigned to the remaining user by the group management control device 40, the elevator control device 30 starts closing the car door 13 even if the predetermined time has not elapsed since the car door 13 became fully open. This prevents the full car 11 from waiting needlessly at the floor, and thus improves operating efficiency (group management efficiency).
Here, with reference to fig. 12, the characteristics of the change patterns of the hall-side and car-side Y coordinate values are explained supplementarily. The ordinate of fig. 12 represents the Y coordinate value (position), and the abscissa represents time. The Y coordinate value "0" corresponds to the end of the in-car position estimation area E2 farthest from the car door 13, "L4" to the boundary between the hall position estimation area E1 and the in-car position estimation area E2, and "L3" to the end of the hall position estimation area E1 farthest from the car door 13.
Focusing on the hall position estimation area E1: as shown in fig. 12, when the hall-side user moves from the hall 15 toward the car door 13, that is, approaches the car door 13, the Y coordinate value (hall-side Y coordinate value) gradually decreases. Conversely, when the hall-side user moves from the car door 13 toward the hall 15, that is, away from the car door 13, the hall-side Y coordinate value gradually increases. When the hall-side user is stopped, the hall-side Y coordinate value remains unchanged at its previous value. In fig. 12, the elapsed time from the moment the hall-side Y coordinate value enters the stopped state is denoted Ths(N), where N is the ordinal number of the stopped state. Thus, in fig. 12, the elapsed time from the first stopped state is Ths(N), and the elapsed time from the second stopped state is Ths(N+1).
Accordingly, when looking at the hall position estimation area E1 in fig. 12, it is known that the user located on the hall side moves (acts) as follows.
First, from time T1 to time T2, the hall-side user approaches the car door 13 such that the hall-side Y coordinate value changes from "L3" to "Y1". The user then stands still at the position with hall-side Y coordinate value "Y1" until the time Ths(N) elapses. After Ths(N) elapses, from time T3 to time T4, the user moves away from the car door 13 such that the hall-side Y coordinate value changes from "Y1" to "Y2". The user then stands still at the position with hall-side Y coordinate value "Y2" until the time Ths(N+1) elapses. That is, this change pattern of the Y coordinate value strongly suggests the following: the user moved toward the car door 13 intending to board the car 11, found at position "Y1" that the car 11 was full, and then retreated a few steps to position "Y2" to wait for another car to arrive.
Next, consider the in-car position estimation area E2. As shown in fig. 12, when the car-side user moves away from the car door 13 deeper into the car 11, the Y coordinate value (car-side Y coordinate value) gradually decreases. Conversely, when the car-side user moves from inside the car 11 toward the car door 13, that is, toward the exit of the car 11, the car-side Y coordinate value gradually increases. When the car-side user is stopped, the car-side Y coordinate value remains unchanged at its previous value. In fig. 12, the elapsed time from the moment the car-side Y coordinate value enters the stopped state is denoted Tcs(N), where, as above, N is the ordinal number of the stopped state. Note that, as shown in fig. 12, when there is almost no difference between the current and previous Y coordinate values, they are regarded as the same value.
Accordingly, when looking at the in-car position estimation area E2 of fig. 12, it is known that the following situation occurs in the car 11.
As shown in fig. 12, from time T1 to time T2, the car-side Y coordinate value changes from "0" to "Y3", approaching the car door 13. When the car-side Y coordinate value gradually increases in this way, it is highly likely not that a single user is moving toward the car door 13, but that many users are boarding the car 11 one after another, gradually filling it, so that each later-boarding user can stand no deeper inside the car 11 and the car-side Y coordinate value therefore gradually increases. In other words, this change pattern of the Y coordinate value suggests that each newly boarding user comes to stand slightly nearer the door than the one before, repeatedly, and that after time T2 the car-side Y coordinate value remains stopped at "Y3" until the time Tcs(N) elapses.
The remaining person estimation processing by the remaining person estimation unit 22c and the in-car occupancy estimation processing by the in-car occupancy estimation unit 31 use these characteristic change patterns of the foot position to determine whether the 1st and 2nd patterns are matched.
In fig. 12, the broken line represents the time-series change in position of a moving body such as a wheelchair, and the solid line that of a walking user. For a walking user, the left and right feet are detected alternately, so the data traces a wavy curve like the solid line.
According to the embodiment described above, the user detection system determines the presence or absence of a remaining person based on the time-series change in the position of the hall-side user, determines the in-car occupancy based on the time-series change in the position of the car-side user, and can automatically register a next call based on these determination results. With this configuration, a remaining user need not wait for the full car to depart for its destination floor and then press the hall call button to register a hall call again. That is, the user detection system of the present embodiment can prevent an increase in the waiting time of a user who could not board the elevator.
< modification example >
Next, a modified example of the above embodiment will be explained.
In general, when the destination direction of a car 11 that has arrived at the hall 15 is opposite to the direction a user wishes to travel, the user does not board the car 11. However, the destination direction of the car 11 often cannot be discerned until the user comes close to the car door 13. In such a case, the user may behave as follows: approach the car door 13 intending to board, learn that the destination direction of the car 11 is opposite to the desired direction, and then remain standing in front of the car door 13. This behavior (hereinafter, "unintended behavior") matches the 1st pattern.
Because the unintended behavior matches the 1st pattern, the remaining person estimation unit 22c may determine that a remaining person exists even when, in reality, there is none. This can affect the next call registration determination processing by the next call registration determination unit 32a.
Specifically, for a user exhibiting the unintended behavior, a hall call in the opposite direction ought ideally to be registered automatically. However, if the occupancy of the car 11 arriving at the hall 15 is low, the system determines that a remaining person exists but that the in-car occupancy is low, and therefore the next call is not registered. Moreover, the next call registration determination processing of fig. 11 can register only a next call in the same direction as the destination direction of the car 11 arriving at the hall 15, so a next call in the opposite direction cannot be registered at all.
In the present modification, a next call registration determination process capable of eliminating the above-described problem will be described.
Fig. 13 is a flowchart showing an example of the procedure of the next call registration determination process of the present modification. Note that the same processing as in fig. 11 will be described with the same reference numerals.
As described above, the next call registration determination unit 32a determines whether there is a remaining person, based on the 1st result signal from the image processing device 20 (step Se1). If it is determined that there is no remaining person (no at step Se1), the next call registration determination unit 32a executes the processing of step Se4.
If it is determined that there is a remaining person (yes at step Se1), the next call registration determination unit 32a determines whether the in-car occupancy is high or low, based on the 2nd result signal from the in-car occupancy estimation unit 31 (step Se2). If the in-car occupancy is determined to be high (high at step Se2), the next call registration determination unit 32a outputs to the group management control device 40 an assignment instruction signal instructing that a hall call in the same destination direction as the car 11, for the stop floor of the car 11, be assigned to another car (step Se3), and ends the processing.
If the in-car occupancy is determined to be low (low at step Se2), the next call registration determination unit 32a determines whether a hall call in the destination direction opposite to that of the car 11 has already been registered for the stop floor (step Se11). If a hall call in the opposite direction has already been registered (yes at step Se11), the next call registration determination unit 32a judges that registration of a next call is unnecessary and ends the processing without registering one (step Se4).
If a hall call in the opposite direction has not been registered (no at step Se11), the next call registration determination unit 32a outputs to the group management control device 40 an assignment instruction signal instructing that a hall call in the destination direction opposite to that of the car 11, for the stop floor, be assigned to another car (step Se12), and ends the processing.
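The modified flow adds the opposite-direction branch to the earlier decision table; again a hedged sketch with illustrative names:

```python
def next_call_action_mod(has_remaining: bool, occupancy_is_high: bool,
                         opposite_call_exists: bool) -> str:
    """Decision table of fig. 13: a remaining person in front of a non-full car
    is treated as waiting for the opposite travel direction."""
    if not has_remaining:                       # Se1 no
        return "no_registration"                # Se4
    if occupancy_is_high:                       # Se2 high
        return "assign_same_direction_car"      # Se3
    if opposite_call_exists:                    # Se11 yes
        return "no_registration"                # Se4
    return "assign_opposite_direction_car"      # Se12
```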
According to the present modification described above, even when the unexpected behavior matches the 1st pattern, the remaining person estimation unit 22c determines that there is a remaining person, and the in-car occupancy of the car 11 is low, a hall call in the direction opposite to the destination direction of the car 11 can still be registered automatically, provided that no such opposite-direction hall call has already been registered for the stop floor of the car 11. The above-described problems can therefore be eliminated. That is, registration of a next call that reflects the intention of the user who took the unexpected action can be realized.
Although several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. Such embodiments and modifications thereof fall within the scope and gist of the invention, and are covered by the invention described in the claims and their equivalents.

Claims (7)

1. A user detection system including a group management control device that performs group management of a plurality of cars in operation, the user detection system comprising:
an imaging unit capable of imaging a predetermined range near a door of a predetermined car when the predetermined car arrives at a waiting hall;
a 1st determination unit that detects the position of a user on the hall side using a plurality of the captured images that are consecutive in time series, and determines the presence or absence of a user who cannot board the predetermined car based on whether or not a time-series change in the detected position of the hall-side user matches a 1st pattern in which the user approaches the predetermined car from the hall and then stops immediately in front of it;
a 2nd determination unit that detects the position of a user on the car side using the plurality of captured images that are consecutive in time series, and determines the level of occupancy of persons and objects with respect to the loading capacity of the predetermined car based on whether or not a time-series change in the detected position of the car-side user matches a 2nd pattern in which the position gradually changes so as to approach the door of the predetermined car and then no longer changes; and
a control unit that determines, based on the results of the determinations by the 1st determination unit and the 2nd determination unit, whether or not to register a next call for a car other than the predetermined car among the plurality of cars group-managed by the group management control device, and registers the next call in accordance with the result of the determination.
2. The user detection system according to claim 1,
the 1 st determination unit determines that there is a user who cannot ride the predetermined car when the time-series change in the position of the detected user on the lobby side corresponds to the 1 st pattern.
3. The user detection system according to claim 1,
the 1 st pattern contains the following time-series variations: the car is kept in a state of being stopped in front of the car, or is moved so as to be separated from the car after being stopped in front of the car.
4. The user detection system according to claim 1,
the 2 nd determination unit determines that the occupancy is high when the time-series change in the position of the detected car-side user matches the 2 nd pattern.
5. The user detection system according to claim 1,
further comprising a load sensor capable of measuring the current load of the predetermined car,
wherein, when the current load measured by the load sensor is equal to or greater than a predetermined value, the control unit registers the next call regardless of the results of the determinations by the 1st and 2nd determination units.
6. The user detection system according to claim 1,
when the next call is registered, the control unit controls the operation of the predetermined car so as to close the door of the car and start the car toward its destination floor.
7. The user detection system according to claim 1,
when the 1 st determination unit determines that there is a user who cannot ride on the predetermined car and the 2 nd determination unit determines that the occupancy is low, the control unit determines whether a hall call in a direction opposite to the destination direction of the predetermined car has been registered, and when it is determined that the hall call has not been registered as a result of the determination, registers a next call in a direction opposite to the destination direction of the predetermined car.
CN201811562581.2A 2018-03-22 2018-12-20 User detection system Active CN110294371B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018054948A JP6567719B1 (en) 2018-03-22 2018-03-22 User detection system
JP2018-054948 2018-03-22

Publications (2)

Publication Number Publication Date
CN110294371A (en) 2019-10-01
CN110294371B (en) 2021-11-05

Family

ID=67766679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811562581.2A Active CN110294371B (en) 2018-03-22 2018-12-20 User detection system

Country Status (2)

Country Link
JP (1) JP6567719B1 (en)
CN (1) CN110294371B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6833959B1 (en) * 2019-12-09 2021-02-24 東芝エレベータ株式会社 Elevator control device and elevator control method
JP7243864B2 (en) * 2020-01-10 2023-03-22 三菱電機株式会社 elevator system
JP7155201B2 (en) * 2020-07-09 2022-10-18 東芝エレベータ株式会社 Elevator user detection system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821272A (en) * 2012-08-16 2012-12-12 安徽中科智能高技术有限责任公司 Video monitoring system with elevator invalid request signal removing function
CN103287939A (en) * 2012-02-24 2013-09-11 东芝电梯株式会社 Apparatus for measuring number of people in elevator, elevator having the apparatus, and elevator system including a plurality of elevators with the apparatus
CN103287931A (en) * 2012-02-24 2013-09-11 东芝电梯株式会社 Elevator system
CN104724557A (en) * 2015-02-15 2015-06-24 英华达(上海)科技有限公司 Elevator load carrying judgment system and method thereof
CN105819289A (en) * 2016-03-31 2016-08-03 乐视控股(北京)有限公司 Elevator control device and method
CN106809708A (en) * 2015-12-01 2017-06-09 东芝电梯株式会社 The cluster management system of elevator
CN106966277A (en) * 2016-01-13 2017-07-21 东芝电梯株式会社 The seating detecting system of elevator
CN107235388A (en) * 2017-07-14 2017-10-10 广州日滨科技发展有限公司 Elevator control method and system
JP6246305B1 (en) * 2016-12-07 2017-12-13 東芝エレベータ株式会社 Elevator security system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101335225B1 (en) * 2010-03-18 2013-11-29 미쓰비시덴키 가부시키가이샤 Guide device for elevator
JP5518915B2 (en) * 2012-02-20 2014-06-11 東芝エレベータ株式会社 Elevator car movement control device and car movement control method
JP2016037366A (en) * 2014-08-08 2016-03-22 三菱電機株式会社 Call registration system for elevator
JP2017052578A (en) * 2015-09-07 2017-03-16 株式会社日立ビルシステム Boarding-off situation prediction presentation method at arrival of car for elevator, and device
JP6499092B2 (en) * 2016-01-12 2019-04-10 株式会社日立ビルシステム Elevator load detection adjustment device and elevator
JP6068694B1 (en) * 2016-01-13 2017-01-25 東芝エレベータ株式会社 Elevator boarding detection system
JP6092433B1 (en) * 2016-01-13 2017-03-08 東芝エレベータ株式会社 Elevator boarding detection system

Also Published As

Publication number Publication date
CN110294371A (en) 2019-10-01
JP2019167186A (en) 2019-10-03
JP6567719B1 (en) 2019-08-28

Similar Documents

Publication Publication Date Title
CN108622776B (en) Elevator riding detection system
US10196241B2 (en) Elevator system
CN110294371B (en) User detection system
CN108622777B (en) Elevator riding detection system
JP6068694B1 (en) Elevator boarding detection system
JP6215286B2 (en) Elevator group management system
JP5969147B1 (en) Elevator boarding detection system
CN108622778B (en) Elevator system
JP2008120548A (en) Control device of elevator
CN103663068A (en) Elevator door system and elevator having elevator door system
JP6487082B1 (en) Wheelchair / elevator control system and control method
JP6416326B1 (en) Elevator system and elevator control method
JP2014131932A (en) Elevator system
CN109867180B (en) Elevator system
JP2000026034A (en) Operating device of elevator
JP2018030676A (en) Elevator control system
JP6918897B2 (en) Elevator fullness detection system
JP2018090351A (en) Elevator system
JP2017165541A (en) Image processing apparatus
JP2005255404A (en) Elevator control device
JP7230114B2 (en) Elevator user detection system
JP6271776B1 (en) Elevator boarding detection system
CN113428752B (en) User detection system for elevator
JP6607334B1 (en) Elevator landing guidance device
JP2010163271A (en) Door opening/closing control device for elevator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant