CN111377313A - Elevator system - Google Patents

Elevator system

Info

Publication number
CN111377313A
CN111377313A (application CN201911134281.9A)
Authority
CN
China
Prior art keywords
unit
behavior
elevator
learning
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911134281.9A
Other languages
Chinese (zh)
Other versions
CN111377313B (en)
Inventor
纳谷英光
星野孝道
鸟谷部训
羽鸟贵大
前原知明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of CN111377313A publication Critical patent/CN111377313A/en
Application granted granted Critical
Publication of CN111377313B publication Critical patent/CN111377313B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00: Control systems of elevators in general
    • B66B 1/02: Control systems without regulation, i.e. without retroactive action
    • B66B 1/06: Control systems without regulation, i.e. without retroactive action, electric
    • B66B 1/14: Control systems without regulation, i.e. without retroactive action, electric, with devices, e.g. push-buttons, for indirect control of movements
    • B66B 1/18: Control systems without regulation, i.e. without retroactive action, electric, with devices, e.g. push-buttons, for indirect control of movements, with means for storing pulses controlling the movements of several cars or cages
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00: Control systems of elevators in general
    • B66B 1/34: Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B 1/3415: Control system configuration and the data transmission or communication within the control system
    • B66B 1/3446: Data transmission or communication within the control system
    • B66B 1/3461: Data transmission or communication within the control system between the elevator control system and remote or mobile stations
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 2201/00: Aspects of control systems of elevators
    • B66B 2201/40: Details of the change of control mode
    • B66B 2201/402: Details of the change of control mode by historical, statistical or predicted traffic data, e.g. by learning
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 50/00: Energy efficient technologies in elevators, escalators and moving walkways, e.g. energy saving or recuperation technologies

Abstract

The invention provides an elevator system. Conventionally, even when graphic trajectories representing global behavior are used, the behavior of a person cannot be estimated accurately, and the operation of an elevator may be hindered. An elevator system (1) is provided with: a sensor (30); a recognition unit (40) that recognizes an object based on measurement data; and an estimation unit (70) that estimates the next behavior of the object using the elevator in the current measurement area, based on the actual behavior of the object currently recognized by the recognition unit (40) and a learning result (52) indicating a behavior pattern of the object learned from past measurement data. The elevator system (1) is further provided with: a determination unit (80) that determines whether the actual behavior of the object matches or does not match the next behavior of the object estimated by the estimation unit (70), and outputs the determination result; a specifying unit (90) that specifies the next behavior of the object based on the determination result; and an allocation unit (100) that allocates a service to the elevator based on the specified next behavior of the object.

Description

Elevator system
Technical Field
The present invention relates to elevator systems.
Background
In order to efficiently control the operation of a plurality of elevators, it is necessary to grasp the presence of passengers in advance and assign elevators to operate in accordance with the presence of passengers. In order to realize the above operation, a technique for predicting the presence of a passenger, that is, the movement of a person is required. As a conventional technique for predicting human movement, for example, a technique described in patent document 1 is known.
Patent document 1 describes that "by detecting a movement trajectory of a human, a global action to be performed by the human can be predicted, and thus recommendation information corresponding to the global action can be provided".
Prior art documents
Patent document
Patent document 1: Japanese Patent Laid-Open No. 2010-231470
However, the technique described in Patent document 1 predicts the global behavior of a person by determining the similarity between the person's movement trajectory and graphic trajectories, each of which represents a global behavior. When no prepared graphic trajectory is sufficiently similar to the measured movement trajectory, the behavior of the person cannot be predicted.
In addition, it is difficult to prepare graphic trajectories representing global behavior that can cover the movements of a large number of people. For example, paragraph 0080 of Patent document 1 reports an experimental prediction accuracy of 80% or more, but such accuracy is not sufficient to predict human behavior reliably. If the behavior of a person is predicted incorrectly, the car of the elevator to which a service is assigned may, for example, pick up only a few people, or conversely attract so many people that some of them cannot board. When the estimated behavior differs from the actual behavior of the person in this way, the operation of the elevator to which the service is allocated may be hindered.
Disclosure of Invention
Problems to be solved by the invention
The present invention has been made in view of the above circumstances, and an object thereof is to estimate the next behavior of an object that will use an elevator so that the elevator can be operated smoothly.
Means for solving the problems
The present invention is an elevator system for controlling the operation of an elevator, wherein the elevator system comprises: a measurement unit that measures an object in a measurement area and outputs measurement data; a recognition unit that recognizes the object based on the measurement data; an estimation unit that estimates the next behavior of the object using the elevator in the current measurement area, based on the actual behavior of the current object recognized by the recognition unit and a learning result indicating a behavior pattern of the object learned from past measurement data; a determination unit that determines whether the actual behavior of the object matches or does not match the next behavior of the object estimated by the estimation unit, and outputs a determination result; a specifying unit that specifies the next behavior of the object based on the determination result; and an allocation unit that allocates a service to the elevator based on the next behavior of the object specified by the specifying unit.
Effects of the invention
According to the present invention, the next behavior of the object using the elevator in the current measurement area is estimated and specified, and the service is allocated to the elevator based on that behavior, so the elevator can be operated smoothly.
Problems, structures, and effects other than those described above will be apparent from the following description of the embodiments.
Drawings
Fig. 1 is a block diagram showing a configuration example of an elevator system according to a first embodiment of the present invention.
Fig. 2 is a block diagram showing a configuration example of a computer according to a first embodiment of the present invention.
Fig. 3 is an explanatory diagram showing an example of behavior of a person according to the first embodiment of the present invention.
Fig. 4 is a flowchart showing an example of a series of processes for learning the behavior of a human in an elevator system according to a first embodiment of the present invention.
Fig. 5 is a flowchart showing an example of a series of processes for estimating the behavior of a person in an elevator system according to the first embodiment of the present invention.
Fig. 6 is an explanatory diagram showing an example of learning results stored in time series according to the first embodiment of the present invention.
Fig. 7 is a flowchart showing an example of a series of processes for relearning the behavior of an unpredictable person in the elevator system according to the first embodiment of the present invention.
Fig. 8 is a flowchart showing an example of a series of processing for predicting the arrival time of a person at an elevator in the elevator system according to the first embodiment of the present invention.
Fig. 9 is a flowchart showing an example of a series of processing for predicting the arrival of a person at an elevator in the elevator system according to the first embodiment of the present invention.
Fig. 10 is a flowchart showing an example of a series of processes for controlling an elevator based on an arrival prediction time in the elevator system according to the first embodiment of the present invention.
Fig. 11 is a block diagram showing a configuration example of an elevator system according to a second embodiment of the present invention.
Fig. 12 is a block diagram showing a configuration example of an elevator system according to a third embodiment of the present invention.
Fig. 13 is a block diagram showing a configuration example of an elevator system according to a fourth embodiment of the present invention.
Fig. 14 is an explanatory diagram showing an example of a data format of a learning result according to a fifth embodiment of the present invention.
Description of reference numerals:
1 … elevator system; 2 … management controller; 4 … edge controller; 5 … smart sensor; 6 … host system; 10 … elevator; 20 … communication network; 30 … sensor; 40 … recognition unit; 50 … learning unit; 52 … learning result; 60 … storage unit; 70 … estimation unit; 80 … determination unit; 90 … specifying unit; 91 … time-series data; 92 … attribute information; 93 … database; 100 … allocation unit.
Detailed Description
Hereinafter, specific embodiments of the present invention will be described with reference to the drawings. In the present specification and the drawings, components having substantially the same function or configuration are denoted by the same reference numerals, and redundant description thereof is omitted.
[ first embodiment ]
< example of Structure of Elevator System >
First, a description will be given of a configuration example of an elevator system according to a first embodiment of the present invention.
Fig. 1 is a block diagram showing a configuration example of an elevator system 1 according to a first embodiment.
The elevator system 1 is installed in a building 16 and includes a management controller 2, a sensor 30, and an elevator controller 11 included in each elevator 10. A plurality of elevators 10 are installed in the building 16, and the elevator controller 11 of each elevator 10 is connected to a communication network 20, as is the management controller 2.
The sensor 30 is installed at a predetermined position in the building 16 to measure the behavior and position of objects in the building 16, and outputs measurement data when it measures an object. The objects include obstacles in addition to people. The sensor 30 may be of any type as long as it can measure the position and behavior of an object; examples include an image sensor such as a camera (an example of an imaging unit), a millimeter-wave sensor, a depth sensor, and a LiDAR (Light Detection and Ranging) device. The sensor 30 is directly connected to the management controller 2. If the sensor 30 is a camera, it outputs the image data obtained by imaging the object to the management controller 2 as measurement data. An existing monitoring camera already installed in the building 16 may also be used as the sensor 30; such a monitoring camera may be connected to a building management network (not shown) provided separately from the communication network 20 for the elevator controllers.
The elevator 10 includes a hoisting machine 12 and a car 14 in addition to an elevator controller 11. The elevator controller 11 controls the hoisting machine 12 to control the operation (lifting, door opening and closing, etc.) of the car 14. Various information can be exchanged between the elevator controller 11 of a certain elevator 10 and the elevator controllers 11 of other elevators 10 installed in the same building 16 via the communication network 20.
Information on the call buttons installed at each floor, the traveling-direction lamps of the elevators, the position lamps of the cars 14, the car doors, and all other elevator-related information flows through the communication network 20. Such information is referred to as "elevator information".
The management controller 2 controls which floors the plurality of elevators 10 are assigned to serve. The management controller 2 includes a recognition unit 40, a learning unit 50, a storage unit 60, an estimation unit 70, a determination unit 80, a specifying unit 90, and an allocation unit 100.
The recognition unit 40 recognizes the object measured by the sensor 30 based on the data input from the sensor 30. Here, the recognition unit 40 can recognize the behavior of a person and the position of a structure. If the sensor 30 is a camera, the recognition unit 40 can recognize an object that moves with the passage of time based on captured data output as measurement data.
The learning unit 50 learns the behavior and position of the object based on the behavior of the person and the position of structures recognized by the recognition unit 40, and on the next behavior of the object specified by the specifying unit 90. When the determination unit 80 outputs a determination result of mismatch, the learning unit 50 learns the behavior pattern of the object from its actual behavior and updates the learning result 52. The learning result 52 is data that continuously represents the behaviors of objects specified by the specifying unit 90 in the past, and represents a human behavior pattern at a certain time. Examples of methods used by the learning unit 50 include machine learning and deep learning; when machine learning or deep learning is used, the learning result 52 takes the form of a trained network.
The storage unit 60 stores the learning result 52 generated by the learning unit 50. In the figure, the storage unit 60 is drawn separately from the learning result 52. A large-capacity storage device such as an HDD is used as the storage unit 60, for example. Upon receiving a request from another functional unit, the storage unit 60 reads the specified entry from the learning result 52 or updates the learning result 52.
The estimation unit 70 estimates the next behavior of the object using the elevator 10 in the current measurement area 110 (see fig. 3 described later), based on the actual behavior of the current object recognized by the recognition unit 40 and the learning result 52 indicating the behavior pattern of the object learned from past measurement data. Here, the next behavior of the object may be, for example, the behavior a few seconds ahead of the current point in a sequence of behaviors sampled every few seconds, or it may be a discontinuous behavior; any behavior at a future time, viewed from the time when the sensor 30 measured the object, can be treated as the next behavior. The estimation unit 70 estimates the next behavior of the object based on the learning result 52 read from the storage unit 60 and on the behavior of the person and the position of structures recognized by the recognition unit 40. The next behavior of the object specified by the specifying unit 90 is also input to the estimation unit 70.
The determination unit 80 determines whether the actual behavior of the object matches or does not match the next behavior of the object estimated by the estimation unit 70, and outputs the determination result.
The specifying unit 90 specifies the next behavior of the object based on the determination result of the determination unit 80. For example, when the determination unit 80 determines that the actual behavior of the object, namely moving toward the lobby of the elevator 10, is similar to the next behavior estimated by the estimation unit 70, the specifying unit 90 specifies that the object will board the car 14 of the elevator 10. The specifying unit 90 can also specify that the object will board the car 14 of the elevator 10 when the behavior determined to be similar, that is, moving toward the lobby, continues under an arbitrary condition; this condition will be described with reference to fig. 9. The behavior of the object specified by the specifying unit 90 is output to the learning unit 50 and the estimation unit 70.
The allocation unit 100 allocates a service to the elevator 10 based on the next behavior of the object specified by the specifying unit 90. Allocating a service means, for example, assigning a car number of the elevator 10 and registering a call of the car 14 so that the car 14 arrives at the floor where a person who wants to use the elevator 10 is located. The allocation unit 100 allocates a service to the elevator 10 whose car 14 can carry the object specified by the specifying unit 90, and controls the opening and closing of the doors of the car 14 that has arrived at the lobby. The behavior of the object estimated by the estimation unit 70 is also input to the allocation unit 100.
Next, the hardware configuration of the management controller 2 of the elevator system 1 and the computer 3 constituting the elevator controller 11 will be described.
Fig. 2 is a block diagram showing an example of the hardware configuration of the computer 3.
The computer 3 is hardware used as a so-called computer. The computer 3 includes a CPU (Central Processing Unit) 31, a ROM (Read Only Memory) 32, and a RAM (Random Access Memory) 33, which are connected to a bus 34. The computer 3 further includes a display device 35, an input device 36, a nonvolatile memory 37, and a network interface 38.
The CPU 31 reads the program code of the software that realizes the functions of the present embodiment from the ROM 32 and executes it. Variables, parameters, and the like generated during the arithmetic processing of the CPU 31 are temporarily written to the RAM 33. The functions of the respective units of the present embodiment (the recognition unit 40, the learning unit 50, the storage unit 60, the estimation unit 70, the determination unit 80, the specifying unit 90, and the allocation unit 100) are realized by the CPU 31.
The display device 35 is, for example, a liquid crystal display monitor, and displays the results of processing performed by the computer 3 and the like to a maintenance worker. The input device 36, for example a keyboard or mouse, is used by a maintenance worker for predetermined operation inputs and instructions. Depending on the usage of the computer 3, the display device 35 and the input device 36 may be omitted.
Examples of the nonvolatile memory 37 include an HDD (Hard Disk Drive), an SSD (Solid State Drive), a flexible disk, a magneto-optical disk, a CD-ROM, a CD-R, a magnetic tape, and other nonvolatile storage media. In addition to the OS (Operating System) and various parameters, programs for operating the computer 3 are recorded in the nonvolatile memory 37. The ROM 32 and the nonvolatile memory 37 permanently store the programs and data necessary for the CPU 31 to operate, and are used as an example of a computer-readable non-transitory recording medium storing programs to be executed by the computer 3. For example, the nonvolatile memory 37 functions as the storage unit 60 and can store the learning result 52.
The network interface 38, for example a NIC (Network Interface Card), can transmit and receive various data between devices via a LAN (Local Area Network) connected to a terminal of the NIC, a dedicated line, or the like. The management controller 2 receives measurement data from the sensor 30 via the network interface 38. The management controller 2 also gives operation instructions to the elevator controller 11 through the communication network 20 and acquires the current operation status of the elevator 10 from the elevator controller 11.
< example of human behavior >
Fig. 3 is an explanatory diagram showing an example of human behavior. Here, the floor is divided into areas indicated by a plurality of rectangular frames, and how a person in a certain area moves will be described by way of a plan view.
The sensor 30 installed on the floor can measure objects in the measurement area 110 of the size shown in fig. 3. From the measurement data of the objects measured by the sensor 30, the way in which a person, as an object, moves can therefore be recognized. The measurement region 110 is divided into a plurality of base regions 111 of a predetermined size. Obstacles 114 and 115, representing pillars and the like, are present in the measurement region 110. The obstacles 114 and 115 may be people, structures such as pillars, or movable objects such as plants, tables, chairs, and trash bins.
The estimating unit 70 estimates the next motion of the object based on the base region 111, and the determining unit 80 can determine whether the actual motion of the object indicated by the measurement data matches or does not match the next motion of the object estimated by the estimating unit 70 for each base region 111.
Here, the learning unit 50 learns the behavior pattern of the person 113 detected in the measurement area 110, and the estimation unit 70 sets a learning inference area 112 as an area for inferring the behavior of the person 113 in the measurement area 110. The learning inference region 112 is a region that includes the object and is a learning target for the learning unit 50 to learn the behavior of the object.
Here, it is assumed that a waiting hall of the elevator 10 lies on the right side in fig. 3. The person 113 moving to the right moves while avoiding the obstacles 114 and 115, so several behavior patterns of the person 113 are indicated by arrows. In the present specification, a local movement of a person or the like is referred to as a "behavior"; a behavior is, for example, the estimation target of the estimation unit 70. A change of behavior performed continuously in a predetermined region during a predetermined period is referred to as a "behavior pattern"; a behavior pattern is the learning target of the learning unit 50. Reference numerals a to g are given to the base areas 111 indicated by the arrows.
However, the behavior pattern of the person 113 differs depending on the kind of obstacle. For example, assume that the obstacle 114 is a person and the obstacle 115 is a structure such as a pillar. People generally have a habit of keeping a certain distance from others, so the person 113 can be expected to move so as not to come too close to the obstacle 114. In this case, the behavior pattern of the person 113 is generally a pattern of moving to the positions of the base areas a, b, c, d, and e located in the upper part of fig. 3.
On the other hand, when the obstacle 115 in the traveling direction is a person moving to the right, the person 113 tends to follow the movement of the person ahead (the obstacle 115) and take a behavior pattern of moving to the right (the direction of arrow e).
Here, the learning unit 50 learns the positional relationship between the person 113 and the obstacles 114 and 115 together with the behavior pattern of the person 113. When the person 113 whose behavior pattern is being learned moves in the direction of arrow b, for example, the learning unit 50 sets a new learning inference region 116 starting from the base region b, the destination of the movement. The learning unit 50 can thus change the learning inference region (for example, from the learning inference region 112 to the learning inference region 116) according to the actual behavior of the object, and learn the behavior pattern of the object based on the change of the learning inference region.
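The grid structure described above can be expressed with simple data structures. The following Python sketch is illustrative only; the class and field names are assumptions, not taken from the patent, and the 8-neighborhood is just one possible choice for the candidate destinations.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BaseRegion:
        """One cell of the grid tiling the measurement area 110."""
        row: int
        col: int

    class MeasurementArea:
        """Measurement area 110 divided into equal rectangular base regions 111."""

        def __init__(self, rows: int, cols: int):
            self.rows, self.cols = rows, cols
            self.obstacles: set[BaseRegion] = set()  # e.g. obstacles 114 and 115

        def neighbors(self, cell: BaseRegion) -> list[BaseRegion]:
            """Candidate destinations of the next behavior (cf. arrows a to g)."""
            cand = [BaseRegion(cell.row + dr, cell.col + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
            return [c for c in cand
                    if 0 <= c.row < self.rows and 0 <= c.col < self.cols
                    and c not in self.obstacles]

    def learning_inference_region(center: BaseRegion, radius: int = 1) -> list[BaseRegion]:
        """Learning inference region 112/116: the cells around the tracked person,
        re-centered on the destination cell after each observed move."""
        return [BaseRegion(center.row + dr, center.col + dc)
                for dr in range(-radius, radius + 1)
                for dc in range(-radius, radius + 1)]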
In the present embodiment, the measurement region 110 is represented as a grid of identical rectangular base regions 111, but the configuration of the measurement region 110 is not limited to this. The learning efficiency of the learning unit 50, the estimation accuracy and processing time of the estimation unit 70, and the data volume of the learning result 52 depend on the shape and size of the measurement region 110 and the base regions 111.
< learning processing of human behavior >
Here, an example of processing performed by the management controller 2 will be described.
First, a learning process of human behavior will be described with reference to fig. 4.
Fig. 4 is a flowchart showing an example of a series of processes for learning a behavior pattern of a person in the elevator system 1.
First, the sensor 30 performs measurement in the measurement region 110 (S1) and outputs measurement data to the recognition unit 40 of the management controller 2. Next, the recognition unit 40 recognizes the people and objects in the measurement area 110 based on the measurement data input from the sensor 30, and generates recognition data such as the position, velocity, and acceleration of each recognized person or object (S2).
Next, the learning unit 50 learns the human behavior pattern using the recognition data generated in the past and the new recognition data generated this time (S3). The learning unit 50 learns the behavior pattern by treating the new data as training data in a supervised method such as deep learning, for example, but other methods may also be used.
The storage unit 60 stores the learning result 52 containing the human behavior pattern learned by the learning unit 50 (S4). For example, when the learning unit 50 uses the deep learning described above, the learning result 52 is a trained network, and the storage unit 60 can store a plurality of trained networks as learning results 52. Following the flowchart of fig. 4 in this way, the management controller 2 can generate a basic learning result 52 and store it in the storage unit 60.
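To make steps S1 to S4 concrete, here is a minimal, runnable sketch in Python. The transition-count model is a deliberately simple stand-in for the learning result 52 (the patent mentions deep learning, for which a trained network would take its place), and the pre-recognized track of grid cells stands in for steps S1 and S2.

    from collections import defaultdict

    class BehaviorModel:
        """Toy stand-in for the learning result 52: counts observed transitions
        from a cell to the cell occupied at the next measurement."""

        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def fit_incremental(self, state, next_cell):
            self.counts[state][next_cell] += 1    # S3: learn the behavior pattern

        def predict(self, state):
            """Most frequent next cell seen from this state, or None."""
            nxt = self.counts.get(state)
            return max(nxt, key=nxt.get) if nxt else None

    # S1/S2 are represented by a track of (row, col) cells already recognized
    # from the measurement data of one person.
    track = [(0, 2), (1, 2), (1, 3), (2, 3)]

    model = BehaviorModel()
    for state, next_cell in zip(track, track[1:]):
        model.fit_incremental(state, next_cell)   # S3
    learning_result_52 = model                    # S4: kept in the storage unit 60
    print(model.predict((1, 2)))                  # -> (1, 3)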
< human behavior presumption processing >
Next, the process of estimating the behavior of the person will be described with reference to fig. 5.
Fig. 5 is a flowchart showing an example of a series of processes for estimating the behavior of a person in the elevator system 1.
The processing of steps S11 and S12 is the same as that of steps S1 and S2 shown in fig. 4, and detailed description is therefore omitted. After the recognition data is generated in step S12, the estimation unit 70 checks whether a learning result 52 is temporarily stored in the specifying unit 90 (S13).
When a learning result 52 is temporarily stored (YES in S13), a past estimate derived from it has already proved accurate, so the estimation unit 70 estimates the behavior of the person from the measurement data of the object input from the sensor 30 using that learning result 52 (S14). When no learning result 52 is temporarily stored (NO in S13), the estimation unit 70 estimates the behavior of the person by applying the plurality of learning results 52 stored in the storage unit 60 to the measurement data (S15).
After step S14 or S15, the determination unit 80 determines whether the estimation result matches the current position or behavior of the person (S16). When the determination unit 80 determines that the estimation result matches the current position or behavior of the person (YES in S16), the specifying unit 90 temporarily stores the learning result 52 from which the estimate was derived, for use the next time the estimation unit 70 estimates the behavior of the person (S17).
On the other hand, when the determination unit 80 determines that the estimation result does not match the current position or behavior of the person (NO in S16), the specifying unit 90 excludes that learning result 52 from the temporarily stored ones (S18). After step S17 or S18, the process ends.
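The keep-or-exclude decision of steps S13 to S18 can be sketched as a selection over candidate learning results. The function below reuses the toy BehaviorModel interface from the previous sketch; all names are illustrative assumptions, not the patent's implementation.

    def estimate_and_select(observed_cell, state, kept, all_results):
        """Sketch of fig. 5 (S13-S18): estimate with the temporarily stored
        learning results if any exist, otherwise with all stored results, and
        keep only the results whose estimate matched the actual behavior."""
        candidates = kept if kept else list(all_results)   # S13 -> S14/S15
        still_kept = []
        for result in candidates:
            estimate = result.predict(state)               # S14/S15: estimate
            if estimate == observed_cell:                  # S16: judge
                still_kept.append(result)                  # S17: store temporarily
            # else: S18 - exclude the result from the temporarily stored set
        return still_kept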
In this way, the specifying unit 90 temporarily stores, in time-series order, the series of learning results 52 that have produced accurate estimates. An example of the data temporarily stored in the specifying unit 90 in time series is described next.
Fig. 6 is an explanatory diagram showing an example of the learning result 52 temporarily stored in time series.
Fig. 6 (1) shows an example of the learning results 52 temporarily stored in the specifying unit 90. Learning results 52 generated at times t0, t1, and t2 are temporarily stored in the specifying unit 90. The learning result 52 at time t0 is, for example, the result of relearning at the same time t0 on the day before yesterday, yesterday, and today; the same applies to the learning result 52 at time t1. These learning results 52 can be identified by their generation and relearning times. Data in which a plurality of learning results 52 are stored in time series in this way is referred to as time-series data 91.
Since the learning results 52 are stored sequentially in time series in the time-series data 91, the time-series learning results 52 can express, for example, a behavior pattern representing the movement of a person averaged over the behavior of the crowd.
They can equally express characteristic behaviors independent of the surrounding situation, such as a person who overtakes others and moves quickly, or a person who keeps to the wall and moves slowly.
The storage unit 60 can then acquire and store, via the learning unit 50, the learning results 52 temporarily stored in the specifying unit 90. By managing the learning results 52 in time series in this way, the storage unit 60 or the specifying unit 90 can output to the estimation unit 70 the learning result 52 corresponding to the time at which the sensor 30 measured the object, which the estimation unit 70 needs to estimate the next behavior of the object.
Fig. 6 (2) shows an example of a database 93 built from the learning results 52. The database 93 attaches attribute information 92 to the time-series data 91 containing the plurality of learning results 52 shown in (1). The measurement start date and time of the sensor 30 can be used as the attribute information 92, for example. Each time-series data 91 stores, for example, the learning results 52 learned by the learning unit 50 from the measurement region 110 measured by the sensor 30 every few seconds over one minute. For example, under the attribute 8:00 to 8:04 a.m., the database 93 stores time-series data 91 for 8:00:00 to 8:00:59, time-series data 91 for 8:01:00 to 8:01:59, and so on.
By referring to the attribute information 92, the estimation unit 70 can efficiently estimate the behavior of people at an arbitrary date and time. For example, the behavior of people during the morning and evening commuting hours of an office building differs from their behavior during idle hours and on holidays, so the time-series data 91 differ as well, and the estimation unit 70 can estimate behavior efficiently according to the use of the building.
The allocation unit 100 can also check the operation state of the elevators 10 and determine the operation mode. Several operation modes are prepared, for example a mode for the busy commuting time zone and a mode for the idle period around 10 a.m. If a mode is selected from the current time alone, however, the behavior of people may not be estimated accurately, services may be allocated inappropriately, and people may accumulate in the lobby. To eliminate such congestion in the lobby, the storage unit 60 may store each learning result 52 managed in time series in combination with the operation mode of the elevators 10 at the time the learning result 52 was updated. In this case, the attribute information 92 includes the operation mode, and the database 93 manages, for each attribute information 92, the time-series data 91 of the learning results 52 learned in each mode. The estimation unit 70 can then read from the storage unit 60 the learning result 52 corresponding to the currently set operation mode and estimate the behavior of the object more accurately.
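One possible layout for the time-series data 91 and the database 93 is sketched below; the field names, the mode labels, and the string stand-ins for trained networks are assumptions made for illustration.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class AttributeInfo:
        """Attribute information 92: measurement start time and operation mode."""
        start: str   # e.g. "08:00"
        mode: str    # e.g. "commute-rush" or "idle"

    @dataclass
    class TimeSeriesData:
        """Time-series data 91: learning results 52 in measurement order."""
        results: list = field(default_factory=list)

    # Database 93: attribute information attached to time-series data, so the
    # estimation unit 70 can pick the results matching the current time and mode.
    database_93: dict[AttributeInfo, list[TimeSeriesData]] = {
        AttributeInfo("08:00", "commute-rush"): [
            TimeSeriesData(results=["trained-net@08:00:00-08:00:59"]),
            TimeSeriesData(results=["trained-net@08:01:00-08:01:59"]),
        ],
    }

    def select_time_series(now: str, mode: str) -> list[TimeSeriesData]:
        """Read the learning results matching the current context (fig. 6 (2))."""
        return [ts for attr, series in database_93.items()
                if attr.start == now and attr.mode == mode
                for ts in series]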
The learning process of the human behavior shown in fig. 4 and the estimation process of the human behavior shown in fig. 5 may be executed sequentially or in parallel.
< human behavior relearning processing >
Next, a process of relearning the behavior of the person who cannot be estimated will be described.
Fig. 7 is a flowchart showing an example of a series of processes for relearning the behavior of an unpredictable person in the elevator system 1.
The processing of steps S21 to S26 is the same as that of steps S11 to S16 shown in fig. 5, and detailed description is therefore omitted, except that in step S23 the estimation unit 70 does not check whether a learning result 52 is temporarily stored in the specifying unit 90.
If the determination unit 80 determines in step S26 that the estimation result matches the current position or behavior of the person (YES in S26), the process ends. If it determines that the estimation result does not match (NO in S26), the estimation unit 70 has failed to estimate the behavior of the person accurately.
In that case, the specifying unit 90 selects, from the learning results 52 stored in the storage unit 60, the learning result 52 to serve as the basis on which the learning unit 50 relearns the behavior (S27), and outputs the selected learning result 52 to the learning unit 50.
The learning unit 50 then learns the position and behavior of the person that could not be estimated, using the learning result 52 selected for relearning and input from the specifying unit 90 (S28). The storage unit 60 stores the learning result 52 (relearning result) obtained by the relearning of the learning unit 50 (S29), and the process ends.
After this process ends, the determination unit 80 determines whether the re-estimation result matches the current position or behavior of the person and, if it matches, outputs the determination result to the specifying unit 90. The allocation unit 100 then allocates the service of the elevator 10 based on the behavior of the person specified by the specifying unit 90.
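A possible reading of steps S27 to S29 in code, again on top of the toy BehaviorModel: pick the stored result that best explains the unestimated track, then update it. The scoring rule is an assumption; the patent does not specify how the basis result is selected.

    def relearn(observed_track, stored_results):
        """Sketch of fig. 7 (S27-S29): choose the stored learning result that
        explains the most transitions of the observed track, then relearn the
        track on top of it and return the relearning result for storage."""
        def score(result):
            return sum(result.predict(a) == b
                       for a, b in zip(observed_track, observed_track[1:]))
        base = max(stored_results, key=score)          # S27: select the basis
        for state, next_cell in zip(observed_track, observed_track[1:]):
            base.fit_incremental(state, next_cell)     # S28: relearn the behavior
        return base                                    # S29: store the result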
The processing of and after step S16 in fig. 5 and the processing of and after step S26 in fig. 7 may be combined and sequentially operated, or may be operated in parallel with each other.
< speculative processing (tracking) of human movement >
Next, the process of estimating the movement of the person will be described.
Fig. 8 is a flowchart showing an example of a series of processes for predicting the arrival time of a person at the elevator 10 in the elevator system 1. In this process, the continuation of the person's behavior, that is, the movement, is estimated based on the time-series learning results 52 temporarily stored in the specifying unit 90.
It is assumed here, for example, that the sensor 30 is installed at an entrance (not shown) of the building 16. The sensor 30 measures the object in the measurement region 110 (S31).
Next, the estimation unit 70 checks whether time-series data 91 can be selected from the learning results 52 temporarily stored in the specifying unit 90 based on the measurement data measured by the sensor 30 (S32). If no time-series data 91 can be selected (NO in S32), the process ends.
If time-series data 91 can be selected (YES in S32), the estimation unit 70 selects it from the learning results 52 temporarily stored in the specifying unit 90 and starts sequentially extracting the learning results 52 from the time-series data 91 (S33).
Next, the estimation unit 70 checks whether a learning result 52 remains in the time-series data 91 (S34). If one does (YES in S34), the estimation unit 70 estimates the behavior of the person based on the learning result 52 acquired from the specifying unit 90 (S35). The estimation unit 70 then updates the position of the person's movement destination and the movement elapsed time taken to walk to that position, based on the estimation result (S36).
Referring to fig. 3, for example, the destination position is the base area 111 in the direction of arrow b, and the movement elapsed time is the time the person 113 takes to move from the current position to that base area 111. As the destination position and the movement elapsed time are updated step by step, the behavior of the person 113 moving toward the lobby (not shown) is estimated.
Steps S34 to S36, in which the estimation unit 70 estimates the behavior of the person, are repeated until no learning result 52 remains in the time-series data 91; each time, the estimation unit 70 estimates the next behavior starting from the previously estimated destination position. This estimation is performed for every person 113 measured in the measurement area 110.
When no learning result 52 remains in the time-series data 91 (NO in S34), the final movement elapsed time of the person has been obtained through steps S34 to S36. The estimation unit 70 therefore updates the predicted arrival time, that is, the time at which the person arrives at the lobby of the elevator 10, based on this final movement elapsed time (S37). The estimation unit 70 then returns to step S32 and repeats the processing from step S32 onward until no time-series data 91 remains.
In this way, the estimation unit 70 draws on the movements people performed in the past, estimated in detail for each base region 111, and can thereby improve the accuracy with which it estimates the behavior of a person.
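The chained estimation of steps S33 to S37 amounts to walking the person forward through the learning results one step at a time. The sketch below assumes the toy BehaviorModel interface again, and seconds_per_cell is an invented walking-time constant, not a value from the patent.

    def predict_arrival(start_cell, time_series, seconds_per_cell=1.5):
        """Sketch of fig. 8 (S33-S37): step the person through each learning
        result in time-series order, accumulating the movement elapsed time."""
        cell, elapsed = start_cell, 0.0
        for result in time_series:            # S33/S34: next learning result
            nxt = result.predict(cell)        # S35: estimate the next behavior
            if nxt is None:
                break
            cell = nxt                        # S36: update the destination and
            elapsed += seconds_per_cell       #      the movement elapsed time
        return cell, elapsed                  # S37: basis of the arrival time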
< prediction processing of pedestrian flow >
Next, a process of predicting a human flow will be described.
Fig. 9 is a flowchart showing an example of a series of processes for predicting the arrival of a person at the elevator 10 in the elevator system 1.
The processing of steps S41 to S45 is the same as the processing of steps S31 to S35 shown in fig. 8, and thus detailed description is omitted.
In step S45, after the estimation unit 70 estimates the behavior of the person based on the learning result 52, the determination unit 80 compares the actual behavior of the person measured by the sensor 30 with the behavior estimated by the estimation unit 70, and determines whether they match (S46). If the determination unit 80 determines that the actual behavior does not match the estimated behavior (NO in S46), the process ends.
If the determination unit 80 determines that the actual behavior matches the estimated behavior (YES in S46), the estimation unit 70 checks whether the order of the learning results 52 in the time-series data 91 matches a preset value (S47). The order of the learning results 52 means the sequence of the learning results t(0) and t(1) at the respective times t(0) and t(1), as shown in fig. 6.
If the order of the learning results 52 does not match (NO in S47), the estimation unit 70 returns to step S44 and repeats the processing from step S44 onward until no learning result 52 remains in the time-series data 91. For example, if the person stops while moving at time t(0), the person is actually at a position different from the one the estimation unit 70 estimated from the learning result 52. In that case, the estimation unit 70 skips the learning result t(0) and estimates the position to which the person will move by referring to the next learning result t(1).
If the order of the learning results 52 matches (YES in S47), the estimation unit 70 specifies that the person, as the object, will arrive at the lobby of the elevator 10 (S48). For example, if the movement destination estimated for the person at time t(0) or t(1) is the same as the position indicated by the learning result t(0) or t(1), the order of the learning results 52 matches. The estimation unit 70 can therefore accurately estimate the time at which the person arrives at the lobby of the elevator 10. The process then ends.
In the processing shown in fig. 9, the estimation unit 70 sequentially reads the time-series data 91 of the learning results 52 stored in the specifying unit 90 to estimate the behavior of the person, and the determination unit 80 checks whether the estimate matches the person's actual behavior. When the estimated and actual behaviors match under an arbitrary condition, it is determined that the person is moving to the elevator 10.
Here, the arbitrary condition is determined, for example, based on the position of the learning inference area 112. For the car 14 to arrive at a destination floor at a target time when the allocation unit 100 controls the hoisting machine 12 of the elevator 10 to which service is allocated, the allocation unit 100 needs to know of a person who will ride the elevator 10 earlier than the travel time of the car 14. If the allocation unit 100 learns of a person who will use the elevator 10 at least one car round trip in advance (for example, 5 minutes ahead), it can allocate the service to the most suitable elevator 10.
As the arbitrary condition, therefore, a critical region is set: for example, the region from which a person in the measurement area 110 of the sensor 30 needs 5 minutes to reach the waiting hall of the elevator 10. The critical region works as follows. When allocating service to the elevator 10, the allocation unit 100 cannot always make the car 14 arrive at the lobby floor immediately. For example, even if a person is estimated to arrive at the lobby in 1 minute, the service cannot necessarily be allocated so that the car 14 arrives at the lobby at that time: if the car 14 has already departed, or if the car 14 stops at many floors because many people have boarded, a certain travel time is required before the car 14 reaches the lobby. The allocation unit 100 is therefore required to allocate the service to the elevator 10 so that the car 14 arrives at the target time, at least 5 minutes before the person's arrival. This lead time is called the "critical region".
Thus, when the estimation unit 70 estimates behavior within the critical region following the order of the learning results 52 stored in the time-series data 91, and the estimate matches the actual behavior of the person, the determination unit 80 can determine that the estimate is correct. In this case, the specifying unit 90 omits the estimation with the remaining learning results 52 and specifies the behavior of the person from the final result of the movement time required for the person to reach the elevator 10. The allocation unit 100 can then allocate the service of the elevator 10 for the specified person.
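As a rough illustration of this confirmation logic, the sketch below compares actual and estimated cells in time-series order; the 5-minute critical region follows the example in the description, while the function shape and the way a pause is detected are assumptions.

    CRITICAL_REGION_SEC = 5 * 60  # example lead time: one car round trip

    def confirm_arrival_in_critical_region(observed, estimated, time_to_hall_sec):
        """Sketch of fig. 9 (S44-S48): step through the estimates in time-series
        order, skipping a stale learning result such as t(0) when the person
        paused (S47: order mismatch), and confirm arrival only while the person
        is still at least the critical region away from the lobby (S48)."""
        i = 0
        for actual in observed:
            while i < len(estimated) and estimated[i] != actual:
                i += 1                # skip t(0) etc. if the person stopped
            if i < len(estimated) and time_to_hall_sec >= CRITICAL_REGION_SEC:
                return True           # S48: the person will arrive at the hall
            i += 1
        return False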
< Allocation handling of elevators >
Next, a process of assigning the elevator 10 to the person whose behavior is estimated will be described.
Fig. 10 is a flowchart showing an example of a series of processes for controlling the elevator 10 based on the predicted arrival time in the elevator system 1.
The allocation unit 100 assigns the car 14 to the floor where the specified person is located no later than the time at which the person who has passed through the measurement area 110 of the sensor 30 arrives at the elevator 10. For this purpose, the allocation unit 100 must be given the shortest of the predicted arrival times, and it allocates the service so that the car 14 reaches the lobby floor by that time. Depending on the difference between the shortest and longest predicted arrival times, the person may reach the desired floor anywhere between the two. The allocation unit 100 may therefore instruct the elevator controller 11 to keep the doors of the elevator 10 (including the car door and the hall door) open during that interval and stand by for the person boarding the car 14.
Since the time-series data 91 contains a plurality of learning results 52, the estimation unit 70 also produces a plurality of final predicted arrival times; at minimum, there are a shortest time and a longest time.
The allocation unit 100 obtains at least two times, the shortest and the longest, as the predicted arrival time from the estimation unit 70 (S51). Next, the allocation unit 100 allocates the service to the elevator 10 so that the car 14 arrives by the shortest predicted arrival time (S52).
Next, the allocation unit 100 calculates the difference between the shortest and longest predicted arrival times as the standby time (S53). The allocation unit 100 then compares the preset allowed standby time with the calculated standby time and determines whether the standby time is within the allowed standby time (S54).
If the standby time is within the allowed standby time (YES in S54), the allocation unit 100 performs door-open extension processing to extend the time during which the doors are open (S55). If the standby time exceeds the allowed standby time (NO in S54), the allocation unit 100 ends the process and shifts to normal door control.
In this way, by using the shortest predicted arrival time, the allocation unit 100 can make the elevator 10 with the allocated service reach the desired floor in time for the person's arrival at the lobby. The doors are open when the person arrives, so the person can board the car 14 without waiting, which improves the convenience of the elevator 10. Furthermore, by using the longest predicted arrival time, the allocation unit 100 can keep the doors open up to the allowed standby time, so the convenience of passengers already on board is preserved as before.
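Steps S51 to S55 reduce to a small piece of control logic. In this hedged sketch, allocator is a hypothetical interface to the allocation unit 100 and elevator controller 11, and allowed_standby_sec is an assumed configuration value; the patent does not give concrete numbers.

    def control_doors(allocator, shortest_sec, longest_sec, allowed_standby_sec=30):
        """Sketch of fig. 10 (S51-S55): serve the earliest predicted arrival,
        then hold the doors for the spread of the predictions if permitted."""
        allocator.assign_car(arrive_by=shortest_sec)    # S52: meet earliest arrival
        standby = longest_sec - shortest_sec            # S53: prediction spread
        if standby <= allowed_standby_sec:              # S54: within allowed time?
            allocator.extend_door_open(standby)         # S55: door-open extension
        # otherwise fall through to normal door control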
In the elevator system 1 according to the first embodiment described above, the estimation unit 70 repeats the process of locally estimating the behavior pattern of the person, as the object, within the measurement region 110 detected by the sensor 30, using the learning results 52. The estimation unit 70 can therefore accurately estimate continuous behavior, that is, the movement of a person. Moreover, since the learning results 52 are stored in the time-series data 91, the estimation unit 70 can acquire them in time-series order when estimating the behavior.
In the elevator system 1, when the estimated result of the human behavior is different from the result actually measured by the sensor 30, the actual human behavior is relearned as a new behavior pattern to obtain the learning result 52. Since the estimation unit 70 can estimate the behavior of the person using the learning result 52 obtained by the relearning, the accuracy of the estimation result of the behavior can be improved for a person who performs the same behavior as the relearning behavior pattern in the future.
The allocation unit 100 allocates a service to the elevator 10 so that the car 14 stops at the waiting hall when the estimated person arrives there. The person can therefore board the car 14 through natural movement, without performing any special operation before arriving at the hall.
As shown in figs. 1 and 3, the elevator system 1 according to the first embodiment is an example with one sensor 30 and hence one measurement area 110. However, the elevator system 1 may be provided with a plurality of sensors 30 in the building 16 or in the area surrounding it. The plurality of measurement areas 110 measurable by the plurality of sensors 30 can then cover a wide range over which people are likely to move, and the estimation unit 70 can estimate the behavior of people in the building 16 and its surroundings (including areas outside the building 16) and learn their behavior patterns.
In the elevator system 1 according to the first embodiment, the determination unit 80 determines whether the estimation result of the estimation unit 70 matches the current position or behavior of the person. The determination unit 80 may also apply a strict criterion, determining a mismatch whenever the estimate does not agree exactly at the level of a single base area 111, for example when the estimation unit 70 estimated a move into a base area 111 adjacent to the one the person actually entered.
In the elevator system 1 of the first embodiment, all the functional units are implemented in the management controller 2 that controls the allocation of the elevators 10. However, these functional units need not run on the management controller 2. For example, they may be provided in an edge controller, or executed by another computer connected to the building management network; to execute the learning unit 50 efficiently, an external cloud computer or the like may run it. Other configuration examples of the elevator system are described below.
[ second embodiment ]
The elevator system can be constructed in various ways. Here, an elevator system according to a second embodiment of the present invention, which is composed of a host system and an edge controller, will be described.
< example of architecture of cloud system and edge controller >
Fig. 11 is a block diagram showing a configuration example of an elevator system 1A according to a second embodiment.
The elevator system 1A is configured to separate the environment in which the learning unit 50 is executed, the environment in which the estimation unit 70 is executed, and the sensor 30.
The elevator system 1A includes a host system 6, an edge controller 4 installed in the building 16, a management controller 2A, an elevator controller 11, and a sensor 30. The host system 6 and the edge controller 4 divide the functions between them: the environment for executing the learning unit 50 is built in the host system 6, and the environment for executing the estimation unit 70 is built in the edge controller 4.
The host system 6 and the edge controller 4 are connected to each other via a communication network 22, such as a wide area network like the Internet, a local area network, or a closed wireless network provided by a communications carrier. The host system 6 is a data center or a cloud system and is identified by a predetermined URL (Uniform Resource Locator) so that the edge controller 4 can access it.
The processing load of the learning unit 50 tends to be heavy, and the data volume of the learning result 52 tends to be large; such high-load processing and large-capacity data are difficult to handle with a system installed in the building 16. In the present embodiment, therefore, the processing of the learning unit 50 and the storage capacity of the storage unit 60 are provided by the host system 6, such as a data center or a cloud system, which can process data with high performance and large capacity. The host system 6 thus includes the learning unit 50 and the storage unit 60; the learning result 52 is held in the storage unit 60 and written or read by the learning unit 50 as appropriate.
For the management controller 2A of the elevator 10 as well, the processing load of the estimation unit 70 would be heavy and the data volume of the selected learning result 54 large. Therefore, an edge controller 4 is newly added to the building 16, and the management controller 2A is configured to include only the assigning unit 100.
The edge controller 4 (an example of a control device) has a recognition unit 40, an estimation unit 70, a determination unit 80, and a specifying unit 90, and is connected to the host system 6 via the communication network 22. The elevator 10 is installed in the building 16. The behavior pattern representing the behavior of the person estimated by the estimation unit 70 and specified by the specifying unit 90 is temporarily stored in the specifying unit 90 as the learning result 54. The determination unit 80 reads the learning result 54 and compares the behavior of the person estimated on the basis of the learning result 54 with the behavior of the person actually measured by the sensor 30. The comparison result generated by the determination unit 80 is output to the estimation unit 70.
When the recognition unit 40 of the edge controller 4 acquires measurement data from the sensor 30, it recognizes the position and behavior of the object (person) based on that data. The recognition unit 40 then transmits the recognition result to the host system 6 via the communication network 22.
The learning unit 50 of the host system 6 executes learning of behavior patterns in the same flow as in fig. 4. The estimation unit 70 of the edge controller 4 estimates the behavior of the person in the same flow as in fig. 5 to 10. As the learning unit 50 and the estimation unit 70 repeat this processing, the behavior of the person estimated by the estimation unit 70 is collected as the learning result 54 suited to the situation of the building 16. The learning result 54 is transmitted to the host system 6 at a predetermined timing and accumulated in the storage unit 60 as the learning result 52.
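The repeated estimate-collect-upload cycle might look like the following sketch, which builds on the hypothetical HostSystem above; the one-hour upload interval stands in for the "predetermined timing" and is purely an assumption:

```python
import time

UPLOAD_INTERVAL_S = 3600  # stands in for the "predetermined timing"

class EdgeController:
    """Hypothetical edge controller 4: estimates locally, uploads periodically."""

    def __init__(self, host: HostSystem, building_id: str):
        self.host = host
        self.building_id = building_id
        self.local_results = []        # learning result 54 (building-specific)
        self.last_upload = time.time()

    def on_estimated_behavior(self, behavior) -> None:
        """Collect each estimated behavior; at the predetermined timing, send
        the batch to the host, where it is merged into learning result 52."""
        self.local_results.append(behavior)
        if time.time() - self.last_upload >= UPLOAD_INTERVAL_S:
            self.host.learn(self.building_id, self.local_results)
            self.local_results = []
            self.last_upload = time.time()
```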
The specifying unit 90 notifies the assigning unit 100 of the behavior of the person specified based on the determination result of the determination unit 80. For example, the assigning unit 100 is notified that several persons will arrive at the lobby floor after several seconds. The assigning unit 100 assigns services of the elevator 10 available to those persons based on the notification from the specifying unit 90.
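As an illustrative sketch of this notification (the message fields and the dispatch rule are assumptions, not the embodiment's protocol), the specifying unit 90 could pass the assigning unit 100 a small message stating how many persons are expected at which floor and when:

```python
from dataclasses import dataclass

@dataclass
class ArrivalNotice:
    floor: int            # destination floor, e.g. the lobby floor
    person_count: int     # how many persons are expected
    seconds_until: float  # estimated time until they arrive

def allocate(notice: ArrivalNotice, idle_cars: list[int]) -> int | None:
    """Simplified assigning unit 100: pre-assign one idle car so that a car
    is already waiting when the estimated passengers arrive. A real
    dispatcher would also weigh car position, load, and direction."""
    if notice.person_count > 0 and idle_cars:
        return idle_cars[0]
    return None

# Example: three persons expected at the lobby floor in about five seconds.
car = allocate(ArrivalNotice(floor=1, person_count=3, seconds_until=5.0), [2, 4])
```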
Now assume that a new building 17 is constructed and a plurality of elevators 10 are installed in it. A management controller 2A and an edge controller 4 are also provided in the building 17. The edge controller 4 installed in the building 17 is connected to the host system 6 via the communication network 22 and acquires the learning result 52 from the host system 6. As described above, the learning result 52 is data accumulated through learning for estimating the behavior of people in the existing building 16.
The edge controller 4 provided in the building 17 can execute the estimation unit 70, the determination unit 80, and the specifying unit 90 adapted to the situation of the building 17 using the learning result 52 acquired from the host system 6. Even when behavior different from that in the building 16 occurs, the human behavior pattern can be relearned in the same flow as in fig. 8. Through this relearning, a learning result 52 that is adapted to the situation of the building 17 and can cope with various behavior patterns is accumulated in the host system 6.
In the elevator system 1A of the second embodiment described above, the functions of the management controller 2 of the first embodiment are distributed to the host system 6 and the edge controller 4. The host system 6, with its high processing capability, can execute the functions with a high processing load, and the edge controller 4 can execute a plurality of functions. As a result, the load on the management controller 2A is reduced, and the operating efficiency of the elevator 10 can be improved.
Further, the edge controller 4 can be connected to a sensor 30, such as a monitoring camera, already installed in the building 16. Therefore, a new sensor 30 need not be installed in the building 16. The host system 6 and the edge controller 4 can effectively use measurement data obtained from the existing sensor 30 for predicting and learning human behavior.
The edge controller 4 installed in the newly constructed building 17 can acquire the learning result 52 accumulated in the host system 6 and use it to estimate the behavior of people in the building 17. This shortens the time taken for the edge controller 4 of the building 17 to learn human behavior patterns.
Even in an existing building, an old controller for controlling the elevator may be replaced with the edge controller 4 and the management controller 2A of the present embodiment. In this case, too, the replaced edge controller 4 can estimate the behavior of people in that building using the learning result 52 acquired from the host system 6.
[ third embodiment ]
Next, an elevator system according to a third embodiment of the present invention, which is composed of a host system and a smart sensor, will be described.
<Example of structure of host system and smart sensor>
Fig. 12 is a block diagram showing a configuration example of an elevator system 1B according to a third embodiment.
The elevator system 1B is configured such that the environment in which the estimation unit 70 is executed is integrated with the sensor 30, while the environment in which the learning unit 50 is executed remains separate. The function of the estimation unit 70 is thus executed in the smart sensor 5 (an example of a measurement device), which integrates the execution environment of the estimation unit 70 with the sensor 30.
In the building 16, the smart sensor 5 is provided instead of the edge controller 4 described in the second embodiment. The smart sensor 5 includes the sensor 30, the recognition unit 40, the estimation unit 70, the determination unit 80, and the specifying unit 90, and the processing of each unit is executed by a CPU (not shown) built into the smart sensor 5 itself.
The smart sensor 5 is connected to the host system 6 via the communication network 22 and can refer to the learning result 52 managed by the host system 6. The learning result 52 referred to by the smart sensor 5 is temporarily stored as the learning result 54 in the specifying unit 90 within the smart sensor 5. The estimation unit 70 in the smart sensor 5 can therefore refer to the learning result 54 efficiently and execute its processing without accessing the host system 6 multiple times.
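The local copy of the learning result 54 amounts to a cache of the host's learning result 52. A minimal sketch, assuming the hypothetical fetch_learning_result() host API from the earlier sketch and a simple fetch-once refresh policy:

```python
class SmartSensor:
    """Hypothetical smart sensor 5: sensor plus on-board estimation."""

    def __init__(self, host: HostSystem, building_id: str):
        self.host = host
        self.building_id = building_id
        self._cached_result = None  # learning result 54, held locally

    def learning_result(self) -> list:
        """Fetch the host's learning result 52 once, then reuse the local
        copy so each estimate does not need a network round trip."""
        if self._cached_result is None:
            self._cached_result = self.host.fetch_learning_result(self.building_id)
        return self._cached_result

    def invalidate(self) -> None:
        """Drop the local copy, e.g. after the host merges new learning data."""
        self._cached_result = None
```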
The recognition unit 40 of the smart sensor 5 recognizes the position and behavior of the object (person or thing) based on the measurement data from the sensor 30, and transmits the recognition result to the host system 6 via the communication network 22.
The learning unit 50 of the host system 6 executes learning of behavior patterns in the same flow as in fig. 4. The estimation unit 70 of the smart sensor 5 estimates the behavior of the person in the same flow as in fig. 5 to 10. As the learning unit 50 and the estimation unit 70 repeat this processing, the behavior of the person estimated by the estimation unit 70 is collected as the learning result 54 suited to the situation of the building 16. The learning result 54 is transmitted to the host system 6 at a predetermined timing and accumulated in the storage unit 60 as the learning result 52.
When a new building 17 is constructed, the management controller 2A and the smart sensor 5 are installed in the building 17. As in the elevator system 1A of the second embodiment, the smart sensor 5 installed in the building 17 is connected to the host system 6 via the communication network 22 and acquires the learning result 52 from it. The smart sensor 5 installed in the building 17 can therefore execute the estimation unit 70, the determination unit 80, and the specifying unit 90 according to the situation of the building 17 using the learning result 52 acquired from the host system 6.
The elevator system 1B of the third embodiment described above is provided with the smart sensor 5 including the sensor 30. The smart sensor 5 can communicate with the host system 6. In the third embodiment, the host system 6 can execute the functions with a high processing load, and the smart sensor 5 can execute a plurality of functions. As a result, the load on the management controller 2A is reduced, and the operating efficiency of the elevator 10 can be improved.
[ fourth embodiment ]
Next, an elevator system according to a fourth embodiment of the present invention, which is composed of a host system and a management controller, will be described.
<Example of a host system constituted by a cloud system>
Fig. 13 is a block diagram showing a configuration example of an elevator system 1C according to a fourth embodiment.
The elevator system 1C is configured such that the environment in which the learning unit 50 is executed and the environment in which the estimation unit 70 is executed are both integrated into the host system 6A, such as a data center or a cloud system.
The host system 6A includes a recognition unit 40, a learning unit 50, a storage unit 60, an estimation unit 70, a determination unit 80, and a specifying unit 90. A plurality of learning results 52 are stored in the storage unit 60.
The sensor 30 installed in the building 16 and the assigning unit 100 of the management controller 2A are connected to the host system 6A via the communication network 22. The sensor 30 transmits the measurement data to the recognition unit 40 via the communication network 22. The host system 6A thus constitutes functional units similar to those of the management controller 2 of the first embodiment and executes the predetermined processing.
The management controller 2A installed in the building 16 receives the processing result of the host system 6A via the communication network 22. The assigning unit 100 of the management controller 2A then assigns a service to the elevator 10 based on the processing result of the host system 6A.
In the elevator system 1C according to the fourth embodiment described above, the sensor 30 installed in the building 16 transmits its measurement data to the host system 6A. The host system 6A accumulates the behavior patterns learned from the received measurement data as the learning result 52, and the estimation unit 70 estimates the behavior of the person based on the learning result 52. The behavior of the person estimated by the estimation unit 70 and specified by the specifying unit 90 is then received by the management controller 2A via the communication network 22, and the assigning unit 100 of the management controller 2A assigns a service to the elevator 10 based on that behavior. In this way, in the fourth embodiment, a plurality of functions with a high processing load can be executed by the host system 6A. As a result, the load on the management controller 2A is reduced, and the operating efficiency of the elevator 10 can be improved.
[ fifth embodiment ]
The learning result 52 used in each of the above embodiments may be managed in a format other than the data format shown in fig. 6.
<Structural example of database>
Fig. 14 is an explanatory diagram showing an example of the data format of the learning result 52.
The learning result 52 is stored in the form of a learning result database 93A. The learning result database 93A manages the time-series data 91 of the learning result 52 so that the estimation unit 70 can use it to estimate the behavior of the person. To this end, the learning result database 93A includes building attribute information 94, attribute information 92, and time-series data 91.
Examples of buildings in which the elevator 10 is installed include office buildings, department stores, hotels, and apartment buildings, and the behavior of people varies with the attributes of the building. Therefore, the building attribute information 94 is added to the learning result database 93A as a key. The building attribute information 94 indicates the attribute of each building for which the behavior of people is estimated. The learning result database 93A manages the time-series data 91 and the attribute information 92 as entries.
The storage unit 60 manages each learning result 52 in combination with the building attribute information 94 of the building in which the elevator 10 is installed, and outputs to the estimation unit 70 the learning result 52 combined with the building attribute information 94 corresponding to the building for which the estimation unit 70 estimates the next behavior of the object. The estimation unit 70 can therefore select the learning result 52 based on the building attribute information 94 of the building in which it is installed and estimate the next behavior of the object. The subsequent processes of determining the behavior of the object, specifying the behavior of the object, assigning services, and relearning are performed in the same manner as in the above embodiments.
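A hedged sketch of how such a database might be organized (the schema and the exact matching rule are assumptions; only the roles of items 91, 92, and 94 follow the description above):

```python
from dataclasses import dataclass, field

@dataclass
class LearningEntry:
    building_attribute: str  # item 94: "office", "department store", "hotel", ...
    attribute_info: dict     # item 92: e.g. floor count, tenant mix
    time_series: list = field(default_factory=list)  # item 91: learned patterns

class LearningResultDB:
    """Hypothetical learning result database 93A keyed by building attribute."""

    def __init__(self):
        self.entries: list = []

    def select_for_building(self, building_attribute: str) -> list:
        """Return the learning results of buildings with the same attribute,
        e.g. reuse office-building patterns for a newly built office building."""
        return [e for e in self.entries
                if e.building_attribute == building_attribute]
```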
The elevator system according to the fifth embodiment described above uses the learning result database 93A including the building attribute information 94. Therefore, when, for example, the attributes of the newly constructed building 17 differ from those of the existing building 16, a learning result of another building whose attributes are the same as or similar to those of the building 17 is selected from the learning result database 93A. The management controller 2A can acquire the selected learning result 52 from the host system 6 and assign services based on human behavior.
In the second to fourth embodiments, the behavior of people is learned as information unique to each building 16, 17 and stored in the learning result 52. Therefore, if the buildings 16 and 17 have the same attribute (for example, both are office buildings), the management controller 2A or the like provided in the building 17 acquires from the host system 6 the learning result 52 obtained by learning the behavior of people in the building 16. The management controller 2A installed in the building 17 can then efficiently use this learning result 52 to assign the service of the elevator 10 to the specified person.
[ modified examples ]
In the above embodiments, an example in which a plurality of elevators 10 are provided in one building has been described, but the elevator system of each embodiment can also be configured when only one elevator 10 is provided in a building.
The sensor 30 may be a three-dimensional measurement unit that outputs, as measurement data, three-dimensional image data obtained by scanning an object. As the three-dimensional measurement unit, for example, a laser range finder or a LiDAR sensor may be used. The three-dimensional image data output from the three-dimensional measurement unit is, for example, three-dimensional map data. The recognition unit 40 can then extract a region containing the object from the three-dimensional map data and recognize from that region whether the object is a person or a thing. The estimation unit 70 can estimate, as the behavior of the person, how the object moves within the region over time.
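As a minimal illustrative sketch of this idea (the box-shaped region, the use of NumPy, and the centroid tracking are assumptions, not the embodiment's algorithm), the region containing the object can be cut out of the three-dimensional map data and its movement between scans taken as the behavior to estimate:

```python
import numpy as np

def centroid_of_region(points: np.ndarray, lo: np.ndarray, hi: np.ndarray):
    """Extract the points inside an axis-aligned box (the candidate region
    containing the object) and return their centroid, or None if empty."""
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    region = points[inside]
    return region.mean(axis=0) if len(region) else None

# Hypothetical scans: 500 points in a 10 m cube; the object drifts 0.5 m in x.
scan_t0 = np.random.rand(500, 3) * 10.0
scan_t1 = scan_t0 + np.array([0.5, 0.0, 0.0])
box_lo, box_hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 10.0])
c0 = centroid_of_region(scan_t0, box_lo, box_hi)
c1 = centroid_of_region(scan_t1, box_lo, box_hi)
if c0 is not None and c1 is not None:
    displacement = c1 - c0  # movement over time, the input to behavior estimation
```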
It is to be noted that the present invention is not limited to the above-described embodiments, and it is needless to say that various other application examples and modifications can be obtained without departing from the gist of the present invention described in the claims.
For example, in the above embodiments, the configurations of the apparatus and the system are described in detail for ease of explanation, but the present invention is not necessarily limited to configurations including all of the elements described. A part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. For a part of the configuration of each embodiment, other configurations may be added, deleted, or substituted.
The control lines and information lines shown are those considered necessary for the description; not all control lines and information lines in an actual product are necessarily shown. In practice, almost all components may be considered to be connected to each other.

Claims (15)

1. An elevator system which controls the operation of an elevator, wherein,
the elevator system is provided with:
a measurement unit that measures an object in a measurement area and outputs measurement data;
a recognition unit that recognizes the object based on the measurement data;
an estimation unit that estimates a next behavior, using the elevator, of the object currently in the measurement area, based on a learning result indicating a behavior pattern of the object learned from past measurement data and on an actual behavior of the object currently recognized by the recognition unit;
a determination unit that determines whether the actual behavior of the object matches or does not match the next behavior of the object estimated by the estimation unit, and outputs a determination result;
a specifying unit that specifies a next behavior of the object based on the determination result; and
and an assigning unit that assigns a service to the elevator based on the next behavior of the object specified by the specifying unit.
2. The elevator system of claim 1,
the estimation unit estimates a next behavior of the object based on base regions obtained by dividing the measurement region into regions of a predetermined size, and
the determination unit determines, for each of the base regions, whether the actual behavior of the object indicated by the measurement data matches or does not match the next behavior of the object estimated by the estimation unit.
3. The elevator system of claim 2,
the specifying unit specifies that the object boards the car of the elevator based on the determination result indicating that the actual behavior of the object, which is a behavior of moving to the lobby of the elevator, approximates the next behavior of the object estimated by the estimation unit.
4. The elevator system of claim 2,
the specifying unit specifies that the object boards the car of the elevator when the determination result indicating that the actual behavior of the object, which is a behavior of moving to the lobby of the elevator, approximates the next behavior of the object estimated by the estimation unit continues under an arbitrary condition.
5. The elevator system of claim 3,
the learning result continuously indicates the behavior of the object specified by the specifying unit in the past.
6. The elevator system of claim 5,
the elevator system further includes:
a learning unit that, when the determination unit outputs a determination result indicating a mismatch, learns the behavior pattern of the object based on the actual behavior of the object and updates the learning result; and
and a storage unit for storing the learning result.
7. The elevator system of claim 6,
the learning unit changes, in accordance with the actual behavior of the object, a learning inference region that contains the object and is used for learning the behavior of the object, and learns the behavior pattern of the object based on the change in the learning inference region.
8. The elevator system of claim 7,
the storage unit manages the learning result in time series and, in response to a request from the estimation unit, outputs to the estimation unit the learning result corresponding to the time at which the measurement unit measured the object, which the estimation unit requires to estimate the next behavior of the object.
9. The elevator system of claim 8,
the storage unit manages the learning result in combination with attribute information of a building in which the elevator is installed, and outputs to the estimation unit the learning result combined with the attribute information corresponding to the building for which the estimation unit estimates the next behavior of the object.
10. The elevator system of claim 9, wherein,
the storage unit stores the learning result managed in time series in combination with the operation mode of the elevator at the time point when the learning result was updated, and
the estimation unit reads from the storage unit the learning result corresponding to the currently set operation mode and estimates a next behavior of the object.
11. The elevator system of claim 9, wherein,
the assigning unit assigns a service to an elevator whose car can be boarded by the object specified by the specifying unit as boarding the car, and controls the opening and closing of a door of the car of the elevator that has arrived at the lobby.
12. The elevator system of claim 10,
the elevator system includes a host system, a control device, and the measurement unit,
the host system includes the learning unit and the storage unit,
the control device has the recognition unit, the estimation unit, the determination unit, and the specifying unit, is connected to the host system via a network, and is installed in a building in which the elevator is installed,
the measurement unit is provided in the building, and
the recognition unit acquires the measurement data from the measurement unit.
13. The elevator system of claim 10,
the elevator system comprises a host system and a measurement device,
the host system includes the learning unit and the storage unit, and
the measurement device has the measurement unit, the recognition unit, the estimation unit, the determination unit, and the specifying unit, is connected to the host system via a network, and is installed in a building in which the elevator is installed.
14. The elevator system of claim 10,
the elevator system is provided with a host system,
the host system includes the recognition unit, the estimation unit, the determination unit, the learning unit, and the storage unit, and
the measurement unit and the assigning unit are connected to the host system via a network and are installed in a building in which the elevator is installed.
15. The elevator system of any of claims 1-14,
the measurement unit is an imaging unit that outputs, as the measurement data, imaging data obtained by imaging the object, and
the recognition unit recognizes, based on the imaging data, the object that moves with the passage of time.
CN201911134281.9A 2018-12-25 2019-11-19 Elevator system Active CN111377313B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018240568A JP7136680B2 (en) 2018-12-25 2018-12-25 elevator system
JP2018-240568 2018-12-25

Publications (2)

Publication Number Publication Date
CN111377313A true CN111377313A (en) 2020-07-07
CN111377313B CN111377313B (en) 2023-01-06

Family

ID=71140967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911134281.9A Active CN111377313B (en) 2018-12-25 2019-11-19 Elevator system

Country Status (2)

Country Link
JP (1) JP7136680B2 (en)
CN (1) CN111377313B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI789180B (en) * 2021-12-24 2023-01-01 翱翔智慧股份有限公司 Human flow tracking method and analysis method for elevator

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7294500B1 (en) 2022-05-26 2023-06-20 三菱電機株式会社 Vehicle allocation device, vehicle allocation system and vehicle allocation program
CN116663748B (en) * 2023-07-26 2023-11-03 常熟理工学院 Elevator dispatching decision-making method and system based on cyclic neural network

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01203185A (en) * 1988-02-04 1989-08-15 Toshiba Corp Group controller for elevator
US5354957A (en) * 1992-04-16 1994-10-11 Inventio Ag Artificially intelligent traffic modeling and prediction system
CN1243493A (en) * 1998-01-19 2000-02-02 三菱电机株式会社 Elavator management control apparatus
WO2001010763A1 (en) * 1999-08-03 2001-02-15 Mitsubishi Denki Kabushiki Kaisha Apparatus for group control of elevators
JP2006096517A (en) * 2004-09-29 2006-04-13 Mitsubishi Electric Corp Elevator control system
US20090057068A1 (en) * 2006-01-12 2009-03-05 Otis Elevator Company Video Aided System for Elevator Control
JP2013056720A (en) * 2011-09-07 2013-03-28 Toshiba Elevator Co Ltd Elevator operation control method, and device and system for controlling elevator using the same
JP2016103690A (en) * 2014-11-27 2016-06-02 キヤノン株式会社 Monitoring system, monitoring apparatus, and monitoring method
JP2017030894A (en) * 2015-07-30 2017-02-09 株式会社日立製作所 Group management elevator apparatus
CN107526815A (en) * 2017-08-28 2017-12-29 知谷(上海)网络科技有限公司 The determination method and electronic equipment of Move Mode in the range of target area
WO2018012044A1 (en) * 2016-07-11 2018-01-18 株式会社日立製作所 Elevator system and car call prediction method
WO2018116862A1 (en) * 2016-12-22 2018-06-28 ソニー株式会社 Information processing device and method, and program
CN108584579A (en) * 2018-04-24 2018-09-28 姜盎然 A kind of intelligent elevator management control system and method based on passenger demand


Also Published As

Publication number Publication date
CN111377313B (en) 2023-01-06
JP2020100488A (en) 2020-07-02
JP7136680B2 (en) 2022-09-13

Similar Documents

Publication Publication Date Title
CN111377313B (en) Elevator system
CN110723609B (en) Elevator control method, device, system, computer equipment and storage medium
JP6742962B2 (en) Elevator system, image recognition method and operation control method
US9834405B2 (en) Method and system for scheduling elevator cars in a group elevator system with uncertain information about arrivals of future passengers
CN109311622B (en) Elevator system and car call estimation method
Kwon et al. Sensor-aware elevator scheduling for smart building environments
CN111836771B (en) Elevator system
EP3816081B1 (en) People flow prediction method and people flow prediction system
CN111232772A (en) Method, system, computer-readable storage medium for controlling operation of elevator
CN110750603B (en) Building service prediction method, building service prediction device, building service prediction system, computer equipment and storage medium
CN111344244B (en) Group management control device and group management control method
JPS5939669A (en) Traffic information gathering device for elevator
JP2003221169A (en) Elevator control device
EP3929125A1 (en) Travel-speed based predictive dispatching
KR102515719B1 (en) Vision recognition interlocking elevator control system
CN115783915A (en) Control method, system, equipment and storage medium of building equipment
JP2017030893A (en) Group management elevator apparatus
CN112299176B (en) Method and system for elevator congestion prediction
JP6687266B1 (en) Elevator control system, its control method and program
JP7380921B1 (en) Facility guidance interlocking building system, information processing device, facility guidance method, and computer-readable recording medium
KR102557342B1 (en) System and method for controlling operation of sensor for detecting intruder
JPH072436A (en) Elevator controller
JP7400044B1 (en) Car estimating device and car estimating method
CN113401748A (en) Elevator target layer prediction method, device, computer equipment and storage medium
JP2005206280A (en) Elevator system and group management control device for the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant