WO2021149107A1 - Elevator control system - Google Patents

Elevator control system

Info

Publication number
WO2021149107A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
group
floor
user
elevator
Application number
PCT/JP2020/001706
Other languages
French (fr)
Japanese (ja)
Inventor
祐貴 梅田 (Yuki Umeda)
釜坂 等 (Hitoshi Kamasaka)
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority to PCT/JP2020/001706 priority Critical patent/WO2021149107A1/en
Priority to JP2021572129A priority patent/JP7276517B2/en
Publication of WO2021149107A1 publication Critical patent/WO2021149107A1/en

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B66 — HOISTING; LIFTING; HAULING
    • B66B — ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 3/00 — Applications of devices for indicating or signalling operating conditions of elevators

Definitions

  • This disclosure relates to an elevator control system.
  • Patent Document 1 discloses an elevator control system.
  • The control system forecasts the traffic demand for elevator car dispatch.
  • The control system controls car allocation based on the prediction.
  • However, the elevator control system described in Patent Document 1 predicts the traffic demand for car dispatch by detecting registered individuals. Therefore, when users behave differently from usual by forming a group of multiple people, the control system cannot predict that behavior. That is, the control system cannot accurately predict traffic demand. As a result, the convenience of elevator users is not improved.
  • An object of the present disclosure is to provide an elevator control system capable of improving user convenience.
  • The elevator control system according to the present disclosure includes: a processing device that determines the group to which elevator users belong by using imaging information in which users present at an elevator landing are recorded, and that creates, as group attribute information, the overall characteristics of the group, including the characteristics of the users belonging to the group; and an inference device that creates information on the predicted disembarkation floor of the users by using the group attribute information.
  • According to the present disclosure, the elevator control system predicts the disembarkation floor of an unspecified number of user groups by using the imaging information from the imaging device. Therefore, user convenience can be improved.
  • FIG. 1 is a schematic diagram of the elevator in Embodiment 1.
  • FIG. 2 is a block diagram of the elevator control system in Embodiment 1.
  • FIG. 3 is an example of group information stored in the elevator usage status database in Embodiment 1.
  • FIGS. 4 and 5 are flowcharts of the group information creation processing performed by the elevator processing device in Embodiment 1.
  • FIG. 6 is a block diagram of the elevator learning device in Embodiment 1.
  • FIG. 7 is an example of a machine learning method performed by the elevator learning device in Embodiment 1.
  • FIG. 8 is an example of group information used for learning in the elevator learning device in Embodiment 1.
  • FIG. 9 is a block diagram of the elevator inference device in Embodiment 1.
  • FIG. 10 is an example of an inference result produced by the elevator inference device in Embodiment 1.
  • FIG. 11 is a flowchart of the learning operation performed by the elevator learning device in Embodiment 1.
  • FIG. 12 is a flowchart of the inference operation performed by the elevator inference device in Embodiment 1.
  • FIG. 13 is a hardware configuration diagram of the elevator inference device in Embodiment 1.
  • FIG. 14 is a schematic diagram of the elevator in Embodiment 2.
  • FIG. 15 is a display example of the elevator car dispatch schedule display board in Embodiment 2.
  • FIG. 16 is a block diagram of the elevator control panel in Embodiment 3.
  • FIG. 1 is a schematic view of the elevator according to the first embodiment.
  • Building 1 has two or more floors.
  • The hoistway runs through each floor of building 1.
  • A machine room (not shown) is provided directly above the hoistway.
  • Each of the plurality of landings 2 is provided on each floor of the building 1. Each of the plurality of landings 2 faces the hoistway.
  • A plurality of image pickup devices 3 are provided on the floors of the building 1.
  • The image pickup device 3 is provided at a position from which the landings 2 on each floor can be imaged.
  • For example, the image pickup device 3 is provided at a position from which the behavior of users coming and going at the landings 2 can be imaged.
  • For example, the plurality of image pickup devices 3 photograph the landings 2 on each floor.
  • The plurality of image pickup devices 3 output imaging information.
  • The hoisting machine 4 is provided in the machine room.
  • The main rope 5 is wound around the hoisting machine 4.
  • The car 6 is provided inside the hoistway.
  • The car 6 is hung on one side of the main rope 5.
  • A counterweight (not shown) is provided inside the hoistway. The counterweight is hung on the other side of the main rope 5.
  • Each of the plurality of entrances 7 is provided between the hoistway and one of the landings 2 on each floor. Users pass through the doorway 7 to move between the landing 2 and the car 6.
  • For example, the control panel 8 is provided in the machine room.
  • For example, the control panel 8 is provided at the upper part of the hoistway.
  • The control panel 8 is electrically connected to the image pickup devices 3 by wire or wirelessly.
  • The control panel 8 is electrically connected to the hoisting machine 4 by wire or wirelessly.
  • As the control system, the control panel 8 controls the operation of the elevator system.
  • The control panel 8 controls the movement of the hoisting machine 4.
  • The control panel 8 controls the allocation of one or more cars 6.
  • The control panel 8 receives imaging information from the image pickup devices 3.
  • The control panel 8 stores the imaging information.
  • The control panel 8 uses the imaging information to predict the floor on which a user at the landing 2 will get off.
  • The control panel 8 includes a prediction model (not shown) for predicting the disembarkation floor of the user.
  • The control panel 8 controls the operation of the elevator system by using the predicted disembarkation floor information. The outline is described below.
  • The imaging device 3 images a plurality of users who have arrived at the landing 2 and creates imaging information.
  • The imaging device 3 outputs the imaging information to the control panel 8.
  • The control panel 8 uses the imaging information to treat the plurality of users as one group.
  • The control panel 8 creates personal attribute information for each user from the user's appearance characteristics.
  • The control panel 8 creates group attribute information from the personal attribute information of the plurality of users belonging to the group.
  • The control panel 8 inputs the group attribute information into the prediction model (not shown). As a result, the control panel 8 predicts the floor on which the group will get off the elevator.
  • The control panel 8 registers the floor on which the group is predicted to disembark as a destination floor of the car 6. After that, the control panel 8 dispatches the car.
  • The control panel 8 creates the prediction model using the stored imaging information. The outline is described below.
  • The imaging device 3 images a group of users who have gotten off the elevator and are present at the landing 2, and creates imaging information.
  • The imaging device 3 outputs the imaging information to the control panel 8.
  • The control panel 8 combines the group attribute information of the user group with the disembarkation floor and stores the result as group information.
  • The control panel 8 also integrates the group attribute information and the disembarkation floor with several other pieces of information.
  • For example, the information to be integrated includes the boarding floor, boarding time, alighting time, and floor information of the disembarkation floor of the user group.
  • The control panel 8 creates the prediction model using the plurality of stored group information records.
  • The prediction model is used to predict the disembarkation floor of a user.
  • FIG. 2 is a block diagram of the elevator control system according to the first embodiment.
  • The control panel 8 includes a processing device 10, a learning device 20, an inference device 30, a car allocation analysis device 40, and a car dispatch device 50.
  • The processing device 10 includes a conversion unit 11, an individual image extraction unit 12, a grouping unit 13, a human attribute determination unit 14, a boarding/alighting determination unit 15, an incidental information unit 16, an integration unit 17, and a group information storage unit 18.
  • The conversion unit 11 receives the imaging information from the imaging device 3.
  • The conversion unit 11 converts the imaging information to create the first image information, which is a still image.
  • The conversion unit 11 identifies one or more users recorded in the first image information.
  • The conversion unit 11 creates behavioral characteristic information, obtained from the imaging information, for each of the users.
  • For example, the behavioral characteristic information includes information on the user's position in the image, information on the user's moving direction, and information on the user's moving speed.
  • The conversion unit 11 creates the second image information, which includes the first image information and the behavioral characteristic information.
  • The conversion unit 11 determines whether or not a user in the second image information is in the unprocessed state. For example, when the target user has not been given group ID information, the conversion unit 11 determines that the user is unprocessed.
  • The individual image extraction unit 12 creates the third image information by extracting an image of only one user from the second image information.
  • For example, the third image information is an image, obtained by trimming the second image information, in which the whole body of one user is recorded.
  • For example, the third image information is an image, obtained by trimming the second image information, in which the face of one user is recorded.
  • The grouping unit 13 classifies the users recorded in the second image information into one or more groups. For example, the grouping unit 13 determines that a plurality of users having similar behavioral characteristic information belong to the same group.
  • The grouping unit 13 creates group ID information unique to the group to which a user belongs. If the group to which the user belongs does not yet have a unique group ID, the grouping unit 13 assigns new group ID information to the group.
  • The grouping unit 13 updates the second image information by adding the user's group ID information to the second image information.
  • The grouping unit 13 outputs the group ID information to the integration unit 17. A toy version of this grouping rule is sketched below.
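  • The patent does not specify the exact similarity rule, so the following is a minimal sketch of the grouping step under assumed thresholds; User, similar, and assign_groups are hypothetical names, and the features (position, heading, speed) follow the behavioral characteristic information described above.

```python
import math
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    position: tuple[float, float]  # (x, y) at the landing, in meters (assumed)
    heading: float                 # moving direction in radians
    speed: float                   # moving speed in m/s
    group_id: str = ""             # empty string = "unprocessed state"

def similar(a: User, b: User,
            max_dist: float = 1.5, max_heading: float = 0.5,
            max_speed: float = 0.3) -> bool:
    """Toy similarity rule: close together and moving the same way."""
    return (math.dist(a.position, b.position) <= max_dist
            and abs(a.heading - b.heading) <= max_heading
            and abs(a.speed - b.speed) <= max_speed)

def assign_groups(users: list[User], next_id: int = 1) -> list[User]:
    """Greedy grouping: join a similar user's group, else open a new one."""
    for u in users:
        if u.group_id:                       # already processed
            continue
        for v in users:
            if v is not u and v.group_id and similar(u, v):
                u.group_id = v.group_id      # reuse the existing group ID
                break
        else:
            u.group_id = f"A{1000 + next_id}"  # issue a new group ID
            next_id += 1
    return users

users = [User("u1", (0.0, 0.0), 1.6, 1.1),
         User("u2", (0.8, 0.3), 1.5, 1.0),
         User("u3", (6.0, 2.0), -1.4, 0.6)]
for u in assign_groups(users):
    print(u.user_id, u.group_id)  # u1 and u2 share a group; u3 is alone
```

  A greedy rule like this mirrors steps S007, S009, and S017 of the flowchart described later: reuse a similar user's group ID when one exists, otherwise issue a new one.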
  • The human attribute determination unit 14 includes a human attribute estimation model for estimating human attributes from image information.
  • Human attributes mean personal characteristics that are inferred from a person's appearance.
  • For example, the human attribute estimation model is a model created by machine learning outside the elevator system.
  • The human attribute determination unit 14 creates the personal attribute information of each user recorded in the second image information. For example, the human attribute determination unit 14 estimates the height, gender, age, clothing, and the like of a user recorded in the second image information. The human attribute determination unit 14 integrates the estimated classification items to create the human attribute information.
  • The boarding/alighting determination unit 15 determines the boarding state and the alighting state of a user by using the behavioral characteristic information of the imaging information.
  • The boarding/alighting determination unit 15 determines whether the user is in the state of having just gotten off the elevator or in the state of waiting to get on the elevator. After that, the boarding/alighting determination unit 15 creates boarding floor information and disembarkation floor information.
  • For example, if a user on the 3rd floor is waiting to board, the boarding/alighting determination unit 15 determines that the user is in the boarding waiting state. After that, the boarding/alighting determination unit 15 creates boarding floor information "3" and disembarkation floor information "0" for the user. The "3" in the boarding floor information means that the user is waiting to board on the 3rd floor. The "0" in the disembarkation floor information means that the user is not in the disembarked state.
  • For example, if a user on the 3rd floor has just gotten off the elevator, the boarding/alighting determination unit 15 determines that the user is in the state after disembarking. After that, the boarding/alighting determination unit 15 creates boarding floor information "0" and disembarkation floor information "3" for the user.
  • The "0" in the boarding floor information means that the user is not in the boarding waiting state.
  • The "3" in the disembarkation floor information means that the user disembarked on the 3rd floor. A sketch of this encoding convention follows.
  • The incidental information unit 16 gives a user incidental information, which is information other than information about the individual user.
  • For example, the incidental information unit 16 gives a user recorded in the second image information the information of the date and time at which the second image information was captured.
  • For example, the incidental information unit 16 gives a user recorded in the second image information the information of the floor on which the second image information was captured.
  • For example, the incidental information unit 16 gives a user recorded in the second image information the floor information of the floor on which the second image information was captured.
  • For example, the floor information indicates that the 5th floor is a toy department and that the 12th floor is a restaurant.
  • The integration unit 17 acquires the second image information from the conversion unit 11.
  • The integration unit 17 acquires the third image information from the individual image extraction unit 12.
  • The integration unit 17 acquires the group ID information from the grouping unit 13.
  • The integration unit 17 acquires the human attribute information from the human attribute determination unit 14.
  • The integration unit 17 acquires the boarding floor information and the disembarkation floor information from the boarding/alighting determination unit 15.
  • The integration unit 17 acquires the incidental information from the incidental information unit 16.
  • The integration unit 17 integrates the information on a user recorded in the second image information to create the user information. For example, the integration unit 17 integrates the third image information, group ID information, personal attribute information, disembarkation floor information, boarding floor information, and incidental information of a user recorded in the second image information to create the user information.
  • The group information storage unit 18 acquires the user information from the integration unit 17.
  • The group information storage unit 18 adds the user information to the group information in which user information is stored.
  • The group information storage unit 18 stores the group information to which the user information has been added.
  • For example, when group information having the same group ID as user A already exists, the group information storage unit 18 adds the user information of user A to that group information.
  • For example, when no group information having the same group ID as user B exists, the group information storage unit 18 creates new group information having that group ID and adds the user information of user B to the new group information.
  • The group information storage unit 18 creates the boarding floor information of a group by using the boarding floor information of the users belonging to the group.
  • The group information storage unit 18 creates the disembarkation floor information of a group by using the disembarkation floor information of the users belonging to the group.
  • The group information storage unit 18 adds the boarding floor information and the disembarkation floor information to the group information.
  • The group information storage unit 18 integrates the human attribute information, individual image information, and incidental information of the users belonging to a group to create the group attribute information, which is the attribute information of the entire group.
  • For a new group, the group information storage unit 18 newly creates the group attribute information.
  • The group information storage unit 18 adds the group attribute information to the group information.
  • The learning device 20 performs so-called supervised learning using group information as teacher data.
  • The learning device 20 creates a trained inference model that infers the disembarkation floor of users by using their group information.
  • The inference device 30 includes the trained inference model created by the learning device 20.
  • The inference device 30 infers the disembarkation floor of users by using their group information.
  • The car allocation analysis device 40 acquires the predicted disembarkation floor information from the inference device 30. For example, the car allocation analysis device 40 calculates optimum car allocation information from the call information registered from the call buttons on each floor, the destination registration information registered from inside the car 6, and the predicted disembarkation floor information, as sketched below. For example, the car allocation analysis device 40 uses DOAS (Destination Oriented Allocation System, an elevator destination forecast system) to calculate the optimum car allocation information.
  • The car dispatch device 50 acquires the car allocation information from the car allocation analysis device 40. For example, the car dispatch device 50 allocates the car 6 based on the car allocation information. For example, the car dispatch device 50 outputs a drive command to the hoisting machine 4 in order to dispatch the car 6.
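  • DOAS itself is not described in this document, so the following is a toy stand-in for the allocation step, not the actual DOAS algorithm: it merely merges hall calls, in-car destination registrations, and predicted disembarkation floors into a sorted stop list; build_stop_list is a hypothetical name.

```python
def build_stop_list(hall_calls, car_calls, predicted_floors):
    """Return the set of floors a car should serve, in ascending order."""
    return sorted(set(hall_calls) | set(car_calls) | set(predicted_floors))

hall_calls = [1]   # call buttons pressed at the landings
car_calls = [5]    # destinations registered inside car 6
predicted = [3]    # predicted disembarkation floor for a group
print(build_stop_list(hall_calls, car_calls, predicted))  # [1, 3, 5]
```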
  • FIG. 3 is an example of group information stored in the elevator usage status database according to the first embodiment.
  • The group information includes a group ID, the boarding floor information of the group, the disembarkation floor information of the group, and the group attribute information.
  • The group attribute information includes the personal attribute information, incidental information, and individual image information of the users belonging to the group.
  • For example, the human attribute information includes gender, height, age, body shape, clothing, and other appearance information.
  • Group A1005 means the group whose group ID is A1005.
  • Group A1005 is a group to which one female C belongs. The height of female C is about 160 cm. Female C is about 30 years old. The body shape of female C is thin. Female C wears a red jacket, white trousers, and glasses.
  • The group information of group A1005 records that female C boarded at the entrance floor, the 1st floor, on October 1, 2019. "Yes" in the column of group A1005 means that the individual image extraction unit 12 created the third image information of the whole body and face of female C.
  • Group A1006 is a group to which one female belongs.
  • Group A1006 has the same group attributes as group A1005. Therefore, group A1006 is treated as the same group as group A1005.
  • That is, the female belonging to group A1006 is female C.
  • The group information of group A1006 records that female C got off at the women's clothing section on the 3rd floor on October 1, 2019. Written as a plain record, the group A1005 row might look like the sketch below.
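  • The following sketch mirrors the group A1005 row of FIG. 3 as a plain record; the field names are hypothetical, while the values follow the example in the text.

```python
group_a1005 = {
    "group_id": "A1005",
    "boarding_floor": "1",         # entrance floor, 1st floor
    "alighting_floor": "0",        # "0": getting off not yet observed
    "members": [{
        "gender": "female",        # female C
        "height_cm": 160,
        "age": 30,
        "body_shape": "thin",
        "clothing": ["red jacket", "white trousers", "glasses"],
        "individual_images": {"whole_body": True, "face": True},  # "Yes"
    }],
    "incidental": {"datetime": "2019-10-01", "floor_info": "entrance floor"},
}
print(group_a1005["group_id"], group_a1005["boarding_floor"])
```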
  • FIGS. 4 and 5 are flowcharts of the group information creation processing performed by the elevator processing device according to the first embodiment.
  • In step S001, the conversion unit 11 acquires the imaging information from the imaging device 3.
  • In step S002, the conversion unit 11 creates continuous still-image information from the imaging information received from the imaging device 3 as the first image information.
  • In step S003, the conversion unit 11 identifies one or more users recorded in the first image information.
  • The conversion unit 11 creates behavioral characteristic information for each of the users based on the continuous still-image information.
  • The conversion unit 11 creates the second image information, which includes the first image information and the created behavioral characteristic information.
  • In step S004, the conversion unit 11 designates one user recorded in the second image information as the first user to be processed.
  • In step S005, the conversion unit 11 determines whether or not the first user is in the unprocessed state. For example, the conversion unit 11 determines that the first user is unprocessed when no group ID information has been given to the first user.
  • When the conversion unit 11 determines in step S005 that the first user is unprocessed, the operation of step S006 is performed. In step S006, the individual image extraction unit 12 creates the third image information of the first user by using the second image information.
  • In step S007, the grouping unit 13 determines whether or not a second user, that is, a user whose behavioral characteristic information is similar to that of the first user, exists in the second image information.
  • When, in step S007, a user whose behavioral characteristic information is similar to that of the first user exists in the second image information, the operation of step S008 is performed.
  • In step S008, the grouping unit 13 determines whether or not the second user has group ID information.
  • When, in step S008, the second user has group ID information, the operation of step S009 is performed.
  • In step S009, the grouping unit 13 creates group ID information for the first user with the same group ID as the second user.
  • The grouping unit 13 outputs the created group ID information to the integration unit 17.
  • The grouping unit 13 assigns the created group ID information to the first user in the second image information.
  • In step S010, the human attribute determination unit 14 creates the human attribute information of the first user.
  • After that, the operation of step S011 is performed.
  • In step S011, the boarding/alighting determination unit 15 creates the boarding floor information and the disembarkation floor information of the first user based on the behavioral characteristic information of the first user.
  • In step S012, the incidental information unit 16 creates the incidental information from the second image information.
  • In step S013, the integration unit 17 acquires the third image information, the group ID information, the personal attribute information, the boarding floor information, the disembarkation floor information, and the incidental information of the first user.
  • The integration unit 17 integrates these pieces of information to create the user information of the first user.
  • In step S014, the group information storage unit 18 acquires the user information of the first user.
  • The group information storage unit 18 determines whether or not, among the plurality of group information records stored in the group information storage unit 18, there is one having the same group ID as the group ID of the first user.
  • If group information having the same group ID as the first user exists in step S014, the operation of step S015 is performed.
  • In step S015, the group information storage unit 18 adds the user information of the first user to that group information.
  • In step S016, the conversion unit 11 determines whether or not there is a user in the second image information who does not have group ID information.
  • If, in step S016, there is no user without group ID information in the second image information, the processing device 10 ends the processing of the imaging information.
  • If the conversion unit 11 does not determine in step S005 that the first user is unprocessed, or if in step S016 there is a user without group ID information in the second image information, the operation of step S004 is performed.
  • When, in step S007, no user whose behavioral characteristic information is similar to that of the first user exists in the second image information, or when, in step S008, the second user does not have group ID information, the operation of step S017 is performed.
  • In step S017, the grouping unit 13 creates group ID information for the first user with a new group ID.
  • The grouping unit 13 outputs the new group ID information to the integration unit 17.
  • The grouping unit 13 assigns the new group ID information to the first user in the second image information.
  • After that, the operations from step S010 onward are performed.
  • If no group information having the same group ID as the first user exists in step S014, the operation of step S018 is performed.
  • In step S018, the group information storage unit 18 creates new group information including the group ID of the first user.
  • The group information storage unit 18 adds the user information of the first user to the new group information.
  • In step S019, the group information storage unit 18 creates the group attribute information, incidental information, individual image information, boarding floor information, and disembarkation floor information of the new group using the user information of the first user.
  • After that, the operations from step S016 onward are performed. A compact sketch of this overall flow follows.
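  • The compact sketch promised above: a runnable paraphrase of the S004-S018 loop with trivial stand-ins for the conversion, grouping, integration, and storage units; all names are hypothetical, and "similar behavior" is reduced to an exact feature match for brevity.

```python
def process_imaging_info(users, group_db, floor, next_id=1):
    """users: list of dicts with 'features' and an empty 'group_id'."""
    for u in users:                              # S004/S005: next unprocessed user
        if u["group_id"]:
            continue
        match = next((v for v in users           # S007: similar behavior exists?
                      if v is not u and v["group_id"]
                      and v["features"] == u["features"]), None)
        if match:
            u["group_id"] = match["group_id"]    # S009: reuse the group ID
        else:
            u["group_id"] = f"A{1000 + next_id}" # S017: issue a new group ID
            next_id += 1
        record = {"group_id": u["group_id"],     # S013: integrate user info
                  "features": u["features"], "floor": floor}
        group_db.setdefault(u["group_id"], []).append(record)  # S015/S018: store
    return group_db

db = {}
frame = [{"features": "walk-east-fast", "group_id": ""},
         {"features": "walk-east-fast", "group_id": ""},
         {"features": "stand-still", "group_id": ""}]
process_imaging_info(frame, db, floor=3)
print({gid: len(members) for gid, members in db.items()})  # {'A1001': 2, 'A1002': 1}
```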
  • FIG. 6 is a block diagram of the learning device of the elevator according to the first embodiment.
  • The learning device 20 includes a usage status database 21 (hereinafter, usage status DB 21), a learning data acquisition unit 22, and a model generation unit 23.
  • The usage status DB 21 acquires group information from the processing device 10.
  • The usage status DB 21 stores the group information.
  • For example, the usage status DB 21 rewrites the disembarkation floor information of already-stored group information with the disembarkation floor information of newly acquired group information of the same group.
  • For example, the usage status DB 21 stores some group information in which the disembarkation floor information is "0".
  • Suppose the usage status DB 21 acquires group information D, in which the disembarkation floor information is "3" and the boarding floor information is "0".
  • The usage status DB 21 selects, from the group information already stored, group information E whose group attribute information is similar to that of group information D.
  • The usage status DB 21 determines that group information D and group information E are information on the same group.
  • The disembarkation floor information of group information E is "0".
  • The boarding floor information of group information E is "1".
  • The usage status DB 21 rewrites the disembarkation floor information of group information E to the disembarkation floor information of group information D. That is, the disembarkation floor information of group information E becomes "3". In this way, the usage status DB 21 stores the disembarkation floor information and the boarding floor information of the same group as a pair, as sketched below.
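  • A minimal sketch of this pairing step, shaped after the group information D and E example above; the field names and the "same group" test (identical group attribute information) are simplifying assumptions.

```python
stored = [  # group information E: boarded on floor 1, alighting not yet seen
    {"attrs": ("female", 160, 30), "boarding_floor": "1", "alighting_floor": "0"},
]
acquired = {  # group information D: seen alighting on floor 3
    "attrs": ("female", 160, 30), "boarding_floor": "0", "alighting_floor": "3"}

for rec in stored:
    # "Same group" stand-in: identical group attribute information.
    if rec["alighting_floor"] == "0" and rec["attrs"] == acquired["attrs"]:
        rec["alighting_floor"] = acquired["alighting_floor"]  # now "3"
        break

print(stored[0])  # boarding "1" and alighting "3" are now stored as a pair
```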
  • The usage status DB 21 determines whether or not a sufficient amount of group information for learning is stored. For example, when the usage status DB 21 stores group information for 1000 or more groups, it determines that a sufficient amount of group information for learning is stored. For example, when the usage status DB 21 stores group information covering a period of one week or more, it determines that a sufficient amount of group information for learning is stored.
  • The learning data acquisition unit 22 acquires the group attribute information, boarding floor information, and disembarkation floor information from the usage status DB 21.
  • The model generation unit 23 acquires the group attribute information, boarding floor information, and disembarkation floor information from the learning data acquisition unit 22.
  • The model generation unit 23 learns the predicted disembarkation floor information based on learning data created from combinations of the group attribute information, the boarding floor information, and the disembarkation floor information. That is, the model generation unit 23 generates a trained model that infers the optimum predicted disembarkation floor information from the group attribute information and boarding floor information of the elevator.
  • The learning device 20 is used to learn the predicted disembarkation floor information of elevator users. For example, it may be a device separate from the elevator, connected to the elevator control system via a network. The learning device 20 may also be built into the elevator system. Further, the learning device 20 may exist on a cloud server.
  • As the learning algorithm, known algorithms such as supervised learning, unsupervised learning, and reinforcement learning can be used.
  • For example, the model generation unit 23 learns the predicted disembarkation floor information by so-called supervised learning according to a neural network model.
  • Here, supervised learning refers to a method of giving sets of input and result (label) data to the learning device 20, thereby learning the features in the learning data and inferring the result from an input.
  • A neural network is composed of an input layer consisting of a plurality of neurons, an intermediate layer (hidden layer) consisting of a plurality of neurons, and an output layer consisting of a plurality of neurons.
  • The intermediate layer may be one layer or two or more layers.
  • The neural network learns the predicted disembarkation floor information by so-called supervised learning, according to learning data created based on combinations of the group attribute information, boarding floor information, and disembarkation floor information acquired by the learning data acquisition unit 22.
  • FIG. 7 is an example of a machine learning method performed by the elevator learning device according to the first embodiment.
  • The neural network learns by inputting the group attribute information and the boarding floor information into the input layer and adjusting the weights W1 and W2 so that the result output from the output layer approaches the disembarkation floor information. A runnable toy version of this training loop follows.
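  • The following is a runnable toy version of the FIG. 7 setup, assuming a small two-layer network in which encoded group attribute information and the boarding floor enter the input layer and the weights W1 and W2 are adjusted so that the output approaches the observed disembarkation floor; the sizes and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_floors = 6, 16, 8   # toy sizes, not from the patent

# Toy training data: each row encodes group attributes + boarding floor;
# each label is the observed disembarkation floor.
X = rng.random((200, n_in))
y = rng.integers(0, n_floors, 200)

W1 = rng.normal(0, 0.5, (n_in, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_floors))

def forward(x):
    h = np.tanh(x @ W1)                          # hidden layer
    logits = h @ W2                              # one score per floor
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)   # softmax over floors

lr = 0.1
for _ in range(300):                             # plain gradient descent
    h, p = forward(X)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0     # d(cross-entropy)/d(logits)
    grad_logits /= len(y)
    grad_h = (grad_logits @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    W2 -= lr * (h.T @ grad_logits)               # adjust W2
    W1 -= lr * (X.T @ grad_h)                    # adjust W1

_, p = forward(X)
print("predicted floor for the first sample:", int(p[0].argmax()))
```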
  • FIG. 8 is an example of group information used for learning in the learning device of the elevator according to the first embodiment.
  • The group information includes a group ID, the boarding floor information of the group, the disembarkation floor information of the group, and the group attribute information.
  • Group A1005 is the group to which female C belongs.
  • The group information of group A1005 records that female C boarded at the entrance floor, the 1st floor, on October 1, 2019, and then got off at the women's clothing section on the 3rd floor.
  • FIG. 9 is a block diagram of the elevator inference device according to the first embodiment.
  • The inference device 30 includes a trained model storage unit 31, a usage data acquisition unit 32, and an inference unit 33.
  • The trained model storage unit 31 acquires the trained model from the model generation unit 23.
  • The trained model storage unit 31 stores the trained model.
  • The usage data acquisition unit 32 acquires the group attribute information and boarding floor information from the processing device 10.
  • The inference unit 33 infers the predicted disembarkation floor information by using the trained model.
  • The inference unit 33 acquires the trained model from the trained model storage unit 31.
  • The inference unit 33 acquires the group attribute information and boarding floor information from the usage data acquisition unit 32. After that, the inference unit 33 inputs the group attribute information and the boarding floor information into the trained model.
  • The inference unit 33 creates the predicted disembarkation floor information inferred from the group attribute information and the boarding floor information, as in the sketch below.
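  • A minimal sketch of this inference step, reusing forward, W1, and W2 from the training sketch above; predict_floor and the input encoding are hypothetical.

```python
import numpy as np

def predict_floor(encoded_input: np.ndarray) -> str:
    """Encoded group attribute info + boarding floor in, floor label out."""
    _, p = forward(encoded_input.reshape(1, -1))  # forward pass only
    return str(int(p[0].argmax()))                # e.g. "3"

x = np.zeros(6)   # toy encoded INPUT for a group such as A1101
x[0] = 1.0
print("predicted disembarkation floor:", predict_floor(x))
```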
  • FIG. 10 is an example of an inference result produced by the elevator inference device according to the first embodiment.
  • The inference device 30 acquires the group attribute information and boarding floor information as INPUT information.
  • The inference device 30 outputs the predicted disembarkation floor information as OUTPUT information.
  • For example, group A1101 is a group to which female F belongs.
  • The group information of group A1101 records that female F boarded from the entrance on the 1st floor at 9:35 on October 2, 2019.
  • The inference device 30 applies the prediction model to the information of the group whose group ID is A1101, and predicts that the group will get off at the women's clothing section on the 3rd floor.
  • The inference device 30 creates the predicted disembarkation floor information "3".
  • FIG. 11 is a flowchart of the learning operation performed by the learning device of the elevator according to the first embodiment.
  • In step S101, the usage status DB 21 acquires the first group information, which is one group information record, from the processing device 10.
  • In step S102, the usage status DB 21 determines whether or not the first group information has disembarkation floor information. When the disembarkation floor information of the first group information is not "0", the usage status DB 21 determines that the first group information has disembarkation floor information.
  • When, in step S102, the first group information has disembarkation floor information, the operation of step S103 is performed.
  • In step S103, the usage status DB 21 determines whether or not stored group information exists whose disembarkation floor information is "0" and which is the same group as the acquired group information.
  • When, in step S103, group information whose disembarkation floor information is "0" and which is the same group as the acquired group information (hereinafter, the second group information) exists, the operation of step S104 is performed.
  • In step S104, the usage status DB 21 rewrites the disembarkation floor information of the second group information to the disembarkation floor information of the first group information.
  • In step S105, the usage status DB 21 determines whether or not a sufficient amount of group information for learning is stored.
  • When, in step S105, the usage status DB 21 stores a sufficient amount of group information for learning, the operation of step S106 is performed.
  • In step S106, the learning data acquisition unit 22 acquires the group attribute information, boarding floor information, and disembarkation floor information from the usage status DB 21 as learning data.
  • In step S107, the model generation unit 23 learns the predicted disembarkation floor information by so-called supervised learning, according to the learning data created based on combinations of the group attribute information, boarding floor information, and disembarkation floor information acquired by the learning data acquisition unit 22, and creates a trained model.
  • In step S108, the trained model storage unit 31 stores the trained model generated by the model generation unit 23.
  • In step S109, the usage status DB 21 stores the first group information.
  • After that, the operations from step S105 onward are performed.
  • FIG. 12 is a flowchart of the inference operation performed by the elevator inference device according to the first embodiment.
  • In step S201, the usage data acquisition unit 32 acquires the third group information from the processing device 10 as group information for inference.
  • In step S202, the usage data acquisition unit 32 determines whether or not the third group information has boarding floor information. When the boarding floor information of the third group information is not "0", the usage data acquisition unit 32 determines that the third group information has boarding floor information.
  • When, in step S202, the third group information has boarding floor information, the operation of step S203 is performed.
  • In step S203, the inference unit 33 acquires the group attribute information and boarding floor information possessed by the third group information.
  • In step S204, the inference unit 33 acquires the trained model stored in the trained model storage unit 31.
  • The inference unit 33 calculates the disembarkation floor of the third group using the trained model.
  • The inference unit 33 creates the predicted disembarkation floor information.
  • In step S205, the inference unit 33 outputs the predicted disembarkation floor information to the car allocation analysis device 40.
  • In step S206, the car allocation analysis device 40 calculates the elevator car allocation based on the predicted disembarkation floor information.
  • The car allocation analysis device 40 creates the car allocation information.
  • In step S207, the car dispatch device 50 acquires the car allocation information.
  • The car dispatch device 50 allocates the elevator car 6 based on the car allocation information.
  • As described above, the processing device 10 creates the group attribute information representing the characteristics of a group of elevator users by using the imaging information.
  • The inference device 30 creates the predicted disembarkation floor information of an unspecified number of user groups by using the group attribute information. Therefore, the inference device 30 can predict the behavior of users even when they behave differently from usual by forming a group of multiple people. This means that the prediction accuracy of the inference device 30 is improved. As a result, users benefit from more efficient car allocation. That is, the elevator control system can improve user convenience.
  • The control panel 8 includes the learning device 20, which creates a trained model for inferring the predicted disembarkation floor information by using the group attribute information and the disembarkation floor information. Therefore, the elevator control system can create a prediction model of users' disembarkation floors.
  • The control panel 8 includes the car allocation analysis device 40.
  • The car allocation analysis device 40 creates the elevator car allocation information using the predicted disembarkation floor information.
  • The car dispatch device 50 uses the car allocation information to dispatch the elevator car 6 in which the predicted disembarkation floor is registered as the destination. Therefore, an elevator user can board the car 6 without registering the destination floor himself or herself.
  • The conversion unit 11 does not have to be provided in the processing device 10.
  • For example, the conversion unit 11 may be provided in the imaging device 3.
  • In that case, the conversion unit outputs the converted imaging information to the processing device 10.
  • The learning device 20 periodically executes the learning process.
  • The frequency of the learning process performed by the learning device 20 is set in advance. For example, the learning device 20 executes the learning process once a week.
  • The amount of information stored in the usage status DB 21 increases as the storage period becomes longer. The larger the amount of information stored in the usage status DB 21, the better the prediction accuracy of the model.
  • The usage status DB 21 stores, as learning data, the correct disembarkation floor information of users for whom the inference device 30 calculated a wrong estimation result. Therefore, the learning device 20 creates a trained model capable of making more accurate inferences.
  • The learning data acquisition unit 22 may acquire the group attribute information, boarding floor information, and disembarkation floor information at different timings.
  • The learning data acquisition unit 22 can add an elevator system to, or remove one from, the set of systems from which learning data is collected, partway through. Further, a learning device that has learned the predicted disembarkation floor information for one elevator system may be applied to another elevator system, and the predicted disembarkation floor information may be relearned and updated for that other elevator system.
  • Although the case of using a neural network has been described, the present disclosure is not limited to this.
  • As the learning algorithm, it is also possible to apply reinforcement learning, unsupervised learning, semi-supervised learning, and the like, in addition to supervised learning.
  • The model generation unit 23 may learn the predicted disembarkation floor information according to learning data created for a plurality of elevator systems.
  • For example, the model generation unit 23 may acquire learning data from a plurality of elevator systems used in the same area.
  • The model generation unit 23 may also learn the predicted disembarkation floor information by using learning data collected from a plurality of elevator systems that operate independently in different areas.
  • In the model generation unit 23, deep learning, which learns the extraction of the feature quantities themselves, can also be used, and other known methods, such as genetic programming, functional logic programming, and support vector machines, may be used for the machine learning.
  • Although the inference device 30 has been described as outputting the predicted disembarkation floor information using the trained model learned by the model generation unit 23 of this elevator, it may acquire a trained model from the outside, such as from another elevator, and output the predicted disembarkation floor information based on that trained model.
  • In that case as well, the user can use an elevator that provides the usual convenience.
  • FIG. 13 is a hardware configuration diagram of the elevator inference device according to the first embodiment.
  • Each function of the inference device 30 can be realized by a processing circuit.
  • For example, the processing circuit includes at least one processor 100a and at least one memory 100b.
  • For example, the processing circuit includes at least one piece of dedicated hardware 200.
  • When the processing circuit includes at least one processor 100a and at least one memory 100b, each function of the inference device 30 is realized by software, firmware, or a combination of software and firmware. At least one of the software and the firmware is written as a program. At least one of the software and the firmware is stored in at least one memory 100b. At least one processor 100a realizes each function of the inference device 30 by reading and executing a program stored in at least one memory 100b. At least one processor 100a is also referred to as a central processing unit, a processing unit, an arithmetic unit, a microprocessor, a microcomputer, or a DSP.
  • For example, at least one memory 100b is a non-volatile or volatile semiconductor memory such as a RAM, a ROM, a flash memory, an EPROM, or an EEPROM, or a magnetic disk, a flexible disk, an optical disk, a compact disc, a mini disc, a DVD, or the like.
  • When the processing circuit includes at least one piece of dedicated hardware 200, the processing circuit is realized, for example, by a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC, an FPGA, or a combination thereof.
  • For example, each function of the inference device 30 is collectively realized by a processing circuit.
  • Alternatively, a part of the functions of the inference device 30 may be realized by the dedicated hardware 200, and the other part may be realized by software or firmware.
  • For example, the function of the inference unit 33 may be realized by a processing circuit as dedicated hardware 200, and the functions other than the function of the inference unit 33 may be realized by at least one processor 100a reading and executing a program stored in at least one memory 100b.
  • In this way, the processing circuit realizes each function of the inference device 30 by the hardware 200, software, firmware, or a combination thereof.
  • Each function of the imaging device 3 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • Each function of the processing device 10 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • Each function of the learning device 20 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • Each function of the car allocation analysis device 40 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • Each function of the car dispatch device 50 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • Each function of the external information device 70 is also realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
  • FIG. 14 is a schematic view of the elevator according to the second embodiment.
  • Parts that are the same as or correspond to those in the first embodiment are designated by the same reference numerals. Description of those parts is omitted.
  • In the second embodiment, the car allocation analysis device 40 has a function of outputting the car allocation information to the outside.
  • The car dispatch schedule display board 60 is provided on each floor of the building 1.
  • For example, the car dispatch schedule display board 60 is provided around the doorway 7 on each floor.
  • For example, the car dispatch schedule display board 60 is provided at a position easily visible to users at the landing 2.
  • The car dispatch schedule display board 60 is electrically connected to the car allocation analysis device 40 by wire or wirelessly.
  • The car dispatch schedule display board 60 acquires the elevator car allocation information from the car allocation analysis device 40.
  • The car dispatch schedule display board 60 displays the acquired car allocation information.
  • FIG. 15 is a display example of the elevator car dispatch schedule display board in the second embodiment.
  • The car dispatch schedule display board 60 displays information related to elevator car dispatch by using the car allocation information.
  • For example, the car dispatch schedule display board 60 displays the scheduled stop floors of a plurality of elevator cars.
  • For example, the car dispatch schedule display board 60 displays the estimated time until each elevator car arrives, as in the sketch below.
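  • As a sketch of what the display board might render from the car allocation information; the message format and field names are invented for illustration.

```python
def render_schedule(allocation_info: dict) -> str:
    """Format scheduled stop floors and arrival estimates per car."""
    lines = []
    for car, info in allocation_info.items():
        stops = ", ".join(str(f) for f in info["stops"])
        lines.append(f"Car {car}: stops at floors {stops}"
                     f" - arrives in about {info['eta_s']} s")
    return "\n".join(lines)

print(render_schedule({1: {"stops": [1, 3, 5], "eta_s": 20},
                       2: {"stops": [1, 12], "eta_s": 45}}))
```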
  • As described above, the car allocation analysis device 40 has the function of outputting the car allocation information to the car dispatch schedule display board 60.
  • The car dispatch schedule display board 60 conveys the car allocation information to elevator users. Therefore, before the cars arrive, a user can know the number of the elevator car that requires the least time to reach the target floor. The user can move near the door of the elevator car to board before the car arrives. As a result, the convenience of elevator users can be improved.
  • The notification device that notifies users of the car allocation information is not limited to the car dispatch schedule display board 60.
  • For example, the notification device may be a voice device capable of outputting the car allocation information by voice.
  • The voice device acquires the car allocation information from the car allocation analysis device 40.
  • The voice device notifies users of the car allocation information by outputting a voice representing the car allocation information.
  • FIG. 16 is a block diagram of the elevator control panel according to the third embodiment.
  • Parts that are the same as or correspond to those in the first embodiment are designated by the same reference numerals. Description of those parts is omitted.
  • In the third embodiment, the control panel 8 includes an external information device 70.
  • The external information device 70 receives arbitrary external information from an input terminal (not shown).
  • For example, the external information device 70 receives weather information, temperature information, information on events held in the building 1, sales information of the sales floors, and the like.
  • The incidental information unit 16 acquires the external information from the external information device 70.
  • The incidental information unit 16 creates incidental information including the external information.
  • The integration unit 17 acquires the incidental information including the external information from the incidental information unit 16.
  • The integration unit 17 creates user information including the external information.
  • The group information storage unit 18 acquires the user information including the external information from the integration unit 17.
  • The group information storage unit 18 creates group attribute information including the external information.
  • The learning device 20 acquires the group attribute information including the external information from the group information storage unit 18.
  • The learning device 20 uses the group attribute information including the external information at the time of learning. Therefore, the learning device 20 generates a trained model in which the external information is reflected.
  • The inference device 30 acquires the group attribute information including the external information from the group information storage unit 18.
  • The inference device 30 uses the group attribute information including the external information at the time of inference. Therefore, the inference device 30 creates predicted disembarkation floor information that reflects the external information, as in the sketch below.
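  • A minimal sketch of how the external information might be folded into the feature vector used for learning and inference; the encoding below is illustrative only.

```python
group_attrs = {"gender": "female", "height_cm": 160, "boarding_floor": 1}
external = {"weather": "rain", "temperature_c": 18, "event_in_building": True}

def encode(attrs: dict, ext: dict) -> list[float]:
    """Concatenate group attributes and external information as features."""
    return [
        1.0 if attrs["gender"] == "female" else 0.0,
        attrs["height_cm"] / 200.0,
        float(attrs["boarding_floor"]),
        1.0 if ext["weather"] == "rain" else 0.0,
        ext["temperature_c"] / 40.0,
        1.0 if ext["event_in_building"] else 0.0,
    ]

print(encode(group_attrs, external))  # the vector now reflects external info
```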
  • As described above, the control panel 8 includes the external information device 70, which receives arbitrary external information from the outside.
  • The learning device 20 generates a trained model in which the external information is reflected.
  • The inference device 30 outputs predicted disembarkation floor information that reflects the external information. Therefore, even if the external information changes, the predicted disembarkation floor information reflects the change. That is, the prediction accuracy of the predicted disembarkation floor information is improved. Elevator cars are dispatched more efficiently and more quickly. As a result, the convenience of elevator users can be improved.
  • The elevator control system according to the present disclosure can be used in an elevator system.

Abstract

Provided is an elevator control system that uses image information obtained from an imaging device to predict the floors at which an unspecified number of user groups will get off, thereby making it possible to improve user convenience. The elevator control system comprises: a processing device that, using image information in which users present at an elevator landing are recorded, identifies a group to which the elevator users belong, and that creates, as group attribute information, the overall characteristics of the group, including characteristics of the users belonging to the group; and an inference device that uses the group attribute information to create information on the floor at which the users are predicted to get off.

Description

Elevator control system
This disclosure relates to an elevator control system.
Patent Document 1 discloses an elevator control system. The control system forecasts the traffic demand for elevator car dispatch. The control system controls car allocation based on the prediction.
Japanese Patent No. 6417292
However, the elevator control system described in Patent Document 1 predicts the traffic demand for car dispatch by detecting registered individuals. Therefore, when users behave differently from usual by forming a group of multiple people, the control system cannot predict that behavior. That is, the control system cannot accurately predict traffic demand. As a result, the convenience of elevator users is not improved.
This disclosure was made to solve the above-mentioned problem. An object of the present disclosure is to provide an elevator control system capable of improving user convenience.
The elevator control system according to the present disclosure includes: a processing device that determines the group to which elevator users belong by using imaging information in which users present at an elevator landing are recorded, and that creates, as group attribute information, the overall characteristics of the group, including the characteristics of the users belonging to the group; and an inference device that creates information on the predicted disembarkation floor of the users by using the group attribute information.
According to the present disclosure, the elevator control system predicts the disembarkation floor of an unspecified number of user groups by using the imaging information from the imaging device. Therefore, user convenience can be improved.
FIG. 1 is a schematic diagram of the elevator in Embodiment 1. FIG. 2 is a block diagram of the elevator control system in Embodiment 1. FIG. 3 is an example of group information stored in the elevator usage status database in Embodiment 1. FIGS. 4 and 5 are flowcharts of the group information creation processing performed by the elevator processing device in Embodiment 1. FIG. 6 is a block diagram of the elevator learning device in Embodiment 1. FIG. 7 is an example of a machine learning method performed by the elevator learning device in Embodiment 1. FIG. 8 is an example of group information used for learning in the elevator learning device in Embodiment 1. FIG. 9 is a block diagram of the elevator inference device in Embodiment 1. FIG. 10 is an example of an inference result produced by the elevator inference device in Embodiment 1. FIG. 11 is a flowchart of the learning operation performed by the elevator learning device in Embodiment 1. FIG. 12 is a flowchart of the inference operation performed by the elevator inference device in Embodiment 1. FIG. 13 is a hardware configuration diagram of the elevator inference device in Embodiment 1. FIG. 14 is a schematic diagram of the elevator in Embodiment 2. FIG. 15 is a display example of the elevator car dispatch schedule display board in Embodiment 2. FIG. 16 is a block diagram of the elevator control panel in Embodiment 3.
Embodiments will be described with reference to the attached drawings. In each figure, identical or corresponding parts are given the same reference numerals, and duplicate descriptions of those parts are simplified or omitted as appropriate.
Embodiment 1.
FIG. 1 is a schematic diagram of the elevator according to the first embodiment.
In the elevator system of FIG. 1, the building 1 has two or more floors. A hoistway runs through each floor of the building 1. A machine room (not shown) is provided directly above the hoistway.
Each of the plurality of landings 2 is provided on a corresponding floor of the building 1 and faces the hoistway.
A plurality of imaging devices 3 are provided on the floors of the building 1. Each imaging device 3 is installed at a position from which it can capture the landings 2 on its floor, for example a position from which it can capture the behavior of users moving to and from the landings 2. The imaging devices 3 photograph the landings 2 on the respective floors and output imaging information.
The hoisting machine 4 is provided in the machine room. The main rope 5 is wound around the hoisting machine 4.
The car 6 is provided inside the hoistway and is suspended from one side of the main rope 5. A counterweight (not shown) is provided inside the hoistway and is suspended from the other side of the main rope 5.
Each of the plurality of entrances 7 is provided between the hoistway and a corresponding landing 2 on each floor. Users pass through an entrance 7 to move between a landing 2 and the car 6.
The control panel 8 is provided, for example, in the machine room or at the upper part of the hoistway. The control panel 8 is electrically connected to the imaging devices 3, by wire or wirelessly, and is likewise electrically connected to the hoisting machine 4, by wire or wirelessly.
As the control system, the control panel 8 controls the operation of the elevator system. It controls the motion of the hoisting machine 4 and the dispatch of one or more cars 6.
The control panel 8 receives imaging information from the imaging devices 3 and stores it.
Using the imaging information, the control panel 8 predicts the floor at which a user at a landing 2 will get off. For this purpose, the control panel 8 includes a prediction model (not shown).
For example, the control panel 8 controls the operation of the elevator system using the predicted exit floor information. An outline follows.
The imaging device 3 captures images of a plurality of users who have arrived at a landing 2, creates imaging information, and outputs it to the control panel 8.
Using the imaging information, the control panel 8 treats the plurality of users as one group. From each user's outward appearance, it creates personal attribute information for that user. From the personal attribute information of the users belonging to the group, it creates group attribute information.
The control panel 8 inputs the group attribute information into the prediction model (not shown), and thereby predicts the floor at which the group will get off the elevator.
The control panel 8 registers the floor at which the group is predicted to alight as a destination floor of the car 6, and then dispatches the elevator car.
The control panel 8 creates the prediction model using the stored imaging information. An outline follows.
The imaging device 3 captures images of a user group that has gotten off the elevator and is present at the landing 2, creates imaging information, and outputs it to the control panel 8.
The control panel 8 integrates the group attribute information of the user group with its exit floor and stores the result as group information. For example, the control panel 8 integrates the group attribute information, the exit floor, and several other items of information, specifically the user group's boarding floor, boarding time, alighting time, and the floor information of the exit floor.
Using the plurality of stored group information records, the control panel 8 creates the prediction model, which is used to predict users' exit floors.
Next, the elevator control system according to the first embodiment will be described with reference to FIG. 2.
FIG. 2 is a block diagram of the elevator control system according to the first embodiment.
As shown in FIG. 2, the control panel 8 includes a processing device 10, a learning device 20, an inference device 30, a dispatch analysis device 40, and a dispatch device 50.
The processing device 10 includes a conversion unit 11, an individual image extraction unit 12, a grouping unit 13, a personal attribute determination unit 14, a boarding/alighting determination unit 15, an incidental information unit 16, an integration unit 17, and a group information storage unit 18.
The conversion unit 11 receives the imaging information from the imaging device 3, converts it to create first image information consisting of still images, and identifies one or more users recorded in the first image information.
For each identified user, the conversion unit 11 creates behavioral characteristic information obtained from the imaging information. For example, the behavioral characteristic information includes the user's position within the captured image, the user's movement direction, and the user's movement speed.
The conversion unit 11 creates second image information comprising the first image information and the behavioral characteristic information.
The conversion unit 11 determines whether a user in the second image information is unprocessed. For example, if the target user has not been assigned group ID information, the conversion unit 11 determines that the user is unprocessed.
The individual image extraction unit 12 uses the second image information to create third image information, in which only a single user's image is extracted. For example, the third image information is an image of one user's whole body, or of one user's face, obtained by cropping the second image information.
The grouping unit 13 classifies the users recorded in the second image information into groups of one or more people. For example, the grouping unit 13 determines that users with similar behavioral characteristic information belong to the same group.
For each user, the grouping unit 13 creates group ID information unique to the group to which the user belongs. If that group does not yet have a unique group ID, the grouping unit 13 assigns new group ID information to it.
The grouping unit 13 updates the second image information by attaching the user's group ID information to it.
The grouping unit 13 outputs the group ID information to the integration unit 17.
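As a minimal sketch of how such behavior-based grouping might be judged, assuming a simple feature set and hand-picked thresholds (neither of which is specified in the patent), two users could be placed in the same group when they stand close together and move in roughly the same direction at roughly the same pace:

```python
from dataclasses import dataclass
import math

@dataclass
class Behavior:
    x: float          # position within the captured frame (pixels)
    y: float
    direction: float  # movement direction (radians)
    speed: float      # movement speed (pixels per frame)

def same_group(a: Behavior, b: Behavior,
               max_dist: float = 120.0,
               max_angle: float = 0.5,
               max_speed_diff: float = 0.8) -> bool:
    """Judge two users as one group when their behavioral characteristic
    information (position, direction, speed) is similar."""
    dist = math.hypot(a.x - b.x, a.y - b.y)
    # wrap the angle difference into [0, pi]
    angle = abs((a.direction - b.direction + math.pi) % (2 * math.pi) - math.pi)
    return (dist < max_dist
            and angle < max_angle
            and abs(a.speed - b.speed) < max_speed_diff)
```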
The personal attribute determination unit 14 includes a personal attribute estimation model for estimating personal attributes from image information. A personal attribute is an individual characteristic inferred from a person's appearance. For example, the personal attribute estimation model is a model created by machine learning outside this elevator system.
The personal attribute determination unit 14 creates personal attribute information for each user recorded in the second image information. For example, it estimates the height, gender, age, clothing, and so on of each recorded user, and integrates the estimated classification items into personal attribute information.
The boarding/alighting determination unit 15 uses the behavioral characteristic information of the imaging information to determine each user's boarding or alighting state, that is, whether the user has just gotten off the elevator or is waiting to board it. It then creates boarding floor information and exit floor information.
For example, if a user's behavioral characteristic information indicates that the user reached the landing 2 on the third floor of the building 1 without passing through the entrance 7, the boarding/alighting determination unit 15 determines that the user is waiting to board the elevator, and creates boarding floor information "3" and exit floor information "0" for the user. The boarding floor information "3" means that the user is waiting to board on the third floor; the exit floor information "0" means that the user is not in an alighted state.
Conversely, if a user's behavioral characteristic information indicates that the user reached the landing 2 on the third floor of the building 1 by passing through the entrance 7, the boarding/alighting determination unit 15 determines that the user has just gotten off the elevator, and creates boarding floor information "0" and exit floor information "3" for the user. The boarding floor information "0" means that the user is not waiting to board; the exit floor information "3" means that the user alighted on the third floor.
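As a compact illustration of this "0" convention (a sketch, not code from the patent), the determination reduces to:

```python
def classify_boarding(passed_entrance: bool, floor: int) -> tuple[int, int]:
    """Return (boarding_floor, exit_floor). A user who emerged through the
    entrance 7 has just alighted; anyone else at the landing is waiting to
    board. "0" marks whichever state does not apply."""
    if passed_entrance:
        return 0, floor   # alighted on this floor
    return floor, 0       # waiting to board on this floor

# classify_boarding(False, 3) -> (3, 0); classify_boarding(True, 3) -> (0, 3)
```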
The incidental information unit 16 attaches incidental information, that is, information other than information about the individual user, to each user. For example, it attaches to each user recorded in the second image information the date and time at which the second image information was captured, the floor on which it was captured, and the floor information of that floor. A concrete example of floor information: the fifth floor is a toy department and the twelfth floor is a restaurant.
The integration unit 17 acquires the second image information from the conversion unit 11, the third image information from the individual image extraction unit 12, the group ID information from the grouping unit 13, the personal attribute information from the personal attribute determination unit 14, the boarding floor information and exit floor information from the boarding/alighting determination unit 15, and the incidental information from the incidental information unit 16.
The integration unit 17 integrates the information on each user recorded in the second image information to create user information. For example, it integrates a user's third image information, group ID information, personal attribute information, exit floor information, boarding floor information, and incidental information into user information.
The group information storage unit 18 acquires the user information from the integration unit 17.
The group information storage unit 18 adds the user information to the group information it stores, and stores the updated group information.
For example, if the group ID in a user A's user information is a group ID the group information storage unit 18 already stores, it adds user A's user information to the group information having that group ID.
For example, if the group ID in a user B's user information is a group ID the group information storage unit 18 does not yet store, it creates new group information having that group ID and adds user B's user information to the new group information.
The group information storage unit 18 creates the group's boarding floor information from the boarding floor information of the users belonging to the group, and the group's exit floor information from their exit floor information, and adds both to the group information.
The group information storage unit 18 integrates the personal attribute information, individual image information, and incidental information of the users belonging to a group to create group attribute information, attribute information for the group as a whole. Whenever user information is newly added to group information, the group information storage unit 18 creates the group attribute information anew and adds it to the group information.
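One way this aggregation might look, assuming illustrative field names such as "gender" and "height_cm" (the patent names the attribute items but fixes no data format):

```python
from collections import Counter

def build_group_attributes(members: list[dict]) -> dict:
    """Fold the per-member personal attribute information into a single
    record describing the group as a whole."""
    return {
        "size": len(members),
        "genders": Counter(m["gender"] for m in members),
        "mean_height_cm": sum(m["height_cm"] for m in members) / len(members),
        "mean_age": sum(m["age"] for m in members) / len(members),
        "clothing": [m["clothing"] for m in members],
    }
```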
The learning device 20 performs so-called supervised learning using group information as teacher data. Using users' group information, the learning device 20 creates a trained inference model that infers users' exit floors.
The inference device 30 holds the trained inference model created by the learning device 20 and uses users' group information to infer their exit floor.
For example, the dispatch analysis device 40 acquires the predicted exit floor information from the inference device 30. It computes optimal elevator car dispatch information from the call information registered via the call buttons on each floor, the destination registration information registered from inside the car 6, and the predicted exit floor information. For example, the dispatch analysis device 40 uses a DOAS (Destination Oriented Allocation System), an elevator destination forecasting system, to compute the optimal car dispatch information.
The dispatch device 50 acquires the car dispatch information from the dispatch analysis device 40. For example, the dispatch device 50 dispatches the car 6 based on the dispatch information, outputting drive commands to the hoisting machine 4 to carry out the dispatch of the car 6.
Next, an example of the group information stored in the group information storage unit 18 will be described with reference to FIG. 3.
FIG. 3 is an example of group information stored in the elevator usage status database according to the first embodiment.
As shown in FIG. 3, each item of group information includes a group ID, the group's boarding floor information, the group's exit floor information, and group attribute information.
The group attribute information includes the personal attribute information, incidental information, and individual image information of each group member.
For example, the personal attribute information includes gender, height, age, body shape, clothing, and other appearance information.
For example, group A1005 is the group whose group ID is A1005. It is a group to which one woman, woman C, belongs. Woman C is about 160 cm tall, about 30 years old, and of slim build; she wears a red jacket, white trousers, and glasses. The group information for group A1005 records that woman C boarded at the first-floor entrance on October 1, 2019. The entries "yes" in the column for group A1005 mean that the individual image extraction unit 12 created third image information of woman C's whole body and of her face.
For example, group A1006 is a group to which one woman belongs. Group A1006 has the same group attributes as group A1005 and is therefore the same group as group A1005; the woman belonging to group A1006 is woman C. The group information for group A1006 records that woman C alighted at the third-floor women's clothing department on October 1, 2019.
In this way, the record that woman C boarded is stored under A1005, and the record that she alighted is stored under A1006.
Next, the method by which the processing device processes imaging information will be described with reference to FIGS. 4 and 5.
FIGS. 4 and 5 are flowcharts of the group information creation process performed by the elevator processing device according to the first embodiment.
In step S001, the conversion unit 11 acquires imaging information from the imaging device 3.
Then, in step S002, the conversion unit 11 creates a series of still images from the imaging information received from the imaging device 3 as the first image information.
Then, in step S003, the conversion unit 11 identifies one or more users recorded in the first image information, creates behavioral characteristic information for each of them based on the series of still images, and creates second image information comprising the first image information and the created behavioral characteristic information.
Then, in step S004, the conversion unit 11 designates one user recorded in the second image information as the first user, the target of processing.
Then, in step S005, the conversion unit 11 determines whether the first user is unprocessed. For example, if the first user has not been assigned group ID information, the conversion unit 11 determines that the first user is unprocessed.
If the conversion unit 11 determines in step S005 that the first user is unprocessed, step S006 is performed: the individual image extraction unit 12 creates the first user's third image information from the second image information.
Then, in step S007, the grouping unit 13 determines whether a second user, a user whose behavioral characteristic information is similar to the first user's, exists in the second image information.
If in step S007 such a user exists in the second image information, step S008 is performed: the grouping unit 13 determines whether the second user has group ID information.
If in step S008 the second user has group ID information, step S009 is performed: the grouping unit 13 creates group ID information for the first user with the same group ID as the second user, outputs it to the integration unit 17, and attaches it to the first user in the second image information.
Then, in step S010, the personal attribute determination unit 14 creates the first user's personal attribute information.
Then, in step S011, the boarding/alighting determination unit 15 creates the first user's boarding floor information and exit floor information based on the first user's behavioral characteristic information.
Then, in step S012, the incidental information unit 16 creates incidental information from the second image information.
Then, in step S013, the integration unit 17 acquires the first user's third image information, group ID information, personal attribute information, boarding floor information, exit floor information, and the incidental information, and integrates them to create the first user's user information.
Then, in step S014, the group information storage unit 18 acquires the first user's user information and determines whether, among the group information records it stores, one exists with the same group ID as the first user's.
If in step S014 group information with the same group ID as the first user's exists, step S015 is performed: the group information storage unit 18 adds the first user's user information to that group information.
Then, in step S016, the conversion unit 11 determines whether any user without group ID information remains in the second image information.
If in step S016 no user without group ID information remains in the second image information, the processing device 10 ends the processing of the imaging information.
If the conversion unit 11 does not determine in step S005 that the first user is unprocessed, or if in step S016 a user without group ID information remains in the second image information, the process returns to step S004.
If in step S007 no user with behavioral characteristic information similar to the first user's exists in the second image information, or if in step S008 the second user has no group ID information, step S017 is performed: the grouping unit 13 creates group ID information for the first user with a new group ID, outputs it to the integration unit 17, and attaches it to the first user in the second image information.
Then, the operations from step S010 onward are performed.
If in step S014 no group information with the same group ID as the first user's exists, step S018 is performed: the group information storage unit 18 creates new group information having the first user's group ID and adds the first user's user information to it.
Then, in step S019, the group information storage unit 18 creates the new group's group attribute information, incidental information, individual image information, boarding floor information, and exit floor information from the first user's user information.
Then, the operations from step S016 onward are performed.
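Condensed into a Python sketch, the S004-S019 loop could look as follows; find_similar, make_user_info, and next_group_id are hypothetical helpers standing in for the units described above:

```python
def process_frame(users, groups, next_group_id):
    """One pass over the users in the second image information:
    assign group IDs (S005-S009/S017), build user information
    (S010-S013), and file it under its group (S014/S015/S018)."""
    for user in users:                                  # S004
        if user.group_id is not None:                   # S005: already processed
            continue
        peer = find_similar(user, users)                # S007: similar behavior?
        if peer is not None and peer.group_id is not None:  # S008
            user.group_id = peer.group_id               # S009: join peer's group
        else:
            user.group_id = next_group_id()             # S017: open a new group
        info = make_user_info(user)                     # S010-S013
        groups.setdefault(user.group_id, []).append(info)
```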
Next, the learning device 20 will be described with reference to FIG. 6.
FIG. 6 is a block diagram of the elevator learning device according to the first embodiment.
As shown in FIG. 6, the learning device 20 includes a usage status database 21 (hereinafter, usage status DB 21), a learning data acquisition unit 22, and a model generation unit 23.
The usage status DB 21 acquires group information from the processing device 10 and stores it. When it acquires group information for a group whose information it already stores, the usage status DB 21 rewrites the exit floor information of the stored group information with the exit floor information of the newly acquired group information for that group.
For example, suppose the usage status DB 21 stores several group information records whose exit floor information is "0", and it acquires group information D whose exit floor information is "3" and whose boarding floor information is "0". From the group information it already stores, the usage status DB 21 selects group information E whose group attribute information is similar to that of D, and determines that D and E are information on the same group. D's boarding floor information is "0", while E's boarding floor information is "1" and its exit floor information is "0". The usage status DB 21 rewrites E's exit floor information with D's exit floor information, so that E's exit floor information becomes "3". In this way, the usage status DB 21 stores each group's exit floor information and boarding floor information as a matched pair.
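A sketch of this pairing step, assuming dictionary records and a similarity test `similar` over group attribute information (the matching criterion itself is not fixed by the patent):

```python
def merge_alighting_record(stored: list[dict], d: dict, similar) -> None:
    """When an alighting record d arrives (exit floor set, boarding floor 0),
    copy its exit floor into the stored boarding record of the same group;
    otherwise keep d as a new record."""
    for e in stored:
        if e["exit_floor"] == 0 and similar(e["attrs"], d["attrs"]):
            e["exit_floor"] = d["exit_floor"]   # e.g. 0 becomes 3
            return
    stored.append(d)
```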
The usage status DB 21 determines whether it stores a sufficient amount of group information for learning. For example, it judges the amount sufficient when it stores group information for 1000 or more groups, or when it stores group information covering a period of one week or more.
The learning data acquisition unit 22 acquires group attribute information, boarding floor information, and exit floor information from the usage status DB 21.
The model generation unit 23 acquires the group attribute information, boarding floor information, and exit floor information from the learning data acquisition unit 22.
The model generation unit 23 learns predicted exit floor information based on training data created from combinations of group attribute information, boarding floor information, and exit floor information. That is, the model generation unit 23 generates a trained model that infers the optimal predicted exit floor information from a group's attribute information and boarding floor information.
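Before such training, each group record has to be turned into a fixed-length model input. The encoding below is purely illustrative (the patent fixes neither the feature choice nor a one-hot scheme):

```python
import numpy as np

def encode_group(attrs: dict, boarding_floor: int, num_floors: int) -> np.ndarray:
    """Concatenate a few numeric group attributes with a one-hot encoding
    of the boarding floor to form one input vector."""
    floor = np.zeros(num_floors)
    floor[boarding_floor - 1] = 1.0   # floors numbered from 1
    return np.concatenate((
        [attrs["size"], attrs["mean_age"], attrs["mean_height_cm"]],
        floor,
    ))
```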
The learning device 20 is used to learn elevator users' predicted exit floor information. For example, it may be connected to the elevator control system via a network as a device separate from the elevator. The learning device 20 may also be built into the elevator system, or it may reside on a cloud server.
The learning algorithm used by the model generation unit 23 can be a known algorithm such as supervised learning, unsupervised learning, or reinforcement learning.
For example, the model generation unit 23 learns the predicted exit floor information by so-called supervised learning according to a neural network model. Here, supervised learning is a method in which pairs of input and result (label) data are given to the learning device 20, which learns the features present in that training data and infers the result from the input.
A neural network is composed of an input layer of multiple neurons, an intermediate (hidden) layer of multiple neurons, and an output layer of multiple neurons. There may be one intermediate layer or two or more.
For example, in the present disclosure, the neural network learns the predicted exit floor information by so-called supervised learning according to training data created from combinations of the group attribute information, boarding floor information, and exit floor information acquired by the data acquisition unit.
Next, the case where a neural network is applied will be described with reference to FIG. 7 as an example of a machine learning method.
FIG. 7 is an example of a machine learning method performed by the elevator learning device according to the first embodiment.
For example, as shown in FIG. 7, in a three-layer neural network, when inputs are given to the input layer (X1-X3), their values are multiplied by the weights W1 (w11-w16) and fed into the intermediate layer (Y1-Y2); the results are further multiplied by the weights W2 (w21-w26) and output from the output layer (Z1-Z3). The output therefore depends on the values of the weights W1 and W2.
That is, the neural network learns by feeding the group attribute information and boarding floor information into the input layer and adjusting the weights W1 and W2 so that the output of the output layer approaches the exit floor information.
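A minimal sketch of that three-layer network and its weight adjustment, matching the FIG. 7 shapes (3 inputs, 2 hidden neurons, 3 outputs; the tanh/softmax activations and the cross-entropy gradient are assumptions, since FIG. 7 shows only the weighted connections):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))  # w11-w16: input X1-X3 -> hidden Y1-Y2
W2 = rng.normal(scale=0.5, size=(2, 3))  # w21-w26: hidden Y1-Y2 -> output Z1-Z3

def softmax(v: np.ndarray) -> np.ndarray:
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x: np.ndarray):
    y = np.tanh(x @ W1)   # intermediate layer Y1-Y2
    z = softmax(y @ W2)   # output layer Z1-Z3: one score per candidate floor
    return y, z

def train_step(x: np.ndarray, target: np.ndarray, lr: float = 0.1) -> None:
    """One supervised update: adjust W1 and W2 so the output approaches
    the one-hot exit-floor label."""
    global W1, W2
    y, z = forward(x)
    dz = z - target                   # output-layer gradient (cross-entropy)
    dy = (W2 @ dz) * (1.0 - y ** 2)   # backpropagate through tanh
    W2 -= lr * np.outer(y, dz)
    W1 -= lr * np.outer(x, dy)
```

For example, with target = np.array([0.0, 0.0, 1.0]) standing for the third candidate floor, repeated calls to train_step move the network's output toward that label.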
Next, an example of group information used for learning will be described with reference to FIG. 8.
FIG. 8 is an example of group information used for learning in the elevator learning device according to the first embodiment.
As shown in FIG. 8, each item of group information includes a group ID, the group's boarding floor information, the group's exit floor information, and group attribute information.
For example, group A1005 is the group to which woman C belongs. The group information for group A1005 records that woman C boarded at the first-floor entrance on October 1, 2019, and then alighted at the third-floor women's clothing department.
Other examples of the information are as shown in FIG. 8.
Next, the inference device 30 will be described with reference to FIG. 9.
FIG. 9 is a block diagram of the elevator inference device according to the first embodiment.
As shown in FIG. 9, the inference device 30 includes a trained model storage unit 31, a usage data acquisition unit 32, and an inference unit 33.
The trained model storage unit 31 acquires the trained model from the model generation unit 23 and stores it.
The usage data acquisition unit 32 acquires group attribute information and boarding floor information from the processing device 10.
The inference unit 33 infers the predicted exit floor information obtained using the trained model. It acquires the trained model from the trained model storage unit 31, and the group attribute information and boarding floor information from the usage data acquisition unit 32. It then inputs the group attribute information and boarding floor information into the trained model, and creates the predicted exit floor information inferred from them.
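Put together with the illustrative encoder above, the inference step might look like this; forward_fn stands for the trained model's forward pass (an assumed interface returning scores per floor, as in the FIG. 7 sketch):

```python
def predict_exit_floor(forward_fn, attrs: dict,
                       boarding_floor: int, num_floors: int) -> int:
    """Encode the group, run the trained model, and return the
    highest-scoring candidate floor as the predicted exit floor."""
    x = encode_group(attrs, boarding_floor, num_floors)
    _, scores = forward_fn(x)
    return int(scores.argmax()) + 1   # floors numbered from 1
```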
Next, an example of the inference performed by the inference device 30 will be described with reference to FIG. 10.
FIG. 10 is an example of the inference processing results produced by the elevator inference device according to the first embodiment.
As shown in FIG. 10, the inference device 30 (not shown in FIG. 10) takes group attribute information and boarding floor information as INPUT information, and outputs predicted exit floor information as OUTPUT information.
For example, group A1101 is a group to which woman F belongs. The group information for group A1101 records that woman F boarded from the first-floor entrance at 9:35 on October 2, 2019.
The inference device 30 applies the prediction model to the information of the group whose group ID is A1101 and predicts that the group will alight at the third-floor women's clothing department. It therefore creates predicted exit floor information "3".
Other examples of the inference processing are as shown in FIG. 10.
Next, the learning process performed by the learning device will be described with reference to FIG. 11.
FIG. 11 is a flowchart of the learning operation performed by the elevator learning device according to the first embodiment.
In step S101, the usage status DB 21 acquires one group information record, the first group information, from the processing device 10.
Then, in step S102, the usage status DB 21 determines whether the first group information has exit floor information. If the exit floor information of the first group information is not "0", the usage status DB 21 determines that the first group information has exit floor information.
If in step S102 the first group information has exit floor information, step S103 is performed: the usage status DB 21 determines whether, among the group information it stores, a record exists whose exit floor information is "0" and which concerns the same group as the acquired group information.
If in step S103 such a record (hereinafter, the second group information) exists, step S104 is performed: the usage status DB 21 rewrites the exit floor information of the second group information with the exit floor information of the first group information.
Then, in step S105, the usage status DB 21 determines whether it stores a sufficient amount of group information for learning.
If in step S105 the usage status DB 21 stores a sufficient amount of group information for learning, step S106 is performed: the learning data acquisition unit 22 acquires group attribute information, boarding floor information, and exit floor information from the usage status DB 21 as training data.
Then, in step S107, the model generation unit 23 learns the predicted exit floor information by so-called supervised learning according to training data created from the combinations of group attribute information, boarding floor information, and exit floor information acquired by the learning data acquisition unit 22, and creates a trained model.
Then, in step S108, the trained model storage unit 31 stores the trained model generated by the model generation unit 23.
The learning process then ends.
If in step S102 the first group information has no exit floor information, or if in step S103 no same-group record with exit floor information "0" exists, step S109 is performed: the usage status DB 21 stores the first group information.
Then, the operations from step S105 onward are performed.
Next, the process for obtaining predicted exit floor information using the inference device 30 will be described with reference to FIG. 12.
FIG. 12 is a flowchart of the inference operation performed by the elevator inference device according to the first embodiment.
In step S201, the usage data acquisition unit 32 acquires the third group information from the processing device 10 as group information for inference.
Then, in step S202, the usage data acquisition unit 32 determines whether the third group information has boarding floor information. If the boarding floor information of the third group information is not "0", the usage data acquisition unit 32 determines that the third group information has boarding floor information.
If in step S202 the third group information has boarding floor information, step S203 is performed: the inference unit 33 acquires the group attribute information and boarding floor information of the third group information.
Then, in step S204, the inference unit 33 acquires the trained model stored in the trained model storage unit 31, computes the third group's predicted exit floor using the trained model, and creates the predicted exit floor information.
Then, in step S205, the inference unit 33 outputs the predicted exit floor information to the dispatch analysis device 40.
Then, in step S206, the dispatch analysis device 40 computes the elevator car dispatch based on the predicted exit floor information and creates car dispatch information.
Then, in step S207, the dispatch device 50 acquires the car dispatch information and dispatches the elevator car 6 based on it.
According to the first embodiment described above, the processing device 10 uses imaging information to create group attribute information representing the characteristics of a group of elevator users, and the inference device 30 uses the group attribute information to create predicted exit floor information for unspecified groups of users. The inference device 30 can therefore predict that users behave differently from usual when they form a group of several people, which means its prediction accuracy improves. As a result, users can ride elevators whose cars are dispatched more efficiently; that is, the elevator control system can improve user convenience.
The control panel 8 also includes the learning device 20, which uses group attribute information and exit floor information to learn to infer predicted exit floor information. The elevator control system can therefore create a prediction model of users' exit floors.
The control panel 8 also includes the dispatch analysis device 40, which uses the predicted exit floor information to create elevator car dispatch information. The dispatch device 50 uses the car dispatch information to dispatch a car 6 in which the predicted exit floor is registered as the destination. Elevator users can therefore board the car 6 without registering a destination floor themselves.
Note that the conversion unit 11 need not be provided in the processing device 10. For example, it may be provided in the imaging device 3, in which case the conversion unit outputs the converted imaging information to the processing device 10.
Also, for example, the learning device 20 executes the learning process periodically, at a frequency set in advance, for example once a week. The amount of information stored in the usage status DB 21 grows as the storage period lengthens, and the more information the usage status DB 21 stores, the better the prediction accuracy of the model.
For a user for whom the inference device 30 computed an incorrect estimate, the usage status DB 21 stores that user's correct exit floor information as training data. The learning device 20 can therefore create a trained model capable of more accurate inference.
Also, for example, the learning data acquisition unit 22 may acquire the group attribute information, boarding floor information, and exit floor information at separate times.
The learning data acquisition unit 22 can also add an elevator system from which training data is collected to the targets midway, or remove one from the targets. Furthermore, a learning device that has learned the predicted exit floor information for one elevator system may be applied to a different elevator system, and the predicted exit floor information may be relearned and updated for that other elevator system.
 また、本実施の形態では、モデル生成部23が用いる学習アルゴリズムに教師あり学習を適用した場合について説明したが、これに限られるものではない。学習アルゴリズムについては、教師あり学習以外にも、強化学習、教師無し学習、または半教師あり学習等を適用することも可能である。 Further, in the present embodiment, the case where supervised learning is applied to the learning algorithm used by the model generation unit 23 has been described, but the present invention is not limited to this. As for the learning algorithm, it is also possible to apply reinforcement learning, unsupervised learning, semi-supervised learning, etc. in addition to supervised learning.
 また、モデル生成部23は、複数のエレベーターシステムに対して作成される学習用データに従って、予測降車階情報を学習するようにしてもよい。モデル生成部23は、同一のエリアで使用される複数のエレベーターシステムから学習用データを取得してもよい。モデル生成部23は、異なるエリアで独立して動作する複数のエレベーターシステムから収集される学習用データを利用して予測降車階情報を学習してもよい。 Further, the model generation unit 23 may learn the predicted disembarkation floor information according to the learning data created for the plurality of elevator systems. The model generation unit 23 may acquire learning data from a plurality of elevator systems used in the same area. The model generation unit 23 may learn the predicted disembarkation floor information by using the learning data collected from a plurality of elevator systems that operate independently in different areas.
As the learning algorithm used by the model generation unit 23, deep learning, which learns the extraction of the features themselves, can also be used, and machine learning may be executed according to other known methods such as genetic programming, functional logic programming, or support vector machines.
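Because the algorithm is interchangeable, the same assumed training data from the earlier sketch can, for illustration, be handed to a support vector machine or to a small neural network standing in for deep learning; neither choice is prescribed by the patent.

```python
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Same assumed encoding as before: [group size, boarding floor, hour]; label = exit floor.
X = [[2, 1, 9], [4, 1, 12], [1, 3, 18], [3, 1, 12]]
y = [5, 7, 1, 7]

for algo in (SVC(), MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)):
    algo.fit(X, y)
    print(type(algo).__name__, algo.predict([[4, 1, 12]]))
```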
The inference device 30 has been described as outputting predicted exit-floor information using the trained model learned by the model generation unit 23 of this elevator; however, it may instead acquire a trained model from outside, for example from another elevator, and output predicted exit-floor information based on that trained model.
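One way to realize this transfer, sketched under the assumption that the model is an incrementally trainable classifier, is to train on one system's data and then update with the other system's observations; SGDClassifier's partial_fit is an illustrative choice, not the patent's method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Trained model from elevator system A (features as in the earlier sketches).
X_a = np.array([[2, 1, 9], [4, 1, 12], [1, 3, 18]])
y_a = np.array([5, 7, 1])
all_floors = np.arange(1, 11)  # assumed set of floors either building can serve
model = SGDClassifier().partial_fit(X_a, y_a, classes=all_floors)

# Applied to elevator system B and re-learned with B's observed exit floors.
X_b = np.array([[2, 1, 9], [5, 1, 13]])
y_b = np.array([4, 8])
model.partial_fit(X_b, y_b)  # updates the predicted exit-floor model for system B
```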
If the exit floor desired by a user differs from the predicted exit floor inferred by the inference device 30, the user registers the desired exit floor as a destination himself or herself. That user can therefore still use the elevator with its usual convenience.
Next, an example of the inference device 30 will be described with reference to FIG. 13.
FIG. 13 is a hardware configuration diagram of the elevator inference device according to the first embodiment.
Each function of the inference device 30 can be realized by a processing circuit. For example, the processing circuit includes at least one processor 100a and at least one memory 100b. Alternatively, the processing circuit includes at least one piece of dedicated hardware 200.
When the processing circuit includes at least one processor 100a and at least one memory 100b, each function of the inference device 30 is realized by software, firmware, or a combination of software and firmware. At least one of the software and the firmware is written as a program and stored in the at least one memory 100b. The at least one processor 100a realizes each function of the inference device 30 by reading and executing the program stored in the at least one memory 100b. The at least one processor 100a is also referred to as a central processing unit, a processing unit, an arithmetic unit, a microprocessor, a microcomputer, or a DSP. The at least one memory 100b is, for example, a nonvolatile or volatile semiconductor memory such as a RAM, ROM, flash memory, EPROM, or EEPROM, or a magnetic disk, flexible disk, optical disc, compact disc, mini disc, DVD, or the like.
When the processing circuit includes at least one piece of dedicated hardware 200, the processing circuit is realized by, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC, an FPGA, or a combination thereof. For example, each function of the inference device 30 is realized by its own processing circuit; alternatively, the functions of the inference device 30 are realized collectively by a single processing circuit.
Some of the functions of the inference device 30 may be realized by the dedicated hardware 200 while the rest are realized by software or firmware. For example, the function of the inference unit 33 may be realized by a processing circuit serving as the dedicated hardware 200, while the functions other than that of the inference unit 33 are realized by the at least one processor 100a reading and executing a program stored in the at least one memory 100b.
In this way, the processing circuit realizes each function of the inference device 30 by hardware 200, software, firmware, or a combination of these.
Although not shown, each function of the image pickup device 3, the processing device 10, the learning device 20, the vehicle allocation analysis device 40, the vehicle dispatching device 50, and the external information device 70 is likewise realized by a processing circuit equivalent to the processing circuit that realizes each function of the inference device 30.
Embodiment 2.
FIG. 14 is a schematic diagram of the elevator according to the second embodiment. Parts identical or corresponding to those of the first embodiment are given the same reference numerals, and their description is omitted.
As shown in FIG. 14, the vehicle allocation analysis device 40 has a function of outputting the car dispatch information to the outside.
The car dispatch schedule display board 60 is provided on each floor of the building 1, for example around the doorway 7 on each floor, at a position easily visible to users at the landing 2. The car dispatch schedule display board 60 is electrically connected to the vehicle allocation analysis device 40 by wire or wirelessly, acquires the elevator car dispatch information from the vehicle allocation analysis device 40, and displays the acquired car dispatch information.
FIG. 15 is a display example of the elevator car dispatch schedule display board in the second embodiment.
As shown in FIG. 15, the car dispatch schedule display board 60 uses the car dispatch information to display information on the elevator car dispatch. For example, it displays the floors at which each of a plurality of elevator cars is scheduled to stop, and the predicted time until each elevator car arrives.
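For illustration, the display logic might look like the following sketch; the record layout of the car dispatch information is an assumption.

```python
# Assumed layout of the car dispatch information received from device 40.
dispatch_info = [
    {"car": "A", "stops": [1, 5, 7], "eta_sec": 20},
    {"car": "B", "stops": [1, 3, 9], "eta_sec": 45},
]

for entry in dispatch_info:
    stops = ", ".join(str(f) for f in entry["stops"])
    print(f"Car {entry['car']}: stopping at floors {stops} "
          f"(arriving in {entry['eta_sec']} s)")
```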
According to the second embodiment described above, the vehicle allocation analysis device 40 has a function of outputting the car dispatch information to the car dispatch schedule display board 60, which conveys that information to elevator users. A user can therefore know, before a car arrives, which elevator car will reach the desired floor in the least time, and can move to the door of that car in advance. As a result, the convenience of elevator users can be improved.
The notification device that informs users of the car dispatch information is not limited to the car dispatch schedule display board 60. For example, the notification device may be an audio device that acquires the car dispatch information from the vehicle allocation analysis device 40 and informs users by outputting speech representing that information.
Embodiment 3.
FIG. 16 is a block diagram of the elevator control panel according to the third embodiment. Parts identical or corresponding to those of the first embodiment are given the same reference numerals, and their description is omitted.
As shown in FIG. 16, the control panel 8 includes an external information device 70.
The external information device 70 receives arbitrary external information from an input terminal (not shown). For example, it receives weather information, temperature information, information on events held in the building 1, or sales information for a sales floor.
The incidental information unit 16 acquires the external information from the external information device 70 and creates incidental information that includes the external information.
The integration unit 17 acquires the incidental information including the external information from the incidental information unit 16 and creates user information that includes the external information.
The group information storage unit 18 acquires the user information including the external information from the integration unit 17 and creates group attribute information that includes the external information.
The learning device 20 acquires the group attribute information including the external information from the group information storage unit 18 and uses it during learning. The learning device 20 therefore generates a trained model in which the external information is reflected.
The inference device 30 acquires the group attribute information including the external information from the group information storage unit 18 and uses it during inference. The inference device 30 therefore creates predicted exit-floor information that reflects the external information.
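As a sketch of how such external information could be folded into the group attribute information used for learning and inference, the following builds a combined feature vector; the encodings (the weather codes and the event flag) are assumptions for illustration.

```python
WEATHER_CODES = {"clear": 0, "rain": 1, "snow": 2}  # assumed encoding

def build_feature_vector(group_attrs, external):
    """Combine group attributes with external information into one vector."""
    return [
        group_attrs["size"],
        group_attrs["boarding_floor"],
        group_attrs["hour"],
        WEATHER_CODES[external["weather"]],
        external["temperature_c"],
        1 if external["event_on_floor"] else 0,  # e.g. a sale on a sales floor
    ]

vec = build_feature_vector(
    {"size": 3, "boarding_floor": 1, "hour": 13},
    {"weather": "rain", "temperature_c": 8, "event_on_floor": True},
)
print(vec)  # -> [3, 1, 13, 1, 8, 1]
```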
According to the third embodiment described above, the control panel 8 includes the external information device 70, which receives arbitrary external information from outside. The learning device 20 generates a trained model in which the external information is reflected, and the inference device 30 outputs predicted exit-floor information that reflects it. Even when the external information changes, the predicted exit-floor information reflects the change; that is, the prediction accuracy of the predicted exit-floor information improves, and cars are dispatched more efficiently and more quickly. As a result, the convenience of elevator users can be improved.
As described above, the elevator control system according to the present disclosure can be used in an elevator system.
1 building, 2 landing, 3 image pickup device, 4 hoisting machine, 5 main rope, 6 car, 7 doorway, 8 control panel, 10 processing device, 11 conversion unit, 12 individual image extraction unit, 13 grouping unit, 14 person attribute determination unit, 15 boarding/alighting determination unit, 16 incidental information unit, 17 integration unit, 18 group information storage unit, 20 learning device, 21 usage status database, 22 learning data acquisition unit, 23 model generation unit, 30 inference device, 31 trained model storage unit, 32 usage data acquisition unit, 33 inference unit, 40 vehicle allocation analysis device, 50 vehicle dispatching device, 60 car dispatch schedule display board, 70 external information device

Claims (5)

  1.  An elevator control system comprising:
     a processing device that determines a group to which a user of an elevator belongs by using imaging information in which users present at a landing of the elevator are recorded, and creates, as group attribute information, characteristics of the whole group including characteristics of the users belonging to the group; and
     an inference device that creates information on a predicted exit floor of the user by using the group attribute information.
  2.  The elevator control system according to claim 1, further comprising a learning device that, using exit-floor information representing the floor at which the group exited and the group attribute information, generates a trained model for inferring predicted exit-floor information from group attribute information.
  3.  The elevator control system according to claim 1 or claim 2, further comprising a vehicle allocation analysis device that creates car dispatch information for controlling the elevator, reflecting the information on the predicted exit floor.
  4.  The elevator control system according to claim 3, wherein the vehicle allocation analysis device outputs the car dispatch information to a notification device that informs the user of elevator control information.
  5.  The elevator control system according to any one of claims 1 to 4, further comprising:
     an external information device that outputs external information input from outside,
     wherein the processing device creates the group attribute information including the external information.
PCT/JP2020/001706 2020-01-20 2020-01-20 Elevator control system WO2021149107A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/001706 WO2021149107A1 (en) 2020-01-20 2020-01-20 Elevator control system
JP2021572129A JP7276517B2 (en) 2020-01-20 2020-01-20 elevator control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/001706 WO2021149107A1 (en) 2020-01-20 2020-01-20 Elevator control system

Publications (1)

Publication Number Publication Date
WO2021149107A1 2021-07-29

Family

ID=76992108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/001706 WO2021149107A1 (en) 2020-01-20 2020-01-20 Elevator control system

Country Status (2)

Country Link
JP (1) JP7276517B2 (en)
WO (1) WO2021149107A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3041775B1 (en) * 2013-09-03 2019-07-31 Otis Elevator Company Elevator dispatch using facial recognition
JP2017052578A (en) * 2015-09-07 2017-03-16 株式会社日立ビルシステム Boarding-off situation prediction presentation method at arrival of car for elevator, and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005335893A (en) * 2004-05-27 2005-12-08 Mitsubishi Electric Corp Traffic demand predicting device of elevator
US20120090922A1 (en) * 2009-06-03 2012-04-19 Kone Corporation Elevator system
JP2016016950A (en) * 2014-07-09 2016-02-01 東芝エレベータ株式会社 Elevator system
WO2017090179A1 (en) * 2015-11-27 2017-06-01 三菱電機株式会社 Elevator group management control device and group management control method
JP2018008758A (en) * 2016-07-11 2018-01-18 株式会社日立製作所 Elevator system and car call estimation method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024009389A1 (en) * 2022-07-05 2024-01-11 三菱電機株式会社 Assistance system for arrangement work on devices of elevator

Also Published As

Publication number Publication date
JP7276517B2 (en) 2023-05-18
JPWO2021149107A1 (en) 2021-07-29

Similar Documents

Publication Publication Date Title
CN103287931B (en) Elevator device
JP3375643B2 (en) Elevator management control device
JP4870863B2 (en) Elevator group optimum management method and optimum management system
JP5264886B2 (en) Elevator group management device
JP2573715B2 (en) Elevator control device
KR940009984B1 (en) Elevator control device
KR100973882B1 (en) Control system for elevator
CN109292579A (en) Elevator device, image-recognizing method and progress control method
CN105270937B (en) Elevator group management apparatus
CN109311622A (en) Elevator device and car call estimation method
WO2021149107A1 (en) Elevator control system
CN115676539B (en) High-rise elevator cooperative scheduling method based on Internet of things
JP2019156607A (en) Elevator system
JP4575030B2 (en) Elevator traffic demand prediction device and elevator control device provided with the same
CN111344244B (en) Group management control device and group management control method
JP2003221169A (en) Elevator control device
JP4995248B2 (en) Elevator traffic demand prediction device
JP5570742B2 (en) Elevator group management control device
JP2022042166A (en) Elevator system and operation management method of elevator device
CN105836553B (en) The group management control apparatus of elevator
KR102515719B1 (en) Vision recognition interlocking elevator control system
JP6536484B2 (en) Elevator system
JPH0449183A (en) Elevator control device
CN114314224A (en) Elevator and elevator control method
JPH0331173A (en) Group management control device for elevator

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915176

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021572129

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915176

Country of ref document: EP

Kind code of ref document: A1