CN117185072A - Elevator system - Google Patents

Elevator system

Info

Publication number
CN117185072A
Authority
CN
China
Prior art keywords
mirror
car
user
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310661299.4A
Other languages
Chinese (zh)
Inventor
郑国龙 (Zheng Guolong)
Current Assignee
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd filed Critical Toshiba Elevator Co Ltd
Publication of CN117185072A

Landscapes

  • Indicating And Signalling Devices For Elevators (AREA)
  • Image Analysis (AREA)

Abstract

The elevator system of the present application prevents false detection caused by a mirror so that a user riding in the elevator car can be detected accurately. An elevator system according to an embodiment is an elevator system in which a mirror is provided in a car, and includes an imaging unit, a mirror region setting unit, and a detection processing unit. The imaging unit captures, from within the car, an image of a range including the hall near the entrance. The mirror region setting unit sets a mirror region at the position of the mirror on the captured image obtained by the imaging unit, and sets an edge around the mirror region. When a user riding in the car is detected from the captured image, the detection processing unit determines whether or not that user is in the mirror region based on the state of the edge set around the mirror region by the mirror region setting unit.

Description

Elevator system
The present application is based on Japanese patent application 2022-092218 (filed June 7, 2022) and claims priority from that application, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to an elevator system that detects a user using a camera provided in an elevator car.
Background
Conventionally, systems are known in which a camera is provided in the car of an elevator, the number of users riding in the car is detected by processing images captured by the camera, and the detection result is reflected in the operation control of the elevator.
Such a system requires that users be detected accurately by image processing. However, if a mirror is provided in the car, a user reflected in the mirror may be erroneously detected, and the number of passengers may be counted twice.
Disclosure of Invention
As a method for preventing such false detection by the mirror, it is common to mask the area (mirror area) corresponding to the place where the mirror is installed on the captured image. However, when the mirror area is masked, for example in the case where a user stands in front of the mirror, the problem arises that the user is not detected and is therefore not counted in the number of passengers.
The present application provides an elevator system that prevents false detection caused by a mirror and accurately detects a user riding in the elevator car.
An elevator system according to an embodiment is an elevator system in which a mirror is provided in a car, and includes an imaging unit, a mirror region setting unit, and a detection processing unit. The imaging unit captures, from within the car, an image of a range including the hall near the entrance. The mirror region setting unit sets a mirror region at the position of the mirror on the captured image obtained by the imaging unit, and sets an edge around the mirror region. When a user riding in the car is detected from the captured image, the detection processing unit determines whether or not that user is in the mirror region based on the state of the edge set around the mirror region by the mirror region setting unit.
According to the elevator system having the above configuration, it is possible to prevent false detection by the mirror and to accurately detect a user riding in the car.
Drawings
Fig. 1 is a diagram showing the structure of an elevator system according to a first embodiment.
Fig. 2 is a view showing a configuration of an entrance peripheral portion in the car according to the first embodiment.
Fig. 3 is a diagram showing an example of an image captured by the camera according to the first embodiment.
Fig. 4 is a view showing an example of a captured image when the user is located at one end side of the mirror in the first embodiment.
Fig. 5 is a view showing an example of a captured image when the user is positioned in front of the mirror in the first embodiment.
Fig. 6 is a view schematically showing a peripheral portion of a mirror region in which an image is captured in the first embodiment, and shows a case where a user is not in front of the mirror.
Fig. 7 is a view schematically showing a peripheral portion of a mirror region in which an image is captured in the first embodiment, and shows a case where a user is positioned in front of the mirror.
Fig. 8 is a flowchart for explaining the operation of the elevator system according to the first embodiment.
Fig. 9 is a block diagram showing a functional configuration of a mirror detection setting unit in the second embodiment.
Fig. 10 is a diagram showing an example of an image of a frame N according to the second embodiment.
Fig. 11 is a diagram showing an example of an image of the frame n+1 according to the second embodiment.
Fig. 12 is a diagram showing an example of a change in luminance gradient between images in the second embodiment.
Fig. 13 is a diagram showing an example of a change in luminance gradient, luminance value, and histogram between images in the second embodiment.
Fig. 14 is a diagram showing an example of the marking of the mirror block in the second embodiment.
Fig. 15 is a diagram for explaining a method of generating a mirror region in the second embodiment.
Fig. 16 is a diagram for explaining another method of generating a mirror region in the second embodiment.
Fig. 17 is a flowchart for explaining the operation of the elevator system according to the second embodiment.
Fig. 18 is a flowchart showing details of the mirror region generation process executed in step S26 of fig. 17.
Detailed Description
Hereinafter, embodiments will be described with reference to the drawings.
This disclosure is merely an example, and the application is not limited to the following embodiments. Variations that would be readily apparent to one skilled in the art are of course included within the scope of this disclosure. To make the description clearer, the dimensions, shapes, and the like of the respective portions may be shown schematically in the drawings, altered from those of the actual embodiments. Corresponding elements are denoted by the same reference numerals in the various drawings, and detailed description thereof is sometimes omitted.
(first embodiment)
Fig. 1 is a diagram showing the structure of an elevator system according to a first embodiment. In this case, 1 car is taken as an example, but a plurality of cars are similarly configured.
A camera 12 serving as an imaging unit is provided above the entrance of the car 11. Specifically, the camera 12 is provided in a door lintel plate 11a covering the upper portion of the entrance of the car 11 so that its lens portion faces directly downward. The camera 12 has an ultra-wide-angle lens such as a fisheye lens and captures a wide range including the inside of the car 11 at a field angle of 180 degrees or more. The camera 12 continuously captures a number of frames per second (e.g., 30 frames/second).
The camera 12 need not be located above the entrance of the car 11 as long as it is near the car door 13. For example, it may be installed at any place, such as the ceiling surface near the entrance of the car 11, from which the entire car room, including the whole floor surface in the car 11, and the hall 15 near the entrance when the door is open can be imaged.
A hall door 14 is provided at the entrance of the car 11 in the hall 15 of each floor so as to be openable and closable. When the car 11 arrives, the hall door 14 engages with the car door 13 and performs the opening and closing operation. The power source (door motor) is located on the car 11 side, and the hall door 14 merely opens and closes following the car door 13. In the following description, it is assumed that the hall door 14 also opens when the car door 13 opens and closes when the car door 13 closes.
Each image (video) continuously captured by the camera 12 is analyzed and processed in real time by the image processing device 20. In fig. 1, the image processing device 20 is drawn outside the car 11 for convenience, but in practice it is housed in the door lintel plate 11a together with the camera 12.
The image processing device 20 has a storage unit 21 and a detection unit 22. The storage unit 21 has a buffer area for sequentially storing images captured by the camera 12 and temporarily storing data necessary for processing by the detection unit 22. As preprocessing of the captured images, the storage unit 21 may store images subjected to processing such as distortion correction, enlargement/reduction, and partial cropping. The storage unit 21 includes a mirror information storage area 21a for storing information (installation location, size, shape, etc.) about the mirror 50 (see fig. 3) installed in the car 11.
The detection unit 22 is constituted by, for example, a microprocessor, and detects a user located in the car 11 or in the hall 15 using the captured images of the camera 12. The detection unit 22 is functionally divided into the mirror region setting unit 22a and the detection processing unit 22b. These may be realized by software, by hardware such as an IC (Integrated Circuit), or by a combination of software and hardware. In addition, the elevator control device 30 may be provided with part or all of the functions of the image processing device 20.
The mirror region setting unit 22a sets a mirror region at the position corresponding to the mirror 50 on the captured image obtained by the camera 12, and sets an edge around the mirror region. When a user riding in the car 11 is detected from the captured image, the detection processing unit 22b determines whether that user is in the mirror region based on the state of the edge set around the mirror region by the mirror region setting unit 22a.
The elevator control device 30 is constituted by a computer provided with a CPU, ROM, RAM or the like. The elevator control device 30 includes an operation control unit 31, a door opening/closing control unit 32, and a notification unit 33. The operation control unit 31 performs operation control of the car 11. The door opening/closing control unit 32 controls the opening/closing of the doors 13 of the car 11 when the car arrives at the hall 15. Specifically, the door opening/closing control unit 32 opens the car door 13 when the car 11 reaches the hall 15, and closes the door after a predetermined time elapses.
Here, for example, when the detection processing unit 22b detects a user in the vicinity of the car door 13 during the door opening operation, the door opening/closing control unit 32 performs door opening/closing control for avoiding a door accident (a pull-in accident into the door box). Specifically, the door opening/closing control unit 32 temporarily stops the door opening operation of the car door 13, moves it in the opposite direction (door closing direction), or slows down its door opening speed. The notification unit 33 alerts the user in the car 11 based on the detection result of the detection processing unit 22b.
Fig. 2 is a view showing a configuration of the surrounding portion of the doorway in the car 11.
A car door 13 is provided at the entrance of the car 11 so as to be openable and closable. In the example of fig. 2, a double-door type car door 13 is shown, and the two door panels 13a, 13b constituting the car door 13 open and close in opposite directions along the front width direction (horizontal direction). The "front width" refers to the width of the entrance of the car 11.
Front posts 41a, 41b are provided on both sides of the entrance of the car 11 and surround the entrance of the car 11 together with the door lintel plate 11a. A "front post" is also called an entrance post or an entrance frame, and a door box for accommodating the car door 13 is generally provided on its rear surface side. In the example of fig. 2, when the car door 13 opens, one door panel 13a is accommodated in a door box 42a provided on the rear surface side of the front post 41a, and the other door panel 13b is accommodated in a door box 42b provided on the rear surface side of the front post 41b.
One or both of the front posts 41a and 41b are provided with a display 43, an operation panel 45 having destination floor buttons 44, a speaker 46, and the like. In the example of fig. 2, the speaker 46 is provided on the front post 41a, and the display 43 and the operation panel 45 are provided on the front post 41b.
As shown in fig. 3, a rectangular mirror 50 is provided on the rear surface 49 of the car 11, at a position facing the entrance. The mirror 50 is used, for example, as a rearview mirror when a wheelchair user exits the car 11. The mirror 50 includes not only a "glass" mirror but also a "stainless-steel mirror".
A camera 12 having an ultra-wide-angle lens such as a fisheye lens is provided in the central portion of the door lintel plate 11a at the upper part of the entrance of the car 11. The camera 12 captures images of the inside of the car 11 and the hall 15 near the doorway at a predetermined frame rate (e.g., 30 frames/second). The images captured by the camera 12 are supplied to the image processing device 20 shown in fig. 1 for detection processing that detects users or objects.
Fig. 3 is a diagram showing an example of a captured image of the camera 12. It shows a state where the car door 13 (door panels 13a, 13b) and the hall door 14 (door panels 14a, 14b) are fully open, with the entire car room and the vicinity of the doorway photographed at an angle of view of 180 degrees or more from above the doorway of the car 11. The hall 15 appears at the upper side of the image, and the inside of the car 11 at the lower side.
In the hall 15, door pockets 17a and 17b are provided on both sides of the entrance of the car 11, and a belt-shaped hall sill 18 having a predetermined width is disposed on a floor surface 16 between the door pockets 17a and 17b in the opening/closing direction of the hall door 14. Further, a belt-shaped car threshold 47 having a predetermined width is disposed on the entrance side of the floor surface 19 of the car 11 in the opening/closing direction of the car door 13.
Here, when the mirror 50 is provided in the car 11, there is the problem that a user reflected in the mirror 50 on the captured image is erroneously detected and the number of passengers is counted twice. As a method for preventing false detection by the mirror 50, it is common to mask the area (mirror area) corresponding to the installation place of the mirror 50 on the captured image. However, if the mirror area is masked, a user standing in front of the mirror 50 cannot be detected, so there is the problem that this user is not counted in the number of passengers.
This situation is illustrated in fig. 4 and 5.
Fig. 4 is a view showing an example of a captured image when the user is located at one end side of the mirror 50, and fig. 5 is a view showing an example of a captured image when the user is located in front of the mirror 50. P1 and P2 in the figures represent users. The user P1 stands at a position in the car 11 (in this example, near the side surface 48a) where he or she is not reflected in the mirror 50. The user P2 stands near the mirror 50 and is reflected in it.
As shown in fig. 4, even if the user P2 is near the mirror 50, the user P2 can still be detected with the mirror area masked on the captured image as long as the user is not in front of the mirror 50. However, as shown in fig. 5, once the user P2 moves in front of the mirror 50, the user P2 cannot be detected when the mirror area is masked.
A method of correctly detecting the user P2 without masking the mirror region ME will be described below.
Fig. 6 and 7 are diagrams schematically showing the peripheral portion of the mirror region in a state where the captured image is divided into predetermined block units. Fig. 6 corresponds to fig. 4 and shows the situation where the user is not in front of the mirror 50. Fig. 7 corresponds to fig. 5, with the user positioned in front of the mirror 50. In the figures, ME represents the mirror region, RI represents the real image of the user P2, and MI represents the mirror image of the user P2.
In the present embodiment, information on the mirror 50 is provided in advance, a mirror region ME is set on the captured image based on that information, and an edge is set around the mirror region ME. An "edge" here refers to a boundary line. That is, the edge set around the mirror region ME is the boundary line of the mirror region ME drawn around it on the captured image.
Here, when the user P2 is not in front of the mirror 50, only the mirror image MI of the user P2 appears in the mirror region ME, so the edge set around the mirror region ME remains continuous (see fig. 6). In contrast, when the user P2 comes in front of the mirror 50, the real image RI of the user P2 is superimposed on the mirror region ME, so the edge set around the mirror region ME is partially hidden (see fig. 7). Therefore, by focusing on the state of the edge set around the mirror region ME, it can be determined whether the user P2 is within the mirror region ME on the captured image.
In the example of fig. 6, since the edge set around the mirror region ME is not interrupted, it can be determined that the user P2 is not in the mirror region ME. In the example of fig. 7, since the edge set around the mirror region ME is interrupted, it can be determined that the user P2 is in the mirror region ME. With the above-described method of masking the mirror region ME, the user P2 located in the mirror region ME would not be detected, and the number of passengers would be undercounted. In contrast, with the method of this embodiment (detecting a user based on whether the edge of the mirror region ME is interrupted), the user P2 in the mirror region ME can be detected, so the number of passengers can be counted accurately.
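The edge-continuity idea above can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation; the block-grid representation, the function names, and the rectangle format are all assumptions made for the example.

```python
# Illustrative sketch: decide whether a user's real image overlaps the
# mirror region by testing the continuity of the boundary ("edge") drawn
# around it. The grid of blocks and the rectangle format are assumptions.

def edge_is_broken(occupied, mirror_rect):
    """Return True if any boundary cell of `mirror_rect` is covered.

    occupied    -- 2D list of booleans; True where a real (foreground)
                   object was detected in that block
    mirror_rect -- (top, left, bottom, right) block indices, inclusive
    """
    top, left, bottom, right = mirror_rect
    for col in range(left, right + 1):          # top and bottom edges
        if occupied[top][col] or occupied[bottom][col]:
            return True
    for row in range(top + 1, bottom):          # left and right edges
        if occupied[row][left] or occupied[row][right]:
            return True
    return False

def user_in_mirror_region(occupied, mirror_rect):
    # Broken edge -> real image overlaps the mirror region (fig. 7);
    # intact edge -> anything inside it is only a virtual image (fig. 6).
    return edge_is_broken(occupied, mirror_rect)
```

In the fig. 6 situation the boundary cells stay clear, so the edge remains continuous; in the fig. 7 situation the real image RI covers part of the boundary, which this check reports as a break.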
Next, an operation of the elevator system in the present embodiment will be described.
Fig. 8 is a flowchart for explaining the operation of the elevator system. The processing shown in the flowchart is mainly performed by the image processing apparatus 20.
Now, assume that the car 11 has opened its doors at some floor. The camera 12 provided in the car 11 captures images of the inside of the car 11 and the hall 15 near the doorway at a predetermined frame rate. The images captured by the camera 12 are stored in the storage unit 21 of the image processing device 20 in time series.
The detection unit 22 of the image processing device 20 reads out the images stored in the storage unit 21 in time series (step S11). Based on the mirror information stored in the mirror information storage area 21a, the detection unit 22 sets a mirror region ME at the position of the mirror 50 on each image and sets an edge around the mirror region ME (step S12). Specifically, when setting the mirror region ME at the position of the mirror 50 on the image, the detection unit 22 draws the edge, i.e. the boundary line of the mirror region ME, based on the coordinate information of the outer periphery of the mirror region ME.
The detection unit 22 detects a user riding in the car 11 from changes in luminance or the like found by comparing the luminance values of the images in units of blocks (step S13). As a detection method, for example, an image taken when the car 11 is empty may be used as a reference image, and the presence or absence of a user may be detected by comparing the reference image with the currently captured image.
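A minimal sketch of this block-wise presence check, assuming a mean-luminance comparison against an empty-car reference frame. The block size and difference threshold are illustrative assumptions, not values from the patent.

```python
# Sketch: compare the current frame's mean block luminance against a
# reference frame taken with the car empty. BLOCK and DIFF_TH are assumed.

BLOCK = 4        # block edge length in pixels (assumption)
DIFF_TH = 20     # per-block mean-luminance difference threshold (assumption)

def block_means(img, block=BLOCK):
    """Mean luminance of each block of a 2D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    means = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        means.append(row)
    return means

def changed_blocks(reference, current, th=DIFF_TH):
    """Blocks whose mean luminance differs from the empty-car reference."""
    ref, cur = block_means(reference), block_means(current)
    return [(by, bx)
            for by in range(len(ref))
            for bx in range(len(ref[0]))
            if abs(cur[by][bx] - ref[by][bx]) > th]
```

Blocks reported by `changed_blocks` would then be grouped into user candidates before the mirror-region check of the following steps.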
Here, when the user is detected (yes in step S14), the detection unit 22 executes the following processing to prevent false detection by the mirror 50.
First, the detection unit 22 checks the state (continuity) of the edge set around the mirror region ME in the captured image. If the edge set around the mirror region ME is broken (yes in step S15), the detection unit 22 determines that the user is in the mirror region ME (step S16). In this case, the detection unit 22 treats the image of the user detected in the mirror region ME as a real image and counts the user in the number of passengers (step S18). This state is shown in fig. 7.
On the other hand, when the edge set around the mirror region ME is not broken (no in step S15), the detection unit 22 determines that the user is not in the mirror region ME (step S17). In this case, the detection unit 22 processes the image of the user detected in the mirror region ME as a virtual image so as not to be included in the number of passengers. This state is shown in fig. 6.
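The counting flow of steps S13 to S18 can be sketched as follows. This is a hedged illustration: `in_mirror_region` and `edge_broken` stand in for the mirror-region test and the step S15 continuity check, and all names are assumptions rather than the patent's implementation.

```python
# Sketch of the passenger count: a detection inside the mirror region ME is
# counted as a real passenger only when the region's edge is broken.

def count_passengers(detections, in_mirror_region, edge_broken):
    """detections: list of detected user positions (block coordinates).

    in_mirror_region(pos) -- True if pos falls inside the mirror region ME
    edge_broken           -- result of the edge-continuity check (step S15)
    """
    count = 0
    for pos in detections:
        if in_mirror_region(pos):
            # Broken edge: real image overlapping ME (step S16) -> count.
            # Intact edge: virtual image in the mirror (step S17) -> skip.
            if edge_broken:
                count += 1
        else:
            count += 1  # users outside ME are always counted
    return count
```

With an intact edge the detection inside ME is treated as a virtual image and ignored; with a broken edge it is counted, matching the two branches of the flowchart.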
As described above, according to the first embodiment, by focusing on the state of the edge set around the mirror region ME, a user located in the mirror region ME can be detected accurately and counted in the number of passengers. In addition, when a virtual image of a user appears in the mirror region ME, false detection of the virtual image as a user can be prevented, and the number of passengers can be counted accurately with only real images of users as targets.
In the first embodiment, the case where the car 11 is opened is described as an example, but the same applies to the case where the car 11 is closed. That is, it is sufficient to set the mirror region ME on the captured image of the camera 12 obtained when the door is closed, and set the edge around the mirror region ME, thereby determining whether the user is in the mirror region ME based on the state of the edge.
In the same manner, when the mirror 50 is provided on the side surface 48a or the side surface 48b, the mirror region ME is set at the position of the mirror 50 on the captured image, and the edge is set around the mirror region ME, and it is sufficient to determine whether or not the user is in the mirror region ME based on the state of the edge.
(second embodiment)
Next, a second embodiment will be described.
In the first embodiment described above, it was assumed that information on the mirror 50 is provided in advance. However, since the installation place, size, shape, and the like of the mirror 50 differ from building to building, preparing information on the mirror 50 in advance for each building takes time and effort. Therefore, the second embodiment describes a case in which the position of the mirror 50 is detected on the captured image, without requiring information on the mirror 50, and the mirror region ME is set there.
Fig. 9 is a block diagram showing a functional configuration of the mirror region setting unit 22a in the second embodiment. The mirror region setting section 22a is provided in the detection section 22 of the image processing apparatus 20 shown in fig. 1. In the second embodiment, the mirror region setting section 22a has a feature amount extraction section 61, a mirror image block detection section 62, and a mirror region generation section 63.
The feature amount extraction section 61 reads each image stored in the storage unit 21 in time series and extracts feature amounts from these images in units of blocks. A "block unit" is the unit obtained when an image is divided into blocks of a predetermined size in a matrix. A "feature amount" is a numerical value that quantifies a feature or characteristic of an image, and includes, for example, the luminance gradient, the luminance value, and the histogram. The mirror block detection section 62 determines pairs of blocks having symmetrical motion in the image based on the changes in the feature amounts extracted by the feature amount extraction section 61. The mirror block detection section 62 then detects one block of each pair as a mirror block based on the position of the car threshold 47 of the car 11 on the captured image. A "mirror block" is a block recognized as part of the image in the mirror 50 provided in the car 11. The mirror region generating section 63 generates a mirror region from the aggregate of mirror blocks detected by the mirror block detection section 62.
Hereinafter, a method for correctly setting the mirror region ME at a portion corresponding to the setting position of the mirror 50 on the photographed image without requiring information about the mirror 50 will be described.
(a) Detecting symmetrical motion
Fig. 10 is a diagram showing an example of an image of frame N, and fig. 11 is a diagram showing an example of an image of frame N+1 (N is an arbitrary integer). P1 and P2 in the figures represent users. The user P1 stands at a position in the car 11 (in this example, near the side surface 48a) where he or she is not reflected in the mirror 50. The user P2 is near the entrance of the car 11, having just ridden into the car 11. The mirror 50 is provided on the rear surface 49 of the car 11 at a position facing the entrance, and the user P2 is reflected in it.
On the captured image, the real image of the user P2 and the mirror image of the user P2 reflected in the mirror 50 move symmetrically. Therefore, to detect the position of the mirror 50, it suffices to detect portions that move symmetrically on the captured image. Specifically, the captured image is divided into predetermined block units, and feature amounts (for example, luminance gradients) are extracted in units of blocks. Then, the amount of change when comparing the feature amounts of the image of frame N and the image of frame N+1 in units of blocks is found, and pairs of blocks having symmetrical motion are determined from these amounts of change.
Fig. 12 is a diagram showing an example of a change in luminance gradient between images. The a part is the upper side of the image (hall side), and the B part is the lower side of the image (car rear side). Arrows in the figure indicate the gradient direction of the luminance.
In the image of frame N, assume that the luminance gradient of block b (Xb, Yb) in the A portion is 270° and the luminance gradient of block k (Xk, Yk) in the B portion is 90°. In the image of frame N+1, assume that the luminance gradient of block b (Xb, Yb) is 225° and the luminance gradient of block k (Xk, Yk) is 135°.
The amount of change in the luminance gradient of block b (Xb, Yb) is −45° (225° − 270°). The amount of change in the luminance gradient of block k (Xk, Yk) is +45° (135° − 90°). The amount of change in the luminance gradient indicates motion. It can thus be seen that symmetrical motion occurs at block b and block k in the image. In this case, the amounts of change for block b and block k are equal in magnitude and opposite in sign, so comparing them yields a value of substantially zero.
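The numeric example above can be sketched as a small symmetry test. Assumptions in this sketch: the translated text's statement that the compared changes are "substantially zero" is read as the two signed changes cancelling out (sum near zero), and the tolerance constant and function names are illustrative, not from the patent.

```python
# Sketch of the fig. 12 symmetry test: a pair of blocks moves symmetrically
# when their luminance-gradient changes between frame N and frame N+1
# cancel out. SYM_TH is an assumed tolerance (the patent calls it TH1).

SYM_TH = 5.0  # tolerance in degrees (assumption)

def gradient_change(angle_n, angle_n1):
    """Signed change of a block's luminance-gradient direction, degrees."""
    return angle_n1 - angle_n

def is_symmetric_pair(block_b, block_k, th=SYM_TH):
    """block_b / block_k: (angle at frame N, angle at frame N+1)."""
    db = gradient_change(*block_b)   # e.g. 225 - 270 = -45
    dk = gradient_change(*block_k)   # e.g. 135 -  90 = +45
    return abs(db + dk) <= th        # -45 + 45 = 0 -> symmetric
```

For the example in the text, block b gives −45° and block k gives +45°, so the pair is judged symmetric.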
In the example of fig. 12, the description focused on block b of the A portion and block k of the B portion, but in practice the luminance gradient is extracted in units of blocks from the entire image, pairs of blocks having symmetrical motion on the captured image are determined from the differences in the amounts of change, and one block of each pair is detected as a mirror block. As described later, since the mirror 50 is often provided on the rear surface 49 of the car 11, the block nearer the lower side of the captured image is detected as the mirror block, based on the position of the car threshold 47.
As shown in fig. 13, besides the luminance gradient, a luminance value or a histogram may be extracted as a feature amount, and mirror blocks may be detected from their amounts of change. In the example of fig. 13, the "luminance gradient" is extracted as feature quantity 1, the "luminance value" as feature quantity 2, and the "histogram" as feature quantity 3. In this case, a value obtained by weighting the amounts of change in the luminance gradient, the luminance value, and the histogram with preset weight values and summing them is used to detect mirror blocks.
If the difference between the amounts of change in the luminance gradient extracted as feature quantity 1 is "−θ_b° − θ_k°" and the weighting value is "G_θ", the score S_θ of feature quantity 1 is obtained as follows.
S_θ = (−θ_b° − θ_k°) × G_θ
If the difference between the amounts of change in the luminance value extracted as feature quantity 2 is "β_b − β_k" and the weighting value is "G_β", the score S_β of feature quantity 2 is obtained as follows.
S_β = (β_b − β_k) × G_β
If the difference between the amounts of change in the histogram extracted as feature quantity 3 is "Hist_b − Hist_k" and the weighting value is "G_hist", the score S_hist of feature quantity 3 is obtained as follows.
S_hist = (Hist_b − Hist_k) × G_hist
Here, G_θ > G_β > G_hist. The weighting value G_θ for the luminance gradient is set largest in advance because the luminance gradient has a directional characteristic and is therefore the most reliable for judging symmetrical motion.
If the value obtained by adding the score S_θ of feature quantity 1, the score S_β of feature quantity 2, and the score S_hist of feature quantity 3 falls within a preset threshold TH1, block b and block k are determined to be a pair of blocks having symmetrical motion. The threshold TH1 is set in consideration of the lighting environment of the car 11, and is ideally 0 ± error. That is, the changes in the feature amounts of block b and block k are symmetrical, and the closer the difference between the two changes is to zero, the more reliably the pair is determined to have symmetrical motion.
In this way, if a luminance value or a histogram is used in addition to the luminance gradient, pairs of blocks having symmetrical motion in the captured image can be determined more accurately. The determination may also incorporate feature amounts other than the luminance value and the histogram.
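The weighted three-feature score can be sketched as below. The weight values and the threshold TH1 are illustrative assumptions; only the ordering G_θ > G_β > G_hist comes from the text above.

```python
# Sketch of the combined score S_theta + S_beta + S_hist checked against
# the threshold TH1. Weights and TH1 are assumed values for illustration.

G_THETA, G_BETA, G_HIST = 0.6, 0.3, 0.1   # assumed weights, G_theta largest
TH1 = 5.0                                  # assumed threshold (0 +/- error)

def combined_score(d_theta, d_beta, d_hist):
    """d_*: per-feature difference of the change amounts for blocks b, k."""
    s_theta = d_theta * G_THETA    # feature quantity 1 (luminance gradient)
    s_beta = d_beta * G_BETA       # feature quantity 2 (luminance value)
    s_hist = d_hist * G_HIST       # feature quantity 3 (histogram)
    return s_theta + s_beta + s_hist

def is_symmetric(d_theta, d_beta, d_hist, th=TH1):
    # A pair of blocks is symmetric when the combined score is within TH1.
    return abs(combined_score(d_theta, d_beta, d_hist)) <= th
```

Because the gradient carries the largest weight, a non-cancelling gradient change dominates the decision, matching the stated reliability ordering.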
(b) Generation of mirror regions
Fig. 14 is a diagram showing an example of the marking of mirror blocks. Fig. 15 is a diagram showing an example of the generation of the mirror region. The mark "M" indicates a block detected as a mirror block; the mark "Mb" indicates a mirror block determined to be part of the mirror.
When the hall 15 is on the upper side of the captured image, the block existing below the car threshold 47, out of the pair of blocks having symmetrical motion detected in (a) above, is detected as the mirror block. This is because the mirror 50 is usually provided on the rear surface 49 of the car 11. When both blocks of the pair are located below the car threshold 47, the block nearer the lower side of the captured image is detected as the mirror block.
As shown in fig. 14, the mark M is added to a block detected as a mirror block. Mirror blocks are detected for each image obtained in time series as a captured image, and when the same block is detected as a mirror block a predetermined number of times C1 or more (at least twice), that mirror block is determined to be part of the mirror 50 and is used for generating the mirror region ME. At this time, the mark M is replaced with the mark Mb. Requiring C1 or more detections excludes blocks that are only irregularly and erroneously detected as mirror blocks due to the influence of noise or the like.
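The mark-and-count step above can be sketched as follows; the function name and the data layout (one set of detected block coordinates per frame) are assumptions made only for illustration.

```python
# Hypothetical sketch of promoting marks M to Mb after C1 detections.
from collections import Counter

def confirm_mirror_blocks(detections_per_frame, c1=2):
    """detections_per_frame: one set of block coordinates marked "M" per
    time-series image. Blocks detected C1 or more times (at least twice)
    are promoted to "Mb", i.e. determined to be part of the mirror."""
    counts = Counter()
    for frame_blocks in detections_per_frame:
        counts.update(frame_blocks)
    return {block for block, n in counts.items() if n >= c1}
```

Blocks detected only once, for example because of noise, never reach C1 and are excluded, as described above.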
Here, as shown in fig. 15, when the aggregate of mirror blocks marked Mb occupies, for example, 75% or more of a rectangular area, that rectangular area is generated as the mirror region ME. A rectangular area is used because mirrors are generally rectangular in shape.

As another method, the rectangular region to be searched may be expanded on the condition that mirror blocks marked Mb account for, for example, 75% or more of it, and the finally obtained rectangular region may be generated as the mirror region ME.
Fig. 16 shows a specific example. Assume now that the mirror blocks denoted by "Mb1" to "Mb10" have been determined to be part of the mirror 50.

First, a first rectangular area including the mirror block "Mb1" is set as the search target. Next, a second rectangular region including the first rectangular region and the adjacent mirror blocks "Mb2" and "Mb3" is set as the search target. Proceeding in the same manner, the search target is expanded to a third rectangular region including the second rectangular region and the adjacent mirror blocks "Mb4", "Mb5", "Mb6"; then to a fourth rectangular region including the third rectangular region and the adjacent mirror blocks "Mb7", "Mb8"; and then to a fifth rectangular region including the fourth rectangular region and the adjacent mirror blocks "Mb9", "Mb10". In this way, the rectangular region to be searched is expanded under the above condition, and the finally obtained fifth rectangular region is defined as the mirror region ME.
With this method, the mirror region ME can be generated correctly even if it partially contains blocks not recognized as mirror blocks, for example because the image contains noise or because part of the mirror 50 is contaminated.
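One way to read the expansion procedure of fig. 16 is as a region-growing loop that accepts each enlargement only while confirmed mirror blocks keep a minimum density. The sketch below is an assumption-laden illustration (block coordinates as (row, column) pairs, a 75% density condition), not the embodiment's implementation.

```python
# Hypothetical sketch of the rectangular-region expansion in fig. 16.
# `mirror_blocks` is a set of (row, col) block coordinates marked "Mb";
# the rectangle grows while confirmed mirror blocks occupy >= `ratio`
# of its area, and the last accepted rectangle becomes the mirror region ME.

def grow_mirror_region(mirror_blocks, ratio=0.75):
    blocks = sorted(mirror_blocks)
    if not blocks:
        return None
    r0, c0 = blocks[0]
    best = (r0, c0, r0, c0)          # rectangle covering the seed block "Mb1"
    for r, c in blocks[1:]:
        top, left = min(best[0], r), min(best[1], c)
        bot, right = max(best[2], r), max(best[3], c)
        area = (bot - top + 1) * (right - left + 1)
        inside = sum(1 for (br, bc) in mirror_blocks
                     if top <= br <= bot and left <= bc <= right)
        if inside / area >= ratio:   # accept the expansion only if Mb density holds
            best = (top, left, bot, right)
    return best                      # (top, left, bottom, right) of the region ME
```

An isolated, erroneously marked block far from the mirror fails the density test, so the region stops growing toward it, which matches the robustness described above.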
Next, the operation of the elevator system in the second embodiment will be described.
Fig. 17 is a flowchart for explaining the operation of the elevator system. The processing shown in the flowchart is mainly performed by the image processing apparatus 20.
Now, assume a case where the car 11 opens its doors at some floor. The camera 12 provided in the car 11 captures images of the inside of the car 11 and of the hall 15 near the doorway at a predetermined frame rate. The images captured by the camera 12 are stored in the storage section 21 of the image processing apparatus 20 in chronological order.
The detection section 22 of the image processing apparatus 20 reads out the images stored in the storage section 21 in time series (step S21). The detection section 22 divides each image into predetermined block units and extracts, for example, a luminance gradient as a feature amount in block units (step S22). The detection section 22 then compares, block by block, the amounts of change in the feature amounts between the captured images, and calculates the differences in the feature-amount changes (step S23).

When a pair of blocks whose difference in feature-amount change is within the preset threshold TH1 is detected, the detection section 22 determines those blocks to be blocks having symmetrical motion (step S24). Of that pair of blocks, the detection section 22 detects the block existing on the lower side of the captured image, relative to the position of the car threshold 47, as the mirror block (step S25).
Specifically, as described with reference to fig. 12, when the amount of change in the luminance gradient of block b and the amount of change in the luminance gradient of block k are compared between the image of frame N and the image of frame n+1, if the difference in the amounts of change therebetween is within the threshold TH1, block b and block k are determined as a pair of blocks having symmetrical motion. In this case, since the block k exists on the lower side of the captured image, it is detected as a mirror image block.
In this way, when several mirror blocks recognized as part of the mirror 50 are detected from the captured image, the detection section 22 generates a mirror region ME on the captured image from the above-described aggregate of mirror blocks (step S26). Fig. 12 shows details of the mirror region generation process performed in step S26 described above.
The detection section 22 marks a block detected as a mirror image block on the captured image (step S31). At this time, the detection section 22 counts the number of times of detection as a mirror image block for each block (step S32). When the same block is detected as the mirror block a predetermined number of times C1 or more (yes in step S33), the detection unit 22 determines the mirror block as a part of the mirror 50 (step S34), and generates a mirror region ME on the captured image based on the determined assembly of the mirror blocks (step S35).
Specifically, as described with reference to fig. 15, when the aggregate of mirror blocks (Mb) determined to be part of the mirror 50 occupies a certain proportion or more of a rectangular region, the mirror region ME is generated based on the edge of that rectangular region. Alternatively, as described with reference to fig. 16, the rectangular region to be searched is expanded on the condition that mirror blocks (Mb) account for a predetermined proportion or more of it, and the mirror region ME is generated based on the edge of the finally obtained rectangular region.
In this way, when the mirror region ME is generated on the captured image, the detection process of the user focusing on the edge of the mirror region ME is performed later in the same manner as in the first embodiment (see steps S13 to S19 of fig. 8).
As described above, according to the second embodiment, even when the mirror region ME is set at the position of the mirror 50 based on the portions having symmetrical motion on the captured image, a user located in the mirror region ME can be accurately detected, as in the first embodiment. In particular, since the second embodiment requires no prior information about the mirror 50, it has the advantage of being applicable regardless of the specification of the car 11.
In the second embodiment, the case where the car 11 is opened is described as an example, but the same applies to the case where the car 11 is closed. That is, by detecting one of a pair of blocks having symmetrical motion as a mirror block from an image captured by the camera 12 during door closing, the mirror region ME can be generated from an aggregate of the mirror blocks.
In addition, even when the mirror 50 is provided on the side surface 48a or the side surface 48b, the mirror region ME can be generated by the same method as described above. In this case, when a pair of blocks having symmetrical motion is determined from the captured image, a block located below the car threshold 47 and near the left or right end of the captured image may be detected as a mirror block, and the mirror region ME may be generated from an aggregate of the mirror blocks.
To increase the reliability of detection, the speed of change (intensity) of the feature amount may additionally be used. The "speed of change of the feature amount" refers to the speed at which the feature amount changes between frame images. That is, if there is a pair of blocks whose feature amounts change at the same speed, one block of the pair is detected as a mirror block.
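A minimal sketch of this additional criterion, assuming per-frame feature values for each block and a simple equal-speed comparison (function names and tolerance are hypothetical):

```python
# Hypothetical sketch: compare the speed at which a feature amount changes
# between frame images for two candidate blocks b and k.

def change_speed(values, frame_dt=1.0):
    """Per-frame rate of change of a feature amount across consecutive images."""
    return [(b - a) / frame_dt for a, b in zip(values, values[1:])]

def changes_at_same_speed(vals_b, vals_k, tol=1e-6):
    """True when blocks b and k change their feature amount at the same speed,
    making the pair a stronger mirror-block candidate."""
    sb, sk = change_speed(vals_b), change_speed(vals_k)
    return len(sb) == len(sk) and all(abs(x - y) <= tol for x, y in zip(sb, sk))
```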
According to at least one embodiment described above, it is possible to provide an elevator system capable of preventing false detection by a mirror to accurately detect a user riding in an elevator car.
In addition, although several embodiments of the present application have been described, these embodiments are presented by way of example and are not meant to limit the scope of the application. These novel embodiments can be implemented in various other modes, and various omissions, substitutions, and changes can be made without departing from the spirit of the application. These embodiments and modifications thereof are included in the scope and gist of the application, and are included in the application described in the claims and their equivalents.

Claims (6)

1. An elevator system including a mirror provided in a car, the elevator system comprising:
an imaging unit that images a range including a hall near an entrance from within the car;
a mirror region setting unit that sets a mirror region at a position of the mirror on the captured image obtained by the imaging unit and sets an edge around the mirror region; and
a detection processing unit that, when a user riding in the car is detected from the captured image, determines whether or not the user is in the mirror region based on the state of the edge set around the mirror region by the mirror region setting unit.
2. An elevator system according to claim 1, characterized in that,
when the edge set around the mirror region is interrupted, the detection processing unit determines that the user is in the mirror region, and processes the image of the user detected in the mirror region as a real image.
3. An elevator system according to claim 1, characterized in that,
when the edge set around the mirror region is not interrupted, the detection processing unit determines that the user is not in the mirror region, and processes the image of the user detected in the mirror region as a mirror image.
4. An elevator system according to claim 1, characterized in that,
when the detection processing unit determines that the user is within the mirror region, the detection processing unit includes the user in the number of passengers.
5. An elevator system according to claim 1, characterized in that,
the mirror region setting unit acquires information on the mirror in the car, and sets the mirror region based on the position of the mirror on the captured image.
6. An elevator system according to claim 1, characterized in that,
the mirror region setting unit detects the position of the mirror based on the change information of the feature amount of the portion having the symmetrical motion on the captured image, and sets the mirror region at the position of the mirror.
CN202310661299.4A 2022-06-07 2023-06-06 Elevator system Pending CN117185072A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022092218A JP7305849B1 (en) 2022-06-07 2022-06-07 elevator system
JP2022-092218 2022-06-07

Publications (1)

Publication Number Publication Date
CN117185072A true CN117185072A (en) 2023-12-08

Family

ID=87072382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310661299.4A Pending CN117185072A (en) 2022-06-07 2023-06-06 Elevator system

Country Status (2)

Country Link
JP (1) JP7305849B1 (en)
CN (1) CN117185072A (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6683173B2 (en) 2017-05-15 2020-04-15 フジテック株式会社 Elevator equipment

Also Published As

Publication number Publication date
JP2023179122A (en) 2023-12-19
JP7305849B1 (en) 2023-07-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination