CN115703608A - User detection system of elevator - Google Patents

User detection system of elevator

Info

Publication number: CN115703608A
Application number: CN202210486735.4A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 榎原孝明, 野本学, 白仓邦彦, 木村纱由美
Current Assignee: Toshiba Elevator and Building Systems Corp
Original Assignee: Toshiba Elevator Co Ltd (application filed by Toshiba Elevator Co Ltd)
Legal status: Pending
Prior art keywords: moving body, reflection, detection, car, user

Landscapes

  • Image Analysis (AREA)
  • Indicating And Signalling Devices For Elevators (AREA)
  • Elevator Door Apparatuses (AREA)
Abstract

The invention suppresses the over-detection of shadows caused by the lighting environment and accurately detects users in the elevator car and the hall. A user detection system of an elevator of an embodiment comprises: a reflection estimation unit (24d) that estimates a reflection area, including a portion where illumination light is reflected and its surroundings, in an image captured by a camera provided in the car; a moving body detection unit (24b) that changes the processing for detecting a moving body from the captured image based on the reflection level of the reflection area estimated by the reflection estimation unit (24d); and a human detection unit (24c) that detects the moving body as a human based on the information on the moving body detected by the moving body detection unit (24b).

Description

User detection system of elevator
This application is based on Japanese Patent Application No. 2021-130126 (filed August 6, 2021) and claims the benefit of its priority. The entire contents of that application are incorporated herein by reference.
Technical Field
Embodiments of the present invention relate to a user detection system for an elevator.
Background
When the doors of an elevator car open, the fingers of a user standing in the car may be drawn into the door pocket. When a user in the hall enters the car while the doors are closing, the user may collide with the leading edge of a door. To prevent such door accidents, a system is known in which users in the hall and users in the car are detected with a single camera provided in the car, and the result is reflected in the door opening/closing control.
Disclosure of Invention
In the above system, the presence or absence of a user is detected based on the luminance difference between frames of the captured image. However, in the car or the hall, when illumination light is reflected from the floor surface via a user, the amount of light incident on the camera changes, and the luminance changes accordingly. A phenomenon referred to as a "shadow" then occurs at the portion where the light is reflected and around it, and this "shadow" may be over-detected as a user (see S2 in fig. 7B). Note that "over-detection" is used here in the same sense as "false detection": a shadow is erroneously detected as a user.
The invention provides an elevator user detection system that suppresses the over-detection of shadows caused by the lighting environment and accurately detects users in the car or the hall.
A user detection system of an elevator of one embodiment comprises: a reflection estimation unit that estimates a reflection area including a portion where illumination light is reflected and its surroundings in a captured image of a camera provided in a car; a moving body detection unit that changes processing for detecting a moving body from the captured image, based on the reflection level of the reflection region estimated by the reflection estimation unit; and a human detection unit that detects the moving body as a human based on the information on the moving body detected by the moving body detection unit.
According to the elevator user detection system configured as described above, the over-detection of shadows caused by the lighting environment is suppressed, and a user in the car or the hall can be detected accurately.
Drawings
Fig. 1 is a diagram showing a configuration of an elevator user detection system according to an embodiment.
Fig. 2 is a diagram showing a configuration of a portion around an entrance in a car according to this embodiment.
Fig. 3 is a diagram for explaining a coordinate system in real space in this embodiment.
Fig. 4 is a diagram showing an example of an image captured by the camera in the embodiment.
Fig. 5 is a diagram schematically showing the configuration of the boarding detection area in this embodiment.
Fig. 6 is a diagram for explaining false detection of a shadow generated in the pull-in detection region in this embodiment.
Fig. 7A is a diagram for explaining shading due to reflection of illumination light in the embodiment, and shows a case where there is no user in the car.
Fig. 7B is a diagram for explaining shading due to reflection of illumination light in the embodiment, and shows a case where a user is present in the car.
Fig. 8 is a diagram showing an example of the reflection region in this embodiment.
Fig. 9 is a diagram showing an example of a captured image including a person and a shadow in the embodiment.
Fig. 10 is a diagram showing a state in which a change in the luminance value of a person in the captured image of fig. 9 is observed in the x-axis direction.
Fig. 11 is a diagram showing a state in which a change in the luminance value of the shadow on the captured image of fig. 9 is observed in the x-axis direction.
Fig. 12 is a diagram for explaining a method of calculating the intensity of the mountain-shaped edge in the present embodiment.
Fig. 13 is a diagram showing a specific example of the calculation of the intensity of the mountain-shaped edge.
Fig. 14 is a flowchart showing the processing operation of the user detection system.
Fig. 15 is a flowchart showing the detection process executed in step S103 of fig. 14.
Detailed Description
Hereinafter, embodiments will be described with reference to the drawings.
The disclosure is merely an example, and the invention is not limited to the contents described in the following embodiments. Variations readily apparent to those skilled in the art are of course included within the scope of the present disclosure. In the drawings, the dimensions, shapes, and the like of the respective portions are sometimes shown schematically, modified from the actual embodiment, to make the description clearer. In the drawings, corresponding elements are denoted by the same reference numerals, and detailed description thereof may be omitted.
Fig. 1 is a diagram showing a configuration of an elevator user detection system according to an embodiment. In addition, although one car is described as an example, the same configuration is applied to a plurality of cars.
A camera 12 is provided at an upper portion of the entrance of the car 11. Specifically, the camera 12 is provided in the lintel plate 11a covering the upper part of the doorway of the car 11, with its lens portion facing directly downward or inclined at a predetermined angle toward the hall 15 or the inside of the car 11.
The camera 12 is a small monitoring camera such as an in-vehicle camera, has a wide-angle lens or a fisheye lens, and can continuously capture several frames per second (for example, 30 frames/second). The camera 12 is activated when the car 11 arrives at the hall 15 at each floor, for example, and captures images including the vicinity of the car door 13 and the hall 15. The camera 12 may instead be kept in operation at all times while the car 11 is in service.
The imaging range at this time is adjusted to L1 + L2 (L1 ≫ L2). L1 is the imaging range on the hall side, extending a predetermined distance from the car door 13 toward the hall 15. L2 is the imaging range on the car side, extending a predetermined distance from the car door 13 toward the car rear surface. L1 and L2 are ranges in the depth direction; the range in the width direction (the direction orthogonal to the depth direction) is at least larger than the lateral width of the car 11.
In the hall 15 at each floor, a hall door 14 is provided at the arrival entrance of the car 11 so as to be openable and closable. The hall doors 14 engage with the car doors 13 and open and close together with them when the car 11 arrives. The power source (door motor) is on the car 11 side, and the hall doors 14 merely follow the car doors 13 in opening and closing. In the following description, the hall doors 14 are assumed to be open when the car doors 13 are open, and closed when the car doors 13 are closed.
Each image (video) continuously captured by the camera 12 is analyzed and processed in real time by the image processing device 20. Note that, although the image processing device 20 is drawn outside the car 11 in fig. 1 for convenience, it is actually housed in the lintel plate 11a together with the camera 12.
The image processing apparatus 20 includes a storage unit 21 and a detection unit 22. The storage unit 21 is formed of a storage device such as a RAM. The storage unit 21 sequentially stores images captured by the camera 12, and has a buffer area for temporarily storing data necessary for the processing of the detection unit 22. In addition, the storage unit 21 may store an image subjected to processing such as distortion correction, enlargement and reduction, and partial cropping as preprocessing of the captured image.
The detection unit 22 is composed of, for example, a microprocessor, and detects a user in the car 11 or the hall 15 using the images captured by the camera 12. The detection unit 22 is functionally divided into a detection area setting unit 23 and a detection processing unit 24. These elements may be implemented by software, by hardware such as an IC (Integrated Circuit), or by a combination of software and hardware. In addition, the elevator control device 30 may have some or all of the functions of the image processing device 20.
The detection area setting unit 23 sets at least one detection area for detecting a user on the captured image obtained from the camera 12. In the present embodiment, a detection area E1 for detecting a user in the hall 15 and detection areas E2 and E3 for detecting a user in the car 11 are set. The detection area E1 is used as a boarding detection area and is set to extend from the doorway (car door 13) of the car 11 toward the hall 15. The detection area E2 is used as a pull-in detection area and is set on the entrance pillars 41a and 41b in the car 11. The detection area E3 is used as a pull-in detection area like the detection area E2, and is set on the floor surface 19 on the doorway side in the car 11 (see fig. 3).
The detection processing unit 24 includes an edge extraction unit 24a, a moving body detection unit 24b, a person detection unit 24c, and a reflection estimation unit 24d, and analyzes the captured image obtained from the camera 12 to detect a user present in the car 11 or the hall 15. The edge extraction section 24a, the moving body detection section 24b, the human detection section 24c, and the reflection estimation section 24d will be described in detail later with reference to fig. 7 to 13. When the user detected by the detection processing unit 24 is present in any of the detection areas E1 to E3, a predetermined countermeasure process (door opening/closing control) is executed.
The elevator control device 30 is constituted by a computer having a CPU, ROM, RAM, and the like. The elevator control device 30 controls the operation of the car 11. The elevator control device 30 includes a door opening/closing control unit 31 and a warning unit 32.
The door opening/closing control unit 31 controls the opening and closing of the car doors 13 when the car 11 arrives at the hall 15. Specifically, the door opening/closing control unit 31 opens the car doors 13 when the car 11 arrives at the hall 15, and closes them after a predetermined time has elapsed. However, when the detection processing unit 24 detects a user in the detection area E1 during the door closing operation of the car doors 13, the door opening/closing control unit 31 prohibits the door closing operation and re-opens the car doors 13 in the fully open direction to maintain the door open state.
When the detection processing unit 24 detects a user in the detection area E2 or E3 during the door opening operation of the car doors 13, the door opening/closing control unit 31 performs door opening/closing control for avoiding a door accident (an accident of being pulled into the door pocket). Specifically, the door opening/closing control unit 31 temporarily stops the door opening operation of the car doors 13, moves them in the reverse direction (door closing direction), or slows the door opening speed of the car doors 13.
Fig. 2 is a diagram showing a configuration of a portion around an entrance in the car 11.
A car door 13 is provided at the entrance of the car 11 so as to be openable and closable. In the example of fig. 2, the car door 13 is of a double-opening type, and its two door panels 13a and 13b open and close in opposite directions along the front width direction (horizontal direction). The "front width" refers to the doorway of the car 11.
Entrance pillars 41a and 41b are provided on both sides of the doorway of the car 11 and, together with the lintel plate 11a, surround the doorway of the car 11. The "entrance pillar" is also called a front pillar and generally has, on its back side, a door pocket that receives the car door 13. In the example of fig. 2, when the car door 13 is opened, one door panel 13a is housed in a door pocket 42a provided on the back side of the entrance pillar 41a, and the other door panel 13b is housed in a door pocket 42b provided on the back side of the entrance pillar 41b. One or both of the entrance pillars 41a and 41b are provided with a display 43, an operation panel 45 on which a destination floor button 44 and the like are arranged, and a speaker 46. In the example of fig. 2, the speaker 46 is provided on the entrance pillar 41a, and the display 43 and the operation panel 45 are provided on the entrance pillar 41b.
The camera 12 is provided in the lintel plate 11a arranged horizontally above the doorway of the car 11. Here, in order to detect users in the hall 15 until the doors finish closing, the camera 12 is attached in alignment with the door-closed position of the car doors 13. Specifically, if the car door 13 is of the double-opening type, the camera 12 is attached to the center portion of the lintel plate 11a. In addition, a lighting device 48 using, for example, LEDs is provided on the ceiling surface in the car 11.
As shown in fig. 3, the camera 12 captures images in a coordinate system in which the direction horizontal to the car door 13 provided at the doorway of the car 11 is the X axis, the direction from the center of the car door 13 toward the hall 15 (the direction perpendicular to the car door 13) is the Y axis, and the height direction of the car 11 is the Z axis.
Fig. 4 is a diagram showing an example of an image captured by the camera 12. The upper side shows the hall 15, and the lower side shows the interior of the car 11. In the figure, 16 denotes the floor surface of the hall 15, 19 denotes the floor surface of the car 11, and E1, E2, and E3 denote detection areas.
The car door 13 has two door panels 13a and 13b that move in opposite directions on a car sill 47. The same applies to the hall door 14, which has two door panels 14a and 14b that move in opposite directions on a hall sill 18. The door panels 14a and 14b of the hall door 14 move in the door opening/closing direction together with the door panels 13a and 13b of the car door 13.
The camera 12 is installed at the upper portion of the doorway of the car 11. Therefore, when the car 11 opens its doors at the hall 15, the predetermined range on the hall side (L1) and the predetermined range in the car (L2) are photographed, as shown in fig. 1. In the predetermined range on the hall side (L1), a detection area E1 for detecting a user about to board the car 11 is set.
In real space, the detection area E1 extends a distance L3 from the center of the doorway (front width) toward the hall (L3 ≦ the hall-side imaging range L1). The lateral width W1 of the detection area E1 at full opening is set to a distance equal to or greater than the lateral width W0 of the doorway (front width). As indicated by the oblique lines in fig. 4, the detection area E1 is set to include the sills 18 and 47 and to exclude the dead zones of the door frames 17a and 17b. The lateral dimension (X-axis direction) of the detection area E1 may be changed in accordance with the opening/closing operation of the car doors 13, and so may its vertical dimension (Y-axis direction).
As shown in fig. 5, the detection area E1 serving as the boarding detection area is composed of a boarding intention estimation area E1a, an approach detection area E1b, and an on-sill detection area E1c. The boarding intention estimation area E1a is an area for estimating whether a user approaching the car 11 intends to board. The approach detection area E1b is an area for detecting the approach of a user to the doorway of the car 11. The on-sill detection area E1c is an area for detecting a user passing over the sills 18 and 47.
Here, the present system has detection areas E2 and E3 independently of the detection area E1 used for boarding detection. The detection areas E2 and E3 are used as pull-in detection areas. The detection area E2 is set with a predetermined width along the inner side surfaces 41a-1 and 41b-1 of the entrance pillars 41a and 41b of the car 11, and may be sized according to the width of those inner side surfaces. The detection area E3 is set with a predetermined width along the car sill 47 on the floor surface 19 of the car 11.
When a user is detected in the detection area E2 or E3 during the door opening operation of the car doors 13, a countermeasure is executed, such as temporarily stopping the door opening operation, moving the doors in the reverse direction (door closing direction), or slowing the door opening speed. In addition, a warning such as "Please move away from the door" is issued by audio announcement.
(problem of detection processing)
In general, pull-in detection is premised on the assumption that luminance changes of the image in the detection areas E2 and E3, the pull-in detection areas, are caused by the entry of a user. However, since the detection areas E2 and E3 are set inside the car 11, they are strongly affected by the lighting environment of the car room. That is, as shown in fig. 6, even when a user P1 stands in the car at a position away from the car door 13, the shadow S1 of the user P1 may enter the detection area E2 or E3 under the light of the lighting device 48. When the shadow S1 enters the detection area E2 or E3, a large luminance change occurs in the image as the shadow S1 moves, and the shadow S1 may be over-detected as the user P1.
This is the same in the boarding detection process. That is, the detection area E1 as the boarding detection area is set in the waiting hall 15 around the doorway of the car 11. When a shadow enters the detection area E1 due to the lighting environment of the hall 15, there is a possibility that the shadow is over-detected due to a change in brightness on the image.
Luminance change due to shadows
Fig. 7A and 7B are diagrams for explaining shadows generated due to reflection of illumination light. Fig. 7A shows a case where the user P1 is not present in the car 11, and fig. 7B shows a case where the user P1 is present in the car 11.
As shown in fig. 7A, when there is no user P1 in the car 11 and the light emitted from the lighting device 48 is reflected by the floor surface 19, reflected light of substantially the same amount as the emitted light enters the camera 12. However, as shown in fig. 7B, if a user P1 stands between the lighting device 48 and the floor surface 19, the light of the lighting device 48 is reflected by the floor surface 19 via the user P1, so the amount of reflected light entering the camera 12 changes and the luminance changes. At this time, a phenomenon called a "shadow" occurs in the vicinity of the user P1, and the shadow may be over-detected as the user P1. S2 in the figure denotes this shadow. The shadow S1 shown in fig. 6 is a luminance change darker than the floor surface 19, whereas the shadow S2 is a luminance change brighter than the floor surface 19.
Similarly, in the hall 15, if a shadow due to reflected light occurs near a user, depending on the lighting environment of the hall 15, the shadow may be over-detected as the user. In particular, when a downlight is used as the lighting device, the floor surface is illuminated only partially, so shadows due to reflected light occur easily.
Therefore, in the present embodiment, the detection processing unit 24 of the image processing device 20 shown in fig. 1 is provided with the following functions (edge extraction, moving body detection, person detection, and reflection estimation). When a user is detected by using the edge change between images (frames) obtained continuously as captured images, a reflection area where a shadow exists is estimated, and the detection processing within that reflection area is changed. The "edge change" refers to a change in the edges extracted at the same position between images, and includes the "edge difference", which is the difference of edge intensities. Hereinafter, the functions of the detection processing unit 24 will be described in detail, taking the case of obtaining an edge difference as an example of an edge change.
(a) Reflection estimation
First, reflection estimation will be described.
Reflection estimation is one of the functions necessary for suppressing the false detection of a "shadow". Since the location where a "shadow" occurs cannot be specified in advance, the reflection estimation unit 24d estimates a reflection area in the captured image that includes the portion where the illumination light is reflected and its surroundings (floor surface or shadow). The reflection area is estimated by analyzing the deviation (luminance distribution) of the luminance values of the pixels of the captured image. Deviation is used because luminance values alone cannot distinguish, for example, the reflection area from a person wearing white clothes.
The estimation of the reflection area is performed by analyzing the luminance distribution of each pixel in a range of, for example, 13 × 13 pixels using 1 image or a plurality of images. The analysis range of the luminance distribution may be fixed, or may be automatically changed according to parameter setting or an imaging subject.
The analysis range of the luminance distribution may also be changed according to the height of the installation position of the camera 12, the type of lighting device, and the like. Since the area where a "shadow" occurs tends to widen as the camera 12 is mounted farther from the floor surface, it is preferable in that case to analyze the luminance distribution of each pixel over a wide range. Likewise, when a downlight is used as the lighting device, "shadows" occur easily, so it is preferable to analyze the luminance distribution of each pixel over a wide range.
In addition, when the reflection region is estimated, the edge distribution may be analyzed. Since the edge distribution also reflects the variation in the luminance value of each image, the reflection area can be estimated in the same manner as the luminance distribution. The method of extracting the edge will be described later.
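As an illustration of this analysis, the sketch below (Python) classifies each pixel into a reflection level from the local spread of luminance values, matching the 0-4 classification described for fig. 8 below. The 13 × 13 window follows the text; the standard-deviation measure, the level boundaries, and the function name are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def estimate_reflection_levels(gray: np.ndarray, win: int = 13) -> np.ndarray:
    """Classify each pixel into reflection levels 0-4 from the local
    deviation of luminance values (a large spread means a bright
    reflection spot next to dark floor/shadow). gray: uint8 image."""
    g = gray.astype(np.float32)
    pad = win // 2
    gp = np.pad(g, pad, mode="edge")
    H, W = g.shape
    std = np.empty_like(g)
    for y in range(H):              # simple sliding window, not optimized
        for x in range(W):
            std[y, x] = gp[y:y + win, x:x + win].std()
    # Map the deviation to levels 0-4 (assumed boundaries; level 0 = no reflection).
    bounds = [10, 20, 35, 55]
    return np.digitize(std, bounds).astype(np.uint8)
```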
Fig. 8 is a diagram showing an example of the reflection area. Reference numeral 51 in the figure denotes a person, specifically, a user in the car 11.
The reflection area RE includes a plurality of areas RE-1 to RE-4 having different reflection levels. An area containing many bright luminance values has a higher reflection level. In the example of fig. 8, the reflection levels of the reflection area RE existing in the vicinity of the person 51 are classified into five levels, 0 to 4 (level 0 indicating no reflection).
Region RE-1: reflection level 4.
Region RE-2: reflection level 3.
Region RE-3: reflection level 2.
Region RE-4: reflection level 1.
In the example of fig. 8, the areas RE-1 to RE-4 are shown schematically for simplicity of explanation, but in reality they have more complicated shapes. In general, where the light is strongly reflected by a metal part in the car 11 such as the car sill 47, the reflection level becomes high. However, if a shadow of the person 51 occurs around the portion where the illumination light is reflected, the reflection level differs because of the shadow. For example, when the illumination light is reflected by the floor surface via the person 51, the reflection level around the person 51 changes in a complicated manner owing to light passing through the gaps between the fingers of the person 51, light blocked by the palm, and the like.
In this way, a luminance change occurs at the portion near the person 51 where the reflection level changes, that is, the portion where a shadow occurs, and that shadow may be over-detected as the person 51. Therefore, in the present embodiment, in order to suppress such over-detection of shadows, a region near the person 51 in which the luminance values vary is estimated as the reflection area RE where a shadow exists, and the detection processing in the reflection area RE (the moving body detection processing described later) is changed.
(b) Edge extraction
Edge extraction is a function that helps suppress the false detection of a "shadow", although it is not strictly required when focusing only on reflection-induced shadows. The edge extraction unit 24a extracts edge information from the image captured by the camera 12. In this case, edge information may be extracted from one image or from a plurality of images. An "edge" refers to a boundary line where the luminance values of the pixels of an image change discontinuously. For example, an edge extraction filter such as a Sobel filter or a Laplacian filter is used to extract, as an edge, a portion where the luminance value changes characteristically in the image. The edge information includes the direction and intensity of the luminance gradient.
The edge intensity is determined by the luminance gradient. The range for obtaining the luminance gradient may be, for example, a range of 3 × 3 pixels, or may be other ranges. The range of the luminance gradient may be fixed, or may be automatically changed according to parameter setting or an imaging target.
Combination of direction and intensity of the luminance gradient
The edge extraction unit 24a obtains the direction and intensity of the luminance gradient for each pixel of the captured image, and extracts edges from which shadow regions are removed based on information combining those directions and intensities. The directions of the luminance gradient comprise the 4 horizontal/vertical directions of upper → lower, lower → upper, left → right, and right → left, and the 4 oblique directions of upper left → lower right, lower left → upper right, upper right → lower left, and lower right → upper left. In order to suppress the over-detection of shadows, it is preferable to obtain the luminance gradient in at least two directions.
In addition, edges for which co-occurrence holds may be extracted. For example, for a pixel of interest, edges having luminance gradients in both the left and right directions may be extracted. The edge intensity is calculated by, for example, averaging the luminance differences in the selected directions.
Mountain-shaped edge
The edge extraction unit 24a extracts an edge whose luminance value changes in a mountain shape as an edge from which the shadow region is removed.
Fig. 9 is a diagram showing an example of a captured image including a person and a shadow. Reference numeral 51 in the figure denotes a person, specifically, a user in the car 11. In the figure, 52 is a shadow formed on the floor surface in the car 11, and schematically shows a shadow of a hand of the person 51 protruding forward. Fig. 10 is a diagram showing a state in which a change in luminance value of the image 53 corresponding to the human 51 is observed in the x-axis direction. Fig. 11 is a diagram showing a state in which a change in the luminance value of the image 54 corresponding to the shadow 52 is observed in the x-axis direction.
As shown in fig. 10, the image 53 corresponding to the person 51 contains a plurality of edges whose luminance values change discontinuously at the fingers of the person 51, the wrinkles of the clothes, and the like. On the other hand, as shown in fig. 11, the change in luminance value inside the image 54 corresponding to the shadow 52 is flat; the luminance value changes at the boundary, but the direction of the luminance gradient there is a single direction. Therefore, in order to suppress the over-detection of the shadow 52, it is effective to extract edges that have combinations of luminance gradient directions and intensities in two or more directions and whose luminance values change continuously in a mountain shape (hereinafter referred to as mountain-shaped edges). By performing edge extraction focused on such mountain-shaped edges, edges outside the shadow region can be extracted efficiently from the captured image, and by using the edge difference, i.e., the change of those edges, detection processing unaffected by the motion of shadows can be realized.
A method of calculating the intensity of the mountain-shaped edge will be described with reference to fig. 12 and 13.
For example, a pixel located at the center of a range of 3 × 3 pixels is set as a target pixel, and luminance differences in 4 directions, up, down, left, and right, are obtained for the target pixel. The average of these luminance differences is calculated as the intensity of the mountain-shaped edge.
Assume 256 gray levels, with the luminance value of the pixel of interest being "191". When the luminance value of the pixel above the pixel of interest is "0 (black)", that of the pixel to its right is "64", that of the pixel below it is "127", and that of the pixel to its left is "255 (white)", the intensity of the mountain-shaped edge is obtained by the following calculation.
{(191-0)+(191-64)+(191-127)+0}/4=95.5
Since the luminance value of the pixel to the left of the pixel of interest is larger than that of the pixel of interest, its term is counted as "0". From the above expression, the edge intensity at this pixel position is obtained as "96" (95.5 rounded to an integer).
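A minimal sketch of this calculation, reproducing the worked example above; the function name and the toy 3 × 3 image are ours, and the clamping of negative differences to 0 follows the text.

```python
import numpy as np

def mountain_edge_intensity(img: np.ndarray, y: int, x: int) -> int:
    """Average luminance drop from the pixel of interest to its four
    neighbours; drops toward a brighter neighbour count as 0, so only
    mountain-shaped peaks score high."""
    c = int(img[y, x])
    neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    drops = [max(c - int(img[ny, nx]), 0) for ny, nx in neighbours]
    return int(sum(drops) / 4 + 0.5)   # 95.5 -> 96, as in the text

# Toy image reproducing the example: centre 191, up 0, right 64,
# down 127, left 255 (the corner values are padding and unused).
img = np.array([[  0,   0,  0],
                [255, 191, 64],
                [  0, 127,  0]], dtype=np.uint8)
print(mountain_edge_intensity(img, 1, 1))   # -> 96
```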
(c) Moving body detection
Moving body detection is a function of detecting an object with some motion on the captured image. In general, a method of detecting the presence or absence of a moving body from the luminance change (luminance difference) between images is used. However, since the luminance also changes at a portion where a "shadow" occurs, such a portion may be over-detected as a moving body (i.e., a user).
Therefore, in the present embodiment, the moving body detection is performed by using the edge difference. The moving body detection unit 24b compares the edges extracted by the edge extraction unit 24a with each other among the images consecutively obtained as captured images to obtain an edge difference, and detects a moving body from the edge difference.
The "edge difference" specifically means a difference in edge strength. If the explanation is made with the example of fig. 13, it is now assumed that the edge intensity in the pixel of interest of the first image is calculated as "96". If the edge intensity of the same pixel of interest of the next image is "10", the edge intensity difference is 96-10=86. For example, if the threshold value is set to "40", the threshold value is "86" or more, and thus it is determined that there is motion in the part of the target pixel.
As another method, the difference may be obtained by binarizing the edge intensity.
For example, when the threshold is set to "40", the edge intensity "96" is binarized to "255" and the edge intensity "10" is binarized to "0". Since the difference between the two, 255 - 0 = 255, is not "0", it is determined that there is motion.
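The following sketch illustrates both variants with the numbers from the example (threshold 40, edge intensities 96 and 10); the function names are assumptions.

```python
import numpy as np

def motion_from_edge_diff(prev_edges: np.ndarray, curr_edges: np.ndarray,
                          th: int = 40) -> np.ndarray:
    """Variant 1: threshold the raw difference of edge intensities."""
    diff = np.abs(curr_edges.astype(np.int16) - prev_edges.astype(np.int16))
    return diff >= th                      # True = moving pixel

def motion_from_binarized_edges(prev_edges, curr_edges, th: int = 40):
    """Variant 2: binarize each edge image first, then compare."""
    prev_bin = np.where(prev_edges >= th, 255, 0)
    curr_bin = np.where(curr_edges >= th, 255, 0)
    return prev_bin != curr_bin            # non-zero difference = motion

prev = np.array([[96]], dtype=np.uint8)    # edge intensity in frame t
curr = np.array([[10]], dtype=np.uint8)    # same pixel in frame t+1
print(motion_from_edge_diff(prev, curr))        # [[ True]]  (|96 - 10| = 86 >= 40)
print(motion_from_binarized_edges(prev, curr))  # [[ True]]  (255 vs 0)
```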
In the example of fig. 9, 55 in the figure indicates a pixel (moving pixel) determined to have motion. In the image 53 of the person 51, a plurality of moving pixels 55 exist in a part of the hand or clothes, but no moving pixel 55 exists in the image 54 of the shadow 52. As described later, it is possible to determine whether or not a moving body is a human figure from the distribution of the moving pixels 55.
Edge difference and luminance difference
The edge difference and the luminance difference may be used together to detect a moving body. In this case, separately from the edge difference, the moving body detection unit 24b obtains a luminance difference (difference in luminance values) between the images obtained continuously as captured images, and detects a moving body based on both the luminance difference and the edge difference. Methods of integrating the edge difference result and the luminance difference result include the following logical operations (AND/OR operations, etc.) and parameter changes.
AND operation: when a moving pixel on the image is detected by both the edge difference and the luminance difference, it is determined that a moving body exists in a predetermined range including that moving pixel.
OR operation: the luminance difference is used in a region with many edges (a region where shadows are unlikely), and the edge difference is used in a region with few edges (a region where shadows are likely). A "region with many edges" is a region in which the number of edges (pixels) extracted by the edge extraction unit 24a is equal to or greater than a predetermined number set as the criterion for judging shadows. A "region with few edges" is a region in which that number is less than the predetermined number.
Parameter change: in a region with many edges (a region where shadows are unlikely), the luminance-difference parameter is set so that detection is easier (that is, the threshold for the luminance difference is lowered below the standard value); in a region with few edges (a region where shadows are likely), it is set so that detection is harder (the threshold is raised above the standard value). A sketch of these integration strategies follows.
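In the sketch below, `edge_count` stands in for the per-region edge count and the cutoff `min_edges` for the predetermined number used to judge shadows; both names and the cutoff value are assumed placeholders.

```python
import numpy as np

def integrate_masks(edge_motion: np.ndarray, luma_motion: np.ndarray,
                    edge_count: np.ndarray, min_edges: int = 50,
                    mode: str = "OR") -> np.ndarray:
    """Combine edge-difference and luminance-difference motion masks."""
    if mode == "AND":
        # Motion only where both differences agree.
        return edge_motion & luma_motion
    # OR-style selection: trust the luminance difference where edges are
    # plentiful (shadows unlikely), the edge difference where they are few.
    many_edges = edge_count >= min_edges
    return np.where(many_edges, luma_motion, edge_motion)
```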
Detection of moving body taking into account reflection region
The cause of a luminance change is not only an ordinary "shadow" but also the reflection-induced "shadow" produced by the illumination light. Therefore, if moving body detection were performed simply from the edge change alone, a portion where such a shadow occurs could still be over-detected as a moving body (i.e., a user). As described with reference to fig. 8, the region of the captured image where such shadows occur is estimated as the reflection area, which includes the portion where the illumination light is reflected and its surroundings.
The moving body detection unit 24b changes the processing for detecting a moving body from the captured image based on the reflection level of the reflection area estimated by the reflection estimation unit 24d. "Changing the processing for detecting a moving body from the captured image" specifically means changing the threshold for the edge difference. A region with a higher reflection level (i.e., a brighter region) can be regarded as a region where over-detection due to shadows is more likely to occur, so the threshold for the edge difference is raised above the standard value there, making a moving body harder to detect.
For example, if the threshold for the edge difference is denoted TH1, TH1 is raised by 1 step in the area RE-4 of reflection level 1 shown in fig. 8. Similarly, TH1 is raised by 2 steps in the area RE-3 of reflection level 2, by 3 steps in the area RE-2 of reflection level 3, and by 4 steps in the area RE-1 of reflection level 4.
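A sketch of this stepwise adjustment as a per-pixel threshold map; the base value and step size are assumptions (the embodiment does not specify them).

```python
import numpy as np

def edge_diff_threshold_image(reflection_levels: np.ndarray,
                              th1_base: int = 40, step: int = 10) -> np.ndarray:
    """Per-pixel threshold for the edge difference: raise TH1 by one step
    per reflection level (level 0 keeps the standard value)."""
    # Level-4 pixels get 40 + 4*10 = 80, making motion harder to detect there.
    return th1_base + step * reflection_levels.astype(np.int32)
```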
Even in the case of a configuration in which moving body detection is performed using only the luminance difference, the threshold value for the luminance difference may be changed stepwise according to the reflection level. This makes it possible to strictly determine the luminance difference in the region with a high reflection level and suppress the over-detection due to the shadow.
Further, moving body detection may use both the luminance difference and the edge difference. In this case, the moving body detection unit 24b uses the luminance difference or the edge difference according to the reflection level of the reflection area. That is, if the reflection level of the reflection area is lower than a certain level (a region where over-detection due to shadows is unlikely), moving body detection is performed using the luminance difference; if the reflection level is equal to or higher than the certain level (a region where over-detection due to shadows is likely), moving body detection is performed using the edge difference. The "certain level" may be set arbitrarily; it may be, for example, level 3 of the reflection levels 0 to 4.
(d) Person detection
The human detection unit 24c detects the moving body detected by the moving body detection unit 24b as a person based on information on the moving body. A "person" specifically means a user present in the car 11 or the hall 15. The "information on the moving body" includes at least one of the distribution of moving pixels, the size of the moving body, and the number of times the moving body is detected.
"Distribution of moving pixels" means the distribution of moving pixels within a prescribed range. For example, if 40 or more moving pixels (i.e., about 10%) exist in a range of 20 × 20 pixels, the motion is determined to be that of a person. "Size of the moving body" means the size of a connected cluster of moving pixels; for example, if 40 or more moving pixels exist as one continuous cluster, the motion is determined to be that of a person. "Number of times of moving body detection" means the number of images in which the moving body is detected; for example, if the same position on the image is detected as a moving body a certain number of times or more, the motion is determined to be that of a person. A sketch of the first criterion follows.
Edge information and motion information
Person detection may be performed using both the edge information and the moving body information. In this case, the human detection unit 24c performs person detection while changing its criteria, using any of the distribution of moving pixels, the size of the moving body, and the number of times of moving body detection obtained as moving body information, based on the edge information.
Specifically, in a region of the captured image with many edges (a region where shadows are unlikely), the human detection unit 24c uses smaller criteria for the distribution of moving pixels or the moving body size than in a region with few edges. Alternatively, in a region with many edges, the required number of moving body detections is made smaller than in a region with few edges, so that, for example, even a region detected as a moving body only once can be determined to be a person.
Changing the parameters of person detection according to the level of reflection
The parameters for person detection may also be changed based on the reflection level of the reflection area estimated by the reflection estimation unit 24d. For example, when the reflection level of the reflection area is equal to or higher than a certain level (a region where over-detection due to shadows is likely), person detection is performed with the criteria for the distribution of moving pixels or the size of the moving body set higher than in other regions. Likewise, when the reflection level is equal to or higher than the certain level, the criterion for the number of times of moving body detection is set higher than in other regions, and a person is determined only when the moving body is detected at least that number of times. The "certain level" may be set arbitrarily; it may be, for example, level 3 of the reflection levels 0 to 4.
The present system uses the detection processing unit 24 having the above-described configuration to detect a person (user) from a captured image, and executes predetermined handling processing (door opening/closing control) when the person is present in any of the detection areas E1 to E3 shown in fig. 3. Hereinafter, the processing operation of the present system will be described by taking pull-in detection as an example.
Fig. 14 is a flowchart showing the processing operation of the present system. The processing shown in this flowchart is executed by the image processing device 20 and the elevator control device 30 shown in fig. 1.
First, as an initial setting, the detection area setting unit 23 of the detection unit 22 included in the image processing device 20 executes the detection area setting process (step S100). This detection area setting process is executed, for example, when the camera 12 is installed or when its installation position is adjusted, as follows.
That is, the detection area setting unit 23 sets the detection area E1 extending the distance L3 from the doorway toward the hall 15 in the state where the car doors 13 are fully open. As shown in fig. 4, the detection area E1 includes the sills 18 and 47 and excludes the dead zones of the door frames 17a and 17b. In the fully open state of the car doors 13, the lateral dimension (X-axis direction) of the detection area E1 is W1, a distance equal to or greater than the lateral width W0 of the doorway (front width). The detection area setting unit 23 also sets the detection area E2 with a predetermined width along the inner side surfaces 41a-1 and 41b-1 of the entrance pillars 41a and 41b of the car 11, and the detection area E3 with a predetermined width along the car sill 47 on the floor surface 19 of the car 11.
In normal operation, when the car 11 arrives at the hall 15 of an arbitrary floor (yes in step S101), the elevator control device 30 starts the door opening operation of the car doors 13 (step S102). In step with this door opening operation, the camera 12 photographs the predetermined range on the hall side (L1) and the predetermined range in the car (L2) at a predetermined frame rate (e.g., 30 frames/second). The imaging by the camera 12 may instead be performed continuously from the state in which the doors of the car 11 are closed.
The image processing apparatus 20 acquires images captured by the camera 12 in time series, sequentially stores these images in the storage unit 21, and executes the following detection processing (pull-in detection processing) in real time (step S103). Further, as the preprocessing of the captured image, distortion correction, enlargement and reduction, cutting of a part of the image, and the like may be performed.
Fig. 15 shows the detection process executed in step S103 described above. This detection processing is executed by the detection processing section 24 of the image processing apparatus 20. Hereinafter, description will be given assuming a case where a mountain-shaped edge is extracted from a captured image.
First, in order to eliminate the influence of "shadows" included in the captured image, when the captured image (original image) of the camera 12 is acquired (step S201), the detection processing unit 24 analyzes the distribution of the luminance values of the pixels of the captured image and creates an image indicating the deviation of the luminance distribution (hereinafter referred to as a luminance-distribution deviation image) (step S301). As shown in fig. 8, in the reflection area RE, the bright luminance values of the portion reflecting the illumination light coexist with the dark luminance values of the floor or shadow, so the deviation of the luminance distribution is large.
The detection processing unit 24 estimates the reflection area RE using the luminance-distribution deviation image (step S302), and creates a threshold image of the edge difference based on the reflection levels of the areas RE-1 to RE-4 included in the reflection area RE (step S303). The "threshold image of the edge difference" is an image expressing, in pixel units, the threshold TH1 used when binarizing the edge difference. The threshold TH1 is set according to the reflection levels of the areas RE-1 to RE-4. The higher the reflection level, the more easily the region produces over-detection due to shadows, so TH1 is set higher than the standard value; conversely, the lower the reflection level, the less easily over-detection occurs, so TH1 is set lower than the standard value.
When the threshold TH1 has been set according to the reflection levels of the areas RE-1 to RE-4 in this way, the detection processing unit 24 acquires the images (original images) from the storage unit 21 in time series (step S201) and creates, for each image, an image composed only of mountain-shaped edges (step S202). Specifically, the detection processing unit 24 extracts, as mountain-shaped edges, edges that have combinations of luminance gradient directions and intensities in two or more directions and whose luminance values change in a mountain shape, and creates an image composed only of those edges (hereinafter referred to as a mountain-shaped edge image).
Next, the detection processing unit 24 performs difference binarization on the mountain-shaped edge image using the threshold image created in step S303 (step S203). Specifically, as described with reference to fig. 13, the detection processing unit 24 obtains the luminance gradient for each pixel of the mountain-shaped edge image, and obtains the edge difference by comparing the intensity of the luminance gradient at the same pixel position in the next image. The detection processing unit 24 then acquires the threshold TH1 set for each reflection area from the threshold image, and binarizes the edge difference based on that threshold. In this case, the threshold TH1 is set high in a region with a high reflection level (a region where over-detection due to shadows is likely), so the difference binarization of the mountain-shaped edge image can be performed while suppressing over-detection due to shadows.
The detection processing section 24 also performs differential binarization of the original image as a captured image (step S204). Specifically, the detection processing unit 24 compares the luminance values of the respective pixels of the image at the same pixel position in the next image to obtain a luminance difference, and binarizes the luminance difference by a preset threshold TH 2. The threshold TH2 at this time may reflect the reflection level of the reflection region, and may be set in advance to a value corresponding to the reflection level, as in the case of the threshold TH 1.
The detection processing unit 24 integrates the binarized edge-difference value of each pixel obtained from the mountain-shaped edge image with the binarized luminance-difference value of each pixel obtained from the original image (step S205), and detects the presence or absence of a moving body based on the result of the integration (step S206). As described above, methods of integrating the edge difference and the luminance difference include logical operations (AND/OR operations, etc.) and parameter changes.
When a moving body (moving pixels) is detected in this way, the detection processing unit 24 detects a person based on the information on the moving body (step S207). More specifically, the detection processing unit 24 determines whether the moving body represents the motion of a person based on at least one of the distribution of moving pixels, the size of the moving body, and the number of times the moving body is detected, obtained as the information on the moving body. For example, when detecting a person based on the distribution of moving pixels, if about 10% or more of the pixels in a predetermined pixel range are moving pixels, the human detection unit 24c determines that the range including those moving pixels represents the motion of a person.
The detection processing unit 24 changes the parameters for person detection based on the reflection level of the reflection area. For example, when the reflection level of the reflection area is equal to or higher than a certain level (a region where over-detection due to shadows is likely), person detection is performed with the criteria for the distribution of moving pixels or the size of the moving body raised. Alternatively, the criterion for the number of times of moving body detection is raised, and a person is determined only when the moving body is detected at least that number of times.
In the present embodiment, the "person" is a user who is in the car 11 or the hall 15, and the motion of the clothes, hands, or the like of the user is expressed as motion pixels on the captured image (see fig. 7).
In the example of fig. 15, the edge difference and the luminance difference are used together, but the moving body detection processing may be performed using only the edge difference, with a person (user) detected from the resulting distribution of moving pixels. In that case, the processing of steps S204 and S205 in fig. 15 is unnecessary.
Returning to fig. 14, when a user is detected by the detection process during the door opening operation, the detection processing unit 24 determines whether the user is in the detection area E2 or E3 set in the car 11 as a pull-in detection area (step S104). If the user is in the detection area E2 or E3 (yes in step S104), the detection processing unit 24 outputs a pull-in detection signal to the elevator control device 30. As a result, as a handling process associated with the pull-in detection areas, the elevator control device 30 temporarily stops the door opening operation of the car doors 13 by the door opening/closing control unit 31, and restarts the door opening operation from the stop position a few seconds later (step S105).
As the above-described handling process, the door opening speed of the car doors 13 may be made slower than normal, or the door opening operation may be restarted after the car doors 13 are moved slightly in the reverse direction (door closing direction). Further, the warning unit 32 of the elevator control device 30 may be activated to issue an audio announcement through the speaker 46 in the car 11, calling the user's attention and prompting the user to move away from the car door 13 (step S106). While a user is detected in the detection area E2 or E3, the above processing is repeated. This can prevent a user standing near the car door 13 from being pulled into the door pocket 42a or 42b.
(boarding detection processing)
In the example of fig. 14, the pull-in detection process was described, but the boarding detection process is similar. That is, when the car 11 starts the door closing operation at an arbitrary floor, the detection process described with fig. 15 is executed. In this case, in consideration of the "shadows" produced by the lighting environment of the hall 15, a reflection area including the portion of the hall 15 where the illumination light is reflected and its surroundings is estimated, and the threshold for the edge difference is set according to the reflection level of that area. The threshold for the luminance difference may likewise be set in advance according to the reflection level of the reflection area.
When a user is detected in this way based on the edge difference and luminance difference of the captured image, it is determined whether the user is in the detection area E1 set in the hall 15 as the boarding detection area. When a user heading for the door 13 of the car 11 is detected in the detection area E1, a boarding detection signal is output from the detection processing unit 24 to the elevator control device 30. Then, as a handling process associated with the boarding detection area, the elevator control device 30 temporarily stops the door closing operation of the car doors 13 by the door opening/closing control unit 31, moves the car doors 13 in the reverse direction (door opening direction), or makes the door closing speed slower than normal.
As described above, according to the present embodiment, the user can be detected while suppressing the over-detection of the shadow caused by the illumination light by estimating the reflection area including the reflection portion of the illumination light and the periphery thereof and changing the process of detecting the moving body based on the reflection level of the reflection area.
In particular, in the case of a configuration in which the detection process is performed using the edge difference, by setting the threshold value of the edge difference based on the reflection level of the reflection area, it is possible to accurately detect the user while reliably eliminating the influence of the shadow, and it is possible to realize a coping process corresponding to the detection result.
In the above-described embodiment, the case where users are detected from the entire captured image was described, but users may instead be detected within each detection area set in advance on the captured image. For example, during the door opening operation, a user in detection area E2 or E3 is detected from the edge difference of the image by focusing on the image within detection area E2 or E3 shown in fig. 4. Likewise, during the door closing operation, a user in detection area E1 is detected from the edge difference of the image by focusing on the image within detection area E1 shown in fig. 4.
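A minimal sketch of this per-area restriction, reusing detect_moving_pixels() from the sketch above; representing an area as a boolean mask is an assumption:

```python
def detect_in_area(prev_gray, cur_gray, area_mask, reflection_level):
    """Restrict detection to one preset detection area (E1, E2, or E3).

    `area_mask` is a boolean image of the same shape marking the area,
    prepared in advance; detect_moving_pixels() is the sketch above.
    """
    moving = detect_moving_pixels(prev_gray, cur_gray, reflection_level)
    return moving & area_mask  # moving pixels inside the area only
```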
In the above-described embodiment, the edge difference (difference in edge intensity) was described as an example of an edge change, but the change in edges may also be evaluated by a method such as normalized correlation. In short, the method is not limited to the edge difference; any method capable of detecting the state of edge change between images can be used.
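For example, normalized correlation compares the local structure of corresponding patches in successive frames and is insensitive to uniform brightness changes; a minimal numpy sketch follows (the patch-based formulation is an assumption):

```python
import numpy as np

def normalized_correlation(patch_a: np.ndarray, patch_b: np.ndarray,
                           eps: float = 1e-6) -> float:
    """Zero-mean normalized cross-correlation of two corresponding patches.

    A value near 1.0 means the local edge structure is unchanged between
    frames; a low value can be treated as an edge change. Unlike a raw
    luminance difference, it is insensitive to uniform brightness scaling.
    """
    a = patch_a.astype(np.float32).ravel()
    b = patch_b.astype(np.float32).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```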
According to at least one of the embodiments described above, it is possible to provide an elevator user detection system capable of accurately detecting users in the car or the hall while suppressing the over-detection of shadows caused by the lighting environment.
While several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and their modifications are included in the scope and gist of the invention, and are included in the invention described in the claims and their equivalents.

Claims (10)

1. A user detection system for an elevator, comprising a camera that is installed in a car and captures an image of a predetermined range including the inside of the car, the system being characterized by comprising:
a reflection estimation unit that estimates a reflection area including a portion where illumination light is reflected and its periphery in a captured image of the camera;
a moving body detection unit that changes the processing for detecting a moving body from the captured image, based on the reflection level of the reflection area estimated by the reflection estimation unit; and
a human detection unit that detects the moving body as a human based on the information on the moving body detected by the moving body detection unit.
2. The user detection system of an elevator according to claim 1,
the reflection estimation unit estimates the reflection area based on a distribution of luminance values of each pixel of the captured image.
3. The user detection system of an elevator according to claim 1,
the reflection estimation unit estimates the reflection area based on a distribution of edges extracted from the captured image.
4. The user detection system of an elevator according to claim 1,
the moving body detection unit has a process of detecting a moving body based on an edge change between images successively obtained as the captured image, and increases the threshold value for the edge change as the reflection level of the reflection area becomes higher.
5. The user detection system of an elevator according to claim 1,
the moving body detection unit has a process of detecting a moving body based on a luminance difference and an edge change between images successively obtained as the captured image, and performs moving body detection using the luminance difference when the reflection level of the reflection area is below a certain level, and performs moving body detection using the edge change when the reflection level of the reflection area is equal to or above that level.
6. The user detection system of an elevator according to claim 1,
the human detection unit detects the moving body as a human on the basis of at least one of the distribution of moving pixels, the size of the moving body, and the number of times of detecting the moving body, which are obtained as information on the moving body.
7. The user detection system of an elevator according to claim 6,
when the reflection level of the reflection area is equal to or above a certain level, the human detection unit raises the criterion for determining the distribution of the moving pixels or the size of the moving body.
8. The user detection system of an elevator according to claim 6,
the human detection unit raises the criterion for determining the number of times of moving body detection when the reflection level of the reflection area is equal to or above a certain level.
9. The user detection system of an elevator according to claim 1,
the image processing apparatus further includes a control unit that executes a handling process associated with the detection area when the person is detected within a detection area preset on the captured image.
10. The user detection system of an elevator according to claim 9,
the detection area is set near a door in the car, and
the control unit controls a door opening/closing operation as the handling process so that the person is not pulled into the door during the door opening operation of the car.
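Purely as an informal illustration of the mode switching recited in claim 5, and not part of the claims, a sketch under an assumed numeric reflection-level scale:

```python
def select_detection_mode(reflection_level: float, switch_level: float = 2.0) -> str:
    """Below `switch_level` the luminance difference is used; at or above
    it, the edge change is used, since strong reflections perturb
    luminance far more than edge structure. The value 2.0 is illustrative.
    """
    return "luminance_difference" if reflection_level < switch_level else "edge_change"
```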
CN202210486735.4A 2021-08-06 2022-05-06 User detection system of elevator Pending CN115703608A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021130126A JP7276992B2 (en) 2021-08-06 2021-08-06 Elevator user detection system
JP2021-130126 2021-08-06

Publications (1)

Publication Number Publication Date
CN115703608A (en) 2023-02-17

Family

ID=85180647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210486735.4A Pending CN115703608A (en) 2021-08-06 2022-05-06 User detection system of elevator

Country Status (2)

Country Link
JP (1) JP7276992B2 (en)
CN (1) CN115703608A (en)

Also Published As

Publication number Publication date
JP2023024068A (en) 2023-02-16
JP7276992B2 (en) 2023-05-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination