CN111717742B - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
CN111717742B
CN111717742B (application CN201911199450.7A)
Authority
CN
China
Prior art keywords
camera
image processing
car
image
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911199450.7A
Other languages
Chinese (zh)
Other versions
CN111717742A (en)
Inventor
田村聪
木村纱由美
野田周平
横井谦太朗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Elevator and Building Systems Corp
Original Assignee
Toshiba Elevator Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Elevator Co Ltd filed Critical Toshiba Elevator Co Ltd
Publication of CN111717742A
Application granted
Publication of CN111717742B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00: Control systems of elevators in general
    • B66B 1/02: Control systems without regulation, i.e. without retroactive action
    • B66B 1/06: Control systems without regulation, i.e. without retroactive action, electric
    • B66B 1/14: Control systems without regulation, i.e. without retroactive action, electric, with devices, e.g. push-buttons, for indirect control of movements
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B66: HOISTING; LIFTING; HAULING
    • B66B: ELEVATORS; ESCALATORS OR MOVING WALKWAYS
    • B66B 1/00: Control systems of elevators in general
    • B66B 1/34: Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
    • B66B 1/3415: Control system configuration and the data transmission or communication within the control system
    • B66B 1/3446: Data transmission or communication within the control system

Abstract

An object of the invention is to suppress a decrease in user detection accuracy caused by a shift in the mounting position of a camera. According to one embodiment, an image processing device performs image processing on images, captured by a camera, that include the inside of a car and a hall, in order to detect a user near the door of the car. The image processing apparatus includes an acquisition unit, a detection unit, and a setting unit. The acquisition unit acquires, from the camera, an image captured in a state where a mark distinguishable from the floor surface of the car and the floor surface of the hall is provided. The detection unit recognizes the mark in the acquired image and detects a shift in the mounting position of the camera based on the recognized mark. The setting unit sets a set value related to the image processing when a shift in the mounting position of the camera is detected.

Description

Image processing apparatus and method
This application is based on and claims priority from Japanese Patent Application No. 2019-053668 (filed March 20, 2019), which is hereby incorporated by reference in its entirety.
Technical Field
Embodiments of the present invention relate to an image processing apparatus.
Background
In recent years, various technologies have been proposed for preventing people and objects from being caught by the car door of an elevator. For example, a technique has been proposed that uses a camera to detect a user moving toward the elevator and extends the door opening time of the elevator door.
In such a technique, it is necessary to detect a user moving toward an elevator with high accuracy from an image captured by a camera. However, if the mounting position of the camera is shifted, the image captured by the camera may be rotated or shifted in the left-right direction, and thus the detection accuracy of the user may be lowered.
Therefore, it is desirable to realize a new technique capable of suppressing a decrease in detection accuracy of a user even when a displacement occurs in the mounting position of a camera.
Disclosure of Invention
An object of an embodiment of the present invention is to provide an image processing apparatus capable of suppressing a decrease in detection accuracy of a user due to a shift in the mounting position of a camera.
According to one embodiment, an image processing device performs image processing on images, captured by a camera, that include the inside of a car and a hall, in order to detect a user near the door of the car. The image processing apparatus includes an acquisition unit, a detection unit, and a setting unit. The acquisition unit acquires, from the camera, an image captured in a state where a mark distinguishable from the floor surface of the car and the floor surface of the hall is provided. The detection unit recognizes the mark in the acquired image and detects a shift in the mounting position of the camera based on the recognized mark. The setting unit sets a set value related to the image processing when a shift in the mounting position of the camera is detected.
According to the image processing apparatus having the above configuration, it is possible to suppress a decrease in detection accuracy of the user due to a shift in the mounting position of the camera.
Drawings
Fig. 1 is a diagram showing a schematic configuration example of an elevator system according to an embodiment.
Fig. 2 is a diagram showing an exemplary hardware configuration of an image processing device included in the elevator system.
Fig. 3 is a diagram showing an image captured without a shift in the mounting position of the camera.
Fig. 4 is a diagram showing an image captured when there is a shift in the mounting position of the camera.
Fig. 5 is a diagram showing an example of marks provided in the imaging range of the camera.
Fig. 6 is a block diagram showing an exemplary functional configuration of the image processing apparatus.
Fig. 7 is a flowchart showing an example of the processing procedure of the image processing apparatus in the calibration function.
Fig. 8 is a diagram supplementing the flowchart of fig. 7, showing an image captured by the camera.
Detailed Description
Hereinafter, embodiments will be described with reference to the drawings. The following disclosure is merely an example and does not limit the present invention; variations that a person skilled in the art could readily conceive are naturally included within the scope of the disclosure. For clarity, the drawings may show the dimensions, shapes, and the like of each portion schematically, altered from the actual implementation. Corresponding elements are denoted by the same reference numerals across the drawings, and detailed description of them may be omitted.
Fig. 1 is a diagram showing a schematic configuration example of an elevator system according to an embodiment.
A camera 12 is installed above the entrance of the car 11. Specifically, the lens portion of the camera 12 is mounted in the lintel plate 11a, which covers the upper part of the doorway of the car 11, and is oriented to capture both the inside of the car 11 and the hall 15. The camera 12 is a small monitoring camera, such as an in-vehicle camera, that has a wide-angle lens and continuously captures images at several frames per second (for example, 30 frames per second).
The camera 12 may be kept on to capture images continuously, or may be turned on at a predetermined timing to start capturing and turned off at a predetermined timing to stop. For example, the camera 12 may be turned on while the moving speed of the car 11 is less than a predetermined value and turned off while it is equal to or greater than that value. In this case, capturing starts when the car 11 begins decelerating to stop at a floor and its speed falls below the predetermined value, and ends when the car 11 begins accelerating toward another floor and its speed reaches or exceeds that value. That is, capturing continues throughout the period from the start of deceleration, through the stop at the floor, to the start of acceleration away from it.
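As a minimal sketch of this speed-based on/off rule (the threshold value, the Camera class, and the way the car speed is obtained are all assumptions for illustration, not part of the embodiment):

```python
class Camera:
    """Hypothetical stand-in for the power interface of the camera 12."""
    def __init__(self):
        self.is_on = False

    def turn_on(self):
        self.is_on = True

    def turn_off(self):
        self.is_on = False


SPEED_THRESHOLD = 0.5  # m/s; the "predetermined value" (actual value not specified)

def update_camera_power(camera: Camera, car_speed: float) -> None:
    """Keep the camera on while the car is slow (arriving, stopped, departing)."""
    if car_speed < SPEED_THRESHOLD:
        if not camera.is_on:
            camera.turn_on()   # car decelerating toward a floor: start capturing
    elif camera.is_on:
        camera.turn_off()      # car accelerating away: stop capturing
```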
The imaging range of the camera 12 is set to L1 + L2 (L1 ≥ L2). L1 is the imaging range on the hall 15 side, extending from the car door 13 toward the hall 15. L2 is the imaging range on the car 11 side, extending from the car door 13 toward the rear of the car. L1 and L2 are ranges in the depth direction; the range in the width direction (orthogonal to the depth direction) is at least larger than the lateral width of the car 11.
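A quick geometric check of whether a given lens covers L1 + L2, assuming the camera looks straight down from the lintel (all numbers are illustrative; the embodiment specifies neither the mounting height nor the lens):

```python
import math

cam_height = 2.3   # m, assumed height of the camera above the floor
L1, L2 = 3.0, 1.0  # m, assumed hall-side and car-side depth ranges (L1 >= L2)

# Angles from the vertical to the far edge of each range, summed into a view angle
required = math.degrees(math.atan(L1 / cam_height) + math.atan(L2 / cam_height))
print(f"required vertical angle of view: {required:.0f} deg")  # about 76 deg here
```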
In the hall 15 of each floor, a hall door 14 is installed at the arrival opening of the car 11 so as to be openable and closable. The hall door 14 engages with the car door 13 when the car 11 arrives and opens and closes together with it; the power source (door motor) is on the car 11 side, and the hall door 14 merely follows the car door 13. In the following description, the hall door 14 is assumed to be open when the car door 13 is open, and closed when the car door 13 is closed.
Each image (video frame) continuously captured by the camera 12 is processed in real time by the image processing device 20. Specifically, the image processing device 20 detects the user (and the user's movement) closest to the car door 13 from changes in the luminance values of the image within a preset area (hereinafter referred to as the detection area), determines whether the detected user intends to board the car 11, determines whether there is a possibility that the detected user's hand or arm will be pulled into the door pocket, and so on. The results of the image processing are reflected as needed in the control processing (mainly door opening/closing control) of the elevator control device 30.
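The luminance-based detection itself can be sketched as follows; the patent does not give the exact criterion, so the frame differencing and the threshold below are assumptions:

```python
import numpy as np

def user_in_detection_area(prev_gray: np.ndarray, curr_gray: np.ndarray,
                           mask: np.ndarray, threshold: float = 8.0) -> bool:
    """Report motion (a user) from luminance change inside the detection area.

    prev_gray, curr_gray: consecutive frames as 2-D uint8 arrays.
    mask: boolean array of the same shape, True inside the detection area.
    threshold: assumed tuning value for the mean luminance change.
    """
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return float(diff[mask].mean()) > threshold
```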
The elevator control device 30 controls the opening and closing of the doors of the car doors 13 when the car 11 reaches the hall 15. Specifically, when the car 11 arrives at the hall 15, the elevator control device 30 opens the car door 13 and closes the door after a predetermined time elapses.
However, when the image processing device 20 detects a user who intends to board the car 11, the elevator control device 30 prohibits the door closing operation of the car door 13 and maintains the door-open state (extends the door opening time of the car door 13). When the image processing device 20 detects a user whose hand or arm may be pulled into the door pocket, the elevator control device 30 prohibits the door opening operation of the car door 13, makes the door opening speed slower than normal, or plays a message prompting the user to move away from the car door 13, thereby warning the user of the risk.
In fig. 1, the image processing device 20 is drawn outside the car 11 for convenience, but in practice it is housed in the lintel plate 11a together with the camera 12. Fig. 1 also shows the camera 12 and the image processing apparatus 20 as separate units, but the two may be integrated into one apparatus. Likewise, although fig. 1 shows the image processing device 20 and the elevator control device 30 as separate units, the functions of the image processing device 20 may be implemented in the elevator control device 30.
Fig. 2 is a diagram showing an example of the hardware configuration of the image processing apparatus 20.
As shown in fig. 2, the image processing apparatus 20 includes a nonvolatile memory 22, a CPU 23, a main memory 24, and a communication device 25, which are connected to one another via a bus 21.
The nonvolatile memory 22 stores various programs, including, for example, an operating system (OS). The stored programs include one that executes the image processing described above (more specifically, the user detection process described later) and one that realizes the calibration function described later (hereinafter, the calibration program).
The CPU 23 is a processor that executes the various programs stored in the nonvolatile memory 22 and controls the image processing apparatus 20 as a whole.
The main memory 24 is used, for example, as a work area required when the CPU 23 executes the various programs.
The communication device 25 controls communication (transmission and reception of signals), by wire or wirelessly, with external devices such as the camera 12 and the elevator control device 30.
Here, as described above, the image processing apparatus 20 performs the user detection process of detecting the user closest to the car door 13 from changes in the luminance values of the image within a preset detection area. Since this process watches a specific detection area, that area must always lie at a fixed position on the image.
However, during operation of the elevator system, if the mounting position (mounting angle) of the camera 12 shifts, for example due to an impact on the car 11 or the camera 12, the detection area shifts as well. The image processing device 20 then watches luminance changes in an area different from the one actually intended, and as a result it may fail to detect users (objects) that should be detected, or erroneously detect users (objects) that should not be.
Fig. 3 shows an example of an image captured with no shift in the mounting position of the camera 12. Although not shown in fig. 1, a door sill (hereinafter referred to as the car sill) 13a that guides the opening and closing of the car door 13 is provided on the car 11 side. Similarly, a sill (hereinafter referred to as the hall sill) 14a that guides the opening and closing of the hall door 14 is provided on the hall 15 side. In fig. 3, the hatched portion indicates the detection area e1 set in the image. Here, as an example, assume that, in order to detect users in the hall 15, the detection area e1 is set to extend a predetermined range toward the hall 15 from the car-11-side long side of the rectangular car sill 13a. To prevent hands and arms from being pulled into the door pocket, a detection area may instead be set on the car 11 side, or multiple detection areas may be set on both the hall 15 side and the car 11 side.
On the other hand, fig. 4 shows an example of an image captured when there is a shift in the mounting position of the camera 12. The hatched portion in fig. 4 shows a detection area e1 set in the image in the same manner as in fig. 3.
As shown in fig. 4, when the mounting position of the camera 12 is shifted, the image captured by the camera 12 becomes, for example, a rotated (tilted) image compared with fig. 3. Because the detection area e1 is set at a fixed position on the image, an area that in fig. 3 extended from the car-11-side long side of the car sill 13a toward the hall 15 now, as shown in fig. 4, extends from a position unrelated to that long side. As a result, as described above, users who should be detected may be missed, and users who should not be detected may be detected erroneously. Fig. 4 illustrates an image rotated by a shift in the mounting position of the camera 12, but the same problem arises when the image is displaced in the left-right direction.
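The mismatch can be reproduced numerically: rotating the image coordinates by a few degrees about the image center, as a camera roll would, displaces the sill's long side while the detection area stays fixed in pixels (the angle and coordinates below are illustrative):

```python
import numpy as np

def rotate_points(points, angle_deg, center):
    """Rotate image points about `center` by `angle_deg` (a simulated camera roll)."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return (np.asarray(points, float) - center) @ rot.T + center

sill_edge = [(100, 300), (540, 300)]               # car-side long side, no shift
print(rotate_points(sill_edge, 5.0, (320, 240)))   # endpoints move by about 20 px
```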
Therefore, the image processing apparatus 20 of the present embodiment has a calibration function that detects whether the mounting position of the camera 12 has shifted and, if so, sets the detection area at an appropriate position that accounts for the shift. The calibration function is described in detail below.
To realize the calibration function, marks m as shown in fig. 5 must be placed within the imaging range of the camera 12, for example by a maintenance person performing a maintenance inspection of the elevator system. Here, the mark m is a square containing four black circles as its pattern, but any mark may be used as long as it is a square with four right-angled corners and carries a pattern distinguishable from the other objects (for example, the floor surfaces of the car 11 and the hall 15) within the imaging range of the camera 12.
Fig. 6 is a block diagram showing an example of the functional configuration of the image processing apparatus 20 according to the present embodiment. Here, the functional configuration related to the above-described calibration function will be mainly described.
As shown in fig. 6, the image processing apparatus 20 includes a storage unit 201, an image acquisition unit 202, an offset detection unit 203, a setting processing unit 204, a notification processing unit 205, and the like. As shown in fig. 6, the offset detection unit 203 further includes an identification processing unit 231, a calculation processing unit 232, a detection processing unit 233, and the like.
In the present embodiment, the units 202 to 205 are described as being realized by the CPU 23 shown in fig. 2 (that is, the computer of the image processing apparatus 20) executing the calibration program stored in the nonvolatile memory 22, i.e., by software; however, they may instead be realized by hardware or by a combination of software and hardware. The storage unit 201 is constituted by, for example, the nonvolatile memory 22 shown in fig. 2 or another storage device.
The storage unit 201 stores set values related to the calibration function. These include a value indicating the relative position of each mark with respect to a reference point (hereinafter, the 1st set value). The reference point is a position that serves as an index for detecting whether the mounting position of the camera 12 has shifted; it corresponds, for example, to the center of the car-11-side long side of the rectangular car sill 13a. The reference point need not be that center, however; any position that falls within the imaging range of the camera 12 when its mounting position is not shifted may be set as the reference point.
The set values related to calibration also include a value (hereinafter, the 2nd set value) indicating the relative position of the camera 12 with respect to the reference point in an image (reference image) captured when the mounting position of the camera 12 is not shifted.
Further, the set values related to calibration include values indicating the relative positions of the vertices (four corners) of the car sill 13a with respect to the reference point (hereinafter, the 3rd set values). In the present embodiment the detection area is assumed to extend a predetermined range from the car-11-side long side of the rectangular car sill 13a toward the hall 15, so the 3rd set values indicate the relative position of each vertex of the car sill 13a with respect to the reference point; however, the 3rd set values are not limited to this, and values corresponding to the area in which the detection area is to be set are used. For example, when the detection area is set near the door pocket to prevent hands or arms from being pulled in, the 3rd set values may indicate the relative positions of feature points of the door pocket with respect to the reference point.
The calibration-related set values also include values indicating the height of the camera 12 above the floor surface of the car 11 and the angle of view (focal length) of the camera 12 (hereinafter, the camera set values).
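A minimal layout for these stored values, as a sketch (all field names and numeric values are assumptions; the embodiment specifies only what each set value represents):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class CalibrationSettings:
    # 1st set value: position of each mark m relative to the reference point (m)
    mark_to_ref: Dict[str, Tuple[float, float]] = field(
        default_factory=lambda: {"m1": (0.40, 0.0), "m2": (-0.40, 0.0)})
    # 2nd set value: camera position relative to the reference point (reference image)
    camera_to_ref: Tuple[float, float, float] = (0.0, 0.05, 2.30)
    # 3rd set values: sill vertices (four corners) relative to the reference point
    sill_vertices: Tuple[Tuple[float, float], ...] = (
        (-0.45, 0.03), (0.45, 0.03), (0.45, -0.03), (-0.45, -0.03))
    # camera set values: height above the car floor and angle of view
    camera_height_m: float = 2.30
    view_angle_deg: float = 120.0
```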
The storage unit 201 may store an image (reference image) captured without a shift in the mounting position of the camera 12.
The image acquisition unit 202 acquires an image (hereinafter, the captured image) captured by the camera 12 with a plurality of marks m placed on the floor surface in the car 11. In the present embodiment the marks m are assumed to be on the floor surface in the car 11, but they may instead be placed on the floor surface on the hall 15 side, or on the car sill 13a and the hall sill 14a, as long as their relative positions to the reference point (here, the center of the car sill 13a) can be determined.
The offset detection unit 203 performs recognition processing on the captured image acquired by the image acquisition unit 202 and recognizes (extracts) the plurality of marks m included in it. The offset detection unit 203 then detects a shift in the mounting position of the camera 12 based on the recognized marks m. The functions of the recognition processing unit 231, the calculation processing unit 232, and the detection processing unit 233 included in the offset detection unit 203 are described below together with the flowchart, so detailed description is omitted here.
When the offset detection unit 203 detects that the mounting position of the camera 12 is offset, the setting processing unit 204 sets a detection area at an appropriate position corresponding to the offset in the captured image acquired by the image acquisition unit 202. Thereby, a detection area in which the displacement of the mounting position of the camera 12 is taken into consideration is set on the captured image. The coordinate values of the detection area set at the appropriate positions according to the offset may be stored in the storage unit 201.
When the offset detection unit 203 detects a shift in the mounting position of the camera 12, the notification processing unit 205 notifies the shift (abnormality) to the monitoring center (its administrator) that monitors the operating state of the elevator system and to the terminal held by the maintenance person who placed the marks m and performs maintenance inspection of the elevator system. The notification is sent via the communication device 25, for example.
Next, a processing procedure of the image processing apparatus 20 in the calibration function in the present embodiment will be described with reference to a flowchart of fig. 7. The series of processing shown in fig. 7 may be executed not only at the time of regular maintenance, but also before the operation of the elevator system.
First, the image acquisition unit 202 acquires from the camera 12 an image (captured image) taken with a plurality of marks m placed on the floor surface in the car 11 (step S1). Here, as an example, assume the image acquisition unit 202 acquires the captured image i1 shown in fig. 8. As shown in fig. 8, the captured image i1 includes two marks m1 and m2 placed at both ends of the rectangular car sill 13a, along its long side.
Next, the recognition processing unit 231 included in the offset detection unit 203 performs recognition processing on the captured image acquired by the image acquisition unit 202, and recognizes (extracts) the plurality of marks m included in the captured image (step S2). As described above, when the captured image i1 is acquired in step S1, the recognition processing unit 231 performs the recognition processing on the captured image i1 shown in fig. 8, and recognizes (extracts) the marks m1, m2 included in the captured image i 1.
The marks m in the captured image may be identified by registering their pattern in advance (in the present embodiment, recognizing as a mark m any object consisting of four black circles inside a square), or by using other known image recognition techniques.
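One plausible implementation of this recognition step, sketched with OpenCV (the threshold values, the circularity test, and the proximity grouping are all assumptions; only the four-dark-circles pattern comes from fig. 5):

```python
import cv2
import numpy as np

def find_marks(gray: np.ndarray, min_area: float = 30.0) -> list:
    """Return one centroid per mark, where a mark is a cluster of 4 dark circles."""
    # Dark circles become white blobs after inverse Otsu thresholding
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    circles = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        if area / (np.pi * r * r + 1e-9) > 0.7:    # roughly circular blob
            circles.append((x, y, r))
    marks, used = [], set()
    for i, (px, py, pr) in enumerate(circles):
        if i in used:
            continue
        group = [j for j, (qx, qy, _) in enumerate(circles)
                 if np.hypot(px - qx, py - qy) < 6 * pr]
        if len(group) == 4:                        # the 4-circle pattern of fig. 5
            used.update(group)
            pts = np.array([circles[j][:2] for j in group])
            marks.append(tuple(pts.mean(axis=0)))  # centre of gravity of the mark
    return marks
```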
Next, the recognition processing unit 231 calculates the relative positions of the camera 12 with respect to the marks m recognized in step S2, and the 3-axis angles of the camera 12 (its mounting angles), based on the camera set values (the height and angle of view of the camera 12) stored in the storage unit 201 (step S3). When the marks m1 and m2 are recognized from the captured image i1 in step S2, the recognition processing unit 231 calculates the relative position of the camera 12 with respect to the mark m1 and with respect to the mark m2. In fig. 8, point p1 corresponds to the position regarded as mark m1, and point p2 to the position regarded as mark m2.
Here, as an example, the center point (center of gravity) of the square mark m is regarded as the position of the mark m when calculating the relative position of the camera 12. Which part of the mark m is used may be set arbitrarily; for example, any one vertex of the square mark m may be used instead.
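Recovering the camera's relative position and 3-axis angles from the mark observations is a standard pose-estimation problem. A sketch using OpenCV's solvePnP (solvePnP needs at least four points, so it would operate on the four circle centres of each mark rather than a single centroid; deriving the intrinsics from the stored angle of view is also an assumption):

```python
import cv2
import numpy as np

def camera_pose(mark_px, mark_floor_xy, image_size, view_angle_deg):
    """Estimate the camera's 3-axis angles and position from recognized marks.

    mark_px:       Nx2 pixel coordinates of mark feature points (N >= 4)
    mark_floor_xy: Nx2 known floor positions of the same points, in metres,
                   in a frame whose origin is the reference point
    """
    w, h = image_size
    f = (w / 2) / np.tan(np.deg2rad(view_angle_deg) / 2)  # focal length, pixels
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=float)
    # The marks lie on the car floor, i.e. z = 0 in the floor frame
    obj = np.hstack([np.asarray(mark_floor_xy, float),
                     np.zeros((len(mark_floor_xy), 1))])
    ok, rvec, tvec = cv2.solvePnP(obj, np.asarray(mark_px, float), K, None)
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()  # camera position relative to the floor frame
    return rvec.ravel(), cam_pos     # 3-axis angles (rotation vector) and position
```

The embodiment instead constrains the solution with the known camera height (a camera set value), but the geometry being recovered is the same.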
The calculation processing unit 232 included in the offset detection unit 203 calculates the relative position of the camera 12 with respect to the reference point, based on the relative positions of the camera 12 with respect to the marks m calculated by the recognition processing unit 231 and the 1st set value stored in the storage unit 201 (step S4).
As described above, when the relative positions of the camera 12 with respect to the marks m1 and m2 have been calculated in step S3, the calculation processing unit 232 obtains the relative position of the camera 12 with respect to the reference point by combining the relative position of the camera 12 with respect to the mark m1 with the relative position of the mark m1 with respect to the reference point (the 1st set value). It does the same with the mark m2. In fig. 8, point p3 corresponds to the reference point.
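The combination performed in step S4 is plain vector addition once both offsets are expressed in the same floor-based frame; a sketch (averaging the per-mark estimates is an assumption, added to damp recognition noise):

```python
import numpy as np

def camera_to_reference(cam_to_mark, mark_to_ref):
    """camera->reference = camera->mark + mark->reference, in one common frame."""
    return np.asarray(cam_to_mark, float) + np.asarray(mark_to_ref, float)

def averaged_camera_to_reference(cam_to_marks, marks_to_ref):
    """One estimate per mark (m1, m2, ...), averaged into a single offset."""
    estimates = [camera_to_reference(c, m)
                 for c, m in zip(cam_to_marks, marks_to_ref)]
    return np.mean(estimates, axis=0)
```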
Next, the detection processing unit 233 included in the offset detection unit 203 determines whether the mounting position of the camera 12 has shifted, based on the relative position of the camera 12 with respect to the reference point calculated by the calculation processing unit 232 and the 2nd set value stored in the storage unit 201 (step S5). Specifically, the detection processing unit 233 compares the calculated relative position with the relative position stored as the 2nd set value; a mismatch indicates a shift in the mounting position of the camera 12.
When the two relative positions match (yes in step S5), the detection processing unit 233 determines that the mounting position of the camera 12 has not shifted and that the detection area need not be reset, and the series of processing ends here.
On the other hand, when the two relative positions do not match, that is, when a shift in the mounting position of the camera 12 is detected (no in step S5), the setting processing unit 204 sets the detection area at an appropriate position corresponding to the shift in the captured image acquired by the image acquisition unit 202, based on the relative position of the camera 12 with respect to the reference point calculated by the calculation processing unit 232, and on the 3rd set values and camera set values stored in the storage unit 201 (step S6).
In the present embodiment, since the detection area is assumed to extend a predetermined range from the car sill 13a toward the hall 15, the setting processing unit 204 first calculates the relative position of each vertex of the car sill 13a with respect to the camera 12 by combining the relative position of the camera 12 with respect to the reference point with the relative position of each vertex of the car sill 13a with respect to the reference point (the 3rd set values). In fig. 8, points p4 to p7 correspond to the vertices of the car sill 13a.
The setting processing unit 204 then sets the detection area based on the calculated relative positions of the vertices of the car sill 13a with respect to the camera 12, the 3-axis angles of the camera 12 calculated by the recognition processing unit 231, and the angle of view of the camera 12 stored as a camera set value in the storage unit 201.
As a result, a detection area e1 that accounts for the shift in the mounting position of the camera 12, namely one extending a predetermined range from the car-11-side long side of the car sill 13a toward the hall 15, is set in the captured image i1, as shown by the hatched portion of fig. 8.
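Concretely, step S6 amounts to projecting the sill vertices, known in floor coordinates through the 3rd set values, back into the shifted image using the estimated pose and intrinsics; a pinhole-projection sketch (the axis convention placing the hall along +y and the 0.5 m depth of the area are assumptions):

```python
import cv2
import numpy as np

def detection_area_pixels(sill_vertices_xy, rvec, tvec, K, depth_m=0.5):
    """Project the detection area polygon into the (possibly shifted) image.

    sill_vertices_xy: four (x, y) floor coordinates of the car sill corners,
                      the first two being the car-side long side
    rvec, tvec:       camera pose from the pose-estimation step
    K:                3x3 camera intrinsic matrix
    """
    sill = np.hstack([np.asarray(sill_vertices_xy, float),
                      np.zeros((4, 1))])       # the sill lies on the floor, z = 0
    inner = sill[:2]                           # car-side long side of the sill
    outer = inner.copy()
    outer[:, 1] += depth_m                     # extend toward the hall (+y assumed)
    corners = np.vstack([inner, outer[::-1]])  # ordered polygon, 4 corners
    px, _ = cv2.projectPoints(corners, rvec, tvec, K, None)
    return px.reshape(-1, 2)                   # detection area e1 in pixel coords
```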
Then, the notification processing unit 205 notifies the monitoring center (its administrator) or the maintenance person (the held terminal), via the communication device 25, that the mounting position of the camera 12 has shifted (step S7), and the series of processing ends here.
In step S5 of fig. 7, a shift in the mounting position of the camera 12 is detected based on whether the relative position of the camera 12 with respect to the reference point in the captured image matches the corresponding relative position in the reference image. Alternatively, the detection area need not be reset when the shift is small enough not to affect the accuracy of the user detection process. That is, the processing of step S5 may be executed based on whether the difference (degree of shift) between the two relative positions falls within a predetermined range, and a shift in the mounting position of the camera 12 may be determined only when it does not.
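In code, this variant reduces to a distance comparison against a tolerance (the 2 cm value below is an assumption standing in for the "predetermined range"):

```python
import numpy as np

OFFSET_TOLERANCE_M = 0.02  # assumed stand-in for the "predetermined range"

def mounting_shift_detected(measured_cam_to_ref, stored_cam_to_ref,
                            tol: float = OFFSET_TOLERANCE_M) -> bool:
    """True when the measured camera offset deviates from the 2nd set value."""
    deviation = np.linalg.norm(np.asarray(measured_cam_to_ref, float)
                               - np.asarray(stored_cam_to_ref, float))
    return deviation > tol
```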
The setting of the detection area in the present embodiment means resetting an already-set detection area, so it may also be expressed as correction of the detection area. In that case, the relative position of the camera 12 with respect to the reference point and the 3-axis angles of the camera 12 are the values needed to realize that correction, and may therefore be expressed as correction values.
As described above, in the present embodiment the image processing device 20 acquires from the camera 12 an image captured with a plurality of marks m, distinguishable from the floor surfaces of the car 11 and the hall 15, in place; recognizes the marks m in the acquired image; detects a shift in the mounting position of the camera 12 from the recognized marks m; and, when a shift is detected, sets the set values related to the image processing (the user detection process). The set values related to the image processing include (the coordinate values of) the detection area set on the captured image for detecting the user closest to the car door 13.
According to such a configuration, even when the mounting position of the camera 12 is shifted, an appropriate detection area can be set for an image (for example, a rotated image or an image shifted in the left-right direction) captured by the camera 12, and therefore, a decrease in detection accuracy of the user can be suppressed.
Although several embodiments of the present invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other modes, and various omissions, substitutions, and changes can be made without departing from the spirit of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and their equivalents.

Claims (5)

1. An image processing apparatus that performs image processing on an image, captured by a camera, that includes the inside of a car and a hall, in order to detect a user in the vicinity of a door of the car, the image processing apparatus comprising:
an acquisition unit that acquires, from the camera, an image captured in a state where marks distinguishable from the floor surface of the car and the floor surface of the hall are provided;
a storage unit that stores a 1st set value indicating the relative position of each mark with respect to a reference point, the reference point being a position that serves as an index for detecting whether the mounting position of the camera has shifted, and a 2nd set value indicating the relative position of the camera with respect to the reference point;
a detection unit that recognizes the marks in the acquired image and detects a shift in the mounting position of the camera based on the recognized marks; and
a setting unit that sets a set value related to the image processing when a shift in the mounting position of the camera is detected,
wherein the marks are provided at positions whose relative positions to a sill that guides the opening and closing of the door of the car can be determined, the marks being provided on a floor surface in the car along both ends of the sill,
the detection unit calculates the relative position of the camera with respect to each recognized mark based on the recognized marks,
calculates the relative position of the camera with respect to the reference point included in the acquired image based on the calculated relative positions of the camera with respect to the marks and the 1st set value,
and compares the calculated relative position of the camera with respect to the reference point with the 2nd set value, detecting a shift in the mounting position of the camera when the two do not match.
2. The image processing apparatus according to claim 1, wherein
the detection unit calculates the mounting angle of the camera based on the recognized marks.
3. The image processing apparatus according to claim 2, wherein
the setting unit sets the set value related to the image processing based on the calculated relative position of the camera with respect to the reference point and the calculated mounting angle of the camera.
4. The image processing apparatus according to claim 1, wherein
the set values related to the image processing include an area, set on the image captured by the camera, for detecting the user.
5. The image processing apparatus according to claim 1, further comprising
a notification unit that notifies a manager of the occurrence of an abnormality when a shift in the mounting position of the camera is detected.
CN201911199450.7A 2019-03-20 2019-11-29 Image processing apparatus and method Active CN111717742B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-053668 2019-03-20
JP2019053668A JP6781291B2 (en) 2019-03-20 2019-03-20 Image processing device

Publications (2)

Publication Number Publication Date
CN111717742A (en) 2020-09-29
CN111717742B (en) 2023-05-26

Family

ID=72557607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911199450.7A Active CN111717742B (en) 2019-03-20 2019-11-29 Image processing apparatus and method

Country Status (4)

Country Link
JP (1) JP6781291B2 (en)
CN (1) CN111717742B (en)
MY (1) MY197589A (en)
SG (1) SG10201911324SA (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011190070A (en) * 2010-03-16 2011-09-29 Taisei Corp Elevator control system and elevator control method
CN102334142A (en) * 2009-02-24 2012-01-25 三菱电机株式会社 Human tracking device and human tracking program
JP2012057967A (en) * 2010-09-06 2012-03-22 Nippon Signal Co Ltd:The Camera calibration device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001042120A1 (en) * 1999-12-08 2001-06-14 Shemanske Kenneth J Ii Elevator door control device
JP2004173037A (en) * 2002-11-21 2004-06-17 Kyocera Corp Optical-axis deviation detecting apparatus of vehicle-mounted camera
JP2009143722A (en) * 2007-12-18 2009-07-02 Mitsubishi Electric Corp Person tracking apparatus, person tracking method and person tracking program
JP2011195227A (en) * 2010-03-17 2011-10-06 Toshiba Elevator Co Ltd Tracking photographing system for crime prevention for elevator
JP5325251B2 (en) * 2011-03-28 2013-10-23 株式会社日立製作所 Camera installation support method, image recognition method
JP6012982B2 (en) * 2012-02-24 2016-10-25 京セラ株式会社 Calibration processing apparatus, camera calibration apparatus, camera system, and camera calibration method
JP6009894B2 (en) * 2012-10-02 2016-10-19 株式会社デンソー Calibration method and calibration apparatus
JP6480824B2 (en) * 2015-07-27 2019-03-13 株式会社日立製作所 Distance image sensor parameter adjustment method, parameter adjustment apparatus, and elevator system
JP6046286B1 (en) * 2016-01-13 2016-12-14 東芝エレベータ株式会社 Image processing device
JP6377796B1 (en) * 2017-03-24 2018-08-22 東芝エレベータ株式会社 Elevator boarding detection system

Also Published As

Publication number Publication date
MY197589A (en) 2023-06-26
JP6781291B2 (en) 2020-11-04
JP2020152545A (en) 2020-09-24
CN111717742A (en) 2020-09-29
SG10201911324SA (en) 2020-10-29

Similar Documents

Publication Publication Date Title
CN109928290B (en) User detection system
CN108622777B (en) Elevator riding detection system
CN108622776B (en) Elevator riding detection system
JP6317004B1 (en) Elevator system
JP6693627B1 (en) Image processing device
JP6242966B1 (en) Elevator control system
JP6367411B1 (en) Elevator system
CN110294391B (en) User detection system
CN111717768B (en) Image processing apparatus and method
CN111717742B (en) Image processing apparatus and method
CN111689324B (en) Image processing apparatus and image processing method
CN111960206B (en) Image processing apparatus and marker
CN112429609B (en) User detection system for elevator
CN111717738B (en) Elevator system
CN112551292B (en) User detection system for elevator
CN113874309B (en) Passenger detection device for elevator and elevator system
CN115108425B (en) Elevator user detection system
CN112441497B (en) User detection system for elevator
CN111453588B (en) Elevator system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40032487)
GR01 Patent grant