CN109508667B - Elevator video anti-pinch method and elevator video monitoring device - Google Patents


Info

Publication number
CN109508667B
Authority
CN
China
Prior art keywords
elevator
elevator door
image
images
door
Legal status
Active
Application number
CN201811331913.6A
Other languages
Chinese (zh)
Other versions
CN109508667A (en)
Inventor
吴达泉
甘春标
徐小锋
彭强
朱小京
易晓敏
黄丹华
陈宇哲
游朝阳
田国辉
Current Assignee
Lmecq Elevator Co ltd
Zhejiang University ZJU
Original Assignee
Lmecq Elevator Co ltd
Zhejiang University ZJU
Application filed by Lmecq Elevator Co ltd, Zhejiang University ZJU
Priority to CN201811331913.6A
Publication of CN109508667A
Application granted
Publication of CN109508667B


Classifications

    • G06V 20/10 — Physics; Computing; Image or video recognition or understanding; Scenes, scene-specific elements; Terrestrial scenes
    • G06N 3/02 — Computing arrangements based on specific computational models; Computing arrangements based on biological models; Neural networks
    • H04N 5/265 — Pictorial communication, e.g. television; Details of television systems; Studio circuitry; Mixing
    • H04N 7/181 — Television systems; Closed-circuit television [CCTV] systems; CCTV systems for receiving images from a plurality of remote sources
    • Y02B 50/00 — Climate change mitigation technologies related to buildings; Energy efficient technologies in elevators, escalators and moving walkways


Abstract

The invention relates to an elevator video anti-pinch method and an elevator video monitoring device. The method comprises: when a signal that the elevator door has opened into place is received, acquiring an RGB image and a depth image of the elevator door direction from inside the elevator car; when a signal that an object other than the elevator door is present is received, fusing the RGB image and the depth image into an elevator-door fusion image carrying depth information, and comparing the fusion image with a first reference image and a second reference image to judge whether the object is an obstacle; if the gray values of all pixels of the fusion image lie between the gray values of the corresponding pixels of the first and second reference images, the object outside the elevator door is judged to be an obstacle. The invention can cover the various blind spots of existing detection schemes and thereby prevent elevator pinching incidents.

Description

Elevator video anti-pinch method and elevator video monitoring device
Technical Field
The invention belongs to the field of elevator video monitoring, and particularly relates to an elevator video anti-pinch method and an elevator video monitoring device.
Background
Existing elevators generally use an "infrared light curtain" to detect whether an obstacle is present between the elevator doors. The elevator light curtain is an optical door-protection device suitable for passenger and freight elevators. An infrared emitter is mounted on one side of the elevator door and an infrared receiver on the other; the emitter's infrared tubes, working with the receiver, continuously scan the car-door area from top to bottom, and if any beam is blocked the control system issues a door-open signal. Although the light curtain detects most obstacles, it still has sensing blind zones, and accidents in which elevators pinch people or objects occur frequently. For example, a passenger's hand or leg may be caught as the door closes; a pet leash may be pinched when a pet is led into the elevator; and passengers who place an object such as a mineral-water bottle in the door area to hold the door open temporarily can likewise cause a pinching accident. A new scheme that effectively avoids such accidents is therefore needed.
Disclosure of Invention
The invention aims to provide an elevator video anti-pinch method and an elevator video monitoring device that effectively avoid elevator pinching accidents. By controlling the elevator door through an intelligent video algorithm, they improve the passengers' riding experience, reduce passenger waiting time, and raise the overall operating efficiency of the elevator.
The invention provides an elevator video anti-pinch method, which comprises the following steps:
step one: when a signal that the elevator door has opened into place is received, acquiring an RGB image and a depth image of the elevator door direction from inside the elevator car;
step two: when a signal that an object other than the elevator door is present is received, fusing the RGB image and the depth image into an elevator-door fusion image carrying depth information, and comparing the fusion image with a first reference image and a second reference image to judge whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed.
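As a minimal illustration of the comparison rule just described, the following Python/NumPy sketch checks whether every object pixel of a fusion image lies between the two reference images. It assumes the three images are already registered single-channel arrays of the same size and that an object mask is available from the recognition stage; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def is_obstacle(fusion, ref_open, ref_closed, mask):
    """Return True if every object pixel of the fusion image lies strictly
    between the two reference images, pixel by pixel (the rule of step two).

    fusion, ref_open, ref_closed: same-size 8-bit gray images
    mask: boolean array marking the pixels of the detected object
    """
    lo = np.minimum(ref_open, ref_closed).astype(np.int16)
    hi = np.maximum(ref_open, ref_closed).astype(np.int16)
    d = fusion.astype(np.int16)
    between = (d[mask] > lo[mask]) & (d[mask] < hi[mask])
    return bool(between.size) and bool(between.all())
```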
Further, the second step further includes:
and identifying the acquired RGB image based on the neural network model, judging whether other objects exist outside the elevator door, and if so, generating signals of the other objects exist outside the elevator door.
Further, the second step further includes:
assuming that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image, if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), acquiring two further sets of RGB and depth images in succession, fusing them into two elevator-door fusion images, and estimating the average moving speed of the object as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

where the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels $d_1(i)$ and $d_2(i)$ in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k; n is the number of object pixels extracted from the fusion images; and Δt is the time interval between the two acquisitions;

calculating a safe average speed of the object as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

where d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, with 0 < d_min less than the thickness of the elevator door; T is the time the elevator door needs to go from the open state to the closed state; and λ is a safety amplification factor with λ > 1;

when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
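A sketch of this speed test under the same assumptions (Python/NumPy). Because the original formula images are not reproduced here, the closed forms below are reconstructions from the variable definitions in the text: the average speed divides the gray-difference sum, converted to distance by the coefficient k, by n·Δt, and the safe speed is taken as λ·d_min/T; both the function names and that reading of the safe-speed formula are assumptions.

```python
import numpy as np

def average_speed(fusion1, fusion2, mask, k, dt):
    """Average moving speed of the object between two consecutive fusion
    images: sum the per-pixel gray differences over the object mask, convert
    to distance with coefficient k, and average over n pixels and interval dt."""
    diff = np.abs(fusion2.astype(np.float64) - fusion1.astype(np.float64))
    n = int(mask.sum())
    return k * float(diff[mask].sum()) / (n * dt)

def safe_average_speed(d_min, door_close_time, safety_factor):
    """Assumed form of the safe average speed: the object must clear d_min
    before the door closes (time T), amplified by the safety factor (> 1)."""
    return safety_factor * d_min / door_close_time

# The door may close without pinching when
#   average_speed(...) > safe_average_speed(...)
```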
Further, the second step further includes:
transmitting image information from the elevator car to a cloud server; and
acquiring the latest neural network model and image fusion algorithm from the cloud server.
Further, the video anti-pinch method further comprises the following steps:
issuing a voice alarm when an object outside the elevator door is judged to be an obstacle; and
sending a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
The invention also provides an elevator video monitoring device, which comprises:
a video acquisition module, configured to acquire an RGB image and a depth image of the elevator door direction from inside the elevator car when the signal that the elevator door has opened into place is received;
an embedded control module, configured to, when a signal that an object other than the elevator door is present is received, fuse the RGB image and the depth image into an elevator-door fusion image carrying depth information, compare the fusion image with a first reference image and a second reference image, and judge whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed.
Further, the embedded control module also identifies the acquired RGB image with the neural network model to judge whether an object other than the elevator door is present outside the elevator door, and if so, generates the signal that an object other than the elevator door is present.
Further, the embedded control module assumes that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image; if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), two further sets of RGB and depth images are acquired in succession and fused into two elevator-door fusion images, and the average moving speed of the object is estimated as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

where the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k; n is the number of object pixels extracted from the fusion images; and Δt is the time interval between the two acquisitions;

the safe average speed of the object is calculated as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

where d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, with 0 < d_min less than the thickness of the elevator door; T is the time the elevator door needs to go from the open state to the closed state; and λ is a safety amplification factor with λ > 1; and

when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
Further, the embedded control module sends image information from the elevator car to a cloud server; and
acquires the latest neural network model and image fusion algorithm from the cloud server.
Further, the elevator video monitoring device further comprises:
a voice alarm module, configured to issue a voice alarm when an object outside the elevator door is judged to be an obstacle; and
a data transmission module, configured to send a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
Compared with the prior art, the invention has the beneficial effect that blind spots of various kinds can be detected, so that elevator pinching incidents are avoided.
Drawings
Fig. 1 is a schematic view of a camera mounting location of an elevator video surveillance device of the present invention;
fig. 2 is a schematic structural view of an elevator video monitoring apparatus of the present invention;
fig. 3 is a schematic diagram of an embedded control module of an elevator video monitoring apparatus of the present invention;
fig. 4 is a schematic diagram of a video anti-pinch method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings. It should be understood, however, that the invention is not limited to these embodiments; functional, methodological, or structural equivalents and substitutions made by those skilled in the art according to these embodiments fall within the scope of protection of the present invention.
The embodiment provides an elevator video anti-pinch method, which comprises the following steps:
step one: when a signal that the elevator door has opened into place is received, acquiring an RGB image and a depth image of the elevator door direction from inside the elevator car;
step two: when a signal that an object other than the elevator door is present is received, fusing the RGB image and the depth image into an elevator-door fusion image carrying depth information, comparing the fusion image with a first reference image and a second reference image, and judging whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed.
The elevator video anti-pinch method minimizes the occurrence of accidents in which the elevator pinches people or objects, and thereby improves elevator safety.
In this embodiment, the second step further includes:
and identifying the acquired RGB image based on the neural network model, judging whether other objects exist outside the elevator door, and if so, generating signals of the other objects exist outside the elevator door.
In this embodiment, the second step further includes:
assuming that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image, if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), acquiring two further sets of RGB and depth images in succession, fusing them into two elevator-door fusion images, and estimating the average moving speed of the object as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

where the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k; n is the number of object pixels extracted from the fusion images; and Δt is the time interval between the two acquisitions;

calculating a safe average speed of the object as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

where d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, with 0 < d_min less than the thickness of the elevator door; T is the time the elevator door needs to go from the open state to the closed state; and λ is a safety amplification factor with λ > 1;

when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
In this embodiment, the second step further includes:
transmitting image information from the elevator car to a cloud server; and
acquiring the latest neural network model and image fusion algorithm from the cloud server.
In this embodiment, the elevator video anti-pinch method further includes:
issuing a voice alarm when an object outside the elevator door is judged to be an obstacle; and
sending a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
The embodiment also provides an elevator video monitoring device, which comprises:
a video acquisition module, configured to acquire an RGB image and a depth image of the elevator door direction from inside the elevator car when the signal that the elevator door has opened into place is received;
an embedded control module, configured to, when a signal that an object other than the elevator door is present is received, fuse the RGB image and the depth image into an elevator-door fusion image carrying depth information, compare the fusion image with the first reference image and the second reference image, and judge whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed.
The elevator video monitoring device provided by this embodiment minimizes the occurrence of elevator pinching accidents and thereby improves elevator safety.
In this embodiment, the embedded control module also identifies the acquired RGB image with the neural network model to judge whether an object other than the elevator door is present outside the elevator door; if so, it generates the signal that an object other than the elevator door is present.
In this embodiment, the embedded control module assumes that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image; if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), two further sets of RGB and depth images are acquired in succession and fused into two elevator-door fusion images, and the average moving speed of the object is estimated as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

where the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k; n is the number of object pixels extracted from the fusion images; and Δt is the time interval between the two acquisitions;

the safe average speed of the object is calculated as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

where d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, with 0 < d_min less than the thickness of the elevator door; T is the time the elevator door needs to go from the open state to the closed state; and λ is a safety amplification factor with λ > 1; and

when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
In this embodiment, the embedded control module sends image information from the elevator car to the cloud server; and
acquires the latest neural network model and image fusion algorithm from the cloud server.
In this embodiment, the elevator video monitoring device further includes:
a voice alarm module, configured to issue a voice alarm when an object outside the elevator door is judged to be an obstacle; and
a data transmission module, configured to send a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
The present invention will be described in further detail below.
As shown in figs. 1 and 3, the coordinate system shown in fig. 1 is established in the elevator car. The RGB camera 12 and the depth camera 11 both use wide-angle lenses and are installed in the middle of the elevator car; the two cameras are rigidly fixed together by a single mount so that their viewing directions remain parallel.
As shown in fig. 2, the elevator video monitoring device comprises an embedded control module 1, a video acquisition module 2, a local storage module 3, a data transmission module 4 and a voice alarm module 5.
The embedded control module 1 receives image data from the cameras in the elevator car, performs the corresponding image analysis (including object recognition and image fusion), sends the control cabinet a signal indicating whether an obstacle is present at the elevator door, and updates the corresponding algorithms from the cloud server.
The video acquisition module 2 is connected to the embedded control module 1; it obtains ordinary image data through the RGB camera 12 and images with depth information through the depth camera 11. The RGB camera 12 should be mounted as close to the depth camera 11 as possible and aimed as nearly parallel to it as possible.
The local storage module 3 is used for storing the neural network model, the image fusion algorithm and the local video monitoring picture.
The data transmission module 4 is connected to the embedded control module 1 and helps the embedded control module 1 and the control cabinet exchange elevator data: the embedded control module 1 sends the control cabinet a signal indicating whether an obstacle is present at the elevator door, and the control cabinet returns feedback information. In addition, the embedded control module 1 can obtain the latest neural network model and image fusion algorithm from the cloud server over the 4G network.
The voice alarm module 5 is connected to the embedded control module 1 and issues warning messages, deterring improper passenger behaviour and preventing elevator pinching incidents.
As shown in fig. 3, the embedded control module 1 mainly includes: an ARM chip 9, an FPGA (Field-Programmable Gate Array) 10, the depth camera 11, the RGB camera 12, a 4G module 13, an SD card 14, an SDRAM 15, a reset module 6, a power module 7, and an external crystal oscillator 8.
The ARM chip 9 handles the logic management of the whole device and controls the program flow of the video monitoring device. Communication between the ARM chip 9 and the FPGA 10 uses two channels, SPI and a parallel bus: the bus is used when a large amount of data must be transferred, and SPI is used for small transfers, so that resources are allocated sensibly.
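The following tiny sketch only restates that link-selection rule in code; the actual ARM/FPGA driver interface and the byte threshold are not given in the patent and are assumed here.

```python
BUS_THRESHOLD_BYTES = 4096  # assumed cut-off; not specified in the patent

def choose_arm_fpga_link(payload_size: int) -> str:
    """Large transfers go over the parallel bus, small ones over SPI."""
    return "bus" if payload_size >= BUS_THRESHOLD_BYTES else "spi"
```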
The FPGA 10 is connected to the ARM chip 9, the RGB camera 12, and the depth camera 11 and performs the image-related computation: it invokes the neural network model to identify objects in the door area of a photo, fuses the RGB photo and the depth photo with the image fusion algorithm, and compares the fused images to judge whether an obstacle is present at the elevator door.
The 4G module 13 establishes a socket connection with the cloud server so that the embedded control module 1 can exchange data with it: the embedded control module 1 can send video information from the elevator car to the cloud server and obtain the latest neural network model and image fusion algorithm from it, which improves the reliability of the whole device.
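The patent only states that a socket connection to the cloud server is used to pull the latest model and fusion algorithm. The sketch below shows one plausible client under assumed framing (a one-line JSON request followed by a length-prefixed binary response); the command name and protocol are hypothetical.

```python
import json
import socket

def fetch_latest_model(host: str, port: int, out_path: str) -> None:
    """Download the latest model blob from the cloud server over a socket.
    The request/response framing here is an assumption, not from the patent."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(json.dumps({"cmd": "get_latest_model"}).encode() + b"\n")
        f_in = s.makefile("rb")
        header = json.loads(f_in.readline())      # e.g. {"size": 123456}
        remaining = int(header["size"])
        with open(out_path, "wb") as f_out:
            while remaining > 0:
                chunk = f_in.read(min(65536, remaining))
                if not chunk:
                    break
                f_out.write(chunk)
                remaining -= len(chunk)
```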
The depth camera 11 is connected to the FPGA 10 through a USB port and takes depth photographs in the car, i.e. it performs ranging: points at different distances map to different gray values in the image. The RGB camera 12 is connected to the FPGA 10 through a USB port and takes RGB photographs in the car.
The SD card 14 mainly stores video information from the car. The SDRAM 15 is connected to the ARM chip 9 through the bus and serves as the ARM chip's working memory for simple computation. The reset module 6 is used when the device is commissioned, malfunctions, or needs servicing. The power module 7 supplies the embedded control module 1 with stable, reliable voltages for the different chips. The external crystal oscillator 8 provides the clock source that the ARM chip 9 lacks internally.
As shown in fig. 4, the process of implementing video anti-pinch by using the elevator video monitoring device is as follows:
step 1: initialization of the device is performed. First, a piece of checkerboard cloth is used for depth camera calibration. Then, the elevator door is controlled to be completely opened and closed by hand, a thin plate is placed first, the thin plate is slightly larger than the opening space of the elevator door, one surface of the thin plate is attached to the outer side of the elevator door, two photos are taken by using the RGB camera 12 and the depth camera 11, and a first fusion picture is obtained through an image fusion algorithm. Then, the elevator door is closed, and a second fused picture is taken and obtained, again by both cameras. The two fused images are stored in a data storage module as reference pictures containing depth information of pixel points on both inner and outer surfaces of the elevator car door. The device can be compared with a newly shot fusion image when in operation.
Step 2: during elevator operation, once the door-opened-into-place signal is received from the elevator control cabinet, the embedded control module 1 sends capture commands to the two cameras; the RGB camera 12 and the depth camera 11 each acquire an image of the elevator door direction and return it to the embedded control module 1.
Step 3: the embedded control module 1 invokes the neural network model to identify objects in the photograph taken by the RGB camera 12 and judges whether anything other than the elevator door is present; if so, go to step 4, otherwise go to step 6.
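The patent does not disclose its neural network model, so the sketch below uses an off-the-shelf torchvision detector (recent torchvision, 0.13+) purely as a stand-in for the "is anything other than the door present?" check of step 3; the choice of Faster R-CNN and the score threshold are assumptions.

```python
import torch
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor

# Stand-in detector; the patent's own neural network model is not disclosed.
model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def objects_present(rgb_bgr, score_thresh=0.6):
    """Return True if any object is detected in the RGB frame (OpenCV-style
    BGR numpy array). score_thresh is an assumed confidence cut-off."""
    rgb = rgb_bgr[:, :, ::-1].copy()          # BGR -> RGB
    with torch.no_grad():
        out = model([to_tensor(rgb)])[0]
    return bool((out["scores"] > score_thresh).any())
```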
Step 4: the two photographs are fused by the image fusion algorithm into one photograph with depth information, and the device judges whether the object is an obstacle to the closing elevator door; if so, go to step 5, otherwise go to step 6.
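The image fusion algorithm itself is not disclosed in the patent. The sketch below is only a crude stand-in that registers the depth map to the RGB frame's geometry and encodes distance as an 8-bit gray value, which is the property the later comparisons rely on; the resize-based alignment and the depth-to-gray scale are assumptions.

```python
import cv2
import numpy as np

def fuse(rgb_bgr, depth_mm):
    """Crude stand-in for the patent's image fusion: align the depth map to
    the RGB frame and encode distance as 8-bit gray (larger value = farther).
    The real algorithm and gray<->distance coefficient are not disclosed."""
    h, w = rgb_bgr.shape[:2]
    aligned = cv2.resize(depth_mm, (w, h), interpolation=cv2.INTER_NEAREST)
    return np.clip(aligned / 20.0, 0, 255).astype(np.uint8)  # assumed scale
```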
The obstacle judgment proceeds as follows. Based on the captured elevator-door fusion image, the visible operating region of the elevator door is divided into rectangular strips, and each strip is reduced to a line segment. Because the thickness of the elevator door is small relative to the distance between the camera and the door, for every pixel of the object in the new photo the corresponding car-door pixels — the two end points of the line segment — can be located approximately in the two reference fusion photos. The gray values of the three pixels are compared: if the gray value in the newly taken photo lies between the gray values of the two reference photos, the object can be judged an obstacle; if no pixel lies between the gray values of the corresponding reference pixels, the object is not an obstacle.
Specifically, let a point a1 on the obstacle be at distance d1 from the camera, and let the points b1 and c1 on the inner and outer surfaces of the car door in the same horizontal plane be at distances l1 and r1 from the camera. If d1 lies between l1 and r1, the object is judged to be an obstacle; the other pixels of the object are tested in the same way, and if no pixel qualifies the object as an obstacle, the object is considered not to be an obstacle.
Further, assuming l(i) < r(i), the state of the object can be divided into six categories. First category: the gray values d(i) of all points of the object in the photo lie between the gray values l(i) and r(i) of the corresponding car-door points. Second category: all d(i) are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i). Third category: all d(i) are larger than l(i), with part of d(i) larger than r(i) and part smaller than r(i). Fourth category: part of the d(i) lie between l(i) and r(i), part satisfy d(i) < l(i), and part satisfy d(i) > r(i). Fifth category: all points of the object satisfy d(i) < l(i). Sixth category: all points satisfy d(i) > r(i).
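A compact way to express these six categories, assuming l(i) < r(i) and an object mask from the recognition stage (function and variable names are illustrative):

```python
import numpy as np

def classify_state(d, l, r, mask):
    """Classify the object pixels into the six categories listed above.
    d, l, r: same-size gray images; mask: boolean object mask."""
    di, li, ri = (a[mask].astype(np.int16) for a in (d, l, r))
    below_l = di < li
    above_r = di > ri
    between = ~below_l & ~above_r
    if between.all():
        return 1
    if (di < ri).all() and below_l.any() and (di > li).any():
        return 2
    if (di > li).all() and above_r.any() and (di < ri).any():
        return 3
    if between.any() and below_l.any() and above_r.any():
        return 4
    if below_l.all():
        return 5
    if above_r.all():
        return 6
    return 4  # mixed cases not covered explicitly; treated as category four
```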
When the object in the photo is in the first category, the object is small and lies within the elevator car-door zone. The embedded control module 1 then carries out the speed comparison described below for the second category, comparing the object's average moving speed with the safe average speed: when the average speed exceeds the safe average speed, the embedded control module 1 sends a no-obstacle signal to the control cabinet and the elevator door immediately begins to close; when the average speed is below the safe average speed, the embedded control module 1 sends the control cabinet a door-open signal or a door-close-prohibit signal.
When the object in the photo is in the third category, no part of the object has entered the elevator car and the whole object is still outside the inner side of the elevator door. The embedded control module 1 performs no further calculation and waits until the object reaches the second category.
When the object in the photo is in the fourth category, part of the object has entered the elevator car and part has not. The embedded control module 1 performs no further calculation and waits until the object reaches the second category.
When the object in the photo is in the fifth category, the object has completely entered the elevator car; the embedded control module 1 sends a no-obstacle signal to the control cabinet and the elevator door begins to close.
When the object in the photo is in the sixth category, the object has not entered the elevator car, and the embedded control module 1 waits for the object to enter.
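The per-category handling just described can be summarized as a small dispatch; the action names are placeholders for whatever signals the embedded control module actually sends to the control cabinet.

```python
def act_on_state(state, avg_speed, safe_speed):
    """Map the six object categories to door-control actions (names illustrative)."""
    if state in (1, 2):            # small object in the door zone: run the speed test
        return "no_obstacle_close" if avg_speed > safe_speed else "hold_door_open"
    if state in (3, 4):            # wait until the object reaches the second category
        return "wait"
    if state == 5:                 # object fully inside the car
        return "no_obstacle_close"
    if state == 6:                 # object has not entered yet
        return "wait_for_entry"
    raise ValueError(f"unknown state {state}")
```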
Specifically, when the object in the photo is in the second category, the average speed of the object can be estimated approximately by capturing two fused photos in succession:

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

where $\bar v$ is the average moving speed of the object; the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two photos and multiplying by the gray-value-to-distance conversion coefficient k; n is the number of object pixels extracted from the photo; and Δt is the interval between the two shots.

The safe average speed can then be calculated as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

where d_min is the shortest distance between the object and the inner side of the elevator door among the pixels of the most recent photo; it can be obtained from the difference between the gray value of the object pixel and the gray value of the corresponding pixel on the inner side of the elevator door, multiplied by the gray-value-to-distance conversion coefficient. d_min must be greater than 0 and smaller than the thickness of the elevator door; T is the time the elevator door needs to go from the open state to the closed state; and λ is a safety amplification factor that must be greater than 1. When the average moving speed of the object is greater than its safe average speed, the elevator door can close without pinching the object, and step 6 can likewise be performed.
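As a purely illustrative numerical check of the two formulas (all values assumed, none taken from the patent): with a gray-to-distance coefficient k = 5 mm per gray level, an accumulated gray-value difference of 8000 over n = 400 object pixels and Δt = 0.2 s, the first formula gives $\bar v = 5 \times 8000 / (400 \times 0.2) = 500$ mm/s; with d_min = 40 mm, T = 3 s and λ = 1.5, the second gives $\bar v_{s} = 1.5 \times 40 / 3 = 20$ mm/s, so $\bar v > \bar v_{s}$ and the door may close.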
Step 5: the embedded control module 1 issues an alert through the voice alarm module and sends a door-open signal or a door-close-prohibit signal to the control cabinet through the data transmission module 4; step 2 is then executed again.
Step 6: the embedded control module 1 sends a no-obstacle signal to the control cabinet through the data transmission module 4, and the elevator door immediately begins to close, reducing passenger waiting time and improving the overall operating efficiency of the elevator.
In summary, when the elevator door is fully open and fully closed, pictures are taken with the RGB camera 12 and the depth camera 11 and fused, and the two resulting fusion pictures serve as reference pictures. During door closing, the video monitoring device captures images of the elevator door direction with the RGB camera 12 and the depth camera 11, performs object recognition and image fusion with the neural network model and the image fusion algorithm, and judges, by comparison with the reference pictures, whether the closing elevator door has met an obstacle. Compared with existing light-curtain monitoring, the invention can detect all kinds of blind spots and thereby avoid elevator pinching incidents.
The detailed description above addresses only specific practical embodiments of the present invention and is not intended to limit its scope; all equivalent embodiments or modifications that do not depart from the spirit of the present invention shall fall within its scope of protection.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (8)

1. An elevator video anti-pinch method, comprising:
step one: when a signal that the elevator door has opened into place is received, acquiring an RGB image and a depth image of the elevator door direction from inside the elevator car;
step two: when a signal that an object other than the elevator door is present is received, fusing the RGB image and the depth image into an elevator-door fusion image carrying depth information, and comparing the fusion image with a first reference image and a second reference image to judge whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
wherein the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed;
assuming that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image, if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), acquiring two further sets of RGB and depth images in succession, fusing them into two elevator-door fusion images, and estimating the average moving speed of the object as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

wherein the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k, n is the number of object pixels extracted from the fusion images, and Δt is the time interval between the two acquisitions;

calculating a safe average speed of the object as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

wherein d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, 0 < d_min < the thickness of the elevator door, T is the time the elevator door needs to go from the open state to the closed state, and λ is a safety amplification factor with λ > 1;

wherein, when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
2. The elevator video anti-pinch method according to claim 1, wherein the second step further comprises:
identifying the acquired RGB image with a neural network model to judge whether an object other than the elevator door is present outside the elevator door, and if so, generating the signal that an object other than the elevator door is present.
3. The elevator video anti-pinch method according to claim 2, wherein the second step further comprises:
transmitting image information from the elevator car to a cloud server; and
acquiring the latest neural network model and image fusion algorithm from the cloud server.
4. The elevator video anti-pinch method according to any one of claims 1 to 3, further comprising:
issuing a voice alarm when an object outside the elevator door is judged to be an obstacle; and
sending a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
5. An elevator video monitoring device, comprising:
a video acquisition module, configured to acquire an RGB image and a depth image of the elevator door direction from inside the elevator car when a signal that the elevator door has opened into place is received;
an embedded control module, configured to, when a signal that an object other than the elevator door is present is received, fuse the RGB image and the depth image into an elevator-door fusion image carrying depth information, compare the fusion image with a first reference image and a second reference image, and judge whether the object outside the elevator door is an obstacle:
if the gray values of all pixels of the elevator-door fusion image lie between the gray values of the corresponding pixels of the first reference image and the second reference image, judging that the object outside the elevator door is an obstacle;
wherein the first reference image is a fusion image of the outer sealing surface of the elevator door taken with the elevator door fully open, and the second reference image is a fusion image of the inner side of the elevator door taken with the elevator door fully closed;
wherein the embedded control module assumes that the gray value l(i) of the first reference image is smaller than the gray value r(i) of the second reference image; if the gray values d(i) of all pixels of the elevator-door fusion image are smaller than r(i), with part of d(i) larger than l(i) and part smaller than l(i), two further sets of RGB and depth images are acquired in succession and fused into two elevator-door fusion images, and the average moving speed of the object is estimated as

$$\bar v \;=\; \frac{k\sum_{i=1}^{n}\lvert d_{2}(i)-d_{1}(i)\rvert}{n\,\Delta t}$$

wherein the numerator is the sum of the moving distances of the points on the object, obtained by accumulating the gray-value differences of corresponding object pixels in the two fusion images and multiplying by the gray-value-to-distance conversion coefficient k, n is the number of object pixels extracted from the fusion images, and Δt is the time interval between the two acquisitions;

the safe average speed of the object is calculated as

$$\bar v_{s} \;=\; \frac{\lambda\, d_{\min}}{T}$$

wherein d_min is the shortest distance from the inner side of the elevator door among the object pixels of the later of the two fusion images, 0 < d_min < the thickness of the elevator door, T is the time the elevator door needs to go from the open state to the closed state, and λ is a safety amplification factor with λ > 1;

and when the average moving speed $\bar v$ of the object is greater than the safe average speed $\bar v_{s}$, the elevator door closes without pinching the object.
6. The elevator video monitoring device according to claim 5, wherein the embedded control module further identifies the acquired RGB image with a neural network model to judge whether an object other than the elevator door is present outside the elevator door, and if so, generates the signal that an object other than the elevator door is present.
7. The elevator video monitoring device according to claim 6, wherein the embedded control module sends image information from the elevator car to a cloud server; and
acquires the latest neural network model and image fusion algorithm from the cloud server.
8. The elevator video monitoring device according to any one of claims 5 to 7, further comprising:
a voice alarm module, configured to issue a voice alarm when an object outside the elevator door is judged to be an obstacle; and
a data transmission module, configured to send a door-open signal, a door-close-prohibit signal, or an immediate-close signal to the elevator control device according to the obstacle judgment result.
CN201811331913.6A 2018-11-09 2018-11-09 Elevator video anti-pinch method and elevator video monitoring device Active CN109508667B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811331913.6A CN109508667B (en) 2018-11-09 2018-11-09 Elevator video anti-pinch method and elevator video monitoring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811331913.6A CN109508667B (en) 2018-11-09 2018-11-09 Elevator video anti-pinch method and elevator video monitoring device

Publications (2)

Publication Number Publication Date
CN109508667A CN109508667A (en) 2019-03-22
CN109508667B (en) 2023-05-16

Family

ID=65748019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811331913.6A Active CN109508667B (en) 2018-11-09 2018-11-09 Elevator video anti-pinch method and elevator video monitoring device

Country Status (1)

Country Link
CN (1) CN109508667B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110745675B (en) * 2019-10-17 2022-07-08 宁波微科光电股份有限公司 Elevator protection method based on TOF camera
CN110733960A (en) * 2019-10-17 2020-01-31 宁波微科光电股份有限公司 method for preventing hands of elevator from being clamped
CN112850396A (en) * 2019-11-28 2021-05-28 宁波微科光电股份有限公司 Elevator foreign matter detection method and system, computer storage medium and elevator
CN111967434B (en) * 2020-08-31 2023-04-07 湖北科技学院 Machine vision anti-pinch system based on deep learning
CN112529953B (en) * 2020-12-17 2022-05-03 深圳市普渡科技有限公司 Elevator space state judgment method and device and storage medium
CN112801071B (en) * 2021-04-14 2021-08-20 浙江大学 Elevator asynchronous door opening recognition system and method based on deep learning
CN113269111B (en) * 2021-06-03 2024-04-05 昆山杜克大学 Video monitoring-based elevator abnormal behavior detection method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014227287A (en) * 2013-05-27 2014-12-08 三菱電機株式会社 Elevator controller and elevator control method
JP2015160699A (en) * 2014-02-27 2015-09-07 三菱電機ビルテクノサービス株式会社 elevator system
CN107285173A (en) * 2017-07-13 2017-10-24 广州日滨科技发展有限公司 Elevator door control method, device and system
CN107298354A (en) * 2017-07-13 2017-10-27 广州日滨科技发展有限公司 Elevator door-motor method for monitoring operation states, device and system
CN107416630A (en) * 2017-09-05 2017-12-01 广州日滨科技发展有限公司 The detection method and system of the improper closing of elevator

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Object recognition algorithm based on fusion of RGB features and depth features; Lu Liangfeng (卢良锋) et al.; Computer Engineering (《计算机工程》); 31 May 2016; vol. 42, no. 5; pp. 186-193 *
Development of an elevator image light curtain for preventing dragging accidents; Liu Jianqiao (刘健巧); CNKI Outstanding Master's Theses Full-text Database (《CNKI优秀硕士学位论文全文库》); 15 Jan 2013; full text *

Also Published As

Publication number Publication date
CN109508667A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109508667B (en) Elevator video anti-pinch method and elevator video monitoring device
KR102001962B1 (en) Apparatus for control a sliding door
CN109842787A (en) A kind of method and system monitoring throwing object in high sky
CN104499864A (en) Anti-collision anti-jamming bus safety door system
JP2015120573A (en) Elevator with image recognition function
CN110240036B (en) Device and method for detecting electric bicycle trying to enter elevator
CN101723226A (en) System and method of machine vision three-dimensional detection elevator light curtain
CN109447090B (en) Shield door obstacle detection method and system
CN111369708A (en) Vehicle driving information recording method and device
TW201313597A (en) Safe control device and method for lift
CN105321289B (en) A kind of round-the-clock monitoring image intellectual analysis and warning system and method
KR20120119144A (en) Apparatus and method of camera-based intelligent management
CN112367475B (en) Traffic incident detection method and system and electronic equipment
CN111160220B (en) Deep learning-based parcel detection method and device and storage medium
CN103581527B (en) Tracking photographing method, device and security protection host in security protection system
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
CN113435278A (en) Crane safety detection method and system based on YOLO
KR20160069685A (en) recognizing system of vehicle number for parking crossing gate
JP2013052738A (en) Detector for rushing-into-train
JP2007031017A (en) Control device for elevator
CN113954826B (en) Vehicle control method and system for vehicle blind area and vehicle
CN114359839A (en) Method and system for identifying entrance of electric vehicle into elevator
CN112406700B (en) Blind area early warning system based on upper and lower binocular vision analysis range finding
CN115631562A (en) Channel gate visitor management system, method and storage medium
JP2010195537A (en) Monitoring device in car of elevator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant