CN115953732A - Robot elevator taking detection method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115953732A
Authority
CN
China
Prior art keywords
elevator
robot
change rate
car
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211627749.XA
Other languages
Chinese (zh)
Inventor
唐忠平
程伟
陈刚
仲兆峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Building Technology Guangzhou Co Ltd
Original Assignee
Hitachi Building Technology Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Building Technology Guangzhou Co Ltd
Priority to CN202211627749.XA
Publication of CN115953732A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B 50/00: Energy efficient technologies in elevators, escalators and moving walkways, e.g. energy saving or recuperation technologies

Abstract

The invention discloses a robot elevator-taking detection method and device, electronic equipment, and a storage medium. The robot elevator-taking detection method comprises the following steps: acquiring an image of a detection area through a sensor when the elevator door is open; performing target detection and tracking on the image to obtain motion data and a behavior detection result for an object in the detection area; calculating, from the motion data and the behavior detection result, a score that the object is a robot; and determining that the object is a robot when the score exceeds a score threshold. By making use of the motion data and behavior of a robot while it rides the elevator, the method improves the accuracy of robot identification, avoids the elevator being suspended because a robot is misjudged as a child with weak behavioral ability, keeps the elevator running normally while a robot rides, and keeps the robot's elevator control consistent with the detection system's elevator control.

Description

Robot elevator taking detection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of elevator control, and in particular to a robot elevator-taking detection method and device, electronic equipment, and a storage medium.
Background
With the demand for intelligent elevators, robots are now widely used in buildings to deliver express parcels, takeaway food, and other goods.
In the prior art, a detection system judges whether a person entering the elevator is a child or an adult based on height and projected area, and can further judge whether the person is a child with weak behavioral ability by analyzing the button operations performed after the person enters the elevator; if so, the elevator is controlled to suspend operation. However, service robots in buildings vary greatly in shape, and robots with different heights, projected areas, and forms keep emerging. Because a robot calls the elevator by communicating with the elevator car, it does not need to operate the call buttons after entering the car. A detection system that classifies entering objects only by height, projected area, and button operation therefore cannot correctly distinguish people from robots.
When there is no communication link between the detection system and the robot's elevator control, a robot judged only by height, projected area, and button operation may be mistaken for a child with weak behavioral ability, causing the elevator to suspend operation and creating a logic conflict between the robot's elevator control and the detection system's elevator control.
Disclosure of Invention
The invention provides a robot elevator-taking detection method and device, electronic equipment, and a storage medium, aiming to solve the prior-art problem that a robot is misjudged as a child, causing the elevator to suspend operation.
In a first aspect, the present invention provides a robot elevator-taking detection method, including:
acquiring an image of a detection area through a sensor when an elevator door of an elevator is opened;
performing target detection and tracking on the image to obtain motion data and a behavior detection result for an object in the detection area;
calculating, according to the motion data and the behavior detection result, a score that the object is a robot;
and when the score is larger than a preset score threshold value, determining that the object is a robot.
In a second aspect, the present invention provides a robot elevator-taking detection device, including:
the image acquisition module is used for acquiring an image of the detection area through the sensor when the elevator door of the elevator is opened;
the target tracking module is used for performing target detection and tracking on the image to obtain motion data and a behavior detection result for the object in the car;
the score calculating module is used for calculating, according to the motion data and the behavior detection result, the score that the object is a robot;
and the robot determining module is used for determining that the object is a robot when the score is greater than a preset score threshold value.
In a third aspect, the present invention provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executed by the at least one processor to enable the at least one processor to perform the robot elevator-taking detection method of the first aspect of the invention.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions for causing a processor to execute the method for detecting robot elevator riding of the first aspect of the present invention.
According to the embodiments of the invention, when the elevator door opens, an image of the detection area is acquired through the sensor; target detection and tracking are performed on the image to obtain motion data and a behavior detection result for an object in the detection area; a score that the object is a robot is calculated from the motion data and the behavior detection result; and the object is determined to be a robot when the score exceeds a preset score threshold. The embodiments acquire the detected object's motion data, such as its moving direction, speed, projected area, and motion posture, together with the behavior detection result of button operation after the object enters the car, and make full use of the robot's motion data and behavior while riding to decide whether a robot is taking the elevator. This improves the accuracy of robot identification, avoids the elevator being suspended because a robot is misjudged as a child with weak behavioral ability, keeps the elevator running normally while a robot rides, and keeps the robot's elevator control consistent with the detection system's elevator control.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present invention, nor are they intended to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of the installation of a sensor in the elevator of the embodiment;
fig. 2 is a flowchart of a robot elevator-taking detection method according to an embodiment of the present invention;
fig. 3 is a flowchart of a robot elevator-taking detection method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a robot elevator-taking detection device according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
To make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art without creative effort based on these embodiments shall fall within the protection scope of the present invention.
Example one
Fig. 2 is a flowchart of a robot elevator-taking detection method according to an embodiment of the present invention. The method can be applied to identifying situations where a robot takes an elevator and may be performed by a robot elevator-taking detection device, which may be implemented in hardware and/or software and configured in an electronic device, such as the controller of an elevator.
As shown in fig. 2, the robot elevator-taking detection method includes:
s201, collecting an image of a detection area through a sensor when an elevator door of the elevator is opened.
As shown in fig. 1, in this embodiment a sensor 3 is installed on the car of the elevator. The sensor may be an ordinary camera, such as a black-and-white or RGB camera, or an active light sensor, such as a TOF (Time of Flight) depth sensor or a structured-light sensor, that is, a sensor that obtains a depth image by emitting and receiving light. Of course, the sensor may also be a radar; this embodiment does not limit the type of the sensor.
In one example, the sensor of this embodiment may be an active optical sensor mounted on the car 1 so that its detection area covers both the area inside the car 1 and the area of the elevator hall 2. Optionally, the sensor 3 may be mounted on the lintel of the car door of the car 1, for example at the middle of the lintel, so that its detection area covers the area inside the car 1 and the area of the elevator hall 2.
In one example, a single sensor 3 may be used, in which case the sensor 3 may be an active optical sensor with a large field of view. Its mounting angle may be fixed or adjustable; that is, the angle between the sensor's light-emission axis and the vertical direction may be varied so that the detection area of the sensor 3 can be enlarged or reduced.
As shown in fig. 1, in another example, two sensors 3 may be used. The light-emission axis of one sensor 3 faces the car 1 to collect images of the area inside the car, and the light-emission axis of the other faces the elevator hall 2 to collect images of the hall area. Optionally, the detection ranges of the two sensors 3 overlap, ensuring that images of the whole area from the car 1 to the hall 2 can be collected while avoiding the overexposure that a single active light sensor with a vertical light-emission axis would produce after the elevator door closes. For example, the angles of the two light-emission axes relative to the vertical may be adjusted to adjust the two detection areas; of course, these angles may also be fixed.
In practical applications, the sensor 3 can collect depth images at a preset frame rate and send them to the elevator controller. The elevator usually opens its door on reaching a target floor, and at that moment the sensor 3 collects depth images of the area inside the car 1 and of the elevator hall 2.
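As a sketch of this acquisition loop (the sensor and door interfaces are hypothetical stand-ins; the patent does not specify an API), frames could be polled at the preset frame rate for as long as the door stays open:

```python
import time

def capture_frames(read_frame, door_is_open, frame_rate_hz=30.0):
    """Collect frames from the car/hall sensor at a preset frame rate
    while the elevator door is open.  `read_frame` and `door_is_open`
    are caller-supplied callables standing in for the sensor and the
    door-state signal."""
    frames = []
    period = 1.0 / frame_rate_hz
    while door_is_open():
        frames.append(read_frame())
        time.sleep(period)  # hold the preset frame rate
    return frames
```

In a real controller the loop would run on its own thread and push frames to the detection pipeline instead of accumulating a list.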
S202, performing target detection and tracking on the image to obtain motion data and a behavior detection result for the object in the detection area.
In this embodiment, the motion data includes at least one of the moving direction, moving speed, projected area, and motion posture of the object, and the behavior detection result may be whether the object operates a button in the car. The moving direction is the angle between the detected object's path toward the elevator door and the plane of the door; the projected area is the projection of the object's outer contour onto the car floor; the moving speed may be the speed at which the object enters the car from the hall; and the motion posture may include changes in the object's height, the swing amplitude of its outline, and so on. Operating a button in the car may mean selecting the button for a target floor or pressing the door-open or door-close button.
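The quantities above can be grouped into a simple record; the field names and units here are illustrative, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class MotionData:
    moving_direction_deg: float  # angle between approach path and door plane
    moving_speed: float          # m/s, speed entering the car from the hall
    projected_area: float        # m^2, outer contour projected onto car floor
    height: float                # m, one component of the motion posture
    pressed_button: bool         # behavior detection result: operated a button
```

A detection pipeline would fill one such record per tracked object per frame.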
This embodiment may train a target detection and tracking model in advance; the model can identify an object, track it, obtain at least one of its moving direction, moving speed, projected area, and motion posture, and identify whether the object operates a button in the car.
S203, calculating, according to the motion data and the behavior detection result, the score that the object is a robot.
In this embodiment, the score calculation can be designed around the way a robot enters the car from the hall. For example, a robot drives into the car perpendicular to the door; during this process its motion posture, speed, and height are stable; unlike a person, it does not swing its arms or change its angle; and its projected area on the car floor is relatively fixed. Of course, each item of data may be weighted, with the weighted sum taken as the score that the object is a robot.
The higher the score, the higher the probability that the object is a robot; conversely, the lower the score, the lower that probability.
S204, determining that the object is a robot when the score is greater than a preset score threshold.
This embodiment may set a score threshold for the object being a robot, determine that the object is a robot when its score exceeds the threshold, and then control the elevator door according to the robot-oriented door-control strategy; for example, when only a robot is riding in the car, the door can be closed and the car driven directly to the floor the robot needs to reach.
According to this embodiment of the invention, when the elevator door opens, an image of the detection area is acquired through the sensor; target detection and tracking are performed on the image to obtain motion data and a behavior detection result for an object in the detection area; a score that the object is a robot is calculated from the motion data and the behavior detection result; and the object is determined to be a robot when the score exceeds a preset score threshold. The embodiment acquires the detected object's motion data, such as its moving direction, speed, projected area, and motion posture, together with the behavior detection result of button operation after the object enters the car, and makes full use of the robot's motion data and behavior while riding to decide whether a robot is taking the elevator. This improves the accuracy of robot identification, avoids the elevator being suspended because a robot is misjudged as a child with weak behavioral ability, keeps the elevator running normally while a robot rides, and keeps the robot's elevator control consistent with the detection system's elevator control.
Example two
Fig. 3 is a flowchart of a robot elevator-taking detection method according to a second embodiment of the present invention, which is optimized on the basis of the first embodiment. As shown in fig. 3, the robot elevator-taking detection method includes:
s301, when the elevator door of the elevator is opened, the sensor is controlled to collect multi-frame images of the detection area according to a preset frame rate, and an image sequence is obtained.
In practice, when the elevator car reaches the hall of a target floor while travelling up or down, the elevator stops and the door is controlled to open so that, after passengers leave the car, objects in the hall that need to ride can enter the car. These objects may be people, pets, robots, or anything else that needs to take the elevator.
When the door opens, the sensor may collect depth images at a preset frame rate and send them to the elevator controller. As shown in fig. 1, when a single sensor 3 is used, it can be controlled to collect multi-frame images of a detection area covering the elevator hall 2 and the car 1 from the moment the door starts to open. When two sensors 3 are used, the sensor facing the car 1 can be controlled to collect multi-frame images of the car area, and the sensor facing the hall 2 to collect multi-frame images of the hall area. Note that image collection by the sensor 3 may start when the door opens and end when the door closes, and the collection frame rate may be fixed or dynamically adjustable.
After the sensor collects a frame, the image can be preprocessed, for example denoised and binarized, and the processed frames form the image sequence.
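A minimal stand-in for the binarization step (the threshold and the row-list data layout are assumptions; a real pipeline would also denoise, e.g. with a median filter):

```python
def binarize(depth_frame, threshold):
    """Mark depth pixels closer than `threshold` as foreground (1) and
    the rest as background (0).  `depth_frame` is a list of rows of
    depth values in metres."""
    return [[1 if d < threshold else 0 for d in row] for row in depth_frame]
```

The resulting binary masks are what the tracker would consume downstream.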
S302, inputting the image sequence into the target detection and tracking model, obtaining at least one of the moving direction, moving speed, projected area, and motion posture of an object in the detection area as the object's motion data, and identifying whether buttons in the car are operated after the object enters the car as the object's behavior detection result.
In this embodiment, the target detection and tracking model may be a neural network such as an RNN, CNN, or DNN. During training, images labeled with each object's moving direction, moving speed, projected area, motion posture, and behavior detection result are used as training data: the labeled image sequence is input into the model, a loss is computed from the predicted moving direction, moving speed, projected area, motion posture, and behavior detection result of each object, and the model parameters are adjusted by the loss until it falls below a preset value, yielding the trained target detection and tracking model.
Of course, after an object has been identified, its moving direction, projected area, position, and motion posture may instead be computed from the sensor's installation position, imaging principle, and intrinsic and extrinsic parameters, and its moving speed from the position difference of the object between two frames and the frame rate.
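The speed computation from the position difference and the frame rate can be sketched as follows (floor-plane coordinates in metres are an assumption):

```python
import math

def moving_speed(pos_prev, pos_curr, frame_rate_hz):
    """Speed (m/s) from the object's floor positions in two consecutive
    frames: displacement between frames times the frame rate."""
    dx = pos_curr[0] - pos_prev[0]
    dy = pos_curr[1] - pos_prev[1]
    return math.hypot(dx, dy) * frame_rate_hz
```

For example, a 5 cm displacement between frames at 30 fps corresponds to 1.5 m/s.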
S303, calculating the change rates of the moving speed, projected area, and motion posture of the object detected in two adjacent frames of images, obtaining the speed change rate, projected-area change rate, and posture change rate.
Specifically, this embodiment inputs the image sequence into the target detection and tracking model and obtains the moving direction, moving speed, projected area, and motion posture of the object detected in each frame of the sequence. For the same object, the speed, projected-area, and posture change rates can be calculated from the values detected in two adjacent frames: each change rate is the difference between the values detected in the two adjacent frames divided by the value detected in the earlier of the two frames.
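The per-pair change rate described above, the difference divided by the earlier frame's value, can be written as:

```python
def change_rate(prev_value, curr_value):
    """Change rate between two adjacent frames: |curr - prev| / prev.
    (Taking the absolute value is an assumption; the text only specifies
    the difference divided by the earlier frame's value.)"""
    return abs(curr_value - prev_value) / prev_value
```

The same helper applies to speed, projected area, or any scalar posture measurement.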
In an optional embodiment, the posture data may include the object's height and outer-contour data. The height change rate is then the difference between the heights detected in two adjacent frames divided by the height detected in the earlier frame; the contour change rate is the contour difference between the outer-contour data of the two frames divided by the contour detected in the earlier frame; and the posture change rate of the object across the two frames is the average of the height change rate and the contour change rate. The posture of the object is thus measured by its height and outer-contour data.
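Under the assumption that both height and outline reduce to scalar measurements, the posture change rate of this optional embodiment reads:

```python
def posture_change_rate(h_prev, h_curr, c_prev, c_curr):
    """Average of the height change rate and the outline change rate
    between two adjacent frames, as in the optional embodiment."""
    height_rate = abs(h_curr - h_prev) / h_prev
    contour_rate = abs(c_curr - c_prev) / c_prev
    return (height_rate + contour_rate) / 2.0
```

A rigid robot yields values near zero; a walking person's swinging outline drives the rate up.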
S304, respectively calculating the averages of the speed change rate, projected-area change rate, and posture change rate, obtaining the mean speed change rate, mean projected-area change rate, and mean posture change rate.
In this embodiment, target detection and tracking run over the whole image sequence, so each item of data is obtained from every frame, the change rate of each item is calculated from adjacent frame pairs, and the mean of those change rates is taken. The averaging applies smooth filtering to each change rate, avoiding inaccurate scores caused by outlier rates, improving the method's resistance to interference, and making the score calculation more accurate.
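Averaging the adjacent-frame change rates over the whole sequence can be sketched as:

```python
def mean_change_rate(values):
    """Mean of the adjacent-frame change rates of one quantity across a
    sequence of per-frame values; the averaging smooths out occasional
    outlier rates."""
    rates = [abs(b - a) / a for a, b in zip(values, values[1:])]
    return sum(rates) / len(rates)
```

Called once each for the per-frame speed, projected-area, and posture series, it yields the three means used in S306.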
S305, acquiring the behavior value corresponding to the behavior detection result, wherein the behavior value is negative when the result is that buttons in the car were operated, and positive when they were not.
Illustratively, when the detection result is that the object operated a floor-selection button in the car, the corresponding behavior value is equal to -1, indicating that the object is very likely not a robot; when the object did not operate a floor-selection button, the behavior value is equal to 1, indicating that the object is very likely a robot.
S306, calculating the score that the object is a robot from the moving direction, the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value, wherein the score is positively correlated with the moving direction and the behavior value and negatively correlated with the mean speed change rate, the mean projected-area change rate, and the mean posture change rate.
Illustratively, the score S of the object being a robot may be calculated in a form such as:

S = A + E - (B + C + D)
In the above formula, A is the moving direction, and B, C, D, E are the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value, respectively. In another example, the parameters A, B, C, D, E may be normalized values. Of course, weights may also be set for the parameters A, B, C, D, E. This embodiment does not limit the way the score is calculated.
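The published formula is not reproduced above, so the sketch below is only one linear combination consistent with the stated correlations; the 90-degree normalization of A and the absence of weights are assumptions:

```python
def robot_score(direction_deg, mean_speed_rate, mean_area_rate,
                mean_posture_rate, behavior_value):
    """Score S that the object is a robot: grows with the approach
    angle A (perpendicular approach = 90 degrees) and the behavior
    value E, and shrinks with the three mean change rates B, C, D."""
    a = direction_deg / 90.0  # normalize A to [0, 1]
    return a + behavior_value - (mean_speed_rate + mean_area_rate
                                 + mean_posture_rate)
```

A stable, perpendicular, button-free entry thus scores high, while an erratic entry with a button press scores low.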
S307, determining that the object is a robot when the score is greater than a preset score threshold.
This embodiment may set a score threshold for the object being a robot, determine that the object is a robot when its score S exceeds the threshold, and control the elevator door to close according to the robot-oriented door-control strategy.
S308, after the robot is detected to have entered the car, detecting from the images whether any boarding objects remain in the elevator hall.
In this embodiment, each object in the hall is tracked through the image sequence. Illustratively, an object sequence can be maintained in which each object keeps a position parameter; when the position parameter shows that the object identified as a robot has entered the car, the images can be used to detect whether any boarding objects remain in the hall.
In one example, after the position, face orientation, volume, and moving speed of each object in the hall are obtained from the images, different weights are assigned to them and their weighted sum is taken as the object's behavior score. When the behavior score exceeds a preset value, the object is determined to intend to board, and its position is tracked. Once every object intending to board is inside the car, it is determined that no boarding objects remain in the hall and S310 can be executed; otherwise S309 is executed to keep waiting for all intending objects to enter the car.
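A sketch of the weighted-sum behavior score for boarding intent (the weight values and the normalization of the four features to [0, 1] are illustrative, not from the patent):

```python
def boarding_intent_score(position, face_toward_door, volume, speed,
                          weights=(0.4, 0.3, 0.15, 0.15)):
    """Weighted sum of normalized hall-object features; a score above a
    preset value means the object intends to board."""
    features = (position, face_toward_door, volume, speed)
    return sum(w * f for w, f in zip(weights, features))
```

With the weights summing to 1, the score stays in [0, 1] and the preset intent threshold can be chosen on that scale.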
S309, controlling the elevator door to close after detecting that all boarding objects in the elevator hall have entered the car.
Once all objects that need to ride have entered the car, the door can be closed without waiting out the full door-open dwell time, improving the elevator's operating efficiency.
S310, controlling the elevator door to close.
After the robot and all objects in the hall that need to ride have entered the car, the door can be closed without waiting out the full door-open dwell time, improving the elevator's operating efficiency.
In another optional embodiment, after the robot is detected to have entered the car and the door has closed, the images are used to detect whether any other riding objects are in the car; if not, the target floor called by the robot is obtained and the car is controlled to run non-stop to that floor. Specifically, the area inside the car can be recognized from the images; if no other riding object is present, only the robot is riding, and the car can be driven directly to the floor the robot needs to reach, improving the elevator's operating efficiency.
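The robot-only dispatch decision can be sketched as follows (the object labels and the action-tuple return convention are hypothetical):

```python
def next_action(car_objects, robot_target_floor):
    """If the robot is the only occupant after the door closes, run
    non-stop to its called floor; otherwise serve registered calls
    normally.  Simplified sketch, not the patent's elevator-group logic."""
    if set(car_objects) == {"robot"}:
        return ("run_direct", robot_target_floor)
    return ("serve_calls", None)
```

The controller would feed this decision into its normal dispatch loop once per door-close event.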
In this embodiment, when the elevator door opens, the sensor is controlled to collect multi-frame images of the detection area at a preset frame rate to obtain an image sequence. The image sequence is input into the target detection and tracking model to obtain at least one of the moving direction, moving speed, projected area, and motion posture of an object in the detection area as the object's motion data, and to identify whether buttons in the car are operated after the object enters the car as the object's behavior detection result.
Furthermore, on the one hand, the averaging applies smooth filtering to each change rate, avoiding inaccurate scores caused by outlier change rates, improving the method's resistance to interference, and making the score calculation more accurate. On the other hand, the score is positively correlated with the moving direction and the behavior value and negatively correlated with the mean speed change rate, the mean projected-area change rate, and the mean posture change rate, so it truly reflects the motion and behavior characteristics of a robot taking the elevator, and the score is highly accurate.
Embodiment Three
Fig. 4 is a schematic structural diagram of a robot elevator-taking detection device according to a third embodiment of the present invention. As shown in fig. 4, the robot elevator-taking detection device includes:
the image acquisition module 401 is used for acquiring an image of a detection area through a sensor when an elevator door of the elevator is opened;
a target tracking module 402, configured to perform target detection and tracking on the image to obtain motion data and a behavior detection result of an object in the detection area;
a score calculating module 403, configured to calculate a score that the object is a robot according to the motion data and the behavior detection result;
a robot determining module 404, configured to determine that the object is a robot when the score is greater than a preset score threshold.
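The four modules above form a simple pipeline; a minimal sketch follows, in which the callables, names, and threshold value are placeholders rather than the patent's implementation:

```python
SCORE_THRESHOLD = 0.5  # stands in for the patent's "preset score threshold"

def is_robot(acquire_image, track, compute_score, threshold=SCORE_THRESHOLD):
    """Chain the four modules of Fig. 4: image acquisition -> target
    tracking -> score calculation -> robot determination."""
    image = acquire_image()                    # image acquisition module 401
    motion, behavior = track(image)            # target tracking module 402
    score = compute_score(motion, behavior)    # score calculating module 403
    return score > threshold                   # robot determining module 404
```
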
Optionally, the image acquisition module 401 includes:
an image sequence generating unit, configured to control the sensor, when the elevator door of the elevator opens, to acquire multi-frame images of the detection area at the preset frame rate to obtain an image sequence.
Optionally, the target tracking module 402 comprises:
a model input unit, configured to input the image sequence into a target detection and tracking model, obtain at least one of the moving direction, moving speed, projected area, and running posture of the object in the detection area as the object's motion data, and identify whether the object operates a button inside the car after entering it as the object's behavior detection result.
Optionally, the motion data includes at least one of the moving direction, moving speed, projected area on the bottom of the car, and running posture of the object in each frame of image, the behavior detection result includes whether the object operates a button inside the car, and the score calculating module 403 includes:
a change rate calculation unit, configured to calculate the change rates of the moving speed, the projected area, and the running posture of the object between two adjacent frames of images, obtaining a speed change rate, a projected-area change rate, and a posture change rate;
a mean value calculating unit, configured to calculate the means of the speed change rate, the projected-area change rate, and the posture change rate, obtaining a mean speed change rate, a mean projected-area change rate, and a mean posture change rate;
a behavior value acquisition unit, configured to acquire the behavior value corresponding to the behavior detection result, wherein the behavior value is negative when the detection result is that a button inside the car was operated, and positive when the detection result is that no button was operated;
and a score calculating unit, configured to calculate the score that the object is a robot from the moving direction, the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value, wherein the score is positively correlated with the moving direction and negatively correlated with the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value.
Optionally, the change rate calculating unit includes:
a height difference calculating subunit, configured to calculate the difference between the heights of the object detected in two adjacent frames of images;
a height change rate calculating subunit, configured to calculate the ratio of the height difference to the height of the object detected in the earlier of the two frames, obtaining the height change rate;
a contour difference calculating subunit, configured to calculate the difference between the outer contours of the object detected in the two adjacent frames;
a contour change rate calculating subunit, configured to calculate the ratio of the contour difference to the outer contour of the object detected in the earlier of the two frames, obtaining the outer contour change rate;
and a posture change rate calculating subunit, configured to calculate the mean of the height change rate and the outer contour change rate, obtaining the posture change rate of the object between the two adjacent frames.
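The three subunit computations above reduce to a short function. Each frame is represented here as a (height, outer contour) pair; describing the outer contour by a single scalar (e.g. a perimeter) is an illustrative assumption:

```python
def posture_change_rate(prev_frame, curr_frame):
    """Posture change rate between two adjacent frames, following the
    subunits above: the mean of the height change rate and the outer
    contour change rate."""
    prev_h, prev_c = prev_frame
    curr_h, curr_c = curr_frame
    height_rate = abs(curr_h - prev_h) / prev_h    # height difference / earlier height
    contour_rate = abs(curr_c - prev_c) / prev_c   # contour difference / earlier contour
    return (height_rate + contour_rate) / 2        # mean of the two rates
```
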
Optionally, the device further comprises:
a waiting hall passenger detection module, configured to detect from the image, after the robot is detected to have entered the car, whether any passenger remains in the elevator waiting hall;
a first elevator door control module, configured to, if so, control the elevator door to close after all passengers in the waiting hall are detected to have entered the car;
and a second elevator door control module, configured to, if not, control the elevator door to close.
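The door-closing branch above amounts to a small decision rule, sketched below with hypothetical names (`hall_passengers` being the passengers still detected in the waiting hall):

```python
def should_close_door(robot_in_car, hall_passengers, all_entered):
    """Decide whether to close the elevator door after the robot boards.

    robot_in_car: robot detected inside the car.
    hall_passengers: passengers still detected in the waiting hall.
    all_entered: whether all waiting passengers have since boarded.
    Names and signature are illustrative, not from the patent.
    """
    if not robot_in_car:
        return False
    if hall_passengers:        # passengers still waiting in the hall
        return all_entered     # close only once everyone has boarded
    return True                # hall empty: close immediately
```
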
Optionally, the device further comprises:
an in-car passenger detection module, configured to detect from the image, after the robot is detected to have entered the car, whether any other passenger is present in the car;
a target floor acquisition module, configured to acquire the target floor called by the robot;
and a non-stop run control module, configured to control the car to run non-stop to the target floor.
The robot elevator-taking detection device provided by this embodiment of the present invention can execute the robot elevator-taking detection method provided by the first and second embodiments of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Embodiment Four
FIG. 5 illustrates a schematic diagram of an electronic device 50 that may be used to implement an embodiment of the invention. The electronic device 50 is intended to represent various forms of digital computers, such as desktop computers, workstations, servers, blade servers, mainframe computers, and the like. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 50 includes at least one processor 51 and a memory communicatively connected to the at least one processor 51, such as a read-only memory (ROM) 52 and a random access memory (RAM) 53. The memory stores a computer program executable by the at least one processor, and the processor 51 can perform various suitable actions and processes according to the computer program stored in the ROM 52 or loaded from the storage unit 58 into the RAM 53. The RAM 53 can also store the programs and data needed for the operation of the electronic device 50. The processor 51, the ROM 52, and the RAM 53 are connected to one another via a bus 54, to which an input/output (I/O) interface 55 is also connected.
A plurality of components in the electronic apparatus 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard, a mouse, a sensor, and the like; an output unit 57 such as various types of displays, speakers, and the like; a storage unit 58 such as a magnetic disk, an optical disk, or the like; and a communication unit 59 such as a network card, modem, wireless communication transceiver, etc. The communication unit 59 allows the electronic device 50 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 51 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 51 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 51 performs the various methods and processes described above, such as the robot ride detection method.
In some embodiments, the robot ride detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into the RAM 53 and executed by the processor 51, one or more steps of the robot boarding detection method described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform the robot ride detection method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Computer programs for implementing the methods of the present invention can be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host: a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A robot elevator riding detection method is characterized by comprising the following steps:
acquiring an image of a detection area through a sensor when an elevator door of an elevator is opened;
performing target detection and tracking on the image to obtain motion data and a behavior detection result of an object in the detection area;
calculating a score that the object is a robot according to the motion data and the behavior detection result;
and determining that the object is a robot when the score is larger than a preset score threshold value.
2. The method of claim 1, wherein the acquiring an image of a detection area through a sensor when an elevator door of the elevator is opened comprises:
when the elevator door of the elevator is opened, the control sensor collects multi-frame images for the detection area according to a preset frame rate to obtain an image sequence.
3. The method of claim 1, wherein the performing target detection and tracking on the image to obtain motion data and a behavior detection result of the object in the detection area comprises:
inputting the image sequence into a target detection and tracking model, obtaining at least one of the moving direction, moving speed, projected area, and running posture of the object in the detection area as the motion data of the object, and identifying whether the object operates a button inside the car after entering the car as the behavior detection result of the object.
4. The method of any one of claims 1-3, wherein the motion data comprises at least one of a moving direction, a moving speed, a projected area on the bottom of the car, and a running posture of the object in each frame of image, the behavior detection result comprises whether the object operates a button inside the car, and the calculating a score that the object is a robot according to the motion data and the behavior detection result comprises:
calculating the change rates of the moving speed, the projected area, and the running posture of the object between two adjacent frames of images to obtain a speed change rate, a projected-area change rate, and a posture change rate;
calculating the means of the speed change rate, the projected-area change rate, and the posture change rate, respectively, to obtain a mean speed change rate, a mean projected-area change rate, and a mean posture change rate;
acquiring a behavior value corresponding to the behavior detection result, wherein the behavior value is negative when the detection result is that a button inside the car was operated, and positive when the detection result is that no button inside the car was operated;
and calculating the score that the object is a robot by using the moving direction, the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value, wherein the score is positively correlated with the moving direction and negatively correlated with the mean speed change rate, the mean projected-area change rate, the mean posture change rate, and the behavior value.
5. The method of claim 4, wherein the running posture comprises a height of the object and an outer contour of the object, and calculating the posture change rate comprises:
calculating the difference between the heights of the object detected in two adjacent frames of images;
calculating the ratio of the height difference to the height of the object detected in the earlier of the two frames to obtain a height change rate;
calculating the difference between the outer contours of the object detected in the two adjacent frames;
calculating the ratio of the contour difference to the outer contour of the object detected in the earlier of the two frames to obtain an outer contour change rate;
and calculating the mean of the height change rate and the outer contour change rate to obtain the posture change rate of the object between the two adjacent frames.
6. The method of any one of claims 1-3, further comprising:
after the robot is detected to have entered the car, detecting from the image whether any passenger remains in the elevator waiting hall;
if so, controlling the elevator door to close after all passengers in the waiting hall are detected to have entered the car;
and if not, controlling the elevator door to close.
7. The method of claim 6, further comprising:
after the robot is detected to have entered the car, detecting from the image whether any other passenger is present in the car;
if not, acquiring a target floor called by the robot;
and controlling the car to run non-stop to the target floor.
8. A robot elevator-taking detection device, characterized by comprising:
the image acquisition module is used for acquiring an image of the detection area through the sensor when the elevator door of the elevator is opened;
the target tracking module is used for performing target detection and tracking on the image to obtain motion data and a behavior detection result of an object in the detection area;
the score calculating module is used for calculating a score that the object is a robot according to the motion data and the behavior detection result;
and the robot determining module is used for determining that the object is a robot when the score is greater than a preset score threshold value.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the robot ride detection method of any one of claims 1-7.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores computer instructions that, when executed, cause a processor to implement the robot elevator-taking detection method according to any one of claims 1-7.
CN202211627749.XA 2022-12-16 2022-12-16 Robot elevator taking detection method and device, electronic equipment and storage medium Pending CN115953732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211627749.XA CN115953732A (en) 2022-12-16 2022-12-16 Robot elevator taking detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115953732A 2023-04-11

Family

ID=87296497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211627749.XA Pending CN115953732A (en) 2022-12-16 2022-12-16 Robot elevator taking detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115953732A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination