CN115998927A - Intelligent killing method and robot system for indoor scene epidemic risk monitoring


Info

Publication number
CN115998927A
Authority
CN
China
Prior art keywords
information
risk
disinfection
risk area
killing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211553636.XA
Other languages
Chinese (zh)
Inventor
王向伟
沙建军
高继鑫
彭锐晖
吕永胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Original Assignee
Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd filed Critical Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Priority to CN202211553636.XA priority Critical patent/CN115998927A/en
Publication of CN115998927A publication Critical patent/CN115998927A/en
Pending legal-status Critical Current

Abstract

The invention discloses an intelligent disinfection method and robot system for indoor scene epidemic risk monitoring, belonging to the technical field of intelligent robots, and aims to solve the problems that mainstream disinfection robots have a single function, lack targeted disinfection and are insufficiently automated. The main control computing platform realizes risk area monitoring, autonomous environment map construction and path planning through the risk area monitoring module, the risk area positioning and mapping module and the risk area path planning module; the AGV mobile platform receives the planned path and travels to the target point, while the disinfection execution unit carries out the disinfection operation. The method monitors the risk area in real time, accurately and autonomously disinfects the tracks of key personnel, and combines risk area monitoring, assessment and disinfection, so that disinfection efficiency and the epidemic prevention effect are greatly improved.

Description

Intelligent killing method and robot system for indoor scene epidemic risk monitoring
Technical Field
The invention discloses an intelligent killing method and a robot system for indoor scene epidemic risk monitoring, and belongs to the technical field of intelligent robots.
Background
To date, various new types of disinfection robots have shown a significant effect in disinfecting virus-contaminated areas. However, current mainstream disinfection robots often suffer from the following disadvantages: 1) the disinfection function is single: risk area monitoring, assessment and disinfection are not combined, so the spread of an epidemic cannot be effectively suppressed; 2) there is no targeted disinfection: during disinfection the robot mechanically follows a preset path, treating all areas alike without focusing on key targets or areas; 3) the degree of automation is low: the map is built under manual control and the disinfection path is set manually, so autonomous planning and operation cannot be achieved.
Disclosure of Invention
The invention discloses an intelligent disinfection method and robot system for indoor scene epidemic risk monitoring, which solve the prior-art problems of a single disinfection function, a lack of targeted disinfection and an insufficient degree of automation in disinfection robots.
The intelligent disinfection robot system comprises an epidemic risk monitoring module, a risk area positioning and mapping module, a risk area path planning module and a disinfection execution unit. The epidemic risk monitoring module detects, through fused optical and infrared imaging, personnel in the detection area who are not wearing masks or whose body temperature is abnormal. The risk area positioning and mapping module acquires the track information of key personnel and the environment information of the risk area by fusing the optical image with the depth image, and simultaneously performs real-time three-dimensional mapping of the risk area, realizing autonomous mapping. The risk area path planning module enables the disinfection robot to navigate autonomously to a designated position in the risk area while avoiding obstacles, so that the key-personnel tracks in the risk area can be disinfected. Once the disinfection robot starts navigating, the disinfection execution unit is started synchronously and carries out the disinfection operation, and whether a warning is issued depends on the presence of personnel with abnormal body temperature.
The hardware of the epidemic risk monitoring module comprises a high-definition infrared camera and a depth camera mounted on the AGV mobile platform. The module uses the infrared camera and the depth camera for image acquisition and processing: the optical image obtained by the depth camera is input into a YOLOv4-tiny model; when a person passes through the risk area, the mask-wearing condition of the person in the image is detected and face anchor-frame information is obtained, while the real-time infrared image is captured synchronously; the face region of the optical image is mapped to the face region of the infrared image by a coordinate-system transformation, the highest temperature in that region is obtained, and the result is stored. If the person is not wearing a mask or the body temperature is higher than a certain threshold, the person is regarded as a key person.
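As a minimal illustrative sketch (not the disclosed implementation), the key-person decision described above might look as follows in Python. The detection format, the optical-to-infrared box mapping by simple rescaling, the gray-to-temperature conversion callable and the 37.3 °C threshold are assumptions: the patent only specifies "a certain threshold" and a calibrated coordinate-system transformation.

```python
import numpy as np

FEVER_THRESHOLD_C = 37.3  # assumed value; the patent only says "a certain threshold"

def classify_key_persons(optical_bgr, infrared_gray, detections, gray_to_celsius):
    """Flag detected persons as 'key persons' if they wear no mask or show fever.

    detections: list of dicts like {"label": "face_no_mask", "box": (x, y, w, h)}
                produced by a YOLOv4-tiny style detector on the optical image.
    gray_to_celsius: callable mapping an infrared gray value to a temperature in Celsius.
    """
    key_persons = []
    for det in detections:
        x, y, w, h = det["box"]
        # Map the optical face box into infrared image coordinates.
        # A plain rescale is assumed here; the patent uses a calibrated
        # coordinate-system transform between the two cameras.
        sx = infrared_gray.shape[1] / optical_bgr.shape[1]
        sy = infrared_gray.shape[0] / optical_bgr.shape[0]
        ir_roi = infrared_gray[int(y * sy):int((y + h) * sy),
                               int(x * sx):int((x + w) * sx)]
        if ir_roi.size == 0:
            continue
        temp_c = gray_to_celsius(ir_roi.max())   # highest temperature in the face region
        no_mask = det["label"] == "face_no_mask"
        fever = temp_c > FEVER_THRESHOLD_C
        if no_mask or fever:
            key_persons.append({"box": det["box"], "temp_c": temp_c,
                                "no_mask": no_mask, "fever": fever})
    return key_persons
```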
The risk area positioning and mapping module uses an improved ORB-SLAM2 algorithm. It first performs feature extraction, feature matching and pose estimation on the depth image and the optical image, achieving accurate positioning of the disinfection robot in the risk area. At the same time it filters, segments and clusters the point cloud information of the depth image, back-projects each point-cloud cluster in turn onto the two-dimensional plane and matches it with a detected two-dimensional object, obtains the center-point coordinates and type information of the detected object from the point cloud, and stores the key-personnel track information. Obstacle information from the environment is added to a local map, realizing real-time three-dimensional mapping of the risk area. Finally, the processed key-personnel track information and obstacle information are sent through the ROS system and the simulation platform to the risk area path planning module for subsequent processing.
The center-point coordinates of the detected objects form the track information of key personnel, and the type information of the detected objects comprises obstacle information, information on personnel not wearing masks, and information on personnel with abnormal body temperature.
The risk area path planning module, based on the A* path search algorithm, periodically receives the object information sent by the positioning and mapping module and stores the tracks of different persons separately. When the track points reach a certain number, the obstacle information is added to the navigation framework and the track points of key personnel are taken in turn as target points for path search in the risk area, so that the robot can pass the obstacles and reach the target points in the risk area, executing the corresponding disinfection measures according to the type of key person, while the system is visualized. After all key-personnel track points have been disinfected, the robot returns to the initial position, adjusts its posture, continues to monitor the risk area and waits for the next task.
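As an illustrative sketch of an A*-style search on a two-dimensional occupancy grid (the patent does not disclose its grid representation or heuristic; 4-connectivity and a Manhattan heuristic are assumptions):

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free, 1 = obstacle).

    start, goal: (row, col) tuples. Returns a list of cells from start to goal, or None.
    """
    def h(a, b):                      # Manhattan distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:         # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:              # reconstruct the path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0):
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng,
                                              (nr, nc), node))
    return None                       # no path through the obstacles
```

In the system described above, obstacle cells would come from the local map built by the positioning and mapping module, and each key-personnel track point would be used in turn as the goal.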
The disinfection execution unit is arranged on the AGV mobile platform and moves together with the disinfection robot; after personnel have passed, it performs ultraviolet-lamp disinfection and alcohol-atomization disinfection on the personnel track points.
An intelligent killing method for indoor scene epidemic risk monitoring, which uses an intelligent killing robot system for indoor scene epidemic risk monitoring, comprises the following steps:
s1: the infrared image and the optical image are fused to realize risk area monitoring;
s2: automatically identifying a risk area according to the surrounding environment, and positioning and mapping;
s3: receiving three-dimensional object information and judging type information of a detected object;
s4: and the disinfection execution and risk assessment are carried out, the AGV mobile platform receives a planning path sent by the main control computing platform, the planning path is driven to a target area, the alcohol spraying and ultraviolet lamps are started to carry out disinfection operation when the AGV mobile platform is started, and meanwhile, whether a warning is started or not is selected according to the existence of abnormal body temperature personnel in key personnel.
S1 comprises the following steps:
s1.1: determining targets to be identified and detected, including pedestrians, masks, tables and chairs, and making and marking data sets according to the targets;
s1.2: inputting the data set into a YOLOv4-tiny model for training, and carrying out data enhancement, parameter adjustment and improvement to enable the model to achieve a good detection effect and obtain trained model parameters;
s1.3: loading the trained model into a program, carrying out real-time target detection on an optical image, surrounding a detected target by utilizing a two-dimensional anchor frame, displaying object names and confidence information, and then storing the detection result of the optical image for matching with an infrared image so as to judge mask wearing conditions and abnormal body temperature conditions of personnel in a risk area;
s1.4: and loading an infrared image, matching the infrared image with a face region of an optical image, extracting the highest gray value of the face region of the infrared image in real time, converting the highest gray value into a temperature value, storing the temperature data, and finally carrying out visual display by using OpenCV.
S2 comprises the following steps:
s2.1: after the optical image and the depth image are aligned in time, ORB characteristic points are extracted from the optical image and are matched with each other, and pose estimation is achieved together with point cloud information of the depth image;
s2.2: and (3) preserving the point cloud of the depth map, performing plane segmentation, object segmentation and object clustering, and preserving the point cloud cluster after the depth image processing.
S2.3: the processed point cloud is back projected to two dimensions, matching is carried out on the processed point cloud and the detected object in sequence according to the distance difference value of the center point, a type label of the three-dimensional object is obtained, and meanwhile, the coordinates of the center point and the object type information of the three-dimensional object are stored;
s2.4: and sending the object information to the risk area path planning module through the ROS system and the simulation platform.
S3 comprises the following steps:
s3.1: receiving three-dimensional object information and judging object type information;
s3.2: if the object is an obstacle, storing the object into an obstacle container, loading the coordinates of the central point of the obstacle into a map, performing visual display, and continuously receiving the detected three-dimensional object information;
s3.3: if the object is a person without a mask or a person with abnormal body temperature, judging whether the coordinates of the center point of the object are similar to those of the center point of the previous object, and judging whether the object is added to the target point container according to the distance between the coordinates of the center point;
s3.4: when the number of objects in the target point container reaches a certain threshold, the object information in the target point container is exported in order, path search is performed for each target point in turn and an AGV movement command is issued (a sketch of this container logic follows this list);
s3.5: while s3.4 is being executed, whether the alarm mode is started is decided according to whether the target point type is a person with abnormal body temperature; if no target point corresponds to a person with abnormal body temperature, only the disinfection execution mode is started;
s3.6: visual display is carried out on the ROS system and the simulation platform: cubes of different colors are drawn at the object center points and labeled with text, with persons without masks shown in yellow and persons with abnormal body temperature in red;
s3.7: when all the target points in the container are executed, the initial position is used as the target point, and a moving command is issued to the AGV moving platform, so that the intelligent killing robot returns to the initial position and enters a waiting task state.
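As an illustrative sketch of the target point container logic in s3.2–s3.5, using the 1 m spacing and 5-point threshold stated in the embodiment below. The class-label strings, dispatch_to_agv and trigger_alarm are hypothetical placeholders standing in for the AGV movement command and the alarm.

```python
import math

MIN_POINT_SPACING_M = 1.0   # from the embodiment: closer points are treated as the same place
DISPATCH_THRESHOLD = 5      # from the embodiment: plan once 5 target points are queued

class TargetPointContainer:
    """Collect key-person track points and dispatch disinfection goals."""

    def __init__(self, dispatch_to_agv, trigger_alarm):
        self.obstacles, self.targets = [], []
        self.dispatch_to_agv = dispatch_to_agv   # hypothetical callback: send one AGV goal
        self.trigger_alarm = trigger_alarm       # hypothetical callback: sound the alarm

    def add_object(self, center_xyz, label):
        if label == "obstacle":
            self.obstacles.append(center_xyz)    # added to the map for navigation
            return
        # Only unmasked or feverish persons become disinfection targets.
        if label not in ("face_no_mask", "abnormal_temperature"):
            return
        if any(math.dist(center_xyz, t["xyz"]) < MIN_POINT_SPACING_M
               for t in self.targets):
            return                               # same place as a previous track point
        self.targets.append({"xyz": center_xyz, "label": label})
        if len(self.targets) >= DISPATCH_THRESHOLD:
            self._dispatch()

    def _dispatch(self):
        if any(t["label"] == "abnormal_temperature" for t in self.targets):
            self.trigger_alarm()                 # alarm mode plus disinfection
        for t in self.targets:                   # visit the track points in order
            self.dispatch_to_agv(t["xyz"])
        self.targets.clear()
```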
The beneficial effects of the invention are as follows: the risk monitoring function is more comprehensive, since body temperature and mask wearing in the risk area are monitored continuously while the risk area is being disinfected, so that epidemic prevention and disinfection are combined; the problem of untargeted disinfection is solved, since the system disinfects the tracks of key personnel in the risk area precisely, markedly improving disinfection efficiency; and the problem of insufficient automation is overcome, since the robot does not need to be controlled manually to build the environment map or to set the disinfection path, and risk area monitoring and disinfection run fully autonomously. The specific advantages are as follows:
1) The system can detect in real time the mask-wearing condition and body temperature of personnel in the scene, and classifies personnel without masks or with abnormal body temperature as key personnel so that the corresponding disinfection strategy can be applied;
2) The tracks of key personnel are tracked and disinfected. The system disinfects key-personnel tracks accurately and promptly, controlling the spread of an epidemic in time and improving both the epidemic-prevention effect and the disinfection efficiency;
3) Risk assessment is performed on the scene. When a person with abnormal body temperature appears in the scene, the system detects this promptly, raises an alarm and disinfects the person's track, achieving regional risk assessment;
4) The system has a high degree of automation. The whole process requires no manual control of the robot to build a map and no manually set disinfection path; monitoring, mapping, path planning, disinfection of key-personnel tracks and return to the initial position are carried out without manual intervention.
Drawings
Fig. 1 is a system configuration diagram of an embodiment of the present invention.
FIG. 2 is a system schematic flow chart of an embodiment of the present invention.
FIG. 3 is a schematic flow chart of an epidemic risk monitoring module according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart of a risk area locating and mapping module according to an embodiment of the present invention.
Fig. 5 is a schematic flow chart of a risk area path planning module according to an embodiment of the present invention.
Detailed Description
The invention is further described in connection with the following detailed description.
The intelligent disinfection robot system for indoor scene epidemic risk monitoring comprises an epidemic risk monitoring module, a risk area positioning and mapping module, a risk area path planning module and a disinfection execution unit. The epidemic risk monitoring module detects, through fused optical and infrared imaging, personnel in the detection area who are not wearing masks or whose body temperature is abnormal. The risk area positioning and mapping module acquires the track information of key personnel and the environment information of the risk area by fusing the optical image with the depth image, and simultaneously performs real-time three-dimensional mapping of the risk area, realizing autonomous mapping. The risk area path planning module uses the key-personnel track information and the local three-dimensional map constructed in real time to let the disinfection robot navigate autonomously to the designated position in the risk area while avoiding obstacles. The disinfection execution unit realizes alarm and disinfection at the key track points in the risk area; once the disinfection robot starts navigating, the disinfection execution unit is started synchronously to execute the disinfection operation.
The epidemic risk monitoring module is shown in fig. 3. Its hardware comprises a high-definition infrared camera and a depth camera mounted on the AGV mobile platform, and the module uses these cameras for image acquisition and processing: the optical image obtained by the depth camera is input into a YOLOv4-tiny model; when a person passes through the risk area, the mask-wearing condition of the person in the image is detected and face anchor-frame information is obtained; the real-time infrared image is then captured, the face region of the optical image is mapped to the face region of the infrared image by a coordinate-system transformation, the highest temperature in that region is obtained, and the result is stored. If the person is not wearing a mask or the body temperature is higher than a certain threshold, the person is regarded as a key person.
The risk area positioning and mapping module is shown in fig. 4. It uses an improved ORB-SLAM2 algorithm: feature extraction, feature matching and pose estimation are performed on the depth image and the optical image, achieving accurate positioning of the disinfection robot in the risk area; at the same time, the point cloud information of the depth image is filtered, segmented and clustered, each point cloud is back-projected in turn onto the two-dimensional plane and matched with a detected two-dimensional object, the center-point coordinates and type information of the detected object are obtained from the point cloud, and the key-personnel track information is stored; obstacle information from the environment is added to a local map, realizing real-time three-dimensional mapping of the risk area; finally, the processed key-personnel track information and obstacle information are sent through the ROS system and the simulation platform to the risk area path planning module for subsequent processing.
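As an illustrative aid only (not the disclosed improved ORB-SLAM2 implementation), the following sketch shows the front-end step of extracting and matching ORB features between two consecutive optical frames with OpenCV; the feature count and the brute-force Hamming matcher are assumptions, and the full pipeline (depth fusion, pose estimation, loop closing, map management) is not reproduced.

```python
import cv2

def match_orb_features(prev_gray, curr_gray, n_features=1000):
    """Extract and match ORB features between two consecutive grayscale optical frames.

    Returns matched keypoint coordinate pairs, which a SLAM back end would combine
    with the depth-image point cloud for pose estimation.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []                                 # no features found in one of the frames
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
```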
The center-point coordinates of the detected objects form the track information of key personnel, and the type information of the detected objects comprises obstacle information, information on personnel not wearing masks, and information on personnel with abnormal body temperature.
The risk area path planning module is shown in fig. 5. Based on the A* path search algorithm, it periodically receives the object information sent by the positioning and mapping module, stores the tracks of different persons separately, adds the obstacle information to the navigation framework when the track points reach a certain number, and takes the track points of key personnel in turn as target points for path search in the risk area, so that the robot can pass the obstacles and reach the target points in the risk area, executing the corresponding disinfection measures according to the type of key person while the system is visualized.
The disinfection execution unit is arranged on the AGV mobile platform and moves together with the disinfection robot; after personnel have passed, it performs ultraviolet-lamp disinfection and alcohol-atomization disinfection on the personnel track points.
An intelligent killing method for indoor scene epidemic risk monitoring, which uses an intelligent killing robot system for indoor scene epidemic risk monitoring, comprises the following steps:
s1: the infrared image and the optical image are fused to realize risk area monitoring;
s2: positioning and mapping are carried out from the main risk area according to the surrounding environment;
s3: receiving three-dimensional object information and judging type information of a detected object;
s4: and the disinfection execution and risk assessment are carried out, the AGV mobile platform receives a planning path sent by the main control computing platform, the planning path is driven to a target area, the alcohol spraying and ultraviolet lamps are started to carry out disinfection operation when the AGV mobile platform is started, and meanwhile, whether a warning is started or not is selected according to the existence of abnormal body temperature personnel in key personnel.
S1 comprises the following steps:
s1.1: determining targets to be identified and detected, including pedestrians, masks, tables and chairs, and making and marking data sets according to the targets;
s1.2: inputting the data set into a YOLOv4-tiny model for training, and carrying out data enhancement, parameter adjustment and improvement to enable the model to achieve a good detection effect and obtain trained model parameters;
s1.3: loading the trained model into a program, carrying out real-time target detection on an optical image, surrounding a detected target by utilizing a two-dimensional anchor frame, displaying object names and confidence information, and then storing the detection result of the optical image for matching with an infrared image so as to judge mask wearing conditions and abnormal body temperature conditions of personnel in a risk area;
s1.4: and loading an infrared image, matching the infrared image with a face region of an optical image, extracting the highest gray value of the face region of the infrared image in real time, converting the highest gray value into a temperature value, storing the temperature data, and finally carrying out visual display by using OpenCV.
S2 comprises the following steps:
s2.1: after the optical image and the depth image are aligned in time, ORB characteristic points are extracted from the optical image and are matched with each other, and pose estimation is achieved together with point cloud information of the depth image;
s2.2: and (3) preserving the point cloud of the depth map, performing plane segmentation, object segmentation and object clustering, and preserving the point cloud cluster after the depth image processing.
S2.3: the processed point cloud is back projected to two dimensions, matching is carried out on the processed point cloud and the detected object in sequence according to the distance difference value of the center point, a type label of the three-dimensional object is obtained, and meanwhile, the coordinates of the center point and the object type information of the three-dimensional object are stored;
s2.4: and sending the object information to the risk area path planning module through the ROS system and the simulation platform.
S3 comprises the following steps:
s3.1: receiving three-dimensional object information and judging object type information;
s3.2: if the object is an obstacle, storing the object into an obstacle container, loading the coordinates of the central point of the obstacle into a map, performing visual display, and continuously receiving the detected three-dimensional object information;
s3.3: if the object is a person without a mask or a person with abnormal body temperature, judging whether the coordinates of the center point of the object are similar to those of the center point of the previous object, and judging whether the object is added to the target point container according to the distance between the coordinates of the center point;
s3.4: when the number of objects in the target point container reaches a certain threshold, the object information in the target point container is exported in order, path search is performed for each target point in turn and an AGV movement command is issued;
s3.5: while s3.4 is being executed, whether the alarm mode is started is decided according to whether the target point type is a person with abnormal body temperature; if no target point corresponds to a person with abnormal body temperature, only the disinfection execution mode is started;
s3.6: visual display is carried out on the ROS system and the simulation platform: cubes of different colors are drawn at the object center points and labeled with text, with persons without masks shown in yellow and persons with abnormal body temperature in red;
s3.7: when all the target points in the container are executed, the initial position is used as the target point, and a moving command is issued to the AGV moving platform, so that the intelligent killing robot returns to the initial position and enters a waiting task state.
As shown in FIG. 1, the epidemic disinfection robot system consists of a main control computing platform, an ROS system, a simulation platform, an AGV mobile platform, a disinfection execution unit, external sensors and an alarm. The main control computing platform is an embedded computer or microcomputer mounted on top of the AGV mobile platform and is responsible for the computation and functions of the core modules, namely the epidemic risk monitoring module, the risk area positioning and mapping module and the risk area path planning module. After the image data acquired by the external sensors have been processed, the risk area is monitored, key personnel are identified, the key-personnel tracks and surrounding-environment information are obtained, and finally the disinfection path for the AGV is output so that the AGV moves to the target point.
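As an illustrative sketch only, the following shows one way the main control computing platform could hand a planned target point to the AGV mobile platform over ROS. The /move_base_simple/goal topic, the map frame and the use of a move_base-style navigation stack are assumptions; the patent only states that the planned path is sent over the ROS system.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def send_goal(x, y, frame_id="map"):
    """Publish one navigation goal for the AGV platform (move_base-style topic assumed)."""
    pub = rospy.Publisher("/move_base_simple/goal", PoseStamped,
                          queue_size=1, latch=True)
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = frame_id
    goal.pose.position.x = x
    goal.pose.position.y = y
    goal.pose.orientation.w = 1.0     # identity orientation; posture adjusted separately
    pub.publish(goal)

if __name__ == "__main__":
    rospy.init_node("disinfection_goal_sender")
    send_goal(2.5, 1.0)               # example target point in the map frame
    rospy.sleep(1.0)                  # give the latched message time to be delivered
```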
The specific hardware environment of the main control computing platform is an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80 GHz processor, 8 GB of memory and a 1 TB hard disk; the operating system is Linux, specifically Ubuntu 18.04, and the runtime environment comprises OpenCV 3.4.10, PCL 1.8.1, Pangolin and PyTorch 1.9.1. The ROS system and the simulation platform implement communication and simulation between the main control computing platform and its internal modules, the AGV mobile platform and the external sensors; specifically, ROS Melodic is used. The AGV mobile platform is the moving chassis of the intelligent disinfection robot: it receives the planned path output by the main control computing platform and drives to the target point along that path. The AGV mobile platform is a four-wheel differential AGV chassis, specifically an Agilex Robotics SCOUT MINI.
The disinfection execution unit is the disinfection actuator of the intelligent disinfection robot and performs the disinfection operation on the tracks of key personnel: after the AGV mobile platform reaches the risk area, the unit receives a disinfection instruction, switches on automatically and executes the disinfection operation until the disinfection of the risk area is complete, after which the robot returns to the initial position. The disinfection execution unit combines an ultraviolet irradiation lamp and an alcohol sprayer, specifically a Panasonic SJD3603 and an AUX AJ-H811.
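The patent names the commercial UV lamp and alcohol sprayer but does not describe how they are switched; the sketch below therefore assumes a hypothetical serial-controlled relay board purely for illustration. The port name, the one-byte command protocol and the dwell time are all invented placeholders.

```python
import time
import serial  # pyserial

# Hypothetical relay-board protocol: one command byte per channel action.
UV_ON, UV_OFF = b"\x01", b"\x02"
SPRAY_ON, SPRAY_OFF = b"\x03", b"\x04"

def disinfect_track_point(port="/dev/ttyUSB0", duration_s=30):
    """Switch the UV lamp and alcohol sprayer on at one track point, then off again."""
    with serial.Serial(port, 9600, timeout=1) as relay:
        relay.write(UV_ON)
        relay.write(SPRAY_ON)
        time.sleep(duration_s)        # dwell time at the track point (assumed)
        relay.write(UV_OFF)
        relay.write(SPRAY_OFF)
```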
The external sensors form the sensing module of the intelligent disinfection robot. They are mounted at the front of the AGV mobile platform, acquire the infrared, optical and depth images of the indoor environment, and communicate with the main control computing platform through the ROS system for further data processing. The external sensor is an infrared imaging sensor, an optical imaging sensor or a 3D imaging sensor, specifically an XCORE FT infrared camera and an Intel RealSense D435i depth camera (which acquires the optical image and the depth image synchronously).
In S2.2, a plane is required to contain at least 10000 cloud points; cloud points more than 0.02 m apart are regarded as belonging to different objects, and each object must contain at least 250 cloud points. In S3.3, if the distance between the center-point coordinates of two objects is less than 1 m, they are regarded as the same object at the same place and the information is not stored; if it is greater than 1 m, the center-point coordinates and the object type are stored in the target point container. In S3.4, when the number of target points reaches 5, the object information is exported.
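For reference, the numeric thresholds stated in this embodiment can be gathered into a single configuration mapping; only the values given above are included, and the key names are illustrative.

```python
# Thresholds from the embodiment above (units noted per entry).
SEGMENTATION_PARAMS = {
    "min_plane_points": 10000,       # a plane must contain at least this many cloud points
    "cluster_distance_m": 0.02,      # points farther apart belong to different objects
    "min_object_points": 250,        # minimum cloud points per object cluster
    "duplicate_target_dist_m": 1.0,  # closer center points count as the same place
    "dispatch_target_count": 5,      # export and plan once this many target points exist
}
```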
As shown in fig. 2, the principle flow of the system is as follows: first, data of the risk area are collected by the external sensors; then risk area monitoring, risk area positioning and mapping, and risk area path planning are performed by the main control computing platform, which outputs the surrounding-environment map and the planned path points; next, the AGV mobile platform receives the target points and moves to the target area while starting the disinfection operation and executing the corresponding disinfection strategy; finally, after all key-personnel tracks have been disinfected, the robot returns to the initial position, continues to monitor the risk area and waits for the next task.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed; rather, modifications, adaptations, additions and alternatives falling within the spirit and scope of the invention are intended to be covered.

Claims (10)

1. The intelligent disinfection robot system for indoor scene epidemic risk monitoring is characterized by comprising an epidemic risk monitoring module, a risk area positioning and mapping module, a risk area path planning module and a disinfection execution unit, wherein the epidemic risk monitoring module detects, through fused optical and infrared imaging, personnel in the detection area who are not wearing masks or whose body temperature is abnormal; the risk area positioning and mapping module acquires track information of key personnel and environment information of the risk area by fusing the optical image with the depth image, simultaneously carries out real-time three-dimensional mapping of the risk area and realizes autonomous mapping of the risk area; the risk area path planning module uses the key-personnel track information and the local three-dimensional map constructed in real time to make the disinfection robot navigate autonomously to a designated position in the risk area while avoiding obstacles; and the disinfection execution unit realizes alarm and disinfection at the key track points in the risk area, and when the disinfection robot starts navigating, the disinfection execution unit is started synchronously and executes the disinfection operation.
2. The intelligent disinfection robot system for indoor scene epidemic risk monitoring according to claim 1, wherein the hardware of the epidemic risk monitoring module comprises a high-definition infrared camera and a depth camera mounted on an AGV mobile platform, the epidemic risk monitoring module performs image acquisition and processing using the infrared camera and the depth camera, the optical image obtained by the depth camera is input into a YOLOv4-tiny model, when a person passes through the risk area the mask-wearing condition of the person in the image is detected and face anchor-frame information is obtained, the real-time infrared image is captured synchronously, the face region of the optical image is mapped to the face region of the infrared image by a coordinate-system transformation, the highest temperature in that region is obtained and the result is stored, and if the person is not wearing a mask or the body temperature is higher than a certain threshold, the person is regarded as a key person.
3. The intelligent killing robot system for indoor scene epidemic risk monitoring according to claim 2, wherein the risk area positioning and mapping module uses an improved ORB-SLAM2 algorithm, firstly performs image alignment, feature extraction, feature matching and pose estimation on a depth image and an optical image, realizes accurate positioning of the killing robot in a risk area, simultaneously performs filtering, segmentation and clustering processing on point cloud information of the depth image, sequentially back projects each point cloud to a two-dimensional layer and matches with a detected two-dimensional object, acquires center point coordinates and type information of the detected object according to the point cloud information, stores key personnel track information, simultaneously adds obstacle information of an environment to a local map, realizes real-time three-dimensional mapping of the risk area, and finally processes the track information and the obstacle information of the key personnel and sends the processed track information and obstacle information to the risk area path planning module through an ROS system and a simulation platform for subsequent processing.
4. The intelligent disinfection robot system for indoor scene epidemic risk monitoring according to claim 3, wherein the center point coordinates of the detected object form track information of key personnel, and the type information of the detected object comprises obstacle information, personnel information without mask and abnormal body temperature personnel information.
5. The intelligent disinfection robot system for indoor scene epidemic risk monitoring according to claim 4, wherein the risk area path planning module, based on the A* path search algorithm, periodically receives the object information sent by the positioning and mapping module, stores the tracks of different persons separately, adds the obstacle information to the navigation framework when the track points of key personnel reach a certain number, takes the track points of key personnel in turn as target points for path search in the risk area so that the robot passes the obstacles and reaches the target points in the risk area, executes the corresponding disinfection measures according to the type of key person while the system is visualized, and after the track points of all key personnel have been disinfected, returns to the initial position, adjusts its posture, continues to monitor the risk area and waits for execution of the next task.
6. The intelligent disinfection robot system for indoor scene epidemic risk monitoring according to claim 5, wherein the disinfection execution unit is arranged on the AGV mobile platform and moves together with the disinfection robot, and after the personnel have passed, the disinfection execution unit performs ultraviolet-lamp disinfection and alcohol-atomization disinfection on the personnel track points.
7. An intelligent disinfection method for indoor scene epidemic risk monitoring, which uses the intelligent disinfection robot system for indoor scene epidemic risk monitoring as claimed in claim 6, is characterized by comprising the following steps:
s1: the infrared image and the optical image are fused to realize risk area monitoring;
s2: automatically identifying a risk area according to the surrounding environment, and positioning and mapping;
s3: receiving three-dimensional object information and judging type information of a detected object;
s4: and the disinfection execution and risk assessment are carried out, the AGV mobile platform receives a planning path sent by the main control computing platform, the planning path is driven to a target area, the alcohol spraying and ultraviolet lamps are started to carry out disinfection operation when the AGV mobile platform is started, and meanwhile, whether a warning is started or not is selected according to the existence of abnormal body temperature personnel in key personnel.
8. The intelligent disinfection method for indoor scene epidemic risk monitoring according to claim 7, wherein S1 comprises:
s1.1: determining targets to be identified and detected, including pedestrians, masks, tables and chairs, and making and marking data sets according to the targets;
s1.2: inputting the data set into a YOLOv4-tiny model for training, and carrying out data enhancement, parameter adjustment and improvement to enable the model to achieve a good detection effect and obtain trained model parameters;
s1.3: loading the trained model into a program, carrying out real-time target detection on an optical image, surrounding a detected target by utilizing a two-dimensional anchor frame, displaying object names and confidence information, and then storing the detection result of the optical image for matching with an infrared image so as to judge mask wearing conditions and abnormal body temperature conditions of personnel in a risk area;
s1.4: and loading an infrared image, matching the infrared image with a face region of an optical image, extracting the highest gray value of the face region of the infrared image in real time, converting the highest gray value into a temperature value, storing the temperature data, and finally carrying out visual display by using OpenCV.
9. The intelligent disinfection method for indoor scene epidemic risk monitoring according to claim 8, wherein S2 comprises:
s2.1: after the optical image and the depth image are aligned in time, ORB characteristic points are extracted from the optical image and are matched with each other, and pose estimation is achieved together with point cloud information of the depth image;
s2.2: the point cloud of the depth map is stored, plane segmentation, object segmentation and object clustering are carried out, and after the depth image is processed, the point cloud cluster is stored;
s2.3: the processed point cloud is back projected to two dimensions, matching is carried out on the processed point cloud and the detected object in sequence according to the distance difference value of the center point, a type label of the three-dimensional object is obtained, and meanwhile, the coordinates of the center point and the object type information of the three-dimensional object are stored;
s2.4: and sending the object information to the risk area path planning module through the ROS system and the simulation platform.
10. The intelligent disinfection method for indoor scene epidemic risk monitoring according to claim 9, wherein S3 comprises:
s3.1: receiving three-dimensional object information and judging object type information;
s3.2: if the object is an obstacle, storing the object into an obstacle container, loading the coordinates of the central point of the obstacle into a map, performing visual display, and continuously receiving the detected three-dimensional object information;
s3.3: if the object is a person without a mask or a person with abnormal body temperature, judging whether the coordinates of the center point of the object are similar to those of the center point of the previous object, and judging whether the object is added to the target point container according to the distance between the coordinates of the center point;
s3.4: when the number of objects in the target point container reaches a certain threshold, sequentially exporting the object information in the target point container, performing path search for each target point in turn and issuing AGV movement commands;
s3.5: while s3.4 is being executed, deciding whether to start the alarm mode according to whether the target point type is a person with abnormal body temperature, and starting only the disinfection execution mode if no target point corresponds to a person with abnormal body temperature;
s3.6: carrying out visual display on the ROS system and the simulation platform, drawing cubes of different colors at the object center points and labeling them with text, persons without masks being shown in yellow and persons with abnormal body temperature in red;
s3.7: when all the target points in the container are executed, the initial position is used as the target point, and a moving command is issued to the AGV moving platform, so that the intelligent killing robot returns to the initial position and enters a waiting task state.
CN202211553636.XA 2022-12-06 2022-12-06 Intelligent killing method and robot system for indoor scene epidemic risk monitoring Pending CN115998927A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211553636.XA CN115998927A (en) 2022-12-06 2022-12-06 Intelligent killing method and robot system for indoor scene epidemic risk monitoring

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211553636.XA CN115998927A (en) 2022-12-06 2022-12-06 Intelligent killing method and robot system for indoor scene epidemic risk monitoring

Publications (1)

Publication Number Publication Date
CN115998927A true CN115998927A (en) 2023-04-25

Family

ID=86023782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211553636.XA Pending CN115998927A (en) 2022-12-06 2022-12-06 Intelligent killing method and robot system for indoor scene epidemic risk monitoring

Country Status (1)

Country Link
CN (1) CN115998927A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination