WO2022088430A1 - Inspection and cleaning method and apparatus of robot, robot, and storage medium - Google Patents

Inspection and cleaning method and apparatus of robot, robot, and storage medium

Info

Publication number
WO2022088430A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
data
target
visual
cleaning
Prior art date
Application number
PCT/CN2020/136691
Other languages
French (fr)
Chinese (zh)
Inventor
沈孝通
侯林杰
秦宝星
程昊天
Original Assignee
上海高仙自动化科技发展有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202011182069.2A external-priority patent/CN112287834A/en
Priority claimed from CN202011182064.XA external-priority patent/CN112287833A/en
Priority claimed from CN202011186175.8A external-priority patent/CN112315383B/en
Application filed by 上海高仙自动化科技发展有限公司 filed Critical 上海高仙自动化科技发展有限公司
Publication of WO2022088430A1 publication Critical patent/WO2022088430A1/en

Classifications

    • A: HUMAN NECESSITIES
    • A47: FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L: DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00: Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • A47L11/24: Floor-sweeping machines, motor-driven
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • the present application relates to the field of robotics, for example, to a robot inspection and cleaning method, device, robot and storage medium.
  • the cleaning robot can complete simple and repetitive cleaning tasks through unmanned driving technology, greatly reducing labor costs and realizing the automation of cleaning work.
  • when the robot is patrolling and cleaning, it generally drives according to a pre-planned navigation map and performs full-coverage cleaning of the ground while driving.
  • the above patrol cleaning method results in low cleaning efficiency of the robot.
  • the present application provides a robot inspection and cleaning method, device, robot and storage medium.
  • a robot inspection and cleaning method including:
  • collecting visual data within the field of vision of the robot; detecting the visual data through a preset network to obtain the object position and object type of a target object, wherein the preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data;
  • the robot is controlled to perform inspection and cleaning tasks according to the object position and object type of the target object.
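  • As an illustration of the overall method, the following is a minimal Python sketch of the sense-detect-act loop summarized above; the Robot interface (capture, navigate_to, clean) and the detector wrapper are hypothetical names, not part of this application.

```python
# Minimal sketch of the inspect-and-clean loop, assuming a hypothetical Robot
# interface (capture(), navigate_to(), clean()) and a detect() wrapper around
# the preset network; all names here are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    position: Tuple[float, float]   # object position in world coordinates
    object_type: str                # e.g. "plastic_bag", "liquid_dirt"

def inspect_and_clean(robot, detector) -> None:
    """One patrol cycle: sense -> detect -> act on each detected target."""
    frame = robot.capture()                      # collect visual data in the field of view
    targets: List[Detection] = detector(frame)   # preset network returns position + type
    for target in targets:
        robot.navigate_to(target.position)       # drive to the object position
        robot.clean(target.object_type)          # choose the cleaning action by object type
```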
  • a robot inspection and cleaning device including:
  • the acquisition module is configured to collect visual data within the field of vision of the robot;
  • the detection module is configured to detect the visual data through a preset network to obtain the object position and object type of the target object, wherein the preset network is trained on the original sample data and the sample annotation data corresponding to the original sample data;
  • the control module is configured to control the robot to perform the inspection and cleaning task according to the object position and the object type of the target object.
  • a robot comprising: at least one processor, a memory, and at least one program, wherein the at least one program is stored in the memory and executed by the at least one processor, and the at least one program includes instructions for executing the above-mentioned robot inspection and cleaning method.
  • a non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by at least one processor, cause the at least one processor to execute the above-mentioned robot inspection and cleaning method.
  • FIG. 1 is a schematic structural diagram of a robot according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a method for patrolling and cleaning a robot according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the principle of a robot recognizing visual data according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a contamination detection network provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a training method for a contamination detection network provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a robot inspection and cleaning device provided in an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of another robot inspection and cleaning device according to an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of another robot inspection and cleaning device according to an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of another robot inspection and cleaning device provided by an embodiment of the application.
  • FIG. 14 is a schematic structural diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the robot may include: a sensor 10 , a controller 11 and an execution component 12 .
  • the sensor 10 includes a perception sensor and a positioning sensor installed on the robot body and is used to collect visual data within the field of view; it may be one sensor or a combination of multiple sensors of different types, such as cameras, lidar, infrared ranging sensors, ultrasonic sensors, an IMU (Inertial Measurement Unit), and an odometer.
  • the controller 11 may include a chip and a control circuit; it receives the visual data collected by the sensor 10, actively identifies the target objects (such as garbage and obstacles) existing in the field of view of the robot, and executes patrol cleaning tasks based on the position and type of the target objects.
  • the execution component 12 includes a walking component and a cleaning component, and is configured to receive control instructions from the controller 11, navigate to the location of the target object according to the planned travel path, and perform cleaning operations.
  • the execution subject of the following method embodiments may be a robot inspection and cleaning device, and the device may be implemented as part or all of the above robot through software, hardware, or a combination of software and hardware.
  • the following method embodiments are described by taking the execution subject being a robot as an example.
  • FIG. 2 is a schematic flowchart of a method for patrolling and cleaning a robot according to an embodiment of the present application. This embodiment relates to the process of how the robot performs inspection and cleaning of the workspace. As shown in Figure 2, the method may include:
  • the area to be cleaned can be inspected and cleaned by the robot.
  • the to-be-cleaned area refers to an area where the robot needs to perform inspection and cleaning, which may correspond to the environment where the robot is located.
  • the robot can generate, from its own field of view and the electronic map of the area to be cleaned, a field-of-vision path along which its field of view covers the area to be cleaned.
  • electronic maps include but are not limited to grid maps, topological maps and vector maps.
  • the robot drives along the field-of-vision path, actively collects visual data within its field of view while driving, and actively identifies and cleans the target objects in the visual data, so as to realize active inspection of the area to be cleaned (see the sketch below).
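  • A minimal sketch of one way to generate such field-of-vision waypoints over a grid map is shown below; the lawnmower-style ordering, the 0/1 occupancy grid, and the footprint parameter are illustrative assumptions rather than the method defined in this application.

```python
import numpy as np
from typing import List, Tuple

def field_of_vision_waypoints(grid: np.ndarray, cell_size: float,
                              footprint: float) -> List[Tuple[float, float]]:
    """grid: 2D occupancy grid (0 = free, 1 = occupied); footprint: width in metres
    that the sensor's field of view covers on the ground at one stop (assumed)."""
    step = max(1, int(footprint / cell_size))     # waypoint spacing in cells
    waypoints = []
    rows = range(0, grid.shape[0], step)
    for i, r in enumerate(rows):
        cols = list(range(0, grid.shape[1], step))
        if i % 2 == 1:                            # boustrophedon (lawnmower) ordering
            cols.reverse()
        for c in cols:
            if grid[r, c] == 0:                   # only place viewpoints on free cells
                waypoints.append((r * cell_size, c * cell_size))
    return waypoints
```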
  • a vision sensor is arranged on the robot, so that the robot can collect data from an area within its field of view through the vision sensor, thereby obtaining visual data within the field of view of the vision sensor.
  • depending on the type of vision sensor used, the types of visual data collected by the vision sensors also differ.
  • the above-mentioned visual data may be image data, video data, or point cloud data.
  • the above-mentioned visual sensor may be a camera.
  • the robot can continuously shoot the area within its field of view through the camera to obtain surveillance video, and use the surveillance video as visual data to be identified.
  • the robot can also directly photograph the area within the field of view through the camera to obtain the photographed image, and use the photographed image as the visual data to be recognized.
  • the preset network is obtained by training the original sample data and the sample labeling data corresponding to the original sample data.
  • after obtaining the visual data within the field of view, the robot inputs the visual data into the preset network, identifies the target object in the visual data through the preset network, and outputs the object position and object type of the target object.
  • the target object may be garbage and/or obstacles.
  • the object type may include various types of garbage, such as plastic bags, napkins, paper scraps, fruit peels, and vegetable leaves.
  • the object type may also include the results of classifying various types of garbage based on garbage classification criteria, such as recyclable garbage, kitchen waste, hazardous garbage, and other garbage.
  • when the target object is an obstacle, the object types may include large-sized obstacles, small-sized obstacles, dynamic obstacles, static obstacles, and semi-static obstacles.
  • the training data of the preset network may be an original sample data set collected according to actual training requirements, or may be an original sample data set in a training database.
  • S203 Control the robot to perform an inspection and cleaning task according to the object position and object type of the target object.
  • after determining that there is a target object in the field of view, the robot can perform targeted inspection and cleaning tasks based on the object position and object type of the target object.
  • the inspection and cleaning method for a robot collects visual data within the field of view of the robot, identifies the visual data through a preset network to obtain the object position and object type of the target object, and controls the robot to perform patrol cleaning tasks according to the object position and object type of the target object.
  • the robot can achieve field-of-vision coverage of the entire workspace through its own field of view and can actively identify the target objects existing in the field of view through the preset network, so that the robot only needs to focus on the target objects in the workspace and perform inspection and cleaning tasks based on their location and type, eliminating the need for full-path cleaning of the entire workspace and greatly improving the cleaning efficiency of the robot.
  • the preset network is a pre-trained neural network
  • the original sample data is visual sample data
  • the sample annotation data corresponding to the original sample data is visual sample data in which the sample object position and sample object type have been marked.
  • FIG. 3 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application. As shown in Figure 3, the method may include:
  • the pre-trained neural network is obtained by training on the visual sample data and on the visual sample data marked with the sample object position and sample object type.
  • a pre-trained neural network can be a network model that is pre-established and configured in the robot after training to recognize the target object in the visual data and output the object position and object type of the target object.
  • the above pre-trained neural network can be established based on networks such as You Only Look Once (YOLO), RetinaNet, Single Shot MultiBox Detector (SSD), or Faster Region Convolutional Neural Networks (Faster-RCNN).
  • after obtaining the visual data within the field of view, the robot inputs the visual data into the pre-trained neural network, identifies the target object in the visual data through the pre-trained neural network, and outputs the object position and object type of the target object.
  • the training data of the above-mentioned pre-trained neural network may be a visual sample data set collected according to actual training requirements, or may be a visual sample data set in a training database.
  • the visual sample data set includes visual sample data to be identified, and visual sample data marked with the position of the sample object and the type of the sample object.
  • the sample objects may include garbage and obstacles on the ground, and the garbage may include plastic bags, napkins, paper scraps, fruit peels, and the like.
  • the visual sample data is used as the input of the pre-trained neural network, the sample object position and sample object type existing in the visual sample data are used as the expected output, and a corresponding loss function is used to train the network until the convergence condition of the loss function is reached, thereby obtaining the above-mentioned pre-trained neural network.
  • the selected visual sample data set can be clustered to obtain reference frames (anchors) with different aspect ratios and different sizes. For example, a k-means clustering operation can be performed on a common garbage dataset to learn reference frames with different aspect ratios and sizes from that dataset.
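  • The following is a minimal sketch of learning anchor (reference frame) sizes by clustering ground-truth box widths and heights, as described above; the plain Euclidean k-means used here is an assumption (YOLO-style implementations often cluster with an IoU-based distance instead).

```python
import numpy as np

def kmeans_anchors(box_wh: np.ndarray, k: int = 6, iters: int = 100,
                   seed: int = 0) -> np.ndarray:
    """box_wh: (N, 2) array of ground-truth box widths/heights from the dataset.
    Returns k anchor sizes with different aspect ratios and scales."""
    rng = np.random.default_rng(seed)
    anchors = box_wh[rng.choice(len(box_wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to its nearest anchor (Euclidean distance in w/h space)
        d = np.linalg.norm(box_wh[:, None, :] - anchors[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = box_wh[assign == j]
            if len(members):
                anchors[j] = members.mean(axis=0)   # move anchor to cluster centroid
    return anchors
```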
  • the robot can input the image to be detected into the pre-trained neural network.
  • the pre-trained neural network can extract the feature map corresponding to the image to be detected through the darknet sub-network and, for each grid cell on the feature map, predict the description information of the above reference frames with different aspect ratios and sizes, where the description information includes the confidence of the reference frame, the location information of the reference frame, and the category information of the reference frame. Next, based on the confidence and category information of the reference frames, the reference frames with low probability are filtered out, and non-maximum suppression is performed on the remaining reference frames to obtain the final detection result, namely the object position and object type of the target object in the visual data.
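  • A minimal sketch of the confidence filtering and non-maximum suppression step described above is given below; boxes are assumed to be in (x1, y1, x2, y2) form and the thresholds are illustrative.

```python
import numpy as np

def filter_and_nms(boxes: np.ndarray, scores: np.ndarray,
                   score_thr: float = 0.3, iou_thr: float = 0.5) -> np.ndarray:
    """Drop low-confidence reference frames, then suppress overlapping ones.
    Returns the indices of the kept boxes."""
    idx = np.where(scores >= score_thr)[0]
    order = idx[np.argsort(-scores[idx])]          # highest confidence first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection-over-union between the best box and the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thr]               # keep only weakly overlapping boxes
    return np.array(kept, dtype=int)
```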
  • the above S303 may include:
  • the target storage component and the target cleaning component can be selected respectively according to the object type.
  • the robot may be provided with recyclable garbage storage components, kitchen waste storage components, hazardous waste storage components and other garbage storage components. In this way, after obtaining the object type of the target object, the robot can select the target storage component from all the set storage components based on the object type. For example, when the object type of the target object obtained by the robot is vegetable leaves, the robot can select the kitchen waste storage component as the target storage component.
  • the target cleaning component can be selected from all the set cleaning components based on the object type.
  • the robot may be provided with a vacuuming assembly, a dry mopping assembly, a wet mopping assembly, a drying assembly, a water absorbing assembly, and the like.
  • the target cleaning component can be selected from all the set cleaning components based on the object type. For example, target objects such as vegetable leaves and fruit peels may leave stains on the ground; therefore, after the robot sweeps such target objects into the kitchen waste storage component, the floor needs to be wiped with the wet mopping component and then dried with the drying component, so the robot can select the wet mopping component and the drying component as the target cleaning components.
  • the robot can plan a cleaning route based on the object position of the target object, drive along the cleaning route to the target position, sweep the target object into the target storage assembly at the target position, and then use the selected target cleaning component to clean the swept area based on the corresponding cleaning strategy.
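  • As an illustration, the following sketch maps a recognized object type to a target storage component and an ordered list of target cleaning components, following the vegetable-leaf example above; the mapping tables and component names are hypothetical.

```python
# Hypothetical mapping tables; entries follow the examples in the text
# (e.g. vegetable leaves -> kitchen waste storage, wet mop then dry).
STORAGE_BY_TYPE = {
    "vegetable_leaf": "kitchen_waste_storage",
    "fruit_peel": "kitchen_waste_storage",
    "plastic_bag": "recyclable_storage",
    "battery": "hazardous_waste_storage",
}
CLEANING_BY_TYPE = {
    "vegetable_leaf": ["wet_mop", "dryer"],
    "fruit_peel": ["wet_mop", "dryer"],
    "plastic_bag": ["vacuum"],
    "battery": ["vacuum"],
}

def select_components(object_type: str):
    """Return (target storage component, ordered target cleaning components)."""
    storage = STORAGE_BY_TYPE.get(object_type, "other_waste_storage")
    cleaning = CLEANING_BY_TYPE.get(object_type, ["vacuum"])
    return storage, cleaning
```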
  • the above S303 may include: according to the object type, determining whether the robot can pass the target object; if the robot cannot pass the target object, generate an escape path according to the object position and the target navigation point, and control the robot to drive to the target navigation point according to the escape path.
  • the object types of the target object may include large-sized obstacles and small-sized obstacles.
  • for small-sized obstacles, the unique chassis structure of the robot enables the robot to cross over them; for large-sized obstacles, due to their large size, it is difficult for the robot to drive over them, which may cause the robot to become trapped. Therefore, when performing the inspection and cleaning task, the robot needs to determine, according to the object type of the identified target object, whether it can pass the target object. That is, when the identified target object is a large-sized obstacle, the robot cannot move forward over it; at this time, the robot enters the escape mode to avoid the large-sized obstacle.
  • the robot selects a path point from the initial cleaning path as the target navigation point, generates an escape path based on the position information of the large-sized obstacle and the target navigation point, and controls the robot to drive along the escape path to the target navigation point.
  • the robot inspection and cleaning method collects visual data within the field of vision of the robot, identifies the visual data through a pre-trained neural network to obtain the object position and object type of the target object, and controls the robot to perform patrol cleaning tasks according to the object position and object type.
  • the robot can achieve field-of-vision coverage of the entire workspace through its own field of view and can actively identify the target objects existing in the field of view through the pre-trained neural network, so that the robot only needs to focus on the target objects in the workspace and perform inspection and cleaning tasks based on their location and type, eliminating the need for full-path cleaning of the entire workspace and greatly improving the cleaning efficiency of the robot.
  • the foregoing pre-trained neural network may include a feature extraction layer, a feature fusion layer, and an object recognition layer.
  • the above S302 may include:
  • the robot can choose a deep learning network as the feature extraction layer.
  • the feature extraction layer may be a darknet network or other network structures.
  • the robot inputs the collected visual data into the pre-trained neural network, and extracts the features in the visual data through the feature extraction layer in the pre-trained neural network to obtain multi-scale feature data.
  • each scale feature data includes description information of a reference frame corresponding to each grid in the scale feature data.
  • the description information includes the confidence level of the reference frame, the location information of the reference frame, and the category information of the reference frame.
  • the above reference frame can be obtained by performing a clustering operation on the training data of the pre-trained neural network.
  • the number of feature extraction blocks in the feature extraction layer can be reduced.
  • the feature extraction layer includes two feature extraction blocks, which are a first feature extraction block and a second feature extraction block, respectively.
  • the process of the above S401 may be: extracting the first scale feature data in the visual data through the first feature extraction block, and extracting the second scale feature data in the visual data through the second feature extraction block.
  • the first scale feature data and the second scale feature data can be arbitrarily selected from 13*13 scale feature data, 26*26 scale feature data and 52*52 scale feature data for combination.
  • the first scale feature data may be 13*13 scale feature data
  • the second scale feature data may be 26*26 scale feature data
  • the feature extraction layer inputs the extracted multi-scale feature data to the feature fusion layer, and the multi-scale feature data is feature-fused through the feature fusion layer.
  • the robot performs feature fusion on the 13*13 scale feature data and the 26*26 scale feature data through the feature fusion layer to obtain the fused feature data.
  • S403. Determine the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
  • the feature extraction layer inputs the extracted multi-scale feature data to the object recognition layer, and the feature fusion layer also inputs the fused feature data to the object recognition layer, and the multi-scale feature data and the fused feature data are processed by the object recognition layer.
  • the object recognition layer can perform coordinate transformation and coordinate scaling on the reference frame in the multi-scale feature data and the fused feature data, and restore the reference frame in the multi-scale feature data and the fused feature data to the original data.
  • next, non-maximum suppression is performed on the restored reference frames, redundant reference frames are filtered out, and the description information of the remaining reference frames is output, so as to obtain the object position and object type of the target object.
  • the above-mentioned recognition process of visual data through the pre-trained neural network is introduced.
  • the robot inputs the collected visual data within the visual field to the feature extraction layer 501 in the pre-trained neural network, and extracts the features in the visual data through the first feature extraction block 5011 in the feature extraction layer 501 to obtain 13*13 scale feature data , and extract the features in the visual data through the second feature extraction block 5012 in the feature extraction layer to obtain 26*26 scale feature data.
  • the robot inputs the 13*13 scale feature data and the 26*26 scale feature data into the feature fusion layer 502 in the pre-trained neural network, and the feature fusion layer 502 performs feature fusion on the 13*13 scale feature data and the 26*26 scale feature data to obtain the fused feature data.
  • the robot inputs the 13*13 scale feature data, the 26*26 scale feature data and the fused feature data into the object recognition layer 503 in the pre-trained neural network, where they are processed by coordinate transformation, coordinate scaling and non-maximum suppression, so as to identify the target object in the visual data and output the object position and object type of the target object.
  • the pre-trained neural network can perform feature fusion on the multi-scale feature data in the visual data, and recognize the target object based on the fused feature data and the multi-scale feature data, thereby improving the robot recognition effect.
  • the feature extraction layer in the pre-trained neural network only includes two feature extraction blocks; compared with a feature extraction layer including three feature extraction blocks, the number of feature extraction blocks is reduced on the premise that the recognition effect of the robot is still satisfied, thereby improving the recognition speed of the robot.
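  • The following PyTorch sketch mirrors the structure described above: two feature extraction blocks producing 26*26 and 13*13 feature maps, a feature fusion layer, and recognition heads; the layer widths and the use of plain convolution blocks instead of a full darknet backbone are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out, stride):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride, 1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.1))

class TwoScaleDetector(nn.Module):
    """416x416 input -> 26x26 and 13x13 feature maps, feature fusion, two heads."""
    def __init__(self, num_anchors=3, num_classes=4):
        super().__init__()
        out_ch = num_anchors * (5 + num_classes)      # box(4) + confidence(1) + classes
        self.stem = nn.Sequential(conv_block(3, 32, 2), conv_block(32, 64, 2),
                                  conv_block(64, 128, 2), conv_block(128, 256, 2))  # /16
        self.block26 = conv_block(256, 256, 1)        # first feature extraction block (26x26)
        self.block13 = conv_block(256, 512, 2)        # second feature extraction block (13x13)
        self.fuse = conv_block(256 + 512, 256, 1)     # feature fusion layer
        self.head13 = nn.Conv2d(512, out_ch, 1)       # recognition head on 13x13 features
        self.head26 = nn.Conv2d(256, out_ch, 1)       # recognition head on fused 26x26 features

    def forward(self, x):
        x = self.stem(x)
        f26 = self.block26(x)                         # 26x26 scale feature data
        f13 = self.block13(f26)                       # 13x13 scale feature data
        up = F.interpolate(f13, scale_factor=2, mode="nearest")
        fused = self.fuse(torch.cat([f26, up], dim=1))
        return self.head13(f13), self.head26(fused)
```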
  • the robot usually collects visual data within the field of view through a camera.
  • the object position of the target object recognized by the robot through the pre-trained neural network is calculated in the image coordinate system.
  • the method may further include: obtaining a first correspondence between the image coordinate system of the robot and the radar coordinate system and a second correspondence between the radar coordinate system and the world coordinate system; and transforming the object position according to the first correspondence and the second correspondence.
  • obtaining the first correspondence between the image coordinate system of the robot and the radar coordinate system may include: respectively acquiring, for the same object to be collected, first data in the robot's pixel coordinate system and second data in the radar coordinate system; matching the first data and the second data to obtain multiple sets of matched feature points; and determining the first correspondence between the image coordinate system of the robot and the radar coordinate system according to the multiple sets of matched feature points.
  • the object to be collected can be set on a corner in advance.
  • the robot is provided with a camera and a laser radar, and the robot controls the camera and the laser radar to collect data from different angles of the object to be collected set on the corner of the wall, thereby obtaining the first data and the second data.
  • the feature points in the first data and the second data are respectively detected, and the feature points in the first data and the second data are matched to obtain multiple sets of matched feature points.
  • at least four sets of matched feature points need to be determined.
  • the corresponding equations are established, and the correspondence between the robot's image coordinate system and the radar coordinate system can be obtained by solving the equations.
  • the robot converts the obtained object position through the first correspondence between the robot's image coordinate system and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system, so that the finally obtained object position is closer to the actual position of the target object; the robot is then controlled to perform inspection and cleaning tasks based on the accurate object position, which improves the cleaning accuracy and cleaning efficiency of the robot.
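  • A minimal sketch of chaining the two correspondences is shown below, assuming the first correspondence is represented as a homography estimated from the matched feature points (at least four pairs, as noted above) and the second correspondence as a 3x3 homogeneous radar-to-world transform; these concrete representations are assumptions.

```python
import numpy as np
import cv2

def image_to_world(pixel_xy, img_pts, radar_pts, T_radar_to_world):
    """pixel_xy: (u, v) object position in the image coordinate system.
    img_pts / radar_pts: (N>=4, 2) matched feature points in the image and radar frames.
    T_radar_to_world: 3x3 homogeneous transform from the radar frame to the world frame."""
    # first correspondence: image coordinate system -> radar coordinate system
    H, _ = cv2.findHomography(np.asarray(img_pts, np.float32),
                              np.asarray(radar_pts, np.float32))
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    radar = H @ p
    radar /= radar[2]                       # back to inhomogeneous radar coordinates
    # second correspondence: radar coordinate system -> world coordinate system
    world = T_radar_to_world @ radar
    world /= world[2]
    return world[:2]
```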
  • the target object is dirt
  • the preset network is a preset dirt detection network
  • the dirt detection network is obtained by training a visual semantic segmentation dataset and a dirt dataset, wherein,
  • the dirty data set includes the original sample data and sample annotation data corresponding to the original sample data.
  • FIG. 6 is a schematic flowchart of another method for patrolling and cleaning a robot according to an embodiment of the present application. As shown in Figure 6, the method may include:
  • the dirty detection network is obtained by training a visual semantic segmentation data set and a dirty data set, and the dirty data set includes original sample data and sample labeling data corresponding to the original sample data.
  • the above-mentioned contamination detection network is a deep learning model, which can be a network model that is pre-established and configured in the robot after training on the visual semantic segmentation dataset and the contamination dataset, so as to detect the target contamination existing in the visual data.
  • the above-mentioned visual semantic segmentation data set may be the Cityscapes data set;
  • the above-mentioned sample labeling data refers to the original sample data for which the dirty position of the sample and the dirty type of the sample have been marked.
  • the above-mentioned contamination detection network can be established based on networks such as a Fully Convolutional Harmonic Dense Network (FCHarDNet), U-Net, V-Net, or Pyramid Scene Parsing Network (PSPNet).
  • after obtaining the visual data within the field of view, the robot inputs the visual data into the trained dirt detection network, detects the target dirt in the visual data through the dirt detection network, and outputs the target dirt position and target dirt type.
  • the target soiling types may include liquid soiling and solid soiling.
  • the robot can extract dirt features in the visual data through a dirt detection network, and determine the target dirt type according to the dirt features.
  • the dirt characteristics may include dirt particle size and dirt transparency (dirt transparency refers to the light transmittance of the dirt).
  • the above-mentioned contamination detection network may include a downsampling layer and a deconvolution layer.
  • the foregoing S602 may include: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain a multi-resolution intermediate feature map;
  • the deconvolution layer performs a hierarchical deconvolution operation on the multi-resolution intermediate feature map to obtain the target dirt position and target dirt type in the visual data.
  • the contamination detection network includes N downsampling layers and N deconvolution layers.
  • the input layer in Figure 7 is used to input visual data
  • the output layer is used to output the target dirty position and target dirty type in the visual data.
  • the robot inputs the collected visual data within the field of view into the input layer and performs hierarchical down-sampling operations on the visual data through the N down-sampling layers to extract the dirt features in the visual data and obtain intermediate feature maps of different resolutions; hierarchical deconvolution operations are then performed on the multi-resolution intermediate feature maps through the N deconvolution layers, and the output layer outputs the target dirt position and target dirt type in the visual data.
  • the contamination detection network may further include an attention threshold block.
  • the above process of performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer may be: enhancing and suppressing the multi-resolution intermediate feature maps layer by layer through the attention threshold block, and then performing the deconvolution operation.
  • FIG. 7 only shows an example in which the number N of downsampling layers and deconvolution layers included in the contamination detection network is 4; this embodiment does not limit the number of downsampling layers and deconvolution layers included in the contamination detection network.
  • the number N of downsampling layers and deconvolution layers included in the dirt detection network can be set correspondingly according to the actual application requirements.
  • the upsampling of the multi-resolution intermediate feature maps is realized through the deconvolution layers, and only the intermediate feature map and the convolution kernels in the deconvolution layer need to be deconvolved; compared with an upsampling layer that uses bilinear interpolation, the time of contamination detection is greatly shortened and the efficiency of contamination detection is improved.
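  • The following PyTorch sketch illustrates the encoder-decoder structure described above, with N=4 downsampling layers and N=4 deconvolution (transposed convolution) layers; the channel widths, the skip connections, and the omission of the attention threshold block are simplifying assumptions.

```python
import torch
import torch.nn as nn

class DirtSegNet(nn.Module):
    """Per-pixel dirt segmentation: N=4 downsampling layers, N=4 deconvolution layers.
    Input height and width are assumed to be divisible by 16."""
    def __init__(self, num_classes=3):                  # e.g. background / liquid / solid
        super().__init__()
        chs = [3, 32, 64, 128, 256]
        self.down = nn.ModuleList([
            nn.Sequential(nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                          nn.BatchNorm2d(chs[i + 1]), nn.ReLU(inplace=True))
            for i in range(4)])
        self.up = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose2d(chs[i + 1], chs[i] if i else 32,
                                             4, stride=2, padding=1),
                          nn.ReLU(inplace=True))
            for i in reversed(range(4))])
        self.classifier = nn.Conv2d(32, num_classes, 1)  # output layer: dirt type per pixel

    def forward(self, x):
        feats = []
        for layer in self.down:                          # hierarchical downsampling
            x = layer(x)
            feats.append(x)
        for i, layer in enumerate(self.up):              # hierarchical deconvolution
            x = layer(x)
            if i < 3:
                x = x + feats[len(feats) - 2 - i]        # reuse multi-resolution features
        return self.classifier(x)
```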
  • after determining that there is target dirt within the field of view, the robot navigates to the target dirt position and cleans the target dirt position in a targeted manner based on the target dirt type.
  • the process of the above S603 may be: generating a target cleaning strategy according to the target dirt type; controlling the robot to navigate to the target dirt location, and using the target cleaning strategy to clean the target dirt location .
  • the robot can generate a target cleaning strategy for cleaning according to the obtained target dirt type.
  • when the target dirt type is liquid dirt, the target cleaning strategy generated by the robot can be to first use the water-absorbing component to absorb the liquid and then use the dry mopping component to wipe the ground.
  • when the target dirt type is solid dirt, since it is solid, the solid can be removed first and the floor wiped afterwards; the target cleaning strategy generated by the robot can be to first use the vacuuming component to remove the solid, then use the wet mopping component to wipe the floor, and finally use the drying component to dry the floor.
  • the material of the ground can also be taken into account when generating the target cleaning strategy.
  • the robot navigates to the target dirty position, and uses the generated target cleaning strategy to clean the target dirty position.
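  • As an illustration, the sketch below generates a target cleaning strategy from the target dirt type and, optionally, the floor material, following the liquid and solid examples above; the action names and material rules are hypothetical.

```python
from typing import List

def target_cleaning_strategy(dirt_type: str, floor: str = "tile") -> List[str]:
    """Return an ordered list of cleaning actions for the detected dirt."""
    if dirt_type == "liquid":
        return ["absorb_water", "dry_mop"]          # absorb the liquid, then dry-mop
    if dirt_type == "solid":
        if floor == "carpet":
            return ["vacuum"]                       # carpets are only vacuumed
        return ["vacuum", "wet_mop", "dry"]         # remove solid, wet-mop, then dry
    return ["vacuum"]                               # fallback for unknown dirt types
```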
  • the robot can continue to collect visual data within its own field of view at the target dirt position to identify the target dirt that needs to be cleaned in the next step, that is, repeat the above process of S601-S603, so as to complete the inspection and cleaning of the entire workspace.
  • the robot can also return to the target navigation point in the field of view path, take the target navigation point as the starting point for cleaning, and continue to collect visual data within the field of view.
  • the robot inspection and cleaning method collects the visual data within the field of view of the robot, detects the collected visual data through a preset contamination detection network to obtain the target contamination position and the target contamination type, and controls the robot to perform patrol cleaning tasks according to the target contamination position and target contamination type.
  • the robot can cover the entire workspace through its own field of view and can actively identify the target dirt in the field of view through the trained dirt detection network, so that the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning tasks based on the location and type of the target dirt, eliminating the need for full-path cleaning of the entire workspace and improving the cleaning efficiency of the robot.
  • an acquisition process of a contamination detection network is also provided, that is, how to train a contamination detection network.
  • the training process of the contamination detection network may include:
  • although collecting more data for contamination identification (that is, a larger contamination data set) can improve the detection performance of the contamination detection network, collecting and annotating such data is very time-consuming and labor-intensive, and training the contamination detection network on the contamination data set alone may still not meet expectations.
  • the dirt detection network can be pre-trained through the visual-semantic segmentation dataset with a large number of samples, and the initial dirt detection network trained on the visual-semantic segmentation dataset can be obtained.
  • the visual semantic segmentation dataset can be the Cityscapes dataset.
  • after the contamination detection network is pre-trained with the visual semantic segmentation data set to obtain the initial contamination detection network, the collected contamination data set can be used to continue fine-tuning the initial contamination detection network.
  • the original sample data in the dirty data set is used as the input of the initial dirty detection network
  • the sample annotation data in the dirty data set is taken as the expected output of the initial dirty detection network
  • a preset loss function is used to train and adjust the parameters of the initial dirt detection network until the convergence condition of the loss function is reached, so as to obtain the trained dirt detection network.
  • the loss function can be a cross-entropy loss function.
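  • A minimal PyTorch sketch of the two-stage training described above is given below: weights pre-trained on the semantic segmentation data set are loaded and the network is fine-tuned on the dirt data set with a cross-entropy loss; the optimizer, learning rate, and data loader are illustrative assumptions.

```python
import torch
import torch.nn as nn

def finetune(model: nn.Module, pretrained_path: str, dirt_loader, epochs: int = 20,
             lr: float = 1e-4, device: str = "cuda"):
    """Fine-tune the initial dirt detection network on the dirt data set."""
    # assume pretrained_path holds a state dict saved after Cityscapes pre-training
    model.load_state_dict(torch.load(pretrained_path), strict=False)
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()                  # preset loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, masks in dirt_loader:              # original samples + sample annotations
            images, masks = images.to(device), masks.to(device)
            logits = model(images)                     # (B, num_classes, H, W)
            loss = criterion(logits, masks)            # masks: (B, H, W) dirt labels
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```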
  • the method further includes: performing data enhancement processing on the contamination data set.
  • the method of performing data enhancement processing on the dirty data set includes at least one of the following: random cropping, horizontal flipping, and color dithering.
  • the dirt data set can be expanded by horizontal flip mirroring; it can also be expanded by cropping, that is, a position is randomly selected as the cropping center and each dirt sample is cropped; in addition, each dirt sample can be color-jittered for data augmentation of the dirt data set.
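  • The following sketch applies the three augmentations named above jointly to a dirt image and its annotation mask using torchvision; the crop size and jitter strengths are illustrative assumptions.

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def augment(image, mask, crop_size=(320, 320)):
    """Jointly augment a dirt image and its annotation mask (PIL images)."""
    # random crop around a randomly selected crop position
    i, j, h, w = T.RandomCrop.get_params(image, output_size=crop_size)
    image, mask = TF.crop(image, i, j, h, w), TF.crop(mask, i, j, h, w)
    # horizontal flip mirroring
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    # colour jitter is applied to the image only (labels are unchanged)
    image = T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3)(image)
    return image, mask
```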
  • a visual semantic segmentation data set with a large number of samples can be used to pre-train the contamination detection network, and the initial contamination detection network obtained by pre-training is then fine-tuned with the contamination data set.
  • the long and slow learning phase in the early stage of network training can be avoided, thereby greatly reducing the network training time.
  • a lot of tedious hyperparameter tuning can be avoided. That is to say, the technical solutions adopted in the embodiments of the present application shorten the training time of the contamination detection network and improve the accuracy of the contamination detection network.
  • the robot usually collects visual data within the field of view through a camera.
  • the target dirty position detected by the robot through the trained dirt detection network is calculated in the image coordinate system.
  • the method may further include: obtaining the first correspondence between the image coordinate system of the robot and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system; and converting the target dirt position according to the first correspondence and the second correspondence.
  • after acquiring the first correspondence between the robot's image coordinate system and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system, the robot performs projection transformation on the target dirt position according to the first correspondence, and then converts the projected position based on the second correspondence, so as to obtain the actual position of the target dirt in the world coordinate system.
  • the operation steps of obtaining the first correspondence and the second correspondence and of converting the target dirt position according to them, as well as the resulting effects, have been described in the above-mentioned embodiments and are not repeated here.
  • the number of the target objects is at least one; the method further includes: performing path planning on at least one object position to obtain a target cleaning path;
  • the robot performing the inspection and cleaning task includes: controlling the robot to navigate to the at least one object position in sequence according to the target cleaning path, and performing the inspection and cleaning task based on the object type corresponding to the current object position.
  • FIG. 9 is a schematic flowchart of another method for patrolling and cleaning a robot according to an embodiment of the present application. As shown in Figure 9, the method may include:
  • S901. Collect visual data within the field of view of the robot.
  • each target object corresponds to an object position and an object type
  • the multiple object types corresponding to the multiple target objects may be completely identical, completely different, or partially the same, which is not limited.
  • the above-mentioned contamination detection network may include a downsampling layer and a deconvolution layer.
  • the foregoing S902 may include: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain a multi-resolution intermediate feature map;
  • the deconvolution layer performs a hierarchical deconvolution operation on the multi-resolution intermediate feature map to obtain at least one target dirt position in the visual data and the target dirt type corresponding to each target dirt position.
  • the target cleaning path refers to the cleaning path with the shortest distance among all the cleaning paths for the robot to reach at least one target dirty position.
  • the robot can generate the target cleaning path through a shortest path planning algorithm according to the at least one target dirt position, the historical obstacle map of the area to be cleaned, and the current obstacle map of the area to be cleaned.
  • the shortest path planning algorithm may be, for example, the Dijkstra algorithm, the Floyd algorithm, or an ant colony algorithm.
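  • The sketch below orders the detected dirt positions into a cleaning path by computing Dijkstra distances over an occupancy grid and visiting the nearest remaining dirt position first; the greedy nearest-first ordering is an added heuristic, not something specified in this application.

```python
import heapq
import numpy as np
from typing import Dict, List, Tuple

Cell = Tuple[int, int]

def dijkstra(grid: np.ndarray, start: Cell) -> Dict[Cell, float]:
    """Shortest grid distances from start over free cells (grid value 0)."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] and grid[nr, nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def target_cleaning_path(grid: np.ndarray, robot: Cell, dirt: List[Cell]) -> List[Cell]:
    """Visit dirt positions in greedy nearest-first order (an assumed heuristic)."""
    order, current, remaining = [], robot, list(dirt)
    while remaining:
        dist = dijkstra(grid, current)
        remaining.sort(key=lambda p: dist.get(p, float("inf")))
        current = remaining.pop(0)
        order.append(current)
    return order
```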
  • the robot can navigate to the corresponding target dirty position in sequence according to the target cleaning path, and clean the target dirty position in a targeted manner based on the target dirt type corresponding to the target dirty position.
  • the process of the above S904 may be: generating a target cleaning strategy according to the target dirt type; controlling the robot to navigate to the corresponding target dirt positions in sequence according to the target cleaning path, and adopting the target cleaning strategy to clean the target dirt positions.
  • the robot can generate a target cleaning strategy for cleaning according to the obtained target dirt type.
  • when the target dirt type is liquid dirt, the target cleaning strategy generated by the robot can be to first use the water-absorbing component to absorb the liquid and then use the dry mopping component to wipe the ground.
  • when the target dirt type is solid dirt, since it is solid, the solid can be removed first and the floor wiped afterwards; the target cleaning strategy generated by the robot can be to first use the vacuuming component to remove the solid, then use the wet mopping component to wipe the floor, and finally use the drying component to dry the floor.
  • the material of the ground can also be taken into account. For example, when the floor material is wood flooring or floor tiles, the vacuuming component can be used for vacuuming and the mopping component can then be used to mop the floor; when the floor material is carpet, only the vacuuming component is used for vacuuming.
  • the robot navigates to the target dirty location in sequence according to the target cleaning path, and uses the generated target cleaning strategy to clean at least one target dirty location.
  • the robot can control itself to rotate, and collect visual data within the visual range during the rotation to identify the target dirt that needs to be cleaned in the next step, that is, repeatedly execute the above S901-S904 process, so as to complete the inspection and cleaning of the entire workspace.
  • the inspection and cleaning method for a robot collects visual data within the field of view of the robot, detects the collected visual data through a preset contamination detection network to obtain at least one target dirt position and the target dirt type corresponding to each target dirt position, performs shortest path planning on the at least one target dirt position to obtain the target cleaning path, controls the robot to navigate to the corresponding target dirt positions in turn according to the target cleaning path, and performs patrol cleaning tasks based on the corresponding target dirt types.
  • the robot can cover the entire workspace through its own field of view and can actively identify the target dirt in the field of view through the trained dirt detection network, so that the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning tasks based on the location and type of the target dirt, eliminating the need for full-path cleaning of the entire workspace and improving the cleaning efficiency of the robot.
  • the robot can also perform shortest path planning for the at least one target dirty position, so that the robot can navigate to the at least one target dirty position with the shortest path, which improves the cleaning efficiency of the robot.
  • the process of the above S901 may be: controlling the robot to rotate, and collecting the visual data within the field of view of the robot during the rotation.
  • the way of controlling the robot to rotate may be: controlling the robot to rotate based on the field of view of at least one sensor.
  • after generating a field-of-vision path for covering the area to be cleaned based on the robot's field of view and the electronic map of the area to be cleaned, the robot inspects and cleans the area to be cleaned according to the planned field-of-vision path.
  • the robot can be controlled to rotate on the spot based on the field of view of at least one sensor, and the visual data within the field of view of the robot can be collected during the rotation process. In this way, with the rotation of the robot, the direction of the robot's field of vision is continuously adjusted, so that the robot can collect visual data in a wider range at the current position.
  • it is also possible to control the robot to rotate based on the field of view of at least one sensor during the movement of the robot, and to continuously collect visual data within the field of view of the robot during the rotation.
  • the rotation timing of the robot may be set, which is not limited in this embodiment.
  • the above process of controlling the robot to rotate may be: controlling the robot to rotate once. That is, before the robot starts to travel, the robot is controlled to rotate once in place, or the robot is controlled to rotate once while it is travelling, and the visual data within the robot's field of vision is collected during the rotation, so that the robot can perform data collection over a 360-degree range around itself. This greatly expands the data collection range of the robot, enabling the robot to actively identify visual data in a wider range and perform overall cleaning of the identified target dirt, thereby improving the cleaning efficiency of the robot.
  • visual data in the robot workspace can be collected through vision sensors.
  • a first vision sensor and a second vision sensor are installed on the robot.
  • the first visual sensor is the forward-looking sensor of the robot, and the central axis of the viewing angle is parallel to the horizontal line;
  • the second visual sensor is the downward-looking sensor of the robot, and the central axis of its viewing angle is located below the horizontal line and intersects the horizontal line.
  • the above-mentioned process of S901 may be: controlling the first visual sensor and the second visual sensor to rotate, and collecting visual data within their respective visual fields during the rotating process.
  • since the line of sight of the first visual sensor is level, it can obtain a relatively large sensing range for sensing environmental information farther away in the area to be cleaned; since the line of sight of the second visual sensor is downward and can be aimed directly at the ground, the second visual sensor can more clearly perceive the environmental information of the nearby ground and can effectively make up for the blind spot of the first visual sensor.
  • the first vision sensor and the second vision sensor can be controlled to collect visual data within their respective fields of view, so that the robot can not only collect data within the far field of view, but can also use the second vision sensor to collect data in the blind area of the first vision sensor, which greatly expands the data collection range of the robot.
  • the first vision sensor and the second vision sensor can rotate and collect visual data within their respective fields of view during the rotation, so that with the rotation of the first vision sensor and the second vision sensor, the direction of the robot's field of vision is continuously adjusted, allowing the robot to collect visual data in a wider range and expanding the data collection range of the robot.
  • the rotation angles of the first vision sensor and the second vision sensor can be controlled according to actual requirements.
  • the rotation angle may be 360 degrees.
  • the robot is controlled to rotate and the visual data within its field of view is collected during the rotation, or the first vision sensor and the second vision sensor of the robot are controlled to rotate and the visual data within their respective fields of view is collected during the rotation. Through this technical solution, the data collection range of the robot is greatly expanded, so that the robot can actively identify visual data in a wider range and perform overall cleaning on the identified target dirt, thereby improving the cleaning efficiency of the robot.
  • the robot usually collects visual data within the field of view through a camera.
  • the target dirty position detected by the robot through the trained dirt detection network is calculated in the image coordinate system.
  • the method may further include: obtaining the first correspondence between the image coordinate system of the robot and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system; and converting the at least one target dirt position according to the first correspondence and the second correspondence.
  • FIG. 10 is a schematic structural diagram of a robot inspection and cleaning device according to an embodiment of the present application.
  • the apparatus may include: a collection module 100 , an identification module 101 and a control module 102 .
  • the acquisition module 100 is configured to collect visual data within the field of vision of the robot; the recognition module 101 is configured to detect the visual data through a preset network to obtain the object position and object type of the target object, wherein the preset network is obtained by training on original sample data and sample labeling data corresponding to the original sample data; the control module 102 is configured to control the robot to perform an inspection and cleaning task according to the object position and object type of the target object.
  • the preset network is a pre-trained neural network
  • the original sample data is visual sample data
  • the sample labeling data corresponding to the original sample data is visual sample data marked with the sample object position and the sample object type.
  • the robot inspection and cleaning device collects visual data within the field of vision of the robot, identifies the visual data through a pre-trained neural network, and obtains the object position and object type of the target object, and according to the The object position and object type control the robot to perform patrol cleaning tasks.
  • the robot can achieve field-of-vision coverage of the entire workspace through its own field of view and can actively identify the target objects existing in the field of view through the pre-trained neural network, so that the robot only needs to focus on the target objects in the workspace and perform inspection and cleaning tasks based on their location and type, which greatly improves the cleaning efficiency of the robot.
  • the pre-trained neural network includes a feature extraction layer, a feature fusion layer, and an object recognition layer;
  • the identification module 101 includes: a feature extraction unit 1011, a feature fusion unit 1012 and an identification unit 1013;
  • the feature extraction unit 1011 is configured to extract the multi-scale feature data in the visual data through the feature extraction layer;
  • the feature fusion unit 1012 is configured to perform feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data;
  • the identifying unit 1013 is configured to determine the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
  • the feature extraction layer includes a first feature extraction block and a second feature extraction block; the feature extraction unit 1011 is configured to extract the first scale feature data in the visual data through the first feature extraction block, and to extract the second scale feature data in the visual data through the second feature extraction block.
  • the first scale feature data is 13*13 scale feature data
  • the second scale feature data is 26*26 scale feature data.
  • the control module 102 is configured to select a target storage assembly and a target cleaning assembly according to the object type, control the robot to navigate to the object position, control the robot to clean the target object into the target storage assembly, and clean the cleaned area through the target cleaning assembly.
  • the control module 102 is configured to determine, according to the object type, whether the robot can pass the target object; when the target object cannot be crossed, an escape path is generated according to the object position and the target navigation point, and the robot is controlled to travel to the target navigation point according to the escape path.
  • the apparatus further includes: an acquisition module 103 and a conversion module 104.
  • the acquisition module 103 is configured to acquire the first correspondence between the image coordinate system of the robot and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system before the control module 102 controls the robot to perform the inspection and cleaning task according to the object position and the object type; the conversion module 104 is configured to convert the object position according to the first correspondence and the second correspondence.
  • the target object is dirt
  • the preset network is a preset dirt detection network; the dirt detection network is obtained by training on a visual semantic segmentation data set and a dirt data set, wherein the dirt data set includes the original sample data and the sample labeling data corresponding to the original sample data.
  • the inspection and cleaning device for a robot collects visual data within the field of view of the robot, detects the collected visual data through a preset contamination detection network to obtain the target contamination location and target contamination type, and controls the robot to perform inspection and cleaning tasks according to the target contamination position and type.
  • the robot can cover the entire workspace through its own field of view and can actively identify the target dirt in the field of view through the trained dirt detection network, so that the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning tasks based on the location and type of the target dirt, eliminating the need for full-path cleaning of the entire workspace and improving the cleaning efficiency of the robot.
  • the contamination detection network includes a downsampling layer and a deconvolution layer; the detection module 101 is configured to perform hierarchical downsampling on the visual data through the downsampling layer operation to obtain a multi-resolution intermediate feature map; perform a hierarchical deconvolution operation on the multi-resolution intermediate feature map through the deconvolution layer to obtain the target dirty position and target in the visual data. Dirt type.
  • the apparatus further includes: a network training module 105; Pre-training to obtain an initial contamination detection network; using the original sample data as the input of the initial contamination detection network, using the sample labeling data as the expected output of the initial contamination detection network, and using a preset loss The function continues to train the initial dirty detection network.
  • a network training module 105 Pre-training to obtain an initial contamination detection network; using the original sample data as the input of the initial contamination detection network, using the sample labeling data as the expected output of the initial contamination detection network, and using a preset loss The function continues to train the initial dirty detection network.
  • the apparatus further includes: a training data processing module 106; the training data processing module 106 is configured to use a preset loss function in the network training module 105 to continue the initial contamination detection network Data augmentation is performed on the dirty dataset before training.
  • the manner of performing data enhancement processing on the dirty data set includes at least one of the following: random cropping, horizontal flipping, and color dithering.
  • the control module 102 is configured to generate a target cleaning strategy according to the target dirt type, control the robot to navigate to the target dirt position, and clean the target dirt position using the target cleaning strategy.
  • the number of target objects is at least one; the apparatus further includes a path planning module 107, and the path planning module 107 is configured to perform path planning on the at least one object position to obtain a target cleaning path; the control module 102 is configured to control the robot to navigate to the at least one object position in sequence according to the target cleaning path and perform the inspection and cleaning task based on the object type corresponding to the current object position (a simple ordering sketch appears after this list).
  • the robot inspection and cleaning apparatus collects visual data within the field of view of the robot, detects the collected visual data through a preset contamination detection network to obtain at least one target dirt position and the target dirt type corresponding to each target dirt position, performs shortest-path planning on the at least one target dirt position to obtain a target cleaning path, controls the robot to navigate to the corresponding target dirt positions in sequence according to the target cleaning path, and performs the inspection and cleaning task based on the corresponding target dirt type.
  • the robot can cover the entire workspace through its own field of view, and can actively identify the target dirt within the field of view through the trained dirt detection network, so that the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning tasks based on the position and type of the target dirt; there is no need to perform full-path cleaning of the entire workspace, which greatly improves the cleaning efficiency of the robot.
  • the robot can also perform shortest path planning for the at least one target dirty position, so that the robot can navigate to multiple target dirty positions with the shortest path, which improves the cleaning efficiency of the robot.
  • the acquisition module 100 is configured to control the robot to rotate, and collect visual data within the field of view of the robot during the rotation.
  • the acquisition module 100 is configured to control the rotation of the robot based on the field of view of at least one sensor.
  • the acquisition module 100 is configured to control the first visual sensor and the second visual sensor to rotate, and collect visual data within their respective fields of view during the rotation;
  • the first visual sensor is the forward-looking sensor of the robot, and the central axis of its viewing angle is parallel to the horizontal line; the second visual sensor is the downward-looking sensor of the robot, and the central axis of its viewing angle is located below the horizontal line and intersects the horizontal line.
  • a robot is provided, the schematic diagram of which can be shown in FIG. 1 .
  • the robot may include one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, and the one or more programs include instructions for executing the robot inspection and cleaning method described in any of the above embodiments.
  • when the one or more processors execute the program, the following steps are implemented: collecting visual data within the field of view of the robot; identifying the visual data through a preset network to obtain the object position and object type of a target object, wherein the preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data; and controlling the robot to perform the inspection and cleaning task according to the object position and object type of the target object.
  • the preset network is a pre-trained neural network
  • the original sample data is visual sample data
  • the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
  • the pre-trained neural network includes a feature extraction layer, a feature fusion layer, and an object recognition layer; when the one or more processors execute the program, the following steps are further implemented: extracting multi-scale feature data from the visual data through the feature extraction layer; performing feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and determining the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
  • the feature extraction layer includes a first feature extraction block and a second feature extraction block; when the one or more processors execute the program, the following steps are further implemented: extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block.
  • the first scale feature data is 13*13 scale feature data
  • the second scale feature data is 26*26 scale feature data.
  • when the target object is garbage and/or dirt, the one or more processors further implement the following steps when executing the program: selecting a target storage assembly and a target cleaning assembly according to the object type; controlling the robot to navigate to the object position; and controlling the robot to sweep the target object into the target storage assembly and clean the swept area through the target cleaning assembly.
  • when the target object is an obstacle, the one or more processors further implement the following steps when executing the program: determining, according to the object type, whether the robot can cross the target object; if not, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
  • when the visual data is collected with the image coordinate system of the robot as the reference, the one or more processors further implement the following steps when executing the program: acquiring a first correspondence between the image coordinate system of the robot and the radar coordinate system and a second correspondence between the radar coordinate system and the world coordinate system; and converting the object position according to the first correspondence and the second correspondence.
  • the target object is dirt
  • the preset network is a preset dirt detection network
  • the dirt detection network is obtained by training on a visual semantic segmentation data set and a dirt data set, wherein the dirt data set includes the original sample data and the sample annotation data corresponding to the original sample data.
  • the contamination detection network includes a downsampling layer and a deconvolution layer; when the one or more processors execute the program, the following steps are further implemented: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and target dirt type in the visual data.
  • the one or more processors further implement the following steps when executing the program: pre-training the contamination detection network by using the visual semantic segmentation data set to obtain an initial contamination detection network;
  • the original sample data is used as the input of the initial contamination detection network, the sample annotation data is used as the expected output of the initial contamination detection network, and the initial contamination detection network continues to be trained using a preset loss function.
  • the one or more processors further implement the following step when executing the program: performing data enhancement processing on the dirty data set.
  • the manner of performing data enhancement processing on the dirt data set includes at least one of the following: random cropping, horizontal flipping, and color jittering.
  • when the one or more processors execute the program, the following steps are further implemented: generating a target cleaning strategy according to the target dirt type; controlling the robot to navigate to the target dirt position; and cleaning the target dirt position using the target cleaning strategy.
  • the number of target objects is at least one; when the one or more processors execute the program, the following steps are further implemented: performing path planning on the at least one object position to obtain a target cleaning path; and controlling the robot to navigate to the at least one object position in sequence according to the target cleaning path and perform the inspection and cleaning task based on the object type corresponding to the current object position.
  • the following steps are further implemented: controlling the robot to rotate, and collecting visual data within the field of view of the robot during the rotation.
  • the following steps are further implemented: controlling the rotation of the robot based on the field of view of the at least one sensor.
  • the one or more processors further implement the following steps when executing the program: controlling the first vision sensor and the second vision sensor to rotate, and collecting visual data within their respective fields of view during the rotation;
  • the first visual sensor is the forward-looking sensor of the robot, and the central axis of the viewing angle is parallel to the horizontal line
  • the second visual sensor is the downward-looking sensor of the robot, and the central axis of its viewing angle is located below the horizontal line and intersects the horizontal line.
  • a non-transitory computer-readable storage medium 140 containing computer-executable instructions 1401 is provided; when the computer-executable instructions are executed by one or more processors 141, the processor 141 is caused to perform the following steps:
  • collecting visual data within the field of view of the robot; identifying the visual data through a preset network to obtain the object position and object type of a target object, wherein the preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data; and controlling the robot to perform the inspection and cleaning task according to the object position and object type of the target object.
  • the preset network is a pre-trained neural network
  • the original sample data is visual sample data
  • the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
  • the pre-trained neural network includes a feature extraction layer, a feature fusion layer, and an object recognition layer; when the computer-executable instructions are executed by the processor, the following steps are further implemented: extracting multi-scale feature data from the visual data through the feature extraction layer; performing feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and determining the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
  • the feature extraction layer includes a first feature extraction block and a second feature extraction block; when the computer-executable instructions are executed by the processor, the following steps are further implemented: extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block.
  • the first scale feature data is 13*13 scale feature data
  • the second scale feature data is 26*26 scale feature data.
  • the computer-executable instructions, when executed by the processor, further implement the following steps: selecting a target storage assembly and a target cleaning assembly according to the object type; controlling the robot to navigate to the object position; and controlling the robot to sweep the target object into the target storage assembly and clean the swept area through the target cleaning assembly.
  • the computer-executable instructions, when executed by the processor, further implement the following steps: determining, according to the object type, whether the robot can cross the target object; if not, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
  • the following steps are further implemented: acquiring a first correspondence between the image coordinate system of the robot and the radar coordinate system and a second correspondence between the radar coordinate system and the world coordinate system; and converting the object position according to the first correspondence and the second correspondence.
  • the target object is dirt
  • the preset network is a preset dirt detection network
  • the dirt detection network is obtained by training on a visual semantic segmentation data set and a dirt data set, wherein the dirt data set includes the original sample data and the sample annotation data corresponding to the original sample data.
  • the contamination detection network includes a downsampling layer and a deconvolution layer; the computer-executable instructions, when executed by the processor, further implement the following steps: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and target dirt type in the visual data.
  • the following steps are further implemented: pre-training the contamination detection network by using the visual semantic segmentation data set to obtain an initial contamination detection network;
  • the original sample data is used as the input of the initial contamination detection network, the sample annotation data is used as the expected output of the initial contamination detection network, and the initial contamination detection network is continuously trained using a preset loss function.
  • the computer-executable instructions, when executed by the processor, further implement the following step: performing data enhancement processing on the dirt data set.
  • the manner of performing data enhancement processing on the dirt data set includes at least one of the following: random cropping, horizontal flipping, and color jittering.
  • the computer-executable instructions when executed by the processor, further implement the following steps: generating a target cleaning strategy according to the target soiling type; controlling the robot to navigate to the target soiling location, and adopting the target cleaning strategy The target dirty location is cleaned.
  • the number of target objects is at least one; when the computer-executable instructions are executed by the processor, the following steps are further implemented: performing path planning on the at least one object position to obtain a target cleaning path; and controlling the robot to navigate to the at least one object position in sequence according to the target cleaning path and perform the inspection and cleaning task based on the object type corresponding to the current object position.
  • the following steps are further implemented: controlling the robot to rotate, and collecting visual data within the field of view of the robot during the rotation.
  • the computer-executable instructions when executed by the processor, further implement the step of: controlling the robot to rotate based on the field of view of the at least one sensor.
  • the following steps are further implemented: controlling the first visual sensor and the second visual sensor to rotate, and collecting visual data within their respective fields of view during the rotation; wherein the first visual sensor is the forward-looking sensor of the robot and the central axis of its viewing angle is parallel to the horizontal line, and the second visual sensor is the downward-looking sensor of the robot and the central axis of its viewing angle is located below the horizontal line and intersects the horizontal line.
  • the robot inspection and cleaning apparatus, the robot, and the storage medium provided in the above embodiments can execute the robot inspection and cleaning method provided by any embodiment of the present application, and have the corresponding functional modules and effects for executing the method; for details not described here, reference may be made to the robot inspection and cleaning method provided by any embodiment of the present application.
  • Non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in various forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
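As referenced in the list above, the downsampling/deconvolution structure of the contamination detection network can be pictured roughly as in the following Python (PyTorch) sketch. This is a minimal illustration under stated assumptions: the channel widths, depth, class count, and names are not the network disclosed in this application.

```python
# Minimal sketch of a downsampling/deconvolution segmentation network of the kind
# described above. Channel widths, depth and class count are illustrative assumptions.
import torch.nn as nn

def down_block(c_in, c_out):
    # one "hierarchical downsampling" stage: convolution, then halve the resolution
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def up_block(c_in, c_out):
    # one "hierarchical deconvolution" stage: transpose convolution doubles the resolution
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 2, stride=2),
        nn.ReLU(inplace=True),
    )

class DirtSegNet(nn.Module):
    def __init__(self, num_classes=4):  # background plus several dirt types (assumed count)
        super().__init__()
        self.d1, self.d2, self.d3 = down_block(3, 32), down_block(32, 64), down_block(64, 128)
        self.u1, self.u2, self.u3 = up_block(128, 64), up_block(64, 32), up_block(32, 32)
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel dirt type

    def forward(self, x):
        x = self.d3(self.d2(self.d1(x)))  # multi-resolution intermediate feature maps
        x = self.u3(self.u2(self.u1(x)))  # restore the input resolution
        return self.head(x)               # logits; argmax gives a dirt mask per type

# The target dirt position can then be taken from the predicted mask, for example as the
# centroid of each connected component of a dirt class.
```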
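The data enhancement operations named above (random cropping, horizontal flipping, color jittering) could be composed as in the following sketch; the crop size and jitter strengths are illustrative assumptions, not values fixed by this application.

```python
# A possible augmentation pipeline for the dirt data set, built from the three operations
# named in the list above. Parameter values are assumptions chosen only for illustration.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomCrop(416, pad_if_needed=True),            # random cropping
    transforms.RandomHorizontalFlip(p=0.5),                    # horizontal flipping
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),           # color jittering
    transforms.ToTensor(),
])
# augmented = augment(pil_image)
# Note: for segmentation labels, the same geometric transforms (crop, flip) would also have
# to be applied to the annotation masks.
```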
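For the path planning over several detected dirt positions referenced above, one simple way to approximate a short cleaning path is a greedy nearest-neighbour ordering, sketched below. A real implementation might instead use travel costs from the navigation map; the function and variable names here are illustrative only.

```python
# Greedy nearest-neighbour ordering of detected dirt positions as one possible way to
# obtain a short target cleaning path. Straight-line distance is an assumption; path
# costs from the navigation map could be substituted.
import math

def order_cleaning_targets(robot_xy, dirt_positions):
    """Return the dirt positions ordered so that each next target is the closest remaining one."""
    remaining = list(dirt_positions)
    current, ordered = robot_xy, []
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        ordered.append(nxt)
        current = nxt
    return ordered

# Example: order_cleaning_targets((0, 0), [(5, 1), (1, 1), (3, 4)]) -> [(1, 1), (3, 4), (5, 1)]
```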

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an inspection and cleaning method and apparatus of a robot, a robot, and a storage medium. The inspection and cleaning method of a robot comprises: collecting visual data in the field of view of the robot; performing detection on the visual data by means of a preset network and obtaining an object position and an object type of a target object, wherein the preset network is obtained by means of training of original sample data and sample annotation data corresponding to the original sample data; and controlling the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.

Description

Robot inspection and cleaning method, device, robot and storage medium
This application claims priority to the Chinese patent application No. 202011182064.X filed with the China Patent Office on October 29, 2020, the Chinese patent application No. 202011182069.2 filed with the China Patent Office on October 29, 2020, and the Chinese patent application No. 202011186175.8 filed with the China Patent Office on October 29, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of robotics, for example, to a robot inspection and cleaning method and apparatus, a robot, and a storage medium.
Background Art
With the rapid development of automation technology and artificial intelligence, robots are widely used in many scenarios. Taking cleaning scenarios as an example, a cleaning robot can complete simple and repetitive cleaning tasks through unmanned driving technology, greatly reducing labor costs and automating cleaning work.
When a robot performs inspection and cleaning, it generally travels according to a pre-planned navigation map and performs full-coverage cleaning of the ground while traveling. However, this inspection and cleaning approach results in low cleaning efficiency of the robot.
Summary of the Invention
The present application provides a robot inspection and cleaning method and apparatus, a robot, and a storage medium.
A robot inspection and cleaning method is provided, including:
collecting visual data within the field of view of a robot;
detecting the visual data through a preset network to obtain an object position and an object type of a target object, wherein the preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data; and
controlling the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.
A robot inspection and cleaning apparatus is provided, including:
an acquisition module, configured to collect visual data within the field of view of a robot;
a detection module, configured to detect the visual data through a preset network to obtain an object position and an object type of a target object, wherein the preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data; and
a control module, configured to control the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.
A robot is provided, including at least one processor, a memory, and at least one program, wherein the at least one program is stored in the memory and executed by the at least one processor, and the at least one program includes instructions for executing the above robot inspection and cleaning method.
A non-volatile computer-readable storage medium containing computer-executable instructions is provided; when the computer-executable instructions are executed by at least one processor, the at least one processor is caused to execute the above robot inspection and cleaning method.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a robot according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a robot inspection and cleaning method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the principle by which a robot recognizes visual data according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a contamination detection network according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of a training method for a contamination detection network according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a robot inspection and cleaning apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of another robot inspection and cleaning apparatus according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of another robot inspection and cleaning apparatus according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another robot inspection and cleaning apparatus according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The robot inspection and cleaning method provided by the embodiments of the present application can be applied to the robot shown in FIG. 1. As shown in FIG. 1, the robot may include a sensor 10, a controller 11, and an execution assembly 12. The sensor 10 includes perception sensors and positioning sensors mounted on the robot body and is used to collect visual data within the field of view; it may be one or more sensors such as different types of cameras, lidar, infrared ranging, ultrasonic sensing, an IMU (Inertial Measurement Unit), and an odometer. The controller 11 may include a chip and a control circuit; it mainly receives the visual data collected by the sensor 10, actively identifies target objects (such as garbage and obstacles) within the robot's field of view, and performs the inspection and cleaning task based on the position and type of the target object. The execution assembly 12 includes a traveling assembly and a cleaning assembly, and is configured to receive control instructions from the controller 11, navigate to the position of the target object along the planned travel path, and perform the cleaning operation.
The technical solutions in the embodiments of the present application are described through the following embodiments in conjunction with the accompanying drawings.
The execution subject of the following method embodiments may be a robot inspection and cleaning apparatus, which may be implemented as part or all of the above robot through software, hardware, or a combination of software and hardware. The following method embodiments are described by taking the robot as the execution subject as an example.
FIG. 2 is a schematic flowchart of a robot inspection and cleaning method according to an embodiment of the present application. This embodiment relates to the process of how the robot performs inspection and cleaning of the workspace. As shown in FIG. 2, the method may include:
S201: Collect visual data within the field of view of the robot.
To automate the cleaning work, the area to be cleaned can be inspected and cleaned by the robot. The area to be cleaned refers to the area in which the robot needs to perform inspection and cleaning, and it may correspond to the environment in which the robot is located. The robot can generate, from its own field of view and an electronic map of the area to be cleaned, a view path that covers the area to be cleaned with its field of view. The electronic map includes, but is not limited to, a grid map, a topological map, and a vector map. The robot travels along this view path, actively collects visual data within its field of view while traveling, and actively identifies and cleans up target objects present in the visual data, thereby realizing active inspection of the area to be cleaned.
The robot is provided with a vision sensor, so that the robot can collect data of the area within its field of view through the vision sensor, thereby obtaining visual data within the field of view of the vision sensor. When the sensor types of the vision sensors used to collect visual data are different, the types of visual data collected are also different. The visual data may be image data, video data, or point cloud data. For example, the vision sensor may be a camera. The robot may continuously photograph the area within its field of view through the camera to obtain a surveillance video, and use the surveillance video as the visual data to be recognized. The robot may also directly photograph the area within the field of view through the camera to obtain a captured image, and use the captured image as the visual data to be recognized.
S202: Identify the visual data through a preset network to obtain the object position and object type of a target object.
The preset network is obtained by training on original sample data and sample annotation data corresponding to the original sample data.
After obtaining the visual data within the field of view, the robot inputs the visual data into the preset network, identifies the target object in the visual data through the preset network, and outputs the object position and object type of the target object. The target object may be garbage and/or an obstacle. When the target object is garbage, the object type may include multiple kinds of garbage, such as plastic bags, napkins, paper scraps, fruit peels, and vegetable leaves. The object type may also include the result of classifying garbage according to garbage classification criteria, such as recyclable waste, kitchen waste, hazardous waste, and other waste. When the target object is an obstacle, the object type may include large-sized obstacles, small-sized obstacles, dynamic obstacles, static obstacles, semi-static obstacles, and so on.
The training data of the preset network may be an original sample data set collected according to actual training requirements, or an original sample data set in a training database.
S203: Control the robot to perform the inspection and cleaning task according to the object position and object type of the target object.
After determining that a target object exists within the field of view, the robot can perform the inspection and cleaning task in a targeted manner based on the object position and object type of the target object.
The robot inspection and cleaning method provided by the embodiments of the present application collects visual data within the field of view of the robot, identifies the visual data through a preset network to obtain the object position and object type of a target object, and controls the robot to perform the inspection and cleaning task according to the object position and object type of the target object. During inspection and cleaning, the robot can cover the entire workspace with its own field of view, and the robot can actively identify target objects within the field of view through the preset network, so that the robot only needs to focus on the target objects in the workspace and perform targeted inspection and cleaning tasks based on their positions and types, without performing full-path cleaning of the entire workspace, which greatly improves the cleaning efficiency of the robot.
In an embodiment, the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types. FIG. 3 is a schematic flowchart of another robot inspection and cleaning method according to an embodiment of the present application. As shown in FIG. 3, the method may include:
S301: Collect visual data within the field of view of the robot.
S302: Identify the visual data through a pre-trained neural network to obtain the object position and object type of a target object.
The pre-trained neural network is obtained by training on visual sample data and the visual sample data annotated with sample object positions and sample object types.
The pre-trained neural network may be a network model that is established and trained in advance and then deployed on the robot to identify the target object in the visual data and output the object position and object type of the target object. The pre-trained neural network may be built on networks such as You Only Look Once (YOLO), RetinaNet, Single Shot MultiBox Detector (SSD), or Faster Region-based Convolutional Neural Networks (Faster R-CNN).
After obtaining the visual data within the field of view, the robot inputs the visual data into the pre-trained neural network, identifies the target object in the visual data through the pre-trained neural network, and outputs the object position and object type of the target object.
The training data of the pre-trained neural network may be a visual sample data set collected according to actual training requirements, or a visual sample data set in a training database. The visual sample data set includes the visual sample data to be identified and the visual sample data annotated with sample object positions and sample object types. In this embodiment, the sample objects may include garbage and obstacles on the ground, and the garbage may include plastic bags, napkins, paper scraps, fruit peels, and so on. After the visual sample data set for training is obtained, the visual sample data is used as the input of the pre-trained neural network, the sample object positions and sample object types present in the visual sample data are used as the expected output of the pre-trained neural network, and the pre-trained neural network is trained with a corresponding loss function until the convergence condition of the loss function is reached, thereby obtaining the above pre-trained neural network.
Taking YOLOv3 as the base network of the pre-trained neural network as an example, first, a clustering operation may be performed on the selected visual sample data set to obtain reference frames (anchors) with different aspect ratios and different sizes. For example, taking a common garbage data set as an example, a k-means clustering operation is performed on the garbage data set to learn reference frames with different aspect ratios and different sizes from the garbage data set.
Taking the visual data collected by the robot as an image to be detected as an example, the robot may input the image to be detected into the pre-trained neural network. The pre-trained neural network may extract a feature map corresponding to the image to be detected through the darknet sub-network and, for each grid cell on the feature map, predict the description information of the reference frames with different aspect ratios and different sizes, where the description information includes the confidence of the reference frame, the position information of the reference frame, and the category information of the reference frame. Then, reference frames with low probability are filtered out based on the confidence and category information of the reference frames, and non-maximum suppression is performed on the remaining reference frames to obtain the final detection result, which is the object position and object type of the target object in the visual data.
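As one possible reading of the k-means clustering step above, reference-frame (anchor) shapes can be clustered from the annotated box sizes with an IoU-based k-means, as in the following sketch; the anchor count, iteration scheme, and helper names are assumptions for illustration rather than the procedure fixed by this application.

```python
# Sketch of k-means clustering of labelled box sizes into anchor (reference-frame) shapes,
# using the 1 - IoU style assignment that is customary for YOLO anchors.
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes (N, 2) and anchors (K, 2) when both are aligned at the origin."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(box_wh, k=6, iters=100, seed=0):
    box_wh = np.asarray(box_wh, dtype=float)
    rng = np.random.default_rng(seed)
    anchors = box_wh[rng.choice(len(box_wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(box_wh, anchors), axis=1)   # nearest anchor by IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = np.median(box_wh[assign == j], axis=0)
    return anchors

# box_wh: (N, 2) array of annotated garbage-box widths and heights from the training set;
# the resulting k anchors are then used as the reference frames of the detector.
```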
S303: Control the robot to perform the inspection and cleaning task according to the object position and the object type.
As an optional implementation, when the target object is garbage and/or dirt, S303 may include:
S3031: Select a target storage assembly and a target cleaning assembly according to the object type.
After the object type of the target object is determined, a target storage assembly and a target cleaning assembly can be selected for the object type. The robot may be provided with a recyclable-waste storage assembly, a kitchen-waste storage assembly, a hazardous-waste storage assembly, and an other-waste storage assembly. In this way, after obtaining the object type of the target object, the robot can select the target storage assembly from all the provided storage assemblies based on the object type. For example, when the object type of the target object obtained by the robot is vegetable leaves, the robot can select the kitchen-waste storage assembly as the target storage assembly.
Target objects of different object types contaminate the ground to different degrees. Therefore, when the object type of the target cleaning object is obtained, the target cleaning assembly can be selected from all the provided cleaning assemblies based on the object type. The robot may be provided with a vacuum assembly, a dry-mopping assembly, a wet-mopping assembly, a drying assembly, a water-absorbing assembly, and the like. For example, target objects such as vegetable leaves and fruit peels may leave stains on the ground; therefore, after the robot sweeps such target objects into the kitchen-waste storage assembly, the cleaned area also needs to be wiped with the wet-mopping assembly and then dried with the drying assembly, so the robot can select the wet-mopping assembly and the drying assembly as the target cleaning assemblies.
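The selection of storage and cleaning assemblies by object type can be pictured as a simple lookup, as in the sketch below; the type names and assembly names are assumptions chosen to mirror the examples above, not a defined interface of the robot.

```python
# Illustrative mapping from recognised object type to storage and cleaning assemblies.
# All names below are assumptions for illustration only.
COMPONENT_TABLE = {
    "vegetable_leaf": {"storage": "kitchen_waste_bin", "cleaning": ["wet_mop", "dryer"]},
    "fruit_peel":     {"storage": "kitchen_waste_bin", "cleaning": ["wet_mop", "dryer"]},
    "plastic_bag":    {"storage": "recyclable_bin",    "cleaning": ["vacuum"]},
    "napkin":         {"storage": "other_waste_bin",   "cleaning": ["vacuum"]},
}

def select_components(object_type):
    entry = COMPONENT_TABLE.get(object_type,
                                {"storage": "other_waste_bin", "cleaning": ["vacuum"]})
    return entry["storage"], entry["cleaning"]
```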
S3032: Control the robot to navigate to the object position, control the robot to sweep the target object into the target storage assembly, and clean the swept area through the target cleaning assembly.
After selecting the target storage assembly and the target cleaning assembly, the robot can plan a cleaning route based on the object position of the target object, travel along the cleaning route to the object position, sweep the target object into the target storage assembly at that position, and clean the swept area with the selected target cleaning assembly based on the corresponding cleaning strategy.
As another optional implementation, when the target object is an obstacle, S303 may include: determining, according to the object type, whether the robot can cross the target object; if the robot cannot cross the target object, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
The object type of the target object may include large-sized obstacles, small-sized obstacles, and the like. For small-sized obstacles, the robot's special chassis structure allows it to cross them; for large-sized obstacles, the obstacle is too large for the robot to cross and continue forward, which may leave the robot trapped. Therefore, when performing the inspection and cleaning task, the robot needs to determine, according to the object type of the identified target object, whether it can cross the target object. That is, when the identified target object is a large-sized obstacle, the robot cannot cross it and move forward; in this case, the robot enters an escape mode to avoid the large-sized obstacle. To continue the inspection and cleaning task, the robot selects a path point from the initial cleaning path as the target navigation point, generates an escape path based on the position information of the large-sized obstacle and the target navigation point, and travels to the target navigation point along the escape path.
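One possible realisation of choosing the target navigation point and generating the escape path is sketched below; the selection rule (the first waypoint with enough clearance from the obstacle) and the plan_path planner are assumptions, since the application does not fix these details.

```python
# Illustrative escape-mode helper. `plan_path` stands in for the robot's own navigation
# planner and is not an API of this application; the clearance rule is an assumption.
import math

def escape(initial_path, obstacle_xy, robot_xy, clearance=1.0, plan_path=None):
    """Pick a target navigation point from the initial cleaning path and plan a detour to it."""
    for waypoint in initial_path:
        if math.dist(waypoint, obstacle_xy) > clearance:
            target_nav_point = waypoint
            break
    else:
        return None  # no suitable waypoint found; remain in escape mode
    if plan_path is None:
        return target_nav_point
    # detour from the current pose to the chosen waypoint while avoiding the obstacle
    return plan_path(robot_xy, target_nav_point, avoid=[obstacle_xy])
```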
The robot inspection and cleaning method provided by the embodiments of the present application collects visual data within the field of view of the robot, identifies the visual data through a pre-trained neural network to obtain the object position and object type of a target object, and controls the robot to perform the inspection and cleaning task according to the object position and object type. During inspection and cleaning, the robot can cover the entire workspace with its own field of view, and the robot can actively identify target objects within the field of view through the pre-trained neural network, so that the robot only needs to focus on the target objects in the workspace and perform targeted inspection and cleaning tasks based on their positions and types, without performing full-path cleaning of the entire workspace, which greatly improves the cleaning efficiency of the robot.
In an embodiment, a process of identifying the visual data within the field of view through the pre-trained neural network is further provided. On the basis of the above embodiments, optionally, the pre-trained neural network may include a feature extraction layer, a feature fusion layer, and an object recognition layer. As shown in FIG. 4, S302 may include:
S401: Extract multi-scale feature data from the visual data through the feature extraction layer.
The robot may choose a deep learning network as the feature extraction layer, which may be a darknet network or another network structure. The robot inputs the collected visual data into the pre-trained neural network and extracts features from the visual data through the feature extraction layer of the pre-trained neural network to obtain multi-scale feature data. Each scale of feature data includes the description information of the reference frames corresponding to each grid cell at that scale. The description information includes the confidence of the reference frame, the position information of the reference frame, and the category information of the reference frame. The reference frames can be obtained by performing a clustering operation on the training data of the pre-trained neural network.
To improve the recognition speed of the pre-trained neural network, the number of feature extraction blocks in the feature extraction layer can be reduced. Optionally, the feature extraction layer includes two feature extraction blocks, namely a first feature extraction block and a second feature extraction block. Optionally, S401 may be: extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block. The first-scale feature data and the second-scale feature data can be any two of 13*13-scale feature data, 26*26-scale feature data, and 52*52-scale feature data.
In practical applications, to improve the recognition speed of the pre-trained neural network, optionally, the first-scale feature data may be 13*13-scale feature data, and the second-scale feature data may be 26*26-scale feature data.
S402: Perform feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data.
To improve the ability of the pre-trained neural network to recognize small targets, the feature extraction layer inputs the extracted multi-scale feature data into the feature fusion layer, and the feature fusion layer fuses the multi-scale feature data. Optionally, when the feature data extracted from the visual data through the feature extraction layer is 13*13-scale feature data and 26*26-scale feature data, the robot fuses the 13*13-scale feature data and the 26*26-scale feature data through the feature fusion layer to obtain the fused feature data.
S403: Determine the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
The feature extraction layer inputs the extracted multi-scale feature data into the object recognition layer, and the feature fusion layer also inputs the fused feature data into the object recognition layer; the object recognition layer processes the multi-scale feature data and the fused feature data to obtain the object position and object type of the target object. The object recognition layer can perform operations such as coordinate transformation and coordinate scaling on the reference frames in the multi-scale feature data and the fused feature data, and restore these reference frames to the original data to obtain restored reference frames. Then, non-maximum suppression is performed on the restored reference frames to filter out redundant reference frames, and the description information of the remaining reference frames is output, thereby obtaining the object position and object type of the target object.
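The non-maximum suppression applied by the object recognition layer typically follows the standard greedy form sketched below; the confidence and IoU thresholds are illustrative assumptions rather than values specified by this application.

```python
# Standard greedy non-maximum suppression over restored reference frames.
# Threshold values are illustrative assumptions.
import numpy as np

def iou(box, boxes):
    """box: (4,) [x1, y1, x2, y2]; boxes: (N, 4). IoU of box with each row of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, score_thr=0.3, iou_thr=0.45):
    """Keep the highest-scoring boxes and drop overlapping, lower-scoring ones."""
    keep_mask = scores >= score_thr              # filter out low-confidence reference frames
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps < iou_thr]
    return boxes[keep], scores[keep]
```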
The above process of identifying visual data through the pre-trained neural network is described by taking the pre-trained neural network shown in FIG. 5 as an example. The robot inputs the collected visual data within the field of view into the feature extraction layer 501 of the pre-trained neural network, extracts features from the visual data through the first feature extraction block 5011 of the feature extraction layer 501 to obtain 13*13-scale feature data, and extracts features from the visual data through the second feature extraction block 5012 of the feature extraction layer to obtain 26*26-scale feature data. Next, the robot inputs the 13*13-scale feature data and the 26*26-scale feature data into the feature fusion layer 502 of the pre-trained neural network, and fuses them through the feature fusion layer 502 to obtain fused feature data. The robot then inputs the 13*13-scale feature data, the 26*26-scale feature data, and the fused feature data into the object recognition layer 503 of the pre-trained neural network, which performs coordinate transformation, coordinate scaling, and non-maximum suppression on them, thereby identifying the target object in the visual data and outputting the object position and object type of the target object.
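In the spirit of FIG. 5, the two feature extraction blocks, the fusion layer, and the prediction heads could be wired roughly as in the following PyTorch sketch. The backbone depth, the channel widths, which block yields which scale, and the anchor and class counts are all illustrative assumptions, not the disclosed network.

```python
# Skeletal two-scale detector: a backbone yielding 26x26 and 13x13 feature maps (for a
# 416x416 input), a fusion layer that upsamples and concatenates, and two prediction heads.
import torch
import torch.nn as nn

def conv(c_in, c_out, stride=1):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.1, inplace=True))

class TwoScaleDetector(nn.Module):
    def __init__(self, num_anchors=3, num_classes=4):       # assumed counts
        super().__init__()
        out_ch = num_anchors * (5 + num_classes)             # box, confidence and class terms
        self.stem   = nn.Sequential(conv(3, 32), conv(32, 64, 2), conv(64, 128, 2),
                                    conv(128, 256, 2), conv(256, 256, 2))   # /16 -> 26x26
        self.block2 = conv(256, 512, 2)                       # /32 -> 13x13
        self.up     = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse   = conv(512 + 256, 256)                    # feature fusion layer
        self.head13 = nn.Conv2d(512, out_ch, 1)               # predictions on the 13x13 scale
        self.head26 = nn.Conv2d(256, out_ch, 1)               # predictions on the fused 26x26 scale

    def forward(self, x):                                     # x: (B, 3, 416, 416) assumed
        f26 = self.stem(x)                                    # 26x26 feature data
        f13 = self.block2(f26)                                # 13x13 feature data
        fused = self.fuse(torch.cat([self.up(f13), f26], dim=1))
        return self.head13(f13), self.head26(fused)           # decoded later into positions and types
```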
In this embodiment, the pre-trained neural network can perform feature fusion on the multi-scale feature data in the visual data and recognize the target object based on the fused feature data and the multi-scale feature data, thereby improving the recognition effect of the robot. Meanwhile, the feature extraction layer of the pre-trained neural network includes only two feature extraction blocks; compared with a feature extraction layer with three feature extraction blocks, this reduces the number of feature extraction blocks while still meeting the recognition requirements of the robot, thereby improving the recognition speed of the robot.
在实际应用中,通常机器人通过摄像机来采集视野范围内的视觉数据。此时,机器人通过预训练神经网络识别出的目标对象的对象位置是以图像坐标系 计算得到的。针对此情况,即在所述视觉数据是以所述机器人的图像坐标系为基准所采集的情况下,在上述实施例的基础上,可选的,在上述S303之前,该方法还可以包括:获取所述机器人的图像坐标系和雷达坐标系之间的第一对应关系以及所述雷达坐标系与世界坐标系之间的第二对应关系;根据所述第一对应关系和第二对应关系,对所述对象位置进行转换。In practical applications, the robot usually collects visual data within the field of view through a camera. At this time, the object position of the target object recognized by the robot through the pre-trained neural network is calculated in the image coordinate system. In view of this situation, that is, when the visual data is collected based on the image coordinate system of the robot, on the basis of the foregoing embodiment, optionally, before the foregoing S303, the method may further include: Obtain the first correspondence between the image coordinate system of the robot and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system; according to the first correspondence and the second correspondence, Transform the object position.
可选的,上述获取所述机器人的图像坐标系和雷达坐标系之间的第一对应关系可以包括:分别获取所述机器人在像素坐标系和雷达坐标系下针对同一待采集对象采集的第一数据和第二数据;将所述第一数据和所述第二数据进行匹配,得到多组匹配的特征点;根据所述多组匹配的特征点,确定所述机器人的图像坐标系和雷达坐标系之间的第一对应关系。Optionally, the obtaining of the first correspondence between the image coordinate system of the robot and the radar coordinate system may include: respectively obtaining the first correspondence between the robot's pixel coordinate system and the radar coordinate system for the same object to be collected. data and second data; match the first data and the second data to obtain multiple sets of matched feature points; determine the image coordinate system and radar coordinates of the robot according to the multiple sets of matched feature points The first correspondence between the systems.
可以预先将待采集对象设置在一墙角上。机器人上设置有摄像机和激光雷达,机器人分别控制摄像机和激光雷达从不同角度对设置在墙角上的待采集对象进行数据采集,从而得到第一数据和第二数据。接着,分别检测第一数据和第二数据中的特征点,并将第一数据和第二数据中的特征点进行匹配,得到多组匹配的特征点。通常,需要确定出四组匹配的特征点甚至更多。再接着,通过多组匹配的特定点,建立相应的方程组,通过求解方程组即可得到机器人的图像坐标系和雷达坐标系之间的对应关系。The object to be collected can be set on a corner in advance. The robot is provided with a camera and a laser radar, and the robot controls the camera and the laser radar to collect data from different angles of the object to be collected set on the corner of the wall, thereby obtaining the first data and the second data. Next, the feature points in the first data and the second data are respectively detected, and the feature points in the first data and the second data are matched to obtain multiple sets of matched feature points. Usually, four sets of matching feature points or more need to be determined. Then, through multiple sets of matching specific points, the corresponding equations are established, and the correspondence between the robot's image coordinate system and the radar coordinate system can be obtained by solving the equations.
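A sketch of one way to solve for the image-to-radar correspondence from the matched point pairs is shown below, assuming the radar (lidar) points are available as 3D coordinates and the camera intrinsic matrix is known. OpenCV's solvePnP is used here as one possible solver; the application itself only states that a system of equations built from at least four matched pairs is solved.

```python
import numpy as np
import cv2

def image_to_radar_extrinsic(radar_points, pixel_points, camera_matrix, dist_coeffs=None):
    """Estimate the rotation/translation mapping radar (lidar) coordinates into the
    camera frame from at least four matched point pairs (a sketch, not the disclosed method)."""
    radar_points = np.asarray(radar_points, dtype=np.float64)   # N x 3, radar frame
    pixel_points = np.asarray(pixel_points, dtype=np.float64)   # N x 2, pixel frame
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)                               # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(radar_points, pixel_points, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("solvePnP failed; check the matched point pairs")
    rotation, _ = cv2.Rodrigues(rvec)
    return rotation, tvec  # x_camera = rotation @ x_radar + tvec
```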
In this embodiment, the robot converts the obtained object position using the first correspondence between its image coordinate system and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system, so the final actual position of the target object is more accurate; controlling the robot to perform the inspection and cleaning task based on this accurate position improves the robot's cleaning precision and cleaning efficiency.
In one embodiment, the target object is dirt and the preset network is a preset dirt detection network; the dirt detection network is obtained by training with a visual semantic segmentation dataset and a dirt dataset, where the dirt dataset includes the original sample data and the sample annotation data corresponding to the original sample data. FIG. 6 is a schematic flowchart of another robot inspection and cleaning method provided by an embodiment of the present application. As shown in FIG. 6, the method may include:
S601: Collect visual data within the robot's field of view.
S602: Detect the visual data through the preset dirt detection network to obtain a target dirt position and a target dirt type.
The dirt detection network is obtained by training with a visual semantic segmentation dataset and a dirt dataset, where the dirt dataset includes original sample data and the sample annotation data corresponding to the original sample data.
The dirt detection network is a deep learning model that may be built in advance, trained with the visual semantic segmentation dataset and the dirt dataset, and then deployed on the robot to detect target dirt present in the visual data. The visual semantic segmentation dataset may be the Cityscapes dataset; the sample annotation data refers to original sample data in which the sample dirt position and sample dirt type have been annotated. The dirt detection network may be built on networks such as the fully convolutional harmonic dense network (Fully Convolutional Harmonic Dense Net, FCHarDNet), U-Net, V-Net, or the pyramid scene parsing network (Pyramid Scene Parsing Net, PSPNet).
After obtaining the visual data within its field of view, the robot feeds the visual data into the trained dirt detection network, which detects the target dirt present in the visual data and outputs the target dirt position and target dirt type. The target dirt type may include liquid dirt and solid dirt. Optionally, the robot may extract dirt features from the visual data through the dirt detection network and determine the target dirt type from those features. The dirt features may include dirt particle size and dirt transparency (dirt transparency refers to how much light the dirt transmits). When the dirt features satisfy a preset condition, the target dirt type is determined to be liquid dirt; otherwise, it is determined to be solid dirt. The preset condition includes: the dirt particle size is larger than a preset particle size, and the dirt transparency is greater than a preset transparency.
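A minimal sketch of the liquid/solid decision rule described above is given below; the feature names and threshold values are illustrative placeholders, not disclosed values.

```python
def classify_dirt(particle_size, transparency,
                  size_threshold=5.0, transparency_threshold=0.6):
    # Liquid dirt tends to appear as a large, translucent region; anything else
    # is treated as solid dirt. Thresholds are placeholders, not disclosed values.
    if particle_size > size_threshold and transparency > transparency_threshold:
        return "liquid"
    return "solid"
```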
As an optional implementation, the dirt detection network may include a downsampling layer and a deconvolution layer. In this case, on the basis of the above embodiment, S602 may optionally include: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and target dirt type in the visual data.
The network structure of the dirt detection network may be as shown in FIG. 7, from which it can be seen that the network includes N downsampling layers and N deconvolution layers. The input layer in FIG. 7 receives the visual data, and the output layer outputs the target dirt position and target dirt type in the visual data. First, the robot feeds the collected visual data into the input layer, and the N downsampling layers perform hierarchical downsampling on it to extract dirt features and produce feature maps at different resolutions. The N deconvolution layers then perform hierarchical deconvolution on these multi-resolution feature maps until the last deconvolution layer has finished, and the output layer outputs the target dirt position and target dirt type in the visual data.
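The sketch below illustrates, in PyTorch, an encoder-decoder of this shape with N=4 as in FIG. 7: hierarchical downsampling followed by hierarchical deconvolution back to the input resolution. It is only an illustration of the layer arrangement, not the disclosed FCHarDNet-style network, and the channel widths are assumptions.

```python
import torch
import torch.nn as nn

class DirtSegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=3, base=16, levels=4):
        super().__init__()
        # Hierarchical downsampling: each stage halves resolution and widens channels.
        downs, ch = [], in_channels
        for i in range(levels):
            out = base * (2 ** i)
            downs.append(nn.Sequential(
                nn.Conv2d(ch, out, 3, stride=2, padding=1),
                nn.BatchNorm2d(out), nn.ReLU(inplace=True)))
            ch = out
        self.downs = nn.ModuleList(downs)
        # Hierarchical deconvolution: each stage doubles resolution back.
        ups = []
        for i in reversed(range(levels)):
            out = base * (2 ** (i - 1)) if i > 0 else base
            ups.append(nn.Sequential(
                nn.ConvTranspose2d(ch, out, 4, stride=2, padding=1),
                nn.BatchNorm2d(out), nn.ReLU(inplace=True)))
            ch = out
        self.ups = nn.ModuleList(ups)
        self.head = nn.Conv2d(ch, num_classes, 1)  # per-pixel dirt class scores

    def forward(self, x):
        for down in self.downs:
            x = down(x)          # multi-resolution intermediate feature maps
        for up in self.ups:
            x = up(x)            # hierarchical deconvolution back to input size
        return self.head(x)      # logits at the input resolution
```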
In practical applications, to better focus the dirt detection network's attention on dirt regions, the dirt detection network may optionally further include an attention gate block. In that case, the hierarchical deconvolution of the multi-resolution intermediate feature maps through the deconvolution layer may be performed as follows: the attention gate block enhances and suppresses the multi-resolution intermediate feature maps layer by layer while the deconvolution operation is carried out.
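Below is a sketch of an attention gate in the style of Attention U-Net, included only to illustrate how intermediate feature maps could be enhanced and suppressed layer by layer; the application does not specify this exact formulation, and the assumption that the feature map and gating signal share a spatial size is made for brevity.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights an intermediate feature map with a gating signal so that
    dirt regions are enhanced and background responses are suppressed."""
    def __init__(self, feat_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(feat_channels, inter_channels, 1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, 1)
        self.psi = nn.Conv2d(inter_channels, 1, 1)

    def forward(self, feature, gate):
        # feature and gate are assumed to share spatial size here for brevity.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(feature) + self.phi(gate))))
        return feature * attn  # values near 1 enhance, values near 0 suppress
```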
FIG. 7 merely shows, as an example, the case where the number N of downsampling layers and deconvolution layers in the dirt detection network is 4; this embodiment does not limit that number, and N may be set according to the needs of the actual application.
Implementing the upsampling of the multi-resolution intermediate feature maps with deconvolution layers only requires deconvolving the intermediate feature maps with the convolution kernels of those layers; compared with an upsampling layer based on bilinear interpolation, this greatly shortens the time needed for dirt detection and improves its efficiency.
S603: Control the robot to perform the inspection and cleaning task according to the target dirt position and target dirt type.
After determining that target dirt exists within the field of view, the robot navigates to the target dirt position and cleans it in a targeted manner based on the target dirt type.
Optionally, S603 may proceed as follows: generate a target cleaning strategy according to the target dirt type; control the robot to navigate to the target dirt position; and clean the target dirt position using the target cleaning strategy.
The robot may generate the target cleaning strategy used for cleaning according to the obtained target dirt type. When the target dirt type is liquid dirt, the liquid can first be absorbed and the floor then wiped with a dry mop; accordingly, the target cleaning strategy generated by the robot may be to first absorb the liquid with a water-absorbing assembly and then wipe the floor with a dry-mop assembly. When the target dirt type is solid dirt, the solid can be swept up and the floor then wiped with a wet mop; accordingly, the target cleaning strategy may be to first sweep up the solid with a vacuum assembly, then wipe the floor with a wet-mop assembly, and then dry the floor with a drying assembly. The floor material may also be taken into account when generating the target cleaning strategy. For example, when the floor is wooden flooring or floor tiles, the vacuum assembly may be used first and the mopping assembly afterwards; when the floor is carpet, only the vacuum assembly may be used.
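A sketch of mapping dirt type and floor material to an ordered list of cleaning actions might look as follows; the action and component names are illustrative assumptions, not disclosed identifiers.

```python
def build_cleaning_strategy(dirt_type, floor_material="tile"):
    # Returns an ordered list of cleaning actions; names are placeholders
    # for the robot's water-absorbing, mopping, vacuum and drying assemblies.
    if dirt_type == "liquid":
        return ["absorb_liquid", "dry_mop"]
    if floor_material == "carpet":
        return ["vacuum"]                      # carpets are vacuumed only
    return ["vacuum", "wet_mop", "dry_floor"]  # solid dirt on hard floors
```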
After obtaining the target cleaning strategy, the robot navigates to the target dirt position and cleans it using the generated strategy. Once the target dirt position has been cleaned, the robot may continue to collect visual data within its field of view at that position to identify the next target dirt to be cleaned, that is, the process of S601-S603 is repeated until the inspection and cleaning of the entire workspace is completed. The robot may also return to the target navigation point on the field-of-view path and continue collecting visual data within its field of view with that navigation point as the new cleaning start point.
With the robot inspection and cleaning method provided by this embodiment of the present application, visual data within the robot's field of view is collected, the collected visual data is detected through the preset dirt detection network to obtain the target dirt position and target dirt type, and the robot is controlled to perform the inspection and cleaning task according to that position and type. During inspection and cleaning, the robot covers the entire workspace through its own field of view and actively identifies target dirt within that field of view through the trained dirt detection network, so the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning based on its position and type, without full-path cleaning of the entire workspace, which improves the robot's cleaning efficiency.
In one embodiment, a process for obtaining the dirt detection network, that is, how to train it, is also provided. On the basis of the above embodiment, optionally, as shown in FIG. 8, the training process of the dirt detection network may include:
S801: Pre-train the detection network with the visual semantic segmentation dataset to obtain an initial dirt detection network.
Although dirt-recognition parallel data (that is, the dirt dataset) can improve the detection performance of the dirt detection network, such data is scarce, so training the dirt detection network on it alone is very time-consuming and labor-intensive and the detection performance still falls short of expectations. Visual semantic segmentation datasets, however, are plentiful; the dirt detection network can therefore be pre-trained with a visual semantic segmentation dataset containing a large number of samples, yielding an initial dirt detection network trained on that dataset. Optionally, the visual semantic segmentation dataset may be the Cityscapes dataset. This pre-training avoids the long, slow learning phase at the beginning of network training, which greatly reduces training time, and it also avoids a great deal of tedious hyperparameter tuning.
S802: Use the original sample data as the input of the initial dirt detection network and the sample annotation data as its expected output, and continue training the initial dirt detection network with a preset loss function to obtain the dirt detection network.
After the dirt detection network has been pre-trained with the visual semantic segmentation dataset to obtain the initial dirt detection network, the collected dirt dataset can be used to fine-tune it: the original sample data in the dirt dataset is taken as the input of the initial dirt detection network, the sample annotation data in the dirt dataset is taken as its expected output, and the parameters of the initial dirt detection network are adjusted with a preset loss function until the convergence condition of that loss function is reached, yielding the trained dirt detection network. Optionally, the loss function may be a cross-entropy loss function.
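A PyTorch-style sketch of the two-stage training described above is given below: the same supervised loop is run first on a large semantic segmentation dataset and then on the smaller dirt dataset, with cross-entropy loss in both stages. The data loaders, epoch counts and learning rates are assumptions.

```python
import torch
import torch.nn as nn

def train_stage(model, loader, epochs, lr, device="cuda"):
    """Generic supervised segmentation training loop with cross-entropy loss."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:            # labels: per-pixel class indices
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model

# Stage 1: pre-train on a large semantic segmentation dataset (e.g. Cityscapes).
# model = train_stage(model, cityscapes_loader, epochs=80, lr=1e-3)
# Stage 2: fine-tune on the smaller annotated dirt dataset with a lower learning rate.
# model = train_stage(model, dirt_loader, epochs=20, lr=1e-4)
```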
To enlarge the dirt dataset used during network training, data augmentation may be applied to it. To this end, on the basis of the above embodiment, optionally, before continuing to train the initial dirt detection network with the preset loss function, the method further includes: performing data augmentation on the dirt dataset. The data augmentation applied to the dirt dataset includes at least one of the following: random cropping, horizontal flipping, and color jittering.
The dirt dataset may be augmented by horizontal flip mirroring; it may also be cropped, that is, a position is randomly selected as the crop center and each dirt sample is cropped, to augment the image segmentation dataset; and color jittering may also be applied to each dirt sample to augment the dirt dataset.
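The three augmentations could be expressed with torchvision as in the sketch below; the crop size and jitter strengths are illustrative, and in a real segmentation pipeline the geometric transforms would also have to be mirrored on the label masks, which is omitted here.

```python
from torchvision import transforms

# Random crop, horizontal flip and color jitter applied to the image samples.
# For segmentation, the crop/flip would also need to be applied to the label mask.
augment = transforms.Compose([
    transforms.RandomCrop(size=(512, 512)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),
])
```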
In this embodiment, when training the dirt detection network, a visual semantic segmentation dataset with a large number of samples can be used to pre-train the network, and the initial dirt detection network obtained from pre-training is then fine-tuned with the dirt dataset. This pre-training avoids the long, slow learning phase at the beginning of network training, which greatly reduces training time, and it also avoids a great deal of tedious hyperparameter tuning. In other words, the technical solution adopted in this embodiment of the present application shortens the training time of the dirt detection network and improves its accuracy.
In practical applications, the robot usually collects the visual data within its field of view through a camera, so the target dirt position detected through the trained dirt detection network is computed in the image coordinate system. For this case, that is, when the visual data is collected with the robot's image coordinate system as reference, on the basis of the above embodiment the method may optionally further include, before S603: obtaining a first correspondence between the robot's image coordinate system and the radar coordinate system, and a second correspondence between the radar coordinate system and the world coordinate system; and converting the target dirt position according to the first correspondence and the second correspondence.
After obtaining the first correspondence between the robot's image coordinate system and the radar coordinate system and the second correspondence between the radar coordinate system and the world coordinate system, the robot applies a projection transformation to the target dirt position according to the first correspondence, and then converts the projected dirt position according to the second correspondence, thereby obtaining the actual position of the target dirt in the world coordinate system. The steps for obtaining the first and second correspondences and converting the target dirt position according to them, together with the resulting effects, have already been described in the above embodiments and are not repeated here.
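A sketch of chaining the two correspondences as homogeneous 4x4 transforms is shown below, assuming the dirt position has already been back-projected to a 3D point in the camera frame; the matrix names are illustrative.

```python
import numpy as np

def to_world(point_cam, T_cam_to_radar, T_radar_to_world):
    """Map a 3D point from the camera frame to the world frame by chaining the
    first correspondence (camera -> radar) and the second (radar -> world)."""
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_radar_to_world @ T_cam_to_radar @ p)[:3]

# Example with identity placeholders for the two 4x4 transforms:
# world_xyz = to_world([0.4, 0.0, 1.2], np.eye(4), np.eye(4))
```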
In one embodiment, the number of target objects is at least one, and the method further includes: performing path planning on the at least one object position to obtain a target cleaning path. Controlling the robot to perform the inspection and cleaning task according to the object position and object type of the target object includes: controlling the robot to navigate to the at least one object position in sequence along the target cleaning path, and performing the inspection and cleaning task based on the object type corresponding to the current object position. FIG. 9 is a schematic flowchart of another robot inspection and cleaning method provided by an embodiment of the present application. As shown in FIG. 9, the method may include:
S901: Collect visual data within the robot's field of view.
S902: Detect the visual data through the preset dirt detection network to obtain at least one target dirt position and the target dirt type corresponding to each target dirt position.
The preset dirt detection network and the target dirt type in this embodiment have already been described in the above embodiments and are not repeated here.
In this embodiment, when there are multiple target objects, each target object corresponds to one object position and one object type, and the object types of the multiple target objects may be entirely the same, entirely different, or partly the same; this is not limited.
Optionally, the dirt detection network may include a downsampling layer and a deconvolution layer. In this case, on the basis of the above embodiment, S902 may optionally include: performing a hierarchical downsampling operation on the visual data through the downsampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain at least one target dirt position in the visual data and the target dirt type corresponding to each target dirt position.
The network structure and working principle of the dirt detection network, and the effects it produces, have already been described in the above embodiments and are not repeated here.
S903: Perform shortest-path planning on the at least one target dirt position to obtain a target cleaning path.
The target cleaning path is the shortest of all the cleaning paths by which the robot can reach the at least one target dirt position. When at least one target dirt position is detected, the robot may generate the target cleaning path to the at least one target dirt position through a shortest-path planning algorithm, based on the at least one target dirt position, the historical obstacle map of the area to be cleaned, and the current obstacle map of the area to be cleaned. The shortest-path planning algorithm may be the Dijkstra algorithm, the Floyd algorithm, an ant colony algorithm, or the like.
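As a simplified illustration of ordering several detected dirt positions, the sketch below uses a greedy nearest-neighbour heuristic with straight-line distance; in the method described above, the distances would instead come from a map-aware planner such as Dijkstra, Floyd or an ant colony algorithm run over the obstacle maps.

```python
import math

def plan_cleaning_order(robot_xy, dirt_positions):
    """Greedy nearest-neighbour ordering of dirt positions. Straight-line distance
    is a placeholder for a map-aware shortest-path planner."""
    remaining = list(dirt_positions)
    path, current = [], robot_xy
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

# plan_cleaning_order((0, 0), [(3, 4), (1, 1), (5, 0)])  -> [(1, 1), (3, 4), (5, 0)]
```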
S904: Control the robot to navigate to the corresponding target dirt positions in sequence along the target cleaning path and to perform the inspection and cleaning task based on the corresponding target dirt types.
Once the target cleaning path has been planned, the robot can navigate to the corresponding target dirt positions in sequence along that path and clean each target dirt position in a targeted manner based on the target dirt type corresponding to it.
Optionally, S904 may proceed as follows: generate a target cleaning strategy according to the target dirt type; control the robot to navigate to the corresponding target dirt positions in sequence along the target cleaning path; and clean the target dirt positions using the target cleaning strategy.
The robot may generate the target cleaning strategy used for cleaning according to the obtained target dirt type. When the target dirt type is liquid dirt, the liquid can first be absorbed and the floor then wiped with a dry mop; accordingly, the target cleaning strategy generated by the robot may be to first absorb the liquid with a water-absorbing assembly and then wipe the floor with a dry-mop assembly. When the target dirt type is solid dirt, the solid can be swept up and the floor then wiped with a wet mop; accordingly, the target cleaning strategy may be to first sweep up the solid with a vacuum assembly, then wipe the floor with a wet-mop assembly, and then dry the floor with a drying assembly. The floor material may also be taken into account when generating the target cleaning strategy. For example, when the floor is wooden flooring or floor tiles, the vacuum assembly may be used first and the mopping assembly afterwards; when the floor is carpet, only the vacuum assembly may be used.
After obtaining the target cleaning strategy, the robot navigates to the target dirt positions in sequence along the target cleaning path and cleans the at least one target dirt position using the generated strategy. After cleaning the last target dirt position, the robot may control itself to rotate and collect visual data within its field of view during the rotation to identify the next target dirt to be cleaned, that is, the process of S901-S904 is repeated until the inspection and cleaning of the entire workspace is completed.
With the robot inspection and cleaning method provided by this embodiment of the present application, visual data within the robot's field of view is collected and detected through the preset dirt detection network to obtain at least one target dirt position and the target dirt type corresponding to each position; shortest-path planning is performed on the at least one target dirt position to obtain a target cleaning path; and the robot is controlled to navigate to the corresponding target dirt positions in sequence along the target cleaning path and to perform the inspection and cleaning task based on the corresponding target dirt types. During inspection and cleaning, the robot covers the entire workspace through its own field of view and actively identifies target dirt within that field of view through the trained dirt detection network, so the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning based on its position and type, without full-path cleaning of the entire workspace, which improves the robot's cleaning efficiency. In addition, when at least one target dirt position is detected, the robot can perform shortest-path planning on it so that it navigates to the at least one target dirt position along the shortest path, further improving cleaning efficiency.
The process for obtaining the dirt detection network in this embodiment has already been described in the above embodiments and is not repeated here.
In practical applications, to enlarge the robot's data collection range and improve the efficiency of active inspection, on the basis of the above embodiment, S901 may optionally proceed as follows: control the robot to rotate and collect visual data within the robot's field of view during the rotation. Optionally, the robot may be controlled to rotate based on the field of view of at least one sensor.
After a field-of-view path covering the area to be cleaned has been generated based on the robot's field of view and the electronic map of the area, the robot inspects and cleans the area along the planned path. To enlarge the data collection range, before the robot starts to travel it may be controlled to rotate in place based on the field of view of at least one sensor and to collect visual data within its field of view during the rotation; as the robot rotates, its viewing direction keeps changing, so it can collect visual data over a larger range from its current position. The robot may also be controlled to rotate while travelling, based on the field of view of at least one sensor, and to continuously collect visual data within its field of view during the rotation. In actual use, the timing of the rotation can be configured, and this embodiment does not limit it.
To enlarge the robot's data collection range, the rotation may optionally be a full turn: before the robot starts to travel it is controlled to rotate in place through one full revolution, or it is controlled to rotate through one full revolution while travelling, and visual data within its field of view is collected during the rotation. The robot can thus collect visual data over its full 360-degree surroundings, which greatly enlarges its data collection range, enables it to actively identify visual data over a larger area and to clean the identified target dirt in a coordinated manner, thereby improving its cleaning efficiency.
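A sketch of rotating in place through one full revolution while grabbing camera frames might look as follows; the robot and camera interfaces are hypothetical placeholders, and yaw is dead-reckoned from the commanded angular velocity for simplicity.

```python
import math
import time

def rotate_and_scan(robot, camera, angular_speed=0.5, frame_period=0.2):
    """Rotate the robot in place through 360 degrees and collect visual data
    during the rotation. `robot` and `camera` are assumed interfaces, not a real API."""
    frames, turned = [], 0.0
    robot.set_angular_velocity(angular_speed)        # rad/s, hypothetical call
    last = time.monotonic()
    while turned < 2 * math.pi:
        time.sleep(frame_period)
        now = time.monotonic()
        turned += angular_speed * (now - last)        # dead-reckoned yaw
        last = now
        frames.append(camera.capture())               # hypothetical call
    robot.set_angular_velocity(0.0)
    return frames
```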
In practical applications, the visual data in the robot's workspace may be collected through vision sensors. Optionally, a first vision sensor and a second vision sensor are mounted on the robot. The first vision sensor is the robot's forward-looking sensor, and the central axis of its viewing angle is parallel to the horizontal; the second vision sensor is the robot's downward-looking sensor, and the central axis of its viewing angle lies below the horizontal and intersects it. In this case, on the basis of the above embodiment, S901 may optionally proceed as follows: control the first vision sensor and the second vision sensor to rotate and collect the visual data within their respective fields of view during the rotation.
Because the first vision sensor looks straight ahead, it obtains a relatively large sensing range and is used to perceive environmental information farther away in the area to be cleaned. Because the second vision sensor looks downward and can be aimed directly at the floor, it perceives the environmental information of the nearby floor more clearly and can effectively compensate for the blind zone of the first vision sensor. Accordingly, during inspection and cleaning the first and second vision sensors can be controlled to collect visual data within their respective fields of view, so the robot can not only collect data over a longer range but also, through the second vision sensor, collect data in the blind zone of the first vision sensor, which greatly enlarges the robot's data collection range.
The first vision sensor and the second vision sensor may also be controlled to rotate and to collect visual data within their respective fields of view during the rotation; as they rotate, the robot's viewing direction keeps changing, so the robot can collect visual data over a larger range, which enlarges its data collection range. In practical applications, the rotation angle of the first and second vision sensors can be set according to actual requirements; optionally, the rotation angle may be 360 degrees.
In this embodiment, the robot is controlled to rotate and visual data within its field of view is collected during the rotation, or the robot's first and second vision sensors are controlled to rotate and visual data within their respective fields of view is collected during the rotation. This technical solution greatly enlarges the robot's data collection range, enables it to actively identify visual data over a larger area and to clean the identified target dirt in a coordinated manner, thereby improving its cleaning efficiency.
In practical applications, the robot usually collects the visual data within its field of view through a camera, so the target dirt position detected through the trained dirt detection network is computed in the image coordinate system. For this case, that is, when the visual data is collected with the robot's image coordinate system as reference, on the basis of the above embodiment the method may optionally further include, before S903: obtaining a first correspondence between the robot's image coordinate system and the radar coordinate system, and a second correspondence between the radar coordinate system and the world coordinate system; and converting the at least one target dirt position according to the first correspondence and the second correspondence. The steps for obtaining the first and second correspondences and converting the target dirt position according to them, together with the resulting effects, have already been described in the above embodiments and are not repeated here.
FIG. 10 is a schematic structural diagram of a robot inspection and cleaning apparatus provided by an embodiment of the present application. As shown in FIG. 10, the apparatus may include an acquisition module 100, a recognition module 101 and a control module 102.
The acquisition module 100 is configured to collect visual data within the robot's field of view; the recognition module 101 is configured to detect the visual data through a preset network to obtain the object position and object type of a target object, where the preset network is obtained by training with original sample data and sample annotation data corresponding to the original sample data; and the control module 102 is configured to control the robot to perform an inspection and cleaning task according to the object position and object type of the target object.
On the basis of the above embodiment, optionally, the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data in which sample object positions and sample object types have been annotated.
The robot inspection and cleaning apparatus provided by this optional embodiment of the present application collects visual data within the robot's field of view, recognizes the visual data through the pre-trained neural network to obtain the object position and object type of a target object, and controls the robot to perform an inspection and cleaning task according to that object position and object type. During inspection and cleaning, the robot covers the entire workspace through its own field of view and actively identifies target objects within that field of view through the pre-trained neural network, so the robot only needs to focus on the target objects in the workspace and perform the inspection and cleaning task based on their positions and types, without full-path cleaning of the entire workspace, which greatly improves the robot's cleaning efficiency.
As shown in FIG. 11, on the basis of the above embodiment, optionally, the pre-trained neural network includes a feature extraction layer, a feature fusion layer and an object recognition layer, and the recognition module 101 includes a feature extraction unit 1011, a feature fusion unit 1012 and a recognition unit 1013. The feature extraction unit 1011 is configured to extract multi-scale feature data from the visual data through the feature extraction layer; the feature fusion unit 1012 is configured to perform feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and the recognition unit 1013 is configured to determine the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
On the basis of the above embodiment, optionally, the feature extraction layer includes a first feature extraction block and a second feature extraction block, and the feature extraction unit 1011 is configured to extract first-scale feature data from the visual data through the first feature extraction block and to extract second-scale feature data from the visual data through the second feature extraction block.
Optionally, the first-scale feature data is 13*13-scale feature data, and the second-scale feature data is 26*26-scale feature data.
On the basis of the above embodiment, optionally, when the target object is garbage and/or dirt, the control module 102 is configured to select a target storage assembly and a target cleaning assembly according to the object type, control the robot to navigate to the object position, control the robot to sweep the target object into the target storage assembly, and clean the swept area through the target cleaning assembly.
On the basis of the above embodiment, optionally, when the target object is an obstacle, the control module 102 is configured to determine, according to the object type, whether the robot can pass over the target object, and, when the robot cannot pass over the target object, to generate an escape path according to the object position and a target navigation point and control the robot to travel along the escape path to the target navigation point.
As shown in FIG. 11, on the basis of the above embodiment, optionally, in the case where the visual data is collected with the robot's image coordinate system as reference, the apparatus further includes an obtaining module 103 and a conversion module 104.
The obtaining module 103 is configured to obtain, before the control module 102 controls the robot to perform the inspection and cleaning task according to the object position and object type, a first correspondence between the robot's image coordinate system and the radar coordinate system and a second correspondence between the radar coordinate system and the world coordinate system; the conversion module 104 is configured to convert the object position according to the correspondences.
On the basis of the above embodiment, optionally, the target object is dirt and the preset network is a preset dirt detection network; the dirt detection network is obtained by training with a visual semantic segmentation dataset and a dirt dataset, where the dirt dataset includes the original sample data and the sample annotation data corresponding to the original sample data.
The robot inspection and cleaning apparatus provided by this optional embodiment of the present application collects visual data within the robot's field of view, detects the collected visual data through the preset dirt detection network to obtain a target dirt position and a target dirt type, and controls the robot to perform an inspection and cleaning task according to that position and type. During inspection and cleaning, the robot covers the entire workspace through its own field of view and actively identifies target dirt within that field of view through the trained dirt detection network, so the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning based on its position and type, without full-path cleaning of the entire workspace, which improves the robot's cleaning efficiency.
On the basis of the above embodiment, optionally, the dirt detection network includes a downsampling layer and a deconvolution layer, and the recognition module 101 is configured to perform a hierarchical downsampling operation on the visual data through the downsampling layer to obtain multi-resolution intermediate feature maps, and to perform a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and target dirt type in the visual data.
As shown in FIG. 12, on the basis of the above embodiment, optionally, the apparatus further includes a network training module 105, which is configured to pre-train the dirt detection network with the visual semantic segmentation dataset to obtain an initial dirt detection network, and to continue training the initial dirt detection network with a preset loss function, using the original sample data as the input of the initial dirt detection network and the sample annotation data as its expected output.
On the basis of the above embodiment, optionally, the apparatus further includes a training data processing module 106, which is configured to perform data augmentation on the dirt dataset before the network training module 105 continues training the initial dirt detection network with the preset loss function.
On the basis of the above embodiment, optionally, the data augmentation applied to the dirt dataset includes at least one of the following: random cropping, horizontal flipping, and color jittering.
On the basis of the above embodiment, optionally, the control module 102 is configured to generate a target cleaning strategy according to the target dirt type, control the robot to navigate to the target dirt position, and clean the target dirt position using the target cleaning strategy.
As shown in FIG. 13, on the basis of the above embodiment, optionally, the number of target objects is at least one, and the apparatus further includes a path planning module 107 configured to perform path planning on the at least one object position to obtain a target cleaning path; the control module 102 is configured to control the robot to navigate to the at least one object position in sequence along the target cleaning path and to perform the inspection and cleaning task based on the object type corresponding to the current object position.
The robot inspection and cleaning apparatus provided by this optional embodiment of the present application collects visual data within the robot's field of view, detects the collected visual data through the preset dirt detection network to obtain at least one target dirt position and the target dirt type corresponding to each position, performs shortest-path planning on the at least one target dirt position to obtain a target cleaning path, and controls the robot to navigate to the corresponding target dirt positions in sequence along the target cleaning path and to perform the inspection and cleaning task based on the corresponding target dirt types. During inspection and cleaning, the robot covers the entire workspace through its own field of view and actively identifies target dirt within that field of view through the trained dirt detection network, so the robot only needs to focus on the target dirt in the workspace and perform targeted inspection and cleaning based on its position and type, without full-path cleaning of the entire workspace, which greatly improves the robot's cleaning efficiency. In addition, when at least one target dirt position is detected, the robot can perform shortest-path planning on it so that it navigates to the multiple target dirt positions along the shortest path, further improving cleaning efficiency.
On the basis of the above embodiment, optionally, the acquisition module 100 is configured to control the robot to rotate and to collect visual data within the robot's field of view during the rotation.
On the basis of the above embodiment, optionally, the acquisition module 100 is configured to control the robot to rotate based on the field of view of at least one sensor.
On the basis of the above embodiment, optionally, the acquisition module 100 is configured to control a first vision sensor and a second vision sensor to rotate and to collect visual data within their respective fields of view during the rotation, where the first vision sensor is the robot's forward-looking sensor and the central axis of its viewing angle is parallel to the horizontal, and the second vision sensor is the robot's downward-looking sensor and the central axis of its viewing angle lies below the horizontal and intersects it.
在一个实施例中,提供了一种机器人,其结构示意图可以如图1所示。该机器人可以包括:一个或多个处理器、存储器;和一个或多个程序,其中,所述一个或多个程序被存储在所述存储器中,并且被所述一个或多个处理器执行,所述程序包括用于执行上述任意实施例所述的机器人的巡检清洁方法的指令。In one embodiment, a robot is provided, the schematic diagram of which can be shown in FIG. 1 . The robot may include: one or more processors, a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, The program includes instructions for executing the robot inspection and cleaning method described in any of the above embodiments.
上述一个或多个处理器执行所述程序时实现以下步骤:The above-mentioned one or more processors implement the following steps when executing the program:
采集机器人视野范围内的视觉数据;通过预设网络对所述视觉数据进行识别,得到目标对象的对象位置和对象类型,其中,所述预设网络通过原始样本数据以及所述原始样本数据对应的样本标注数据进行训练得到;根据所述目标对象的对象位置和对象类型控制所述机器人执行巡检清洁任务。Collect the visual data within the field of vision of the robot; identify the visual data through a preset network to obtain the object position and object type of the target object, wherein the preset network passes the original sample data and the corresponding data of the original sample data. The sample labeling data is obtained by training; the robot is controlled to perform inspection and cleaning tasks according to the object position and object type of the target object.
In one embodiment, the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
In one embodiment, the pre-trained neural network includes a feature extraction layer, a feature fusion layer, and an object recognition layer; the above one or more processors further implement the following steps when executing the programs: extracting multi-scale feature data from the visual data through the feature extraction layer; performing feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and determining the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
In one embodiment, the feature extraction layer includes a first feature extraction block and a second feature extraction block; the above one or more processors further implement the following steps when executing the programs: extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block.
Optionally, the first-scale feature data is 13*13 scale feature data, and the second-scale feature data is 26*26 scale feature data.
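For illustration only, the following minimal sketch (Python/PyTorch) shows how a feature extraction layer producing first-scale 13*13 and second-scale 26*26 feature data, a feature fusion layer, and an object recognition layer could be wired together, assuming a 416*416 input image and a YOLO-style prediction head; the channel widths, anchor count, and class count are hypothetical and are not taken from the embodiments.

    import torch
    import torch.nn as nn

    class TwoScaleDetector(nn.Module):
        # Illustrative two-scale detector: feature extraction, fusion, recognition.
        def __init__(self, num_classes: int):
            super().__init__()
            # Feature extraction layer: a shared stem followed by two blocks.
            self.stem = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )  # 416 -> 52
            self.block_26 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())  # 52 -> 26
            self.block_13 = nn.Sequential(nn.Conv2d(256, 512, 3, stride=2, padding=1), nn.ReLU())  # 26 -> 13
            # Feature fusion layer: up-sample the 13*13 map and concatenate it with the 26*26 map.
            self.fuse = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"), nn.Conv2d(512, 256, 1))
            # Object recognition layer: per-cell box offsets, objectness and class scores.
            self.head_13 = nn.Conv2d(512, 3 * (5 + num_classes), 1)
            self.head_26 = nn.Conv2d(512, 3 * (5 + num_classes), 1)

        def forward(self, x):
            f52 = self.stem(x)
            f26 = self.block_26(f52)   # second-scale feature data (26*26)
            f13 = self.block_13(f26)   # first-scale feature data (13*13)
            fused = torch.cat([self.fuse(f13), f26], dim=1)  # fused feature data (512 channels)
            return self.head_13(f13), self.head_26(fused)

With this layout the 13*13 head responds to coarse, large objects while the fused 26*26 head retains finer detail, which is the usual motivation for combining an up-sampled deep feature map with a shallower one.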
In one embodiment, when the target object is garbage and/or dirt, the above one or more processors further implement the following steps when executing the programs: selecting a target storage assembly and a target cleaning assembly according to the object type; controlling the robot to navigate to the object position, controlling the robot to sweep the target object into the target storage assembly, and cleaning the swept area through the target cleaning assembly.
In one embodiment, when the target object is an obstacle, the above one or more processors further implement the following steps when executing the programs: determining, according to the object type, whether the robot can cross over the target object; and if not, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
In one embodiment, in a case where the visual data is collected with reference to the image coordinate system of the robot, the above one or more processors further implement the following steps when executing the programs: acquiring a first correspondence between the image coordinate system of the robot and a radar coordinate system and a second correspondence between the radar coordinate system and a world coordinate system; and converting the object position according to the first correspondence and the second correspondence.
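For illustration only, converting an object position from the image coordinate system to the world coordinate system via the radar coordinate system can be sketched as below (Python/NumPy), assuming a pinhole camera model, a known object depth, and 4x4 homogeneous transforms standing in for the first and second correspondences; the function and parameter names are hypothetical.

    import numpy as np

    def image_to_world(pixel_uv, depth, K, T_radar_from_cam, T_world_from_radar):
        # pixel_uv:           (u, v) pixel coordinates of the detected object
        # depth:              object depth along the camera axis, in meters (assumed known)
        # K:                  3x3 camera intrinsic matrix (pinhole model assumed)
        # T_radar_from_cam:   4x4 homogeneous transform, the first correspondence
        # T_world_from_radar: 4x4 homogeneous transform, the second correspondence
        u, v = pixel_uv
        xyz_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project the pixel
        p_cam = np.append(xyz_cam, 1.0)           # homogeneous coordinates
        p_radar = T_radar_from_cam @ p_cam        # image/camera frame -> radar frame
        p_world = T_world_from_radar @ p_radar    # radar frame -> world frame
        return p_world[:3]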
In one embodiment, the target object is dirt and the preset network is a preset dirt detection network; the dirt detection network is obtained by training with a visual semantic segmentation data set and a dirt data set, wherein the dirt data set includes the original sample data and the sample annotation data corresponding to the original sample data.
In one embodiment, the dirt detection network includes a down-sampling layer and a deconvolution layer; the above one or more processors further implement the following steps when executing the programs: performing a hierarchical down-sampling operation on the visual data through the down-sampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and the target dirt type in the visual data.
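For illustration only, such a dirt detection network can be sketched as a small encoder-decoder (Python/PyTorch): three hierarchical down-sampling stages followed by three hierarchical deconvolution stages, with a per-pixel classifier on top. The channel widths, the use of max pooling, and the class layout are illustrative assumptions rather than details of the embodiments.

    import torch.nn as nn

    class DirtSegNet(nn.Module):
        def __init__(self, num_dirt_types: int):
            super().__init__()
            def down(cin, cout):  # one down-sampling stage (halves the resolution)
                return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            def up(cin, cout):    # one deconvolution stage (doubles the resolution)
                return nn.Sequential(nn.ConvTranspose2d(cin, cout, 2, stride=2), nn.ReLU())
            self.enc = nn.ModuleList([down(3, 32), down(32, 64), down(64, 128)])
            self.dec = nn.ModuleList([up(128, 64), up(64, 32), up(32, 32)])
            # Channel 0 is background; the remaining channels are dirt types. The argmax over
            # channels yields the dirt type, and the non-background pixels give the dirt position.
            self.classifier = nn.Conv2d(32, 1 + num_dirt_types, 1)

        def forward(self, x):
            intermediates = []                 # multi-resolution intermediate feature maps
            for stage in self.enc:             # hierarchical down-sampling operation
                x = stage(x)
                intermediates.append(x)        # kept only to show where skip connections would attach
            for stage in self.dec:             # hierarchical deconvolution operation
                x = stage(x)
            return self.classifier(x)          # per-pixel logits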
In one embodiment, the above one or more processors further implement the following steps when executing the programs: pre-training the dirt detection network with the visual semantic segmentation data set to obtain an initial dirt detection network; and continuing to train the initial dirt detection network with a preset loss function, by using the original sample data as the input of the initial dirt detection network and the sample annotation data as the expected output of the initial dirt detection network.
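For illustration only, the second training stage could look roughly like the sketch below (Python/PyTorch), in which a pixel-wise cross-entropy loss stands in for the preset loss function; the loader name, epoch count, and learning rate are assumed values.

    import torch
    import torch.nn as nn

    def finetune(initial_net, dirt_loader, epochs=10, lr=1e-4, device="cpu"):
        # Continue training the initial dirt detection network on the dirt data set.
        net = initial_net.to(device)
        criterion = nn.CrossEntropyLoss()                  # stand-in for the preset loss function
        optimizer = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            for samples, labels in dirt_loader:            # original sample data, annotation data
                samples, labels = samples.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(net(samples), labels)     # network output vs. expected output
                loss.backward()
                optimizer.step()
        return net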
In one embodiment, the above one or more processors further implement the following step when executing the programs: performing data augmentation on the dirt data set.
Optionally, the data augmentation applied to the dirt data set includes at least one of the following: random cropping, horizontal flipping, and color jittering.
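For illustration only, the three augmentation operations could be composed with torchvision as below; the crop size and jitter magnitudes are arbitrary assumed values. For a segmentation-style dirt data set, the geometric operations (cropping and flipping) would in practice be applied jointly to each image and its annotation mask.

    from torchvision import transforms

    dirt_augmentation = transforms.Compose([
        transforms.RandomCrop(384),                         # random cropping
        transforms.RandomHorizontalFlip(p=0.5),             # horizontal flipping
        transforms.ColorJitter(brightness=0.2, contrast=0.2,
                               saturation=0.2, hue=0.05),   # color jittering
        transforms.ToTensor(),
    ])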
In one embodiment, the above one or more processors further implement the following steps when executing the programs: generating a target cleaning strategy according to the target dirt type; and controlling the robot to navigate to the target dirt position and clean the target dirt position with the target cleaning strategy.
In one embodiment, the number of target objects is at least one; the above one or more processors further implement the following steps when executing the programs: performing path planning over the at least one object position to obtain a target cleaning path; and controlling the robot to navigate to the at least one object position in sequence along the target cleaning path and to perform the inspection and cleaning task based on the object type corresponding to the current object position.
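For illustration only, the path planning step can be sketched as below (plain Python), assuming straight-line distances and an exhaustive search over visit orders, which is only practical for a handful of detected positions; a greedy nearest-neighbour or TSP heuristic would replace it for larger sets. The function name and signature are hypothetical.

    from itertools import permutations
    from math import dist

    def plan_cleaning_path(robot_pos, object_positions):
        # Order the detected object positions to minimise the total travel distance
        # starting from the robot's current position.
        best_order, best_cost = None, float("inf")
        for order in permutations(object_positions):
            cost, prev = 0.0, robot_pos
            for p in order:
                cost += dist(prev, p)
                prev = p
            if cost < best_cost:
                best_order, best_cost = list(order), cost
        return best_order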
In one embodiment, the above one or more processors further implement the following steps when executing the programs: controlling the robot to rotate, and collecting visual data within the field of view of the robot during the rotation.
In one embodiment, the above one or more processors further implement the following step when executing the programs: controlling the robot to rotate based on the field of view of at least one sensor.
In one embodiment, the above one or more processors further implement the following steps when executing the programs: controlling a first visual sensor and a second visual sensor to rotate, and collecting visual data within their respective fields of view during the rotation; wherein the first visual sensor is a forward-looking sensor of the robot whose central axis of the viewing angle is parallel to the horizontal line, and the second visual sensor is a downward-looking sensor of the robot whose central axis of the viewing angle is located below the horizontal line and intersects the horizontal line.
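For illustration only, rotating based on a sensor's field of view could amount to stopping at headings spaced slightly less than the sensor's horizontal field of view apart, so that successive captures overlap and together cover 360 degrees; the overlap margin below is an assumed value.

    import math

    def rotation_headings(fov_deg, overlap_deg=10.0):
        # Headings (in degrees) at which the robot stops during one full rotation.
        step = max(fov_deg - overlap_deg, 1.0)
        count = math.ceil(360.0 / step)
        return [(i * 360.0 / count) % 360.0 for i in range(count)]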
In one embodiment, as shown in FIG. 14, a non-volatile computer-readable storage medium 140 containing computer-executable instructions 1401 is provided; when the computer-executable instructions are executed by one or more processors 141, the processors 141 are caused to perform the following steps:
collecting visual data within the field of view of the robot; recognizing the visual data through a preset network to obtain an object position and an object type of a target object, wherein the preset network is obtained by training with original sample data and sample annotation data corresponding to the original sample data; and controlling the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.
In one embodiment, the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
In one embodiment, the pre-trained neural network includes a feature extraction layer, a feature fusion layer, and an object recognition layer; when the computer-executable instructions are executed by the processors, the following steps are further implemented: extracting multi-scale feature data from the visual data through the feature extraction layer; performing feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and determining the object position and object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
In one embodiment, the feature extraction layer includes a first feature extraction block and a second feature extraction block; when the computer-executable instructions are executed by the processors, the following steps are further implemented: extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block.
Optionally, the first-scale feature data is 13*13 scale feature data, and the second-scale feature data is 26*26 scale feature data.
In one embodiment, when the target object is garbage and/or dirt, the computer-executable instructions, when executed by the processors, further implement the following steps: selecting a target storage assembly and a target cleaning assembly according to the object type; controlling the robot to navigate to the object position, controlling the robot to sweep the target object into the target storage assembly, and cleaning the swept area through the target cleaning assembly.
In one embodiment, when the target object is an obstacle, the computer-executable instructions, when executed by the processors, further implement the following steps: determining, according to the object type, whether the robot can cross over the target object; and if not, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
In one embodiment, in a case where the visual data is collected with reference to the image coordinate system of the robot, the computer-executable instructions, when executed by the processors, further implement the following steps: acquiring a first correspondence between the image coordinate system of the robot and a radar coordinate system and a second correspondence between the radar coordinate system and a world coordinate system; and converting the object position according to the first correspondence and the second correspondence.
In one embodiment, the target object is dirt and the preset network is a preset dirt detection network; the dirt detection network is obtained by training with a visual semantic segmentation data set and a dirt data set, wherein the dirt data set includes the original sample data and the sample annotation data corresponding to the original sample data.
In one embodiment, the dirt detection network includes a down-sampling layer and a deconvolution layer; the computer-executable instructions, when executed by the processors, further implement the following steps: performing a hierarchical down-sampling operation on the visual data through the down-sampling layer to obtain multi-resolution intermediate feature maps; and performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the target dirt position and the target dirt type in the visual data.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following steps: pre-training the dirt detection network with the visual semantic segmentation data set to obtain an initial dirt detection network; and continuing to train the initial dirt detection network with a preset loss function, by using the original sample data as the input of the initial dirt detection network and the sample annotation data as the expected output of the initial dirt detection network.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following step: performing data augmentation on the dirt data set.
Optionally, the data augmentation applied to the dirt data set includes at least one of the following: random cropping, horizontal flipping, and color jittering.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following steps: generating a target cleaning strategy according to the target dirt type; and controlling the robot to navigate to the target dirt position and clean the target dirt position with the target cleaning strategy.
In one embodiment, the number of target objects is at least one; the computer-executable instructions, when executed by the processors, further implement the following steps: performing path planning over the at least one object position to obtain a target cleaning path; and controlling the robot to navigate to the at least one object position in sequence along the target cleaning path and to perform the inspection and cleaning task based on the object type corresponding to the current object position.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following steps: controlling the robot to rotate, and collecting visual data within the field of view of the robot during the rotation.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following step: controlling the robot to rotate based on the field of view of at least one sensor.
In one embodiment, the computer-executable instructions, when executed by the processors, further implement the following steps: controlling a first visual sensor and a second visual sensor to rotate, and collecting visual data within their respective fields of view during the rotation; wherein the first visual sensor is a forward-looking sensor of the robot whose central axis of the viewing angle is parallel to the horizontal line, and the second visual sensor is a downward-looking sensor of the robot whose central axis of the viewing angle is located below the horizontal line and intersects the horizontal line.
The robot inspection and cleaning apparatus, robot, and storage medium provided in the above embodiments can execute the robot inspection and cleaning method provided by any embodiment of the present application, and have the corresponding functional modules and effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to the robot inspection and cleaning method provided by any embodiment of the present application.
It can be understood that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, and the computer program can be stored in a non-volatile computer-readable storage medium. When executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in various forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features of the above embodiments are described.

Claims (24)

1. A robot inspection and cleaning method, comprising:
    collecting visual data within a field of view of a robot;
    detecting the visual data through a preset network to obtain an object position and an object type of a target object, wherein the preset network is obtained by training with original sample data and sample annotation data corresponding to the original sample data; and
    controlling the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.
2. The method according to claim 1, wherein the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
3. The method according to claim 2, wherein the pre-trained neural network comprises a feature extraction layer, a feature fusion layer, and an object recognition layer;
    wherein detecting the visual data through the preset network to obtain the object position and the object type of the target object comprises:
    extracting multi-scale feature data from the visual data through the feature extraction layer;
    performing feature fusion on the multi-scale feature data through the feature fusion layer to obtain fused feature data; and
    determining the object position and the object type of the target object through the object recognition layer according to the multi-scale feature data and the fused feature data.
4. The method according to claim 3, wherein the feature extraction layer comprises a first feature extraction block and a second feature extraction block;
    wherein extracting the multi-scale feature data from the visual data through the feature extraction layer comprises:
    extracting first-scale feature data from the visual data through the first feature extraction block, and extracting second-scale feature data from the visual data through the second feature extraction block.
5. The method according to claim 4, wherein the first-scale feature data is 13*13 scale feature data, and the second-scale feature data is 26*26 scale feature data.
6. The method according to any one of claims 2 to 5, wherein, in a case where the target object is at least one of garbage and dirt, controlling the robot to perform the inspection and cleaning task according to the object position and the object type of the target object comprises:
    selecting a target storage assembly and a target cleaning assembly according to the object type; and
    controlling the robot to navigate to the object position, controlling the robot to sweep the target object into the target storage assembly, and cleaning the swept area through the target cleaning assembly.
7. The method according to any one of claims 2 to 5, wherein, in a case where the target object is an obstacle, controlling the robot to perform the inspection and cleaning task according to the object position and the object type of the target object comprises:
    determining, according to the object type, whether the robot is able to cross over the target object; and
    in response to the robot being unable to cross over the target object, generating an escape path according to the object position and a target navigation point, and controlling the robot to travel to the target navigation point along the escape path.
8. The method according to claim 1, wherein the target object is dirt, and the preset network is a preset dirt detection network;
    wherein the dirt detection network is obtained by training with a visual semantic segmentation data set and a dirt data set, and the dirt data set comprises the original sample data and the sample annotation data corresponding to the original sample data.
9. The method according to claim 8, wherein the dirt detection network comprises a down-sampling layer and a deconvolution layer;
    wherein detecting the visual data through the preset network to obtain the object position and the object type of the target object comprises:
    performing a hierarchical down-sampling operation on the visual data through the down-sampling layer to obtain multi-resolution intermediate feature maps; and
    performing a hierarchical deconvolution operation on the multi-resolution intermediate feature maps through the deconvolution layer to obtain the object position and the object type of the target object in the visual data.
10. The method according to claim 8, wherein an acquisition process of the dirt detection network comprises:
    pre-training a detection network with the visual semantic segmentation data set to obtain an initial dirt detection network; and
    training the initial dirt detection network with a preset loss function, by using the original sample data as an input of the initial dirt detection network and the sample annotation data as an expected output of the initial dirt detection network, to obtain the dirt detection network.
11. The method according to claim 10, further comprising, before training the initial dirt detection network with the preset loss function:
    performing data augmentation on the dirt data set.
12. The method according to claim 11, wherein the data augmentation applied to the dirt data set comprises at least one of the following: random cropping, horizontal flipping, and color jittering.
13. The method according to any one of claims 8 to 12, wherein controlling the robot to perform the inspection and cleaning task according to the object position and the object type of the target object comprises:
    generating a target cleaning strategy according to the object type; and
    controlling the robot to navigate to the object position, and cleaning the object position with the target cleaning strategy.
14. The method according to any one of claims 1, 3, or 6 to 10, wherein the number of target objects is at least one;
    wherein the method further comprises:
    performing path planning over at least one object position to obtain a target cleaning path;
    and wherein controlling the robot to perform the inspection and cleaning task according to the object position and the object type of the target object comprises:
    controlling the robot to navigate to the at least one object position in sequence along the target cleaning path, and performing the inspection and cleaning task based on the object type corresponding to the current object position.
15. The method according to claim 14, wherein collecting the visual data within the field of view of the robot comprises:
    controlling the robot to rotate, and collecting the visual data within the field of view of the robot during the rotation.
16. The method according to claim 15, wherein controlling the robot to rotate comprises:
    controlling the robot to rotate based on a field of view of at least one sensor.
17. The method according to claim 14, wherein collecting the visual data within the field of view of the robot comprises:
    controlling a first visual sensor and a second visual sensor to rotate, and controlling the first visual sensor and the second visual sensor to collect visual data within their respective fields of view during the rotation; wherein the first visual sensor is a forward-looking sensor of the robot and a central axis of a viewing angle of the first visual sensor is parallel to a horizontal line, and the second visual sensor is a downward-looking sensor of the robot and a central axis of a viewing angle of the second visual sensor is located below the horizontal line and intersects the horizontal line.
18. The method according to any one of claims 2, 8 to 12, or 14 to 17, wherein, in a case where the visual data is collected with reference to an image coordinate system of the robot, before controlling the robot to perform the inspection and cleaning task according to the object position and the object type of the target object, the method further comprises:
    acquiring a first correspondence between the image coordinate system of the robot and a radar coordinate system and a second correspondence between the radar coordinate system and a world coordinate system; and
    converting the object position according to the first correspondence and the second correspondence.
19. A robot inspection and cleaning apparatus, comprising:
    an acquisition module configured to collect visual data within a field of view of a robot;
    a detection module configured to detect the visual data through a preset network to obtain an object position and an object type of a target object, wherein the preset network is obtained by training with original sample data and sample annotation data corresponding to the original sample data; and
    a control module configured to control the robot to perform an inspection and cleaning task according to the object position and the object type of the target object.
20. The apparatus according to claim 19, wherein the preset network is a pre-trained neural network, the original sample data is visual sample data, and the sample annotation data corresponding to the original sample data is the visual sample data annotated with sample object positions and sample object types.
21. The apparatus according to claim 19, wherein the target object is dirt, and the preset network is a preset dirt detection network;
    wherein the dirt detection network is obtained by training with a visual semantic segmentation data set and a dirt data set, and the dirt data set comprises the original sample data and the sample annotation data corresponding to the original sample data.
22. The apparatus according to claim 19 or 21, wherein the number of target objects is at least one;
    wherein the apparatus further comprises a path planning module configured to perform path planning over at least one object position to obtain a target cleaning path; and
    wherein the control module is configured to control the robot to navigate to the at least one object position in sequence along the target cleaning path and to perform the inspection and cleaning task based on the object type corresponding to the current object position.
23. A robot, comprising: at least one processor, a memory, and at least one program, wherein the at least one program is stored in the memory and executed by the at least one processor, and the at least one program comprises instructions for executing the robot inspection and cleaning method according to any one of claims 1 to 18.
24. A non-volatile computer-readable storage medium containing computer-executable instructions which, when executed by at least one processor, cause the at least one processor to execute the robot inspection and cleaning method according to any one of claims 1 to 18.
PCT/CN2020/136691 2020-10-29 2020-12-16 Inspection and cleaning method and apparatus of robot, robot, and storage medium WO2022088430A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202011182069.2 2020-10-29
CN202011182069.2A CN112287834A (en) 2020-10-29 2020-10-29 Inspection cleaning method and device for robot, robot and storage medium
CN202011186175.8 2020-10-29
CN202011182064.XA CN112287833A (en) 2020-10-29 2020-10-29 Inspection cleaning method and device for robot, robot and storage medium
CN202011182064.X 2020-10-29
CN202011186175.8A CN112315383B (en) 2020-10-29 2020-10-29 Inspection cleaning method and device for robot, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2022088430A1 true WO2022088430A1 (en) 2022-05-05

Family

ID=81383429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136691 WO2022088430A1 (en) 2020-10-29 2020-12-16 Inspection and cleaning method and apparatus of robot, robot, and storage medium

Country Status (1)

Country Link
WO (1) WO2022088430A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100106298A1 (en) * 2008-10-27 2010-04-29 Eusebio Guillermo Hernandez Outdoor home cleaning robot
CN107414866A (en) * 2017-09-07 2017-12-01 苏州三体智能科技有限公司 A kind of inspection sweeping robot system and its inspection cleaning method
CN110924340A (en) * 2019-11-25 2020-03-27 武汉思睿博特自动化系统有限公司 Mobile robot system for intelligently picking up garbage and implementation method
CN111543902A (en) * 2020-06-08 2020-08-18 深圳市杉川机器人有限公司 Floor cleaning method and device, intelligent cleaning equipment and storage medium


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20959588

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.10.2023)