CN113359738A - Mobile robot path planning method based on deep learning - Google Patents
- Publication number
- CN113359738A (application CN202110671845.3A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- information
- robot
- path
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
Abstract
The invention discloses a mobile robot path planning method based on deep learning, comprising the steps of: the robot receives target point information and global path information; a target detection module performs target detection on the environment information and judges path validity; the passability of the mobile robot body is judged from the three-dimensional environment information; and the three-dimensional environment information corresponding to an obstacle is input into a preset obstacle analysis model for processing. Advantages: the robot carries a vision sensor and can generate three-dimensional environment information through the target detection module; whether the robot can pass an obstacle is determined from the three-dimensional environment information, and an obstacle analysis model, trained with training data on the basis of a neural network model, is used for this analysis, so that the analysis result is more accurate and an optimal path is finally generated.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a mobile robot path planning method based on deep learning.
Background
Automation technology is developing rapidly. A robot system in this field is an automated operation system composed of a robot, peripheral equipment and tools; during automated operation the robot inevitably has to move among a plurality of stations. In the prior art this movement mostly follows a completely preset path, so the robot cannot adapt to different working conditions.
Therefore, a mobile robot path planning method based on deep learning is needed, one that can automatically plan a path when obstacles exist on the moving path under different working conditions.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and provides a mobile robot path planning method based on deep learning.
In order to achieve the purpose, the invention adopts the following technical scheme: the mobile robot path planning method based on deep learning comprises the following steps:
s1, the robot receives the target point information and the global path information, and the robot generates an initial path; the robot acquires environmental information in the advancing direction of the body of the mobile robot through a vision sensor, wherein the vision sensor is arranged on the body of the mobile robot;
S2, performing target detection on the environment information through a target detection module and judging path validity: judging whether the data transmission module is connected; once connected, starting to receive the video data output by the vision sensor; then judging, according to a user setting, whether obstacle detection is required; if not required, displaying the video data directly; if required, starting the obstacle detection operation, marking the detection result on the video data, displaying it, and simultaneously generating three-dimensional environment information;
S3, judging the passability of the mobile robot body according to the three-dimensional environment information and determining the obstacles the robot cannot pass; if an obstacle can be passed, keeping the original path unchanged; if not, calculating the optimal path for the robot to bypass the obstacle and replacing the initial path with it;
S4, inputting the three-dimensional environment information corresponding to the obstacle into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model; the obstacle analysis model is based on a neural network model and is trained with training data composed of training signal sequences and the artificial labels corresponding to them;
S5, sequentially detecting the obstacle information along the path to obtain complete, fully passable path information, namely the planned path.
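The planning loop of steps S1 to S5 can be sketched in a few lines. The sketch below is a hypothetical simplification, not the patented implementation: the path is a plain list of waypoints, detected obstacles are dictionaries keyed by waypoint, and a fixed height threshold stands in for the trained obstacle analysis model.

```python
# Hypothetical sketch of the S1-S5 loop; all names and thresholds are
# illustrative assumptions, not taken from the patent.

def analyse_obstacle(obstacle):
    """Stand-in for the neural obstacle analysis model of step S4."""
    return "passable" if obstacle["height"] < 0.2 else "not_passable"

def plan_path(initial_path, obstacles_by_waypoint):
    planned = []
    for wp in initial_path:                                # S1: initial path
        detour = None
        for obs in obstacles_by_waypoint.get(wp, []):      # S2: detected obstacles
            if analyse_obstacle(obs) == "not_passable":    # S3/S4: passability
                detour = ("detour", wp)                    # replace the blocked segment
                break
        planned.append(detour if detour is not None else wp)
    return planned                                         # S5: fully passable path

path = plan_path(["A", "B", "C"], {"B": [{"height": 0.5}]})
```

Here waypoint "B" is blocked by an obstacle 0.5 m high, so only its segment is replaced by a detour while the rest of the initial path is kept, mirroring the keep-or-replace decision of step S3.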
In the mobile robot path planning method based on deep learning, the three-dimensional environment information is input into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model, the analysis result being either a passable obstacle or a non-passable obstacle; the obstacle analysis model is based on a neural network model and is trained with training data comprising training signal sequences and their corresponding artificial labels. Before step S4, the method comprises:
S3-1, calling a specified amount of sample data from a preset sample database to form a sample set, and dividing the sample set into a training set and a verification set according to a preset proportion; the sample data consist of pre-collected signal sequences and the artificial labels corresponding to them;
s3-2, calling a preset neural network model, and inputting the training set into the neural network model for training to obtain a preliminary neural network model;
s3-3, carrying out verification processing on the preliminary neural network model by using the verification set to obtain a verification result;
s3-4, judging whether the verification result is that the verification is passed;
and S3-5, if the verification result is that the verification is passed, marking the preliminary neural network model as an obstacle analysis model.
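Steps S3-1 to S3-5 describe a conventional split/train/verify/accept pipeline. The sketch below shows only that flow; because the patent does not specify a network architecture, a trivial height-threshold "model" replaces the neural network, and the sample values are invented for illustration.

```python
# Sketch of steps S3-1 to S3-5 with a stand-in model; the threshold
# classifier and all sample data are assumptions made for illustration.
import random

def split_samples(samples, train_ratio=0.8):
    """S3-1: divide the sample set into training and verification sets."""
    shuffled = samples[:]
    random.Random(0).shuffle(shuffled)  # fixed seed for reproducibility
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

def train(train_set):
    """S3-2: fit a decision threshold (stand-in for neural network training)."""
    highest_passable = max(h for h, blocking in train_set if not blocking)
    lowest_blocking = min(h for h, blocking in train_set if blocking)
    return (highest_passable + lowest_blocking) / 2

def validate(threshold, val_set, min_accuracy=0.5):
    """S3-3/S3-4: verify the preliminary model on the verification set."""
    correct = sum((h >= threshold) == blocking for h, blocking in val_set)
    return correct / max(len(val_set), 1) >= min_accuracy

# samples: (obstacle height, artificial label: True = blocking)
samples = [(0.1, False), (0.9, True), (0.2, False), (0.8, True)] * 5
train_set, val_set = split_samples(samples)
threshold = train(train_set)
accepted = validate(threshold, val_set)  # S3-5: mark as obstacle analysis model
```

Only if `validate` passes is the preliminary model marked as the obstacle analysis model, matching the accept step S3-5.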
In the mobile robot path planning method based on deep learning, the vision sensor directly acquires video information in front of the robot, and after acquiring the video information the target detection module intercepts a plurality of groups of adjacent frames as judgement pictures; a plurality of judgement pictures illuminated from different angles are synthesized into one TIFF-format picture so that the obstacle contour features become more obvious; an obstacle model is trained by a deep learning method and the obstacle contour in the image is detected;
after the images are synthesized, the synthesized images are loaded into a preset network framework and trained with a target detection algorithm; the obstacle contours calculated from the multiple groups of judgement pictures are stereo-matched to form three-dimensional obstacle information.
In the mobile robot path planning method based on deep learning, the gray values of the TIFF-format picture containing the detail contour are mapped from 0-1 to 0-255 to obtain a PNG-format picture; all collected pictures are divided into a training set and a test set in a 7:3 ratio, the training set is labeled manually, and the labels are classified by defect type; after labeling is finished, the labeled pictures are imported into the target detection algorithm for training, and a trained model is obtained when training is complete.
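A minimal sketch of the preprocessing described here: mapping normalized grayscale values (0-1) into the 8-bit range (0-255), as in the TIFF-to-PNG conversion, and splitting the collected pictures into training and test sets in a 7:3 ratio. This pure-Python stand-in is an assumption for illustration; a real pipeline would use an image library such as Pillow or OpenCV.

```python
# Sketch of the gray-value mapping and the 7:3 split; pure Python,
# no image library, intended only to illustrate the arithmetic.

def to_8bit(gray_rows):
    """Map float grayscale in [0, 1] to integers in [0, 255] (TIFF -> PNG)."""
    return [[round(v * 255) for v in row] for row in gray_rows]

def split_7_3(pictures):
    """Divide the collected pictures into a 7:3 training/test split."""
    cut = round(len(pictures) * 0.7)
    return pictures[:cut], pictures[cut:]

pixels = to_8bit([[0.0, 0.5, 1.0]])
train_pics, test_pics = split_7_3([f"img_{i}.png" for i in range(10)])
# 10 collected pictures -> 7 for training, 3 for testing
```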
In the mobile robot path planning method based on deep learning, judging the passability of the robot body according to the three-dimensional obstacle information and determining the obstacles the robot cannot pass comprises:
obtaining normal vector information from the position information of each information unit in the three-dimensional environment information and segmenting out the ground plane; calculating the height and height gradient of each information unit in the three-dimensional environment information relative to the ground plane; and judging the passability of the robot body according to the calculated height and height gradient, and determining the obstacles the robot cannot pass.
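The height-and-gradient test in this paragraph can be sketched on a one-dimensional height profile. The sketch assumes the ground plane has already been segmented out (so heights are relative to z = 0), and the step-height and slope thresholds are illustrative values, not taken from the patent.

```python
# Sketch of the height/gradient passability test on a 1-D height profile.
# Thresholds (max_step, max_slope_deg) and cell spacing dx are assumptions.
import math

def slope_deg(heights, i, dx=0.1):
    """Local height gradient at cell i, as an angle in degrees
    (central difference over neighbouring cells)."""
    left = heights[max(i - 1, 0)]
    right = heights[min(i + 1, len(heights) - 1)]
    return math.degrees(math.atan2(abs(right - left), 2 * dx))

def impassable_cells(heights, max_step=0.15, max_slope_deg=30.0):
    """Cells whose height or slope exceeds what the robot can pass."""
    return [i for i, h in enumerate(heights)
            if h > max_step or slope_deg(heights, i) > max_slope_deg]

blocked = impassable_cells([0.0, 0.05, 0.4, 0.05, 0.0])  # a 0.4 m bump
```

The 0.4 m bump itself fails the height test, and the cells on either side of it fail the slope test, so the whole obstacle region is flagged as impassable.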
In the above mobile robot path planning method based on deep learning, determining the obstacles the robot cannot pass by judging the passability of the robot body according to the calculated height and height gradient comprises:
determining the passability judgement type according to the normal vector direction of the three-dimensional environment information; if the judgement type is top passing, performing the top passing judgement according to the calculated height and the robot height, and determining the obstacles the robot cannot pass; and if the judgement type is bottom passing, performing the bottom passing judgement according to the calculated height gradient and the maximum passing angle of the robot body, and determining the obstacles the robot cannot pass.
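The two judgement branches described here reduce to simple comparisons: a "top passing" check of overhead clearance against the robot height, and a "bottom passing" check of ground slope against the maximum passing angle. The function and all numeric values below are illustrative assumptions.

```python
# Sketch of the top/bottom passability judgement; names and values assumed.

def can_pass(judgement_type, *, clearance=None, robot_height=None,
             slope_deg=None, max_angle_deg=None):
    if judgement_type == "top":
        # top passing: overhead clearance must exceed the robot height
        return clearance > robot_height
    if judgement_type == "bottom":
        # bottom passing: slope must not exceed the maximum passing angle
        return slope_deg <= max_angle_deg
    raise ValueError(f"unknown judgement type: {judgement_type!r}")

under_bridge = can_pass("top", clearance=1.2, robot_height=0.9)      # passable
steep_ramp = can_pass("bottom", slope_deg=35.0, max_angle_deg=30.0)  # blocked
```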
In the mobile robot path planning method based on deep learning, each time the three-dimensional environment information and three-dimensional obstacle information are generated they are stored in a comparison library together with the corresponding judgement pictures; before generating them, the comparison library is searched; if a consistent entry is found it is called directly, otherwise the information is regenerated.
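The comparison-library reuse can be sketched as a cache keyed by the judgement pictures. Hashing the raw frame bytes, as below, only matches byte-identical frames; a real system would more plausibly use a perceptual similarity measure, so treat this dictionary version as an assumption for illustration.

```python
# Sketch of the comparison library: cache generated 3D information keyed
# by the judgement pictures, and regenerate only on a cache miss.
import hashlib

comparison_library = {}

def get_environment_3d(judgement_pictures, generate):
    key = hashlib.sha256(b"".join(judgement_pictures)).hexdigest()
    if key in comparison_library:        # consistent entry found: call directly
        return comparison_library[key]
    info = generate(judgement_pictures)  # no match: regenerate the 3D information
    comparison_library[key] = info
    return info

generation_calls = []
def fake_generate(pics):
    generation_calls.append(1)
    return {"obstacles": len(pics)}

first = get_environment_3d([b"frame1", b"frame2"], fake_generate)
second = get_environment_3d([b"frame1", b"frame2"], fake_generate)  # cache hit
```

On the second call the library already holds a consistent entry, so the expensive regeneration step is skipped.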
Compared with the prior art, the invention has the advantages that:
the robot is provided with a visual sensor, and can generate environment three-dimensional information through a target detection module; and meanwhile, whether the robot can pass through the obstacle is determined according to the three-dimensional information of the environment, and an obstacle analysis model which is trained by adopting training data based on a neural network model is used when the robot can pass through the obstacle is analyzed, so that the analysis result is more accurate, and the optimal path is finally generated.
Drawings
Fig. 1 is a schematic block diagram of a mobile robot path planning method based on deep learning according to the present invention.
Detailed Description
The following examples are for illustrative purposes only and are not intended to limit the scope of the present invention.
Examples
Referring to fig. 1, the method for planning the path of the mobile robot based on deep learning includes the following steps:
s1, the robot receives the target point information and the global path information, and the robot generates an initial path; the robot acquires environmental information in the advancing direction of the body of the mobile robot through a vision sensor, and the vision sensor is arranged on the body of the mobile robot;
S2, performing target detection on the environment information through a target detection module and judging path validity: judging whether the data transmission module is connected; once connected, starting to receive the video data output by the vision sensor; then judging, according to a user setting, whether obstacle detection is required; if not required, displaying the video data directly; if required, starting the obstacle detection operation, marking the detection result on the video data, displaying it, and simultaneously generating three-dimensional environment information;
S3, judging the passability of the mobile robot body according to the three-dimensional environment information and determining the obstacles the robot cannot pass; if an obstacle can be passed, keeping the original path unchanged; if not, calculating the optimal path for the robot to bypass the obstacle and replacing the initial path with it;
S4, inputting the three-dimensional environment information corresponding to the obstacle into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model; the obstacle analysis model is based on a neural network model and is trained with training data composed of training signal sequences and the artificial labels corresponding to them;
S5, sequentially detecting the obstacle information along the path to obtain complete, fully passable path information, namely the planned path.
The three-dimensional environment information is input into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model, the analysis result being either a passable obstacle or a non-passable obstacle; the obstacle analysis model is based on a neural network model and is trained with training data comprising training signal sequences and their corresponding artificial labels. Before step S4, the method comprises:
S3-1, calling a specified amount of sample data from a preset sample database to form a sample set, and dividing the sample set into a training set and a verification set according to a preset proportion; the sample data consist of pre-collected signal sequences and the artificial labels corresponding to them;
s3-2, calling a preset neural network model, and inputting a training set into the neural network model for training to obtain a preliminary neural network model;
s3-3, verifying the preliminary neural network model by using a verification set to obtain a verification result;
s3-4, judging whether the verification result is that the verification is passed;
and S3-5, if the verification result is that the verification is passed, marking the preliminary neural network model as an obstacle analysis model.
The vision sensor directly acquires video information in front of the robot, and after acquiring the video information the target detection module intercepts a plurality of groups of adjacent frames as judgement pictures; a plurality of judgement pictures illuminated from different angles are synthesized into one TIFF-format picture so that the obstacle contour features become more obvious; an obstacle model is trained by a deep learning method and the obstacle contour in the image is detected;
after the images are synthesized, the synthesized images are loaded into a preset network framework and trained with a target detection algorithm; the obstacle contours calculated from the multiple groups of judgement pictures are stereo-matched to form three-dimensional obstacle information.
The gray values of the TIFF-format judgement picture containing the detail contour are mapped from 0-1 to 0-255 to obtain a PNG-format picture; all collected pictures are divided into a training set and a test set in a 7:3 ratio, the training set is labeled manually, and the labels are classified by defect type; after labeling is finished, the labeled pictures are imported into the target detection algorithm for training, and a trained model is obtained when training is complete.
Judging the passability of the advancing direction of the robot body according to the three-dimensional obstacle information and determining the obstacles the robot cannot pass comprises the following steps:
obtaining normal vector information from the position information of each information unit in the three-dimensional environment information and segmenting out the ground plane; calculating the height and height gradient of each information unit in the three-dimensional environment information relative to the ground plane; and judging the passability of the robot body according to the calculated height and height gradient, and determining the obstacles the robot cannot pass.
Determining the obstacles the robot cannot pass by judging the passability of the robot body according to the calculated height and height gradient comprises the following steps:
determining the passability judgement type according to the normal vector direction of the three-dimensional environment information; if the judgement type is top passing, performing the top passing judgement according to the calculated height and the robot height, and determining the obstacles the robot cannot pass; and if the judgement type is bottom passing, performing the bottom passing judgement according to the calculated height gradient and the maximum passing angle of the robot body, and determining the obstacles the robot cannot pass.
Each time the three-dimensional environment information and three-dimensional obstacle information are generated they are stored in a comparison library together with the corresponding judgement pictures; before generating them, the comparison library is searched; if a consistent entry is found it is called directly, otherwise the information is regenerated.
Claims (7)
1. The mobile robot path planning method based on deep learning is characterized by comprising the following steps:
s1, the robot receives the target point information and the global path information, and the robot generates an initial path; the robot acquires environmental information in the advancing direction of the body of the mobile robot through a vision sensor, wherein the vision sensor is arranged on the body of the mobile robot;
S2, performing target detection on the environment information through a target detection module and judging path validity: judging whether the data transmission module is connected; once connected, starting to receive the video data output by the vision sensor; then judging, according to a user setting, whether obstacle detection is required; if not required, displaying the video data directly; if required, starting the obstacle detection operation, marking the detection result on the video data, displaying it, and simultaneously generating three-dimensional environment information;
S3, judging the passability of the mobile robot body according to the three-dimensional environment information and determining the obstacles the robot cannot pass; if an obstacle can be passed, keeping the original path unchanged; if not, calculating the optimal path for the robot to bypass the obstacle and replacing the initial path with it;
S4, inputting the three-dimensional environment information corresponding to the obstacle into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model; the obstacle analysis model is based on a neural network model and is trained with training data composed of training signal sequences and the artificial labels corresponding to them;
S5, sequentially detecting the obstacle information along the path to obtain complete, fully passable path information, namely the planned path.
2. The deep learning-based mobile robot path planning method according to claim 1, wherein the three-dimensional environment information is input into a preset obstacle analysis model for processing, so as to obtain the analysis result output by the obstacle analysis model, the analysis result being either a passable obstacle or a non-passable obstacle; the obstacle analysis model is based on a neural network model and is trained with training data comprising training signal sequences and their corresponding artificial labels; before step S4, the method comprises:
S3-1, calling a specified amount of sample data from a preset sample database to form a sample set, and dividing the sample set into a training set and a verification set according to a preset proportion; the sample data consist of pre-collected signal sequences and the artificial labels corresponding to them;
s3-2, calling a preset neural network model, and inputting the training set into the neural network model for training to obtain a preliminary neural network model;
s3-3, carrying out verification processing on the preliminary neural network model by using the verification set to obtain a verification result;
s3-4, judging whether the verification result is that the verification is passed;
and S3-5, if the verification result is that the verification is passed, marking the preliminary neural network model as an obstacle analysis model.
3. The deep learning-based mobile robot path planning method according to claim 1, wherein the vision sensor directly acquires video information in front of the robot, and after acquiring the video information the target detection module intercepts a plurality of groups of adjacent frames as judgement pictures; a plurality of judgement pictures illuminated from different angles are synthesized into one TIFF-format picture so that the obstacle contour features become more obvious; an obstacle model is trained by a deep learning method and the obstacle contour in the image is detected;
after the images are synthesized, the synthesized images are loaded into a preset network framework and trained with a target detection algorithm; the obstacle contours calculated from the multiple groups of judgement pictures are stereo-matched to form three-dimensional obstacle information.
4. The deep learning-based mobile robot path planning method according to claim 3, wherein the gray values of the TIFF-format picture containing the detail contour are mapped from 0-1 to 0-255 to obtain a PNG-format picture; all collected pictures are divided into a training set and a test set in a 7:3 ratio, the training set is labeled manually, and the labels are classified by defect type; after labeling is finished, the labeled pictures are imported into the target detection algorithm for training, and a trained model is obtained when training is complete.
5. The mobile robot path planning method based on deep learning according to claim 3, wherein judging the passability of the robot body according to the three-dimensional obstacle information and determining the obstacles the robot cannot pass comprises:
obtaining normal vector information from the position information of each information unit in the three-dimensional environment information and segmenting out the ground plane; calculating the height and height gradient of each information unit in the three-dimensional environment information relative to the ground plane; and judging the passability of the robot body according to the calculated height and height gradient, and determining the obstacles the robot cannot pass.
6. The mobile robot path planning method based on deep learning according to claim 5, wherein determining the obstacles the robot cannot pass by judging the passability of the robot body according to the calculated height and height gradient comprises:
determining the passability judgement type according to the normal vector direction of the three-dimensional environment information; if the judgement type is top passing, performing the top passing judgement according to the calculated height and the robot height, and determining the obstacles the robot cannot pass; and if the judgement type is bottom passing, performing the bottom passing judgement according to the calculated height gradient and the maximum passing angle of the robot body, and determining the obstacles the robot cannot pass.
7. The mobile robot path planning method based on deep learning according to claim 1, wherein each time the three-dimensional environment information and three-dimensional obstacle information are generated they are stored in a comparison library together with the corresponding judgement pictures; before generating them, data retrieval is performed on the comparison library; if a consistent entry is found it is called directly, otherwise the information is regenerated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110671845.3A CN113359738A (en) | 2021-06-17 | 2021-06-17 | Mobile robot path planning method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113359738A true CN113359738A (en) | 2021-09-07 |
Family
ID=77534522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110671845.3A Pending CN113359738A (en) | 2021-06-17 | 2021-06-17 | Mobile robot path planning method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113359738A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114488852A (en) * | 2022-01-25 | 2022-05-13 | 海南大学 | Unmanned vehicle virtual simulation system and method for cross-country environment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130094533A (en) * | 2012-02-16 | 2013-08-26 | 인하대학교 산학협력단 | Collision prevention system of mobile robot in unknown environment and method thereof |
CN106444780A (en) * | 2016-11-10 | 2017-02-22 | 速感科技(北京)有限公司 | Robot autonomous navigation method and system based on vision positioning algorithm |
CN107368076A (en) * | 2017-07-31 | 2017-11-21 | 中南大学 | Robot motion's pathdepth learns controlling planning method under a kind of intelligent environment |
CN109375618A (en) * | 2018-09-27 | 2019-02-22 | 深圳乐动机器人有限公司 | The navigation barrier-avoiding method and terminal device of clean robot |
CN111429515A (en) * | 2020-03-19 | 2020-07-17 | 佛山市南海区广工大数控装备协同创新研究院 | Learning method of robot obstacle avoidance behavior based on deep learning |
- 2021-06-17: application CN202110671845.3A filed; patent CN113359738A pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||