CN110986949B - Path identification method based on artificial intelligence platform - Google Patents
Path identification method based on artificial intelligence platform
- Publication number
- CN110986949B (application CN201911229234.2A)
- Authority
- CN
- China
- Prior art keywords
- feature
- convolution
- network
- path
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention belongs to the technical field of intelligent path planning, and particularly relates to a path planning method based on an artificial intelligence platform. The invention comprises the following steps. Acquiring images having a path: images containing paths are collected, each image is labeled with a standard class, and data connections are established according to the defined feature types to form a data set. Data preprocessing: the per-pixel mean and standard deviation are computed over the whole data set for the original images, each image is flipped with 50% probability, and normalization is applied to obtain a preprocessed image set. Compared with traditional methods, the method has higher accuracy and a more intelligent judging process, and can be applied in a variety of scenarios. The invention uses a fully convolutional network as the basic network structure and preserves local information, so the learned features are easier to visualize and understand; at the same time, the fully convolutional network places few restrictions on image size and type, which enhances practicality.
Description
Technical Field
The invention belongs to the technical field of intelligent path planning, and particularly relates to a path planning method based on an artificial intelligence platform.
Background
Robotics is an important and vigorously developing field: it integrates optics, electronics, measurement and control technology, automatic control theory, information technology, software technology and computer technology into a comprehensive new technology. Owing to the development and popularization of computer technology, human production worldwide has gradually transitioned from the mechanized and automated eras to the "smart" era.
The development of robot technology draws on multiple disciplines and represents the frontier of high technology. As its applications keep expanding into every area of human life, the roles and effects of robot technology are being re-evaluated internationally, and with its continuous development and progress, many new types of robots are gradually entering all fields of society and playing an increasingly important role. Countries around the world pay ever more attention to its advantages, and robotics has become a core technology for every nation.
Research on robot path planning generally falls into three categories:
(1) Path planning research under known environments and static obstacle avoidance conditions;
(2) Path planning research under known environment and dynamic obstacle avoidance conditions;
(3) Path planning studies in an unknown or dynamic environment.
Path planning can be divided into two main categories according to whether the map environment information is known: global path planning and local path planning. The former can generally accomplish point-to-point shortest-path planning and map traversal on the basis of an established map model; the latter is mainly applied in environments where the overall or local environment is unknown, where sensors are required to detect the surroundings and determine the feasible region. The two methods are not fundamentally different: local path planning methods remain effective for global path planning, and most global path planning methods can, after improvement, be transplanted to local path planning applications.
Path planning algorithms can be divided into traditional algorithms and intelligent bionic algorithms according to their basic principles. Traditional algorithms include the Y algorithm, fuzzy logic algorithms, tabu search algorithms and the like. However, traditional path planning algorithms are inefficient in large-scale search, so faster and more adaptive methods have had to be explored. The application of bionic intelligent algorithms to path planning developed alongside the exploration and in-depth study of intelligent algorithms at the end of the twentieth century; ant colony algorithms, neural network algorithms, genetic algorithms, particle swarm algorithms and the like are currently applied to robot path planning. Such algorithms generally have higher search efficiency, but sometimes fall into local optima, and in some cases their efficiency can even be lower.
Disclosure of Invention
The invention aims to provide a high-efficiency path planning method based on an artificial intelligence platform.
The purpose of the invention is realized in the following way:
a path planning method based on an artificial intelligence platform comprises the following steps:
(1) Acquiring images having a path: collect images containing paths, label each image with a standard class, and establish data connections according to the defined feature types to form a data set;
(2) Data preprocessing: compute the per-pixel mean and standard deviation over the whole data set for the original images, flip each image with 50% probability, and apply normalization to obtain a preprocessed image set;
the mean image over the data set is x̄ and the standard deviation is std; a particular image x is normalized as x' = (x - x̄)/std;
(3) Extracting primary features: from top to bottom, determine in sequence a 51-layer ResNet network architecture and a 1-layer network architecture as the bottom network for constructing the feature artificial-intelligence network; perform primary feature extraction on the acquired path images, extracting features A1, A2, A3, A4, A5 at 5 different scales; and compute the network architecture weight β;
where f(x') represents the spatial activation value of unit k of the network layer, and w_k is the weight of unit k for the network layer;
(4) Superposing a convolution network: the 5-scale features obtained in step (3) are each passed through a top-down convolution network and superposed to obtain new features S1, S2, S3, S4, S5, eliminating the aliasing effect between different layers;
enlarge the scale feature S5 by a factor of 5-10 to obtain the enlarged feature R5; feature R4 is obtained by enlarging feature S5 by a factor of 2. Meanwhile, the scale feature A4 is convolved with a 1×8×128 convolution to give the convolution feature A4'; the enlarged feature R5 is added to the convolution feature A4' to obtain the new feature S4. The scale feature A3 is convolved with a 1×8×128 convolution to give the convolution feature A3'; the enlarged feature R4 is added to A3' to obtain the new feature S3. The scale feature A2 is convolved with a 1×8×128 convolution to give the convolution feature A2'; the enlarged feature R3 is added to A2' to obtain the new feature S2. The scale feature A1 is convolved with a 1×8×128 convolution to give the convolution feature A1'; the enlarged feature R2 is added to A1' to obtain the new feature S1;
a new feature S6 is obtained by applying a 9×9 convolution with stride 2 to S5; feature S6 is then activated with the Leaky ReLU function and convolved with another 9×9, stride-2 convolution to obtain the new feature S7;
(5) Reconstructing the feature map: establish a feature pyramid network as the main network; the obtained features S5, S6, S7 generate reconstruction features through upsampling and single-layer convolution, completing the feature recombination. The reconstructed feature map is generated as follows:
where Conv denotes single-layer convolution and Upsample denotes upsampling;
(6) Target frame output: the 5 reconstructed features are connected to a target-frame output suited to the reconstructed feature map; the output of the target frame is divided into two sub-networks, with the classification sub-network serving as the class output of the regression target and the regression sub-network serving as the output of the regression bounding box:
the Focal Loss function output is:
FL(Q_t) = -(1 - Q_t)^λ · β·log(Q_t);
where Q_t is the probability of correct path-image identification, β is a weight with a value between 0.2 and 0.3, and λ is the focusing coefficient;
the balanced cross entropy function output is:
CE(Q_t) = -β·log(Q_t);
from the Focal Loss output and the balanced cross-entropy output, the intersection-over-union ratio is calculated with a point-set confidence function:
where D_T(x) is the pixel distance between the atlas x in the corresponding feature map and the point set of the real label, and d_s is a preset minimum distance value;
(7) Calculating the mean square error of the path image: while computing the classification sub-network, the new features are used to compute the mean-square-error loss:
where n ≤ 5 and x_i' is the path-image recognition value;
(8) Obtaining the path recognition model: after the outputs of the classification sub-network and the regression sub-network are calculated, gradient-descent training is performed on those outputs to obtain the path recognition model, as follows:
V_D(x) = β·MSE + (1 - β)·D(x)
V_CE(Qt) = β·CE(Q_t) + (1 - β)·V_D(x)
W = V_D(x) - α·V_CE(Qt)
b = W - α·CE(Q_t)
where V_D(x) is the identification speed, V_CE(Qt) is the highest speed, W is the planned path, and b is the actual path; the path recognition model then performs path recognition.
The beneficial effects of the invention are as follows. The invention is a path identification method for a robot or traveling device executing under an artificial intelligence platform. Compared with traditional methods, it has higher accuracy and a more intelligent judging process, and can be applied in a variety of scenarios. The invention uses a fully convolutional network as the basic network structure and preserves local information, so the learned features are easier to visualize and understand; at the same time, the fully convolutional network places few restrictions on image size and type, enhancing practicality. A convolutional neural network is added to the class activation mapping to retain the original data, strengthening network recognition accuracy and integrating auxiliary judgment, which facilitates use and extension by non-professionals and increases the possibility of popularization. The method combines the speed of a single-stage model with the calculation accuracy of a two-stage model.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention relates to the field of deep-learning path recognition. Path regression and recognition are carried out using depth-feature recombination, so the feature points of the deep convolution layers are better utilized and the calculation speed for the target is improved. The invention specifically comprises the following steps:
(1) Acquiring images having a path: image recording and collecting tools such as monitors and camera probes can be used to collect images containing paths; the pictures are then stored by programs on chips embedded in the equipment, each image is labeled with a standard class, and data connections are established according to the defined feature types to form a data set;
(2) Data preprocessing: the chip or core computing unit (e.g. a CPU or DSP) starts to work, computing the per-pixel mean and standard deviation over the whole data set for the original images, flipping each image with 50% probability, and applying normalization to obtain a preprocessed image set. This step processes pictures of different scenes, paths and formats into a format that a uniform algorithm can judge uniformly.
The mean image over the data set is x̄ and the standard deviation is std; a particular image x is normalized as x' = (x - x̄)/std.
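Purely as an illustrative sketch (the patent gives no implementation; the function name, data shapes, and the horizontal flip axis are assumptions), step (2) could look like this in NumPy:

```python
import numpy as np

def preprocess(images, rng=None):
    """Step (2) sketch: per-pixel mean/std over the whole data set,
    50%-probability horizontal flip, then x' = (x - mean) / std."""
    rng = rng or np.random.default_rng(0)
    images = np.asarray(images, dtype=np.float64)
    mean = images.mean(axis=0)           # per-pixel mean over the data set
    std = images.std(axis=0) + 1e-8      # per-pixel std (epsilon avoids /0)
    out = []
    for x in images:
        if rng.random() < 0.5:           # flip each image with 50% probability
            x = x[:, ::-1]
        out.append((x - mean) / std)
    return np.stack(out)

batch = np.arange(2 * 4 * 4, dtype=np.float64).reshape(2, 4, 4)
norm = preprocess(batch)
print(norm.shape)  # (2, 4, 4)
```

The epsilon added to std is a practical guard against constant pixels, not something the patent specifies.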
(3) Extracting primary features: from top to bottom, determine in sequence a 51-layer ResNet network architecture and a 1-layer network architecture as the bottom network for constructing the feature artificial-intelligence network; perform primary feature extraction on the acquired path images, extracting features A1, A2, A3, A4, A5 at 5 different scales; and compute the network architecture weight β;
where f(x') represents the spatial activation value of unit k of the network layer, and w_k is the weight of unit k for the network layer. The invention adopts an artificial intelligence network, and introducing the architecture weight helps improve the accuracy of path judgment.
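The formula for β appears in the source only as an image; a common class-activation-mapping-style weighting that matches the definitions of f(x') and w_k above is sketched here purely as an assumption, not as the patent's actual formula:

```python
import numpy as np

def architecture_weight(activations, unit_weights):
    """Illustrative CAM-style weight: beta = sum_k w_k * mean_spatial(f_k).
    `activations` has shape (K, H, W); `unit_weights` has shape (K,)."""
    pooled = activations.mean(axis=(1, 2))   # spatial average of each unit k
    return float(np.dot(unit_weights, pooled))

acts = np.ones((3, 4, 4))                    # toy activations for 3 units
w = np.array([0.2, 0.3, 0.5])                # toy per-unit weights
beta = architecture_weight(acts, w)
print(beta)  # 1.0
```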
(4) Superposing a convolution network: the 5-scale features obtained in step (3) are superposed through a top-down convolution network to obtain new features S1, S2, S3, S4, S5, eliminating the aliasing effect between different layers;
enlarge the scale feature S5 by a factor of 5-10 to obtain the enlarged feature R5; feature R4 is obtained by enlarging feature S5 by a factor of 2. Meanwhile, the scale feature A4 is convolved with a 1×8×128 convolution to give the convolution feature A4'; the enlarged feature R5 is added to the convolution feature A4' to obtain the new feature S4. The scale feature A3 is convolved with a 1×8×128 convolution to give the convolution feature A3'; the enlarged feature R4 is added to A3' to obtain the new feature S3. The scale feature A2 is convolved with a 1×8×128 convolution to give the convolution feature A2'; the enlarged feature R3 is added to A2' to obtain the new feature S2. The scale feature A1 is convolved with a 1×8×128 convolution to give the convolution feature A1'; the enlarged feature R2 is added to A1' to obtain the new feature S1;
a new feature S6 is obtained by applying a 9×9 convolution with stride 2 to S5; feature S6 is then activated with the Leaky ReLU function and convolved with another 9×9, stride-2 convolution to obtain the new feature S7;
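A toy sketch of the S6/S7 construction in step (4): a single-channel NumPy stride-2 "valid" convolution with a 9×9 averaging kernel standing in for learned weights, and the small feature size chosen only for illustration, all assumptions:

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    """Leaky ReLU activation, as named in the patent."""
    return np.where(x > 0, x, slope * x)

def conv_stride2(x, k):
    """Valid single-channel 2-D convolution with stride 2; stands in for the
    patent's 9x9 stride-2 convolutions."""
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // 2 + 1
    ow = (x.shape[1] - kw) // 2 + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[2*i:2*i+kh, 2*j:2*j+kw] * k)
    return out

S5 = np.ones((18, 18))                 # toy "S5" feature
k = np.full((9, 9), 1 / 81)            # averaging kernel as a stand-in
S6 = conv_stride2(S5, k)               # 9x9 conv, stride 2, as in step (4)
# S7 would repeat the conv after Leaky ReLU; skipped when S6 is too small.
S7 = conv_stride2(leaky_relu(S6), k) if min(S6.shape) >= 9 else None
print(S6.shape)  # (5, 5)
```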
(5) Reconstructing the feature map: establish a feature pyramid network as the main network; the obtained features S5, S6, S7 generate reconstruction features through upsampling and single-layer convolution, completing the feature recombination. The reconstructed feature map is generated as follows:
where Conv denotes single-layer convolution and Upsample denotes upsampling;
In reconstructing the feature map, single-layer convolution and upsampling of the reconstruction features are introduced, and the normalized features are further screened; this makes it convenient to establish the feature map and improves the accuracy of the invention.
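Step (5)'s reconstruction, Conv applied after Upsample, can be sketched as follows; nearest-neighbour 2× upsampling and a 3×3 "same" kernel are assumptions, since the patent specifies neither:

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling (interpolation method is assumed)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def conv3x3_same(x, k):
    """Single-layer 3x3 'same' convolution with zero padding."""
    p = np.pad(x, 1)
    out = np.empty_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i+3, j:j+3] * k)
    return out

def reconstruct(feature, k):
    """Step (5) sketch: reconstruction feature = Conv(Upsample(feature))."""
    return conv3x3_same(upsample2(feature), k)

f = np.ones((4, 4))            # toy feature (e.g. S5)
k = np.full((3, 3), 1 / 9)     # averaging kernel as a stand-in
r = reconstruct(f, k)
print(r.shape)  # (8, 8)
```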
(6) Target frame output: the 5 reconstructed features are connected to a target-frame output suited to the reconstructed feature map; the output of the target frame is divided into two sub-networks, with the classification sub-network serving as the class output of the regression target and the regression sub-network serving as the output of the regression bounding box:
the Focal Loss function output is:
FL(Q_t) = -(1 - Q_t)^λ · β·log(Q_t);
where Q_t is the probability of correct path-image identification, β is a weight with a value between 0.2 and 0.3, and λ is the focusing coefficient;
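The two loss outputs above can be written directly from their formulas; β = 0.25 is taken from the stated 0.2-0.3 range, and λ = 2 is an assumed focusing coefficient (the patent does not fix its value):

```python
import numpy as np

def focal_loss(q_t, beta=0.25, lam=2.0):
    """FL(Qt) = -(1 - Qt)^lam * beta * log(Qt)."""
    q_t = np.asarray(q_t, dtype=np.float64)
    return -((1.0 - q_t) ** lam) * beta * np.log(q_t)

def balanced_ce(q_t, beta=0.25):
    """Balanced cross entropy: CE(Qt) = -beta * log(Qt)."""
    return -beta * np.log(np.asarray(q_t, dtype=np.float64))

q = np.array([0.9, 0.5, 0.1])   # easy, medium, hard examples
fl = focal_loss(q)
ce = balanced_ce(q)
print(fl <= ce)                  # focal loss down-weights easy (high-Qt) examples
```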
the balanced cross entropy function output is:
CE(Q_t) = -β·log(Q_t);
from the Focal Loss output and the balanced cross-entropy output, the intersection-over-union ratio is calculated with a point-set confidence function:
where D_T(x) is the pixel distance between the atlas x in the corresponding feature map and the point set of the real label, and d_s is a preset minimum distance value;
The calculation output of the invention introduces the concept of target-frame output, which helps avoid distortion of the output result. The main causes of distortion are errors and signal interference; by improving the probability and weight of correct path-image identification through the target-frame output, the signal-interference problem is avoided and the error value is further reduced.
(7) Calculating the mean square error of the path image: while computing the classification sub-network, the new features are used to compute the mean-square-error loss:
where n ≤ 5 and x_i' is the path-image recognition value;
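The mean-square-error formula itself appears only as an image in the source; the standard MSE form consistent with the surrounding definitions (n ≤ 5 features, x_i' the recognition value) is assumed here:

```python
import numpy as np

def mse_loss(x, x_pred):
    """Assumed form of step (7): MSE = (1/n) * sum_i (x_i - x_i')^2, n <= 5."""
    x = np.asarray(x, dtype=np.float64)
    x_pred = np.asarray(x_pred, dtype=np.float64)
    assert x.shape == x_pred.shape and len(x) <= 5
    return float(np.mean((x - x_pred) ** 2))

print(mse_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # 4/3 ≈ 1.333...
```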
(8) Obtaining the path recognition model: after the outputs of the classification sub-network and the regression sub-network are calculated, gradient-descent training is performed on those outputs to obtain the path recognition model, as follows:
V_D(x) = β·MSE + (1 - β)·D(x)
V_CE(Qt) = β·CE(Q_t) + (1 - β)·V_D(x)
W = V_D(x) - α·V_CE(Qt)
b = W - α·CE(Q_t)
where V_D(x) is the identification speed, V_CE(Qt) is the highest speed, W is the planned path, and b is the actual path; the path recognition model then performs path recognition.
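The four model equations of step (8), transcribed literally as plain arithmetic; the values of α, MSE, D(x) and CE(Q_t) are illustrative assumptions (the text gives no value for α), with β = 0.25 from the stated range:

```python
def path_model(mse, d_x, ce_qt, beta=0.25, alpha=0.1):
    """Step (8) equations, transcribed literally:
      V_D(x)   = beta*MSE + (1-beta)*D(x)
      V_CE(Qt) = beta*CE(Qt) + (1-beta)*V_D(x)
      W        = V_D(x) - alpha*V_CE(Qt)
      b        = W - alpha*CE(Qt)
    Returns (V_D(x), V_CE(Qt), W, b)."""
    v_d = beta * mse + (1 - beta) * d_x
    v_ce = beta * ce_qt + (1 - beta) * v_d
    w = v_d - alpha * v_ce
    b = w - alpha * ce_qt
    return v_d, v_ce, w, b

v_d, v_ce, w, b = path_model(mse=0.2, d_x=0.4, ce_qt=0.3)
print(v_d, v_ce, w, b)
```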
These two steps further correct the path recognition model through the mean square error of the path image, produce the final path recognition result under the artificial intelligence platform, and help the recognizing agent judge and execute subsequent operations.
Claims (2)
1. A path planning method based on an artificial intelligence platform, comprising the following steps:
(1) Acquiring images having a path: collect images containing paths, label each image with a standard class, and establish data connections according to the defined feature types to form a data set;
(2) Data preprocessing: compute the per-pixel mean and standard deviation over the whole data set for the original images, flip each image with 50% probability, and apply normalization to obtain a preprocessed image set;
(3) Extracting primary features: from top to bottom, determine in sequence a 51-layer ResNet network architecture and a 1-layer network architecture as the bottom network for constructing the feature artificial-intelligence network; perform primary feature extraction on the acquired path images, extracting features A1, A2, A3, A4, A5 at 5 different scales; and compute the network architecture weight β;
(4) Superposing a convolution network: the 5-scale features obtained in step (3) are superposed through a top-down convolution network to obtain new features S1, S2, S3, S4, S5, eliminating the aliasing effect between different layers;
(5) Reconstructing the feature map: establish a feature pyramid network as the main network; the obtained features S5, S6, S7 generate reconstruction features through upsampling and single-layer convolution, completing the feature recombination;
(6) Target frame output: the 5 reconstructed features are connected to a target-frame output suited to the reconstructed feature map; the output of the target frame is divided into two sub-networks, with the classification sub-network serving as the class output of the regression target and the regression sub-network serving as the output of the regression bounding box:
the Focal Loss function output is:
FL(Q_t) = -(1 - Q_t)^λ · β·log(Q_t);
where Q_t is the probability of correct path-image identification, β is a weight with a value between 0.2 and 0.3, and λ is the focusing coefficient;
the balanced cross entropy function output is:
CE(Q_t) = -β·log(Q_t);
from the Focal Loss output and the balanced cross-entropy output, the intersection-over-union ratio is calculated with a point-set confidence function:
where D_T(x) is the pixel distance between the atlas x in the corresponding feature map and the point set of the real label, and d_s is a preset minimum distance value;
(7) Calculating the mean square error of the path image: while computing the classification sub-network, the 5 new features are used to compute the mean-square-error loss:
where n ≤ 5 and x_i' is the path-image recognition value;
(8) Obtaining the path recognition model: after the outputs of the classification sub-network and the regression sub-network are calculated, gradient-descent training is performed on those outputs to obtain the path identification model;
the specific steps of overlapping the convolution network in the step (4) include: will S 5 The scale characteristics are enlarged by 5-10 times to obtain the enlarged characteristic R 5 Characteristic R 4 Is composed of features S 5 Obtained after adding 2 times, and simultaneously scale feature A 4 Convolution of 1×8×128 gives the convolution characteristic a 4 ', will expand the feature R 5 With convolution feature A 4 ' adding to get a new feature S 4 The method comprises the steps of carrying out a first treatment on the surface of the Scale feature A 3 Convolution of 1×8×128 gives the convolution characteristic a 3 ', will expand the feature R 4 With convolution feature A 3 ' adding to get a new feature S 3 The method comprises the steps of carrying out a first treatment on the surface of the Scale feature A 2 Convolution of 1×8×128 gives the convolution characteristic a 2 ', will expand the feature R 3 With convolution feature A 2 ' adding to get a new feature S 2 The method comprises the steps of carrying out a first treatment on the surface of the Scale feature A 1 Convolution of 1×8×128 gives the convolution characteristic a 1 ', will expand the feature R 2 With convolution feature A 1 ' adding to get a new feature S 1 The method comprises the steps of carrying out a first treatment on the surface of the The new feature S in said step (5) 6 By the method of S 5 Performing convolution with a step size of 2 by 9 x 9, and then performing convolution on the feature S 6 Activating the Leaky ReLU function, and convolving with the step length of 2 by 9 multiplied by 9 to obtain a new feature S 7 The method comprises the steps of carrying out a first treatment on the surface of the The generation mode of the recombined characteristic diagram in the step (5) is as follows:
where Conv denotes single-layer convolution and Upsample denotes upsampling;
the path recognition model in the step (8) is as follows:
V_D(x) = β·MSE + (1 - β)·D(x)
V_CE(Qt) = β·CE(Q_t) + (1 - β)·V_D(x)
W = V_D(x) - α·V_CE(Qt)
b = W - α·CE(Q_t)
where V_D(x) is the identification speed, V_CE(Qt) is the highest speed, W is the planned path, and b is the actual path; the path recognition model then performs path recognition.
2. The path planning method based on an artificial intelligence platform according to claim 1, characterized in that: in step (2), the mean image over the data set is x̄ and the standard deviation is std, and a particular image x is normalized as x' = (x - x̄)/std.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911229234.2A CN110986949B (en) | 2019-12-04 | 2019-12-04 | Path identification method based on artificial intelligence platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110986949A CN110986949A (en) | 2020-04-10 |
CN110986949B true CN110986949B (en) | 2023-05-09 |
Family
ID=70090137
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911229234.2A Active CN110986949B (en) | 2019-12-04 | 2019-12-04 | Path identification method based on artificial intelligence platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110986949B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111553277B (en) * | 2020-04-28 | 2022-04-26 | 电子科技大学 | Chinese signature identification method and terminal introducing consistency constraint |
WO2021218614A1 (en) * | 2020-04-30 | 2021-11-04 | 陈永聪 | Establishment of general artificial intelligence system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109272010A (en) * | 2018-07-27 | 2019-01-25 | 吉林大学 | Multi-scale Remote Sensing Image fusion method based on convolutional neural networks |
CN109740552A (en) * | 2019-01-09 | 2019-05-10 | 上海大学 | A kind of method for tracking target based on Parallel Signature pyramid neural network |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108475331B (en) * | 2016-02-17 | 2022-04-05 | 英特尔公司 | Method, apparatus, system and computer readable medium for object detection |
US10540961B2 (en) * | 2017-03-13 | 2020-01-21 | Baidu Usa Llc | Convolutional recurrent neural networks for small-footprint keyword spotting |
CA3000166A1 (en) * | 2017-04-03 | 2018-10-03 | Royal Bank Of Canada | Systems and methods for cyberbot network detection |
US20190147255A1 (en) * | 2017-11-15 | 2019-05-16 | Uber Technologies, Inc. | Systems and Methods for Generating Sparse Geographic Data for Autonomous Vehicles |
- US11443178B2 (en) * | 2017-12-15 | 2022-09-13 | International Business Machines Corporation | Deep neural network hardening framework |
US11087130B2 (en) * | 2017-12-29 | 2021-08-10 | RetailNext, Inc. | Simultaneous object localization and attribute classification using multitask deep neural networks |
CN108399362B (en) * | 2018-01-24 | 2022-01-07 | 中山大学 | Rapid pedestrian detection method and device |
CN108460742A (en) * | 2018-03-14 | 2018-08-28 | 日照职业技术学院 | A kind of image recovery method based on BP neural network |
CN108520206B (en) * | 2018-03-22 | 2020-09-29 | 南京大学 | Fungus microscopic image identification method based on full convolution neural network |
CN109815886B (en) * | 2019-01-21 | 2020-12-18 | 南京邮电大学 | Pedestrian and vehicle detection method and system based on improved YOLOv3 |
CN109886161B (en) * | 2019-01-30 | 2023-12-12 | 江南大学 | Road traffic sign recognition method based on likelihood clustering and convolutional neural network |
CN110244734B (en) * | 2019-06-20 | 2021-02-05 | 中山大学 | Automatic driving vehicle path planning method based on deep convolutional neural network |
AU2019101133A4 (en) * | 2019-09-30 | 2019-10-31 | Bo, Yaxin MISS | Fast vehicle detection using augmented dataset based on RetinaNet |
Application Events
- 2019-12-04: CN application CN201911229234.2A, patent CN110986949B/en, status: Active
Also Published As
Publication number | Publication date |
---|---|
CN110986949A (en) | 2020-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112927357B (en) | 3D object reconstruction method based on dynamic graph network | |
CN109993734A (en) | Method and apparatus for output information | |
CN107688856B (en) | Indoor robot scene active identification method based on deep reinforcement learning | |
WO2024060395A1 (en) | Deep learning-based high-precision point cloud completion method and apparatus | |
CN111562612B (en) | Deep learning microseismic event identification method and system based on attention mechanism | |
CN110222767B (en) | Three-dimensional point cloud classification method based on nested neural network and grid map | |
CN112418330A (en) | Improved SSD (solid State drive) -based high-precision detection method for small target object | |
CN110986949B (en) | Path identification method based on artificial intelligence platform | |
CN109978870A (en) | Method and apparatus for output information | |
CN116704137B (en) | Reverse modeling method for point cloud deep learning of offshore oil drilling platform | |
CN114092697A (en) | Building facade semantic segmentation method with attention fused with global and local depth features | |
CN117094925A (en) | Pig body point cloud completion method based on point agent enhancement and layer-by-layer up-sampling | |
CN115375925A (en) | Underwater sonar image matching algorithm based on phase information and deep learning | |
Yu et al. | Dual-branch framework: AUV-based target recognition method for marine survey | |
CN111383273A (en) | High-speed rail contact net part positioning method based on improved structure reasoning network | |
Wang et al. | Improved SSD Framework for Automatic Subsurface Object Identification for GPR Data Processing | |
CN114140524B (en) | Closed loop detection system and method for multi-scale feature fusion | |
CN115546050A (en) | Intelligent restoration network and restoration method for ceramic cultural relics based on point cloud completion | |
CN114998539A (en) | Smart city sensor terrain positioning and mapping method | |
Jie et al. | Heterogeneous Deep Metric Learning for Ground and Aerial Point Cloud-Based Place Recognition | |
Yang et al. | 3dsenet: 3d spatial attention region ensemble network for real-time 3d hand pose estimation | |
Liang et al. | SCDFMixer: Spatial-Channel Dual-Frequency Mixer Based on Satellite Optical Sensors for Remote Sensing Multi-Object Detection | |
Wang et al. | Application of SIFT algorithm based on the Gabor features in multi-source information image monitoring | |
CN112183473B (en) | Geological curved surface visual semantic feature extraction method | |
Shen et al. | Refined Feature Fusion-Based Transformer Network for Hyperspectral and LiDAR classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||