CN114612757B - Multi-source navigation image fusion method and system based on deep learning - Google Patents

Multi-source navigation image fusion method and system based on deep learning

Info

Publication number
CN114612757B
CN114612757B (application CN202210237540.6A)
Authority
CN
China
Prior art keywords
image
fusion
fused
weight coefficient
error factor
Prior art date
Legal status
Active
Application number
CN202210237540.6A
Other languages
Chinese (zh)
Other versions
CN114612757A (en)
Inventor
彭盼
陈放
丁磊
柏晓乐
Current Assignee
Smart Dynamics Co ltd
Original Assignee
Smart Dynamics Co ltd
Priority date
Filing date
Publication date
Application filed by Smart Dynamics Co ltd
Publication of CN114612757A
Application granted
Publication of CN114612757B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-source navigation image fusion method and system based on deep learning. The method comprises the following steps: preprocessing a training sample image and a test sample image and extracting image spectral features to obtain a training data set and a test data set; training a first deep convolutional neural network model according to the training data set to obtain a second deep convolutional neural network model; obtaining a first depth feature set according to the test data set and the second deep convolutional neural network model, and on that basis fusing a first sensing image and a second sensing image in the test sample image to obtain a first fused image; and performing feature positioning and recognition on the first fused image with a classifier to obtain a first navigation output result. The method solves the technical problems in the prior art that equal-weight fusion of multi-source navigation images cannot effectively exploit the complementary advantages of multi-source features, and that shallow learning requires manual feature selection and yields low fusion precision.

Description

Multi-source navigation image fusion method and system based on deep learning
Technical Field
The invention relates to the field of image processing, in particular to a multi-source navigation image fusion method and system based on deep learning.
Background
An important capability of a robot is autonomous navigation, i.e., the robot moves by itself without user control, normally relying on its sensors and processor to travel. However, a robot inevitably generates friction while traveling, and this friction introduces sensor errors, so precisely computing the robot's path while accounting for friction-induced errors becomes an important problem. In the technical field of robot navigation image processing, different sensors have certain limitations (for example, laser radar is strongly affected by weather, while camera sensors have low resolution and suffer from speckle noise), so identifying and interpreting a target with a single sensor captures only certain characteristics of the target. Comprehensively utilizing multi-source navigation image information in the navigation image processing process is therefore an important way to improve the application value of multi-source navigation images.
However, in the prior art, equal-weight fusion of multi-source navigation images cannot effectively exploit the complementary advantages of multi-source features, and shallow learning requires manual feature selection and yields low fusion precision.
Disclosure of Invention
In view of the defects in the prior art, the embodiments of the present application provide a multi-source navigation image fusion method and system based on deep learning, aiming to solve the technical problems that equal-weight fusion of multi-source navigation images in the prior art cannot effectively exploit the complementary advantages of multi-source features, and that shallow learning requires manual feature selection and has low fusion precision. The method uses deep learning to automatically select the features of the multi-source positioning and navigation images to be fused, requires no manual feature selection, saves time and labor, and facilitates the engineering application of multi-source positioning and navigation image fusion. It can express the intrinsic characteristics of different source images more comprehensively and deeply, realizes semantic representation of images at multiple levels of abstraction, and improves the precision of multi-source image fusion and classification.
In one aspect, an embodiment of the present application provides a multi-source navigation image fusion method based on deep learning, where the method includes: obtaining a first image to be fused, wherein the first image to be fused comprises a training sample image and a test sample image; preprocessing the training sample image and the test sample image and extracting image spectral characteristics to obtain a training data set and a test data set; constructing a first deep convolutional neural network model; training the first deep convolution neural network model according to the training data set to obtain a second deep convolution neural network model; obtaining a first depth feature set according to the test data set and the second depth convolution neural network model; fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fused image; and performing feature positioning identification on the first fusion image by using a classifier to obtain a first navigation output result.
On the other hand, the application also provides a multi-source navigation image fusion system based on deep learning, and the system comprises: a first obtaining unit, configured to obtain a first image to be fused, where the first image to be fused includes a training sample image and a test sample image; a second obtaining unit, configured to obtain a training data set and a test data set by performing preprocessing and image spectral feature extraction on the training sample image and the test sample image; a first construction unit for constructing a first deep convolutional neural network model; a third obtaining unit, configured to train the first deep convolutional neural network model according to the training data set, so as to obtain a second deep convolutional neural network model; a fourth obtaining unit, configured to obtain a first depth feature set according to the test data set and the second depth convolutional neural network model; the first fusion unit is used for fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fusion image; and the first recognition unit is used for carrying out feature positioning recognition on the first fusion image by utilizing a classifier to obtain a first navigation output result.
In a third aspect, an embodiment of the present application provides a deep learning-based multi-source navigation image fusion system, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the image source of the multi-source navigation image is analyzed, and a first image to be fused is generated based on a training sample image and a test sample image; the training sample image and the test sample image are then preprocessed and subjected to image spectral feature extraction to obtain a training data set and a test data set; the constructed deep convolutional neural network model is trained and tested with the extracted training data set and test data, so that the depth features characterizing each type of image in the first image to be fused are output and a first depth feature set is obtained; a first sensing image and a second sensing image in the test sample image are fused based on the first depth feature set to obtain a first fused image; and a classifier performs feature positioning and recognition on the first fused image to obtain a first navigation output result. The method uses deep learning to automatically select the features of the multi-source positioning and navigation images to be fused, requires no manual feature selection, saves time and labor, facilitates the engineering application of multi-source positioning and navigation image fusion, and improves the precision of multi-source image fusion.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
fig. 1 is a schematic flow chart of a multi-source navigation image fusion method based on deep learning according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for obtaining a training data set and a test data set according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating the prediction of the fusion performance effect of the multi-source navigation image fusion method based on deep learning in the embodiment of the present application;
FIG. 4 is a schematic structural diagram of a deep learning-based multi-source navigation image fusion system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a multi-source navigation image fusion method and system based on deep learning. They solve the technical problems that, in the prior art, equal-weight fusion of multi-source navigation images cannot effectively exploit the complementary advantages of multi-source features, and that shallow learning requires manual feature selection and has low fusion precision. The deep learning method automatically selects the features of the multi-source positioning and navigation images to be fused and improves the precision of multi-source image fusion and classification.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are merely some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Summary of the application
The existing multi-source fusion methods usually adopt an equal-weight fusion strategy, i.e., every type of data participating in fusion is fused indiscriminately. Practical application shows, however, that not every fusion improves the classification precision of navigation images; for certain classes of images, fusion can even reduce classification precision. On the other hand, existing multi-source fusion methods are mainly developed from shallow-learning ideas, and features must be selected manually during fusion, which is time-consuming and labor-intensive; the multi-source image fusion accuracy then depends to a large extent on experience and luck, which is unfavorable for the engineering application of multi-source navigation image fusion. A multi-source navigation image fusion method that overcomes these defects is therefore urgently needed in the field.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the invention provides a multi-source data fusion positioning navigation method for robot navigation, which automatically realizes the automatic selection of the characteristics of a multi-source to-be-fused positioning navigation image by utilizing a deep learning method, does not need manual characteristic selection, is time-saving and labor-saving, is convenient for the engineering application of multi-source positioning navigation image fusion, can more comprehensively and deeply express the self characteristics of different source images, realizes the semantic representation of images on multiple abstract levels, and improves the precision of multi-source image fusion and classification. The method solves the technical problems that in the prior art, equal fusion of multi-source navigation images cannot effectively exert the complementary advantages of multi-source features, manual feature selection is needed in shallow learning, and fusion precision is low. Further, a first image to be fused is generated by analyzing an image source of a multi-source navigation image based on a training sample image and a test sample image, preprocessing and image spectral feature extraction are carried out on the training sample image and the test sample image to obtain a training data set and a test data set, a training test is carried out on a constructed depth convolution neural network model according to the training data set and the test data obtained by extraction, so that a depth feature represented by each type of image in the first image to be fused is output, a first depth feature set is obtained, a first sensing image and a second sensing image in the test sample image are fused based on the first depth feature set to obtain a first fusion image, and a classifier is used for carrying out feature location recognition on the first fusion image to obtain a first navigation output result.
For better understanding of the above technical solutions, the following detailed descriptions will be provided in conjunction with the drawings and the detailed description of the embodiments.
Example one
As shown in fig. 1, an embodiment of the present application provides a multi-source navigation image fusion method based on deep learning, where the method includes:
step S100: obtaining a first image to be fused, wherein the first image to be fused comprises a training sample image and a test sample image;
step S200: preprocessing the training sample image and the test sample image and extracting image spectral characteristics to obtain a training data set and a test data set;
specifically, in the technical field of robot navigation image processing, because different sensors have certain limitations (such as the fact that a laser radar is greatly influenced by weather, a camera sensor has low resolution, speckle noise and the like), only a certain aspect of a target can be obtained by using a single sensor to identify and interpret the target, therefore, in the navigation image processing process, multi-source navigation image information is comprehensively used, and the method is an important way for improving the application value of a multi-source navigation image, and therefore, a multi-source navigation image fusion method based on deep learning is provided for image fusion, so that the image fusion precision and the fusion quality are improved.
Further, the first image to be fused is a navigation image obtained based on a multi-source sensor, and comprises images obtained by sensing at least two groups of different sensing sources respectively, the first image to be fused comprises a training sample image and a test sample image, and the training sample image and the test sample image also comprise two types of sensing source images. For example, for two types of images, such as a laser radar image and a camera image, on the premise of ensuring training, all the laser radar images and the camera images are divided into two groups, one group is used as a training sample image, the other group is used as a test sample image, and the first image to be fused is obtained by using the training sample image and the test sample image.
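As a hedged illustration of the sample split described above, the following Python sketch pairs laser radar and camera images and divides them into a training group and a test group. The directory layout, file extension and 50/50 split ratio are assumptions made for illustration and are not part of the original disclosure.

```python
import random
from pathlib import Path

def split_fusion_samples(lidar_dir, camera_dir, train_ratio=0.5, seed=0):
    """Pair lidar/camera images by filename and split them into
    training and test sample sets (illustrative split ratio)."""
    lidar = sorted(Path(lidar_dir).glob("*.png"))
    camera = {p.stem: p for p in Path(camera_dir).glob("*.png")}
    pairs = [(l, camera[l.stem]) for l in lidar if l.stem in camera]
    random.Random(seed).shuffle(pairs)
    n_train = int(len(pairs) * train_ratio)
    return pairs[:n_train], pairs[n_train:]   # (training samples, test samples)
```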
The first image to be fused is preprocessed and its image spectral features are extracted to obtain the image features of the first image to be fused, and the extracted image features are taken as the training data set and the test data set. In this way, accurate analysis and extraction of the preprocessed features can be achieved, and the generated source data is convenient for subsequent data analysis.
Step S300: constructing a first deep convolution neural network model;
specifically, the construction mode of the deep convolutional neural network model can be set empirically according to the application purpose, and parameters including the number of network layers, the number of convolutional layer layers, the number of pooling layer layers, the number of convolutional filters, the size of convolutional kernel, pooling scale and the like can be set according to the application purpose. Illustratively, the spectral feature vector extracted in the sub-step A1b is used as an input to construct a deep convolutional neural network model, where the deep convolutional neural network model includes: 1 input layer, 5 convolutional layers, 3 pooling layers, 2 full-link layers, 1 softmax layer and 1 output layer, and the specific result of the deep convolutional neural network model is as follows: 5 convolutional layers after the input layer, 3 pooling layers after the first, second, and fifth convolutional layers, respectively, a full-link layer between the third pooling layer and the output layer, followed by a softmax layer, an output layer at the last, namely, an input layer, a winding and stacking layer, a pooling layer, a winding and stacking layer, a fully-connected layer, a softmax layer and an output layer. Among them, the convolution kernel size of the convolution filter of 5 convolution layers is preferably 13 × 13, 5 × 5, 3 × 3 and 6 × 6, and the number of convolution filters is preferably 128, 256, 512 and 256; the size of the pooling scale of the pooling layer is preferably 3 x 3; the size of the output layer is preferably 256 × 256, and the number of nodes of the output layer is consistent with the number of samples (i.e., the number of pixels of the laser radar image or the camera image); the input selection is preferably a fully concatenated result, i.e. one mapping of the current layer is concatenated with all mappings of the previous layer.
Further, setting according to the application purpose comprises: the convolutional layer parameters are set by the convolutional layer forward operation and updated by computing the partial derivatives with respect to the convolution kernels and biases. For the pooling layer parameters, the forward operation of the pooling layer is a downsampling operation, preferably Max-Pooling with a 2 × 2 pooling kernel and a stride of 2; when the layer following a pooling layer is a convolutional layer, the pooling layer is computed according to the backward error-propagation formula. The excitation (activation) function is preferably a sigmoid function or a hyperbolic tangent function, most preferably a sigmoid function; sigmoid compresses the output to [0, 1], so the average of the final outputs generally tends toward 0. The softmax layer parameters are set through forward computation and partial-derivative computation.
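A minimal PyTorch sketch of the network just described is given below, assuming an RGB input and sigmoid activations. The source lists four kernel sizes and four filter counts for the five convolutional layers and gives both a 3 × 3 pooling scale and a 2 × 2 Max-Pooling kernel, so the fifth layer's values, the choice of 2 × 2 pooling and the global pooling before the fully-connected layers are assumptions.

```python
import torch
import torch.nn as nn

class FusionDCNN(nn.Module):
    """Sketch of the described deep CNN: 5 conv layers, pooling after
    conv1/conv2/conv5, fully-connected layers and a softmax output.
    Kernel sizes and filter counts follow the values listed in the text;
    the fifth layer's values and the input band count are assumptions."""
    def __init__(self, in_bands=3, num_classes=256):
        super().__init__()
        chans = [128, 256, 512, 256, 256]          # fifth value assumed
        ksize = [13, 5, 3, 6, 3]                   # fifth value assumed
        layers, prev = [], in_bands
        for i, (c, k) in enumerate(zip(chans, ksize)):
            layers += [nn.Conv2d(prev, c, k, padding=k // 2), nn.Sigmoid()]
            if i in (0, 1, 4):                     # pool after conv1, conv2, conv5
                layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
            prev = c
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)        # assumption: global pooling before FC
        self.fc1 = nn.Linear(prev, 512)
        self.fc2 = nn.Linear(512, num_classes)

    def forward(self, x, return_features=False):
        f = self.pool(self.features(x)).flatten(1)
        f = torch.sigmoid(self.fc1(f))
        if return_features:                        # fully-connected output, no softmax
            return f
        return torch.softmax(self.fc2(f), dim=1)
```

The `return_features` flag anticipates step S500 below, where the fully-connected activations rather than the softmax output are used as depth features.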
Step S400: training the first deep convolutional neural network model according to the training data set to obtain a second deep convolutional neural network model;
specifically, according to the training data set, a Hinge loss function and a random gradient descent method are adopted to train the first deep convolutional neural network, and when the loss function of the whole deep convolutional neural network tends to be close to a local optimal solution, namely, the training is completed when convergence occurs; wherein the local optimal solution is manually set in advance. Further, the second deep convolution neural model is obtained by respectively training according to the fusion sensing sources in the first image to be fused, that is, the second deep convolution neural network is a trained model. Taking two sensing sources of a laser radar and a camera as an example, a deep convolutional neural network is trained according to a laser radar image training data set and a camera image training data set respectively.
Step S500: obtaining a first depth feature set according to the test data set and the second depth convolution neural network model;
further, a second depth feature set is obtained by performing normalization processing on the first depth feature set, wherein a formula of the normalization processing is as follows:
Figure 936459DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure 100002_DEST_PATH_IMAGE003
and
Figure 660833DEST_PATH_IMAGE004
respectively representing the depth features before and after the normalization process.
Specifically, the test data set is used as the input data of the second deep convolutional neural network model for recognition. Since the second deep convolutional neural network model is the trained convolutional neural network model, it can accurately recognize the input data, so that the depth feature corresponding to each sensing source is obtained and the first depth feature set is thereby obtained.
Further, by way of example, the laser radar image test data set and the camera image test data set are processed separately: the softmax layer at the end of the deep convolutional neural network model is removed, and the output of the fully-connected layer is retained as the depth feature set learned on the laser radar image test data set and on the camera image test data set, respectively. Each feature vector in a depth feature set corresponds to a weight coefficient, and the number of feature vectors equals the number of input samples (namely the number of pixels of the laser radar image or the camera image), so that the output first depth feature set can be selected autonomously. The depth features of the depth feature sets of the laser radar image test data set and the camera image test data set are then each normalized with the normalization formula, in which f and f′ respectively denote a depth feature before and after the normalization process, f being a feature vector of the laser radar or camera depth feature set.
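The sketch below illustrates this feature-extraction step, reusing the FusionDCNN sketch given earlier: the softmax output is skipped, the fully-connected activations are kept as depth features, and a min-max style normalization is applied. Because the normalization formula appears only as an image in the source, the exact form used here is an assumption.

```python
import torch

@torch.no_grad()
def extract_depth_features(model, images):
    """Drop the softmax output, keep the fully-connected activations as
    depth features, then normalize them feature-wise (assumed min-max
    normalization, since the source gives the formula only as an image)."""
    model.eval()
    feats = model(images, return_features=True)      # FC-layer output, no softmax
    f_min = feats.min(dim=0, keepdim=True).values
    f_max = feats.max(dim=0, keepdim=True).values
    return (feats - f_min) / (f_max - f_min + 1e-8)  # assumed normalization
```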
Step S600: fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fused image;
step S700: and performing feature positioning identification on the first fusion image by using a classifier to obtain a first navigation output result.
Specifically, the first depth feature set consists of image features obtained from images of different sensing sources, so the first fused image is obtained by fusing the test sample images; illustratively, the laser radar image and the camera image of the test sample image are selectively fused based on the depth feature set of the test data set to obtain a fused image. The classifier is a trained classifier, preferably a support vector machine classifier whose kernel function is preferably a Gaussian Radial Basis Function (RBF) or a Multi-Layer Perceptron kernel function (MLP), with the classifier parameters preferably trained by a supervised learning method. The support vector machine classifier can also be replaced by other methods, such as a Boosting classifier, a Gaussian process classifier or a KNN classifier. Thus, through the first navigation output result, the deep learning method automatically selects the features of the multi-source positioning and navigation images to be fused, requires no manual feature selection, saves time and labor, facilitates the engineering application of multi-source positioning and navigation image fusion, expresses the intrinsic characteristics of different source images more comprehensively and deeply, realizes semantic representation of images at multiple levels of abstraction, and improves the precision of multi-source image fusion and classification.
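A hedged scikit-learn sketch of the classification step follows: an SVM with a Gaussian RBF kernel is trained in a supervised way and then applied to the fused features. The training labels, the feature array shapes and the hyperparameters are assumptions.

```python
from sklearn.svm import SVC

def classify_fused_features(train_features, train_labels, fused_features):
    """Train an SVM with a Gaussian RBF kernel (supervised learning) and use
    it to recognise the fused image features; inputs are 2-D arrays of shape
    (n_samples, n_features) and the hyperparameters are assumptions."""
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(train_features, train_labels)
    return clf.predict(fused_features)      # first navigation output result
```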
Further, as shown in fig. 2, the training data set and the test data set are obtained by preprocessing the training sample image and the test sample image and extracting spectral features of the images, and step S200 in the embodiment of the present application further includes:
step S210: constructing an image preprocessing model to be fused and an image spectral feature extraction model;
step S220: preprocessing the first image to be fused according to the image to be fused preprocessing model to obtain a second image to be fused;
step S230: and inputting the second image to be fused into the image spectral feature extraction model, and obtaining first output information according to the image spectral feature extraction model, wherein the first output information comprises the training data set and the test data set.
Further, the formula for preprocessing the first image to be fused according to the image to be fused preprocessing model is as follows:
\hat{x}_{i,j}^{s} = \frac{x_{i,j}^{s} - x_{\min}^{s}}{x_{\max}^{s} - x_{\min}^{s}}

wherein s denotes a spectral band of the image to be fused, i and j respectively denote the position coordinates of the image to be fused, x_{i,j}^{s} and \hat{x}_{i,j}^{s} respectively denote the value of the pixel at position coordinates (i, j) in the s-th spectral band before and after the normalization processing, and x_{\min}^{s} and x_{\max}^{s} respectively denote the minimum and maximum pixel values in the s-th spectral band of the whole image to be fused.
Specifically, preprocessing and image spectral feature extraction are carried out on the training sample image and the test sample image using the image-to-be-fused preprocessing model and the image spectral feature extraction model, so as to obtain the laser radar image training data set and the camera image training data set. The training sample image and the test sample image can be preprocessed and feature-extracted at the same time, or preprocessed and feature-extracted sequentially.
Furthermore, the image to be fused is preprocessed with the normalization formula, each pixel of the input image to be fused being normalized. The training sample image comprises a laser radar image and a camera image, which are each treated as an image to be fused for preprocessing and image spectral feature extraction, thereby obtaining the training data set. Here the spectral feature vector of the pixel at position coordinates (i, j) in the laser radar image and the camera image, and the class of the pixel at position coordinates (i, j), are denoted by symbols given as images in the source, and K is a constant denoting the total number of classes. Preferably, the laser radar image comprises the three spectral bands R, G and B, i.e. s = 3, and the spectral feature vector of the pixel at position coordinates (i, j) has dimension 1 × 3 × w; the camera image preferably contains one spectral band, i.e. s = 1, and the spectral feature vector of the pixel at position coordinates (i, j) has dimension 1 × w.
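The numpy sketch below applies the per-band min-max normalization given by the preprocessing formula above; the (H, W, S) array layout is an assumption.

```python
import numpy as np

def normalize_bands(image):
    """Per-band min-max normalization matching the preprocessing formula
    above; `image` is assumed to have shape (H, W, S) with S spectral bands."""
    image = image.astype(np.float32)
    mins = image.min(axis=(0, 1), keepdims=True)   # per-band minimum
    maxs = image.max(axis=(0, 1), keepdims=True)   # per-band maximum
    return (image - mins) / (maxs - mins + 1e-8)
```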
Further, as shown in fig. 3, step S100 in the embodiment of the present application further includes:
step S110: acquiring image category information of the first image to be fused;
step S120: determining a first sensor and a second sensor according to the image category information;
step S130: respectively carrying out error factor analysis on the first sensor and the second sensor to obtain a first error factor and a second error factor;
step S140: performing coincidence degree analysis on the first error factor and the second error factor to obtain a first coincidence degree;
step S150: and predicting the image fusion performance effect according to the first contact ratio, and outputting first prediction information.
Specifically, since the first image to be fused is an image obtained from a multi-source sensor, the image category of the first image to be fused is analyzed first to obtain image category information and thereby determine the corresponding sensor information; for example, the fusion sources include a laser radar image and a camera image. Error factor analysis is then performed for the different sensors, where the first error factor is a factor affecting the image quality of the first sensor and the second error factor is a factor affecting the image quality of the second sensor. The first coincidence degree is then obtained by analyzing the degree of coincidence between the first error factor and the second error factor. When the coincidence degree of the influencing factors in the first error factor and the second error factor is high, the fusion effect is greatly affected; for example, if both image qualities are degraded by weather at the same time, the images cannot be accurately recognized and positioned after fusion, and a sensing source that is less affected by weather should be used for fusion instead. When the coincidence degree of the influencing factors in the first error factor and the second error factor is low, the images can be fused according to their own image characteristics, thereby obtaining better fusion quality.
Therefore, the image fusion performance effect is predicted according to the first coincidence degree and first prediction information is output; based on the first prediction information, the fusion sources can be analyzed and judged according to the fusion target, ensuring the accuracy and effectiveness of multi-source fusion.
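As a hedged illustration of steps S130 to S150, the sketch below models each sensor's error factors as a set of quality-influencing conditions and uses their set overlap as the coincidence degree; both this representation and the decision threshold are assumptions made purely for illustration.

```python
def coincidence_degree(factors_a, factors_b):
    """Error factors are modelled here as sets of quality-influencing
    conditions (e.g. {"weather", "low_resolution"}); the Jaccard overlap
    used as the coincidence degree is an assumption."""
    a, b = set(factors_a), set(factors_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_fusion_effect(factors_a, factors_b, threshold=0.5):
    """Predict whether fusing the two sources is worthwhile (threshold assumed)."""
    overlap = coincidence_degree(factors_a, factors_b)
    return {"coincidence": overlap, "fusion_recommended": overlap < threshold}
```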
Further, the step S600 of the embodiment of the present application further includes, based on the first depth feature set, fusing the first sensing image and the second sensing image in the test sample image to obtain a first fused image:
step S610: obtaining feature class information in the first depth feature set;
step S620: obtaining a first weight coefficient and a second weight coefficient according to the feature class information;
step S630: obtaining a preset weight coefficient;
step S640: and judging according to the first weight coefficient, the second weight coefficient and the preset weight coefficient, and obtaining the first fusion image according to a judgment result.
Further, the determining is performed according to the first weight coefficient, the second weight coefficient and the preset weight coefficient, and the first fused image is obtained according to a determination result, in step S640 of this embodiment of the present application, further includes:
step S641: outputting the larger weight coefficient of the first weight coefficient and the second weight coefficient by judging the first weight coefficient and the second weight coefficient to obtain a first output weight coefficient;
step S642: judging whether the first output weight coefficient is larger than the preset weight coefficient or not;
step S643: and if the first output weight coefficient is smaller than the preset weight coefficient, calculating the first depth feature set based on a weight weighting method to obtain the first fusion image.
Specifically, each feature vector in the first depth feature set corresponds to a weight coefficient, the number of feature vectors being the number of input samples (i.e., the number of pixels of a laser radar image or a camera image), so that the output first depth feature set can be selected autonomously.
The corresponding first weight coefficient and second weight coefficient are obtained from the feature classes in the first depth feature set, and the preset weight coefficient is a fusion-weight judgment threshold set in advance; the fusion strategy is then judged according to the first weight coefficient, the second weight coefficient and the preset weight coefficient, and the first fused image is output according to the judgment result.
The fusion is performed according to the following strategy: when the larger of the first weight coefficient and the second weight coefficient is greater than the preset weight coefficient, the feature vector corresponding to that larger weight coefficient is taken as the final fused image feature; when the larger of the first weight coefficient and the second weight coefficient is smaller than the preset weight coefficient, the feature vectors are fused by a weight-weighting method, in which the weight distribution can be set intelligently according to the information entropy of the influencing factors, and the fused feature vector is taken as the final fused image feature to obtain the fused image, thereby achieving the intended fusion effect.
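The following sketch implements this weight-based strategy for a single pair of feature vectors; the preset threshold value and the fallback weighting scheme (a simple ratio standing in for the entropy-based weights mentioned above) are assumptions.

```python
import numpy as np

def fuse_features(f1, f2, w1, w2, preset=0.7):
    """If the larger of the two weight coefficients exceeds the preset
    threshold, keep that source's feature vector; otherwise combine the
    two vectors by weighting (threshold and weighting are assumptions)."""
    if max(w1, w2) > preset:
        return f1 if w1 >= w2 else f2            # dominant source wins
    a = w1 / (w1 + w2 + 1e-8)                    # assumed stand-in for entropy weighting
    return a * np.asarray(f1) + (1.0 - a) * np.asarray(f2)
```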
Compared with the prior art, the invention has the following beneficial effects:
1. The image source of the multi-source navigation image is analyzed, and a first image to be fused is generated based on a training sample image and a test sample image; the training sample image and the test sample image are preprocessed and subjected to image spectral feature extraction to obtain a training data set and a test data set; the constructed deep convolutional neural network model is trained and tested with the extracted training data set and test data, so that the depth features characterizing each type of image in the first image to be fused are output and a first depth feature set is obtained; a first sensing image and a second sensing image in the test sample image are fused based on the first depth feature set to obtain a first fused image; and a classifier performs feature positioning and recognition on the first fused image to obtain a first navigation output result. The method uses deep learning to automatically select the features of the multi-source positioning and navigation images to be fused, requires no manual feature selection, saves time and labor, facilitates the engineering application of multi-source positioning and navigation image fusion, and improves the precision of multi-source image fusion.
2. Because the image fusion performance effect is predicted according to the first coincidence degree and the first prediction information is output, the fusion sources can be analyzed and judged according to the fusion target on the basis of the first prediction information, ensuring the accuracy and effectiveness of multi-source fusion.
3. Because the fusion weights are distributed according to the strategy, they can be set intelligently according to the information entropy of the influencing factors, and the fused feature vector is used as the final fused image feature, realizing automatic selection of the multi-source positioning and navigation image features to be fused and improving the multi-source image fusion effect.
Example two
Based on the same inventive concept as the deep learning-based multi-source navigation image fusion method in the foregoing embodiment, the present invention further provides a deep learning-based multi-source navigation image fusion system, as shown in fig. 4, the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain a first image to be fused, where the first image to be fused includes a training sample image and a test sample image;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain a training data set and a test data set by performing preprocessing and image spectral feature extraction on the training sample image and the test sample image;
a first constructing unit 13, wherein the first constructing unit 13 is used for constructing a first deep convolutional neural network model;
a third obtaining unit 14, where the third obtaining unit 14 is configured to train the first deep convolutional neural network model according to the training data set, and obtain a second deep convolutional neural network model;
a fourth obtaining unit 15, where the fourth obtaining unit 15 is configured to obtain a first depth feature set according to the test data set and the second depth convolutional neural network model;
a first fusion unit 16, where the first fusion unit 16 is configured to fuse a first sensing image and a second sensing image in the test sample image based on the first depth feature set, so as to obtain a first fusion image;
and the first identification unit 17 is configured to perform feature localization identification on the first fusion image by using a classifier, so as to obtain a first navigation output result.
Further, the system further comprises:
the second construction unit is used for constructing an image preprocessing model to be fused and an image spectral feature extraction model;
a fifth obtaining unit, configured to pre-process the first image to be fused according to the image to be fused pre-processing model, and obtain a second image to be fused;
the first input unit is used for inputting the second image to be fused into the image spectral feature extraction model, and obtaining first output information according to the image spectral feature extraction model, wherein the first output information comprises the training data set and the test data set.
Further, the system further comprises:
and the first processing unit is used for preprocessing the first image to be fused according to the image to be fused preprocessing model to obtain the second image to be fused.
Further, the system further comprises:
and the second processing unit is used for carrying out normalization processing on the first depth feature set to obtain a second depth feature set.
Further, the system further comprises:
a sixth obtaining unit, configured to obtain image category information of the first image to be fused;
a first determination unit configured to determine a first sensor and a second sensor according to the image category information;
a seventh obtaining unit, configured to perform error factor analysis on the first sensor and the second sensor respectively to obtain a first error factor and a second error factor;
an eighth obtaining unit, configured to obtain a first contact ratio by performing contact ratio analysis on the first error factor and the second error factor;
and the first prediction unit is used for predicting the image fusion performance effect according to the first contact ratio and outputting first prediction information.
Further, the system further comprises:
a ninth obtaining unit, configured to obtain feature class information in the first depth feature set;
a tenth obtaining unit, configured to obtain a first weight coefficient and a second weight coefficient according to the feature class information;
an eleventh obtaining unit, configured to obtain a preset weight coefficient;
and the first judgment unit is used for judging according to the first weight coefficient, the second weight coefficient and the preset weight coefficient and obtaining the first fusion image according to a judgment result.
Further, the system further comprises:
a second determining unit, configured to determine the first weight coefficient and the second weight coefficient, and output a larger weight coefficient of the first weight coefficient and the second weight coefficient to obtain a first output weight coefficient;
a third judging unit, configured to judge whether the first output weight coefficient is greater than the preset weight coefficient;
a twelfth obtaining unit, configured to, if the first output weight coefficient is smaller than the preset weight coefficient, calculate the first depth feature set based on a weight weighting method, so as to obtain the first fusion image.
Various changes and specific examples of the multi-source navigation image fusion method based on deep learning in the first embodiment of fig. 1 are also applicable to the multi-source navigation image fusion system based on deep learning in the present embodiment, and through the foregoing detailed description of the multi-source navigation image fusion method based on deep learning, those skilled in the art can clearly know the implementation method of the multi-source navigation image fusion system based on deep learning in the present embodiment, so for the sake of brevity of the description, detailed descriptions are not repeated here.
EXAMPLE III
The electronic device of the embodiment of the present application is described below with reference to fig. 5.
FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Based on the inventive concept of the deep learning-based multi-source navigation image fusion method in the previous embodiments, the invention further provides a deep learning-based multi-source navigation image fusion system on which a computer program is stored; when the computer program is executed by a processor, the steps of any one of the methods described above are implemented.
In fig. 5, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium. The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The embodiment of the application provides a multi-source navigation image fusion method based on deep learning, which comprises the following steps: obtaining a first image to be fused, wherein the first image to be fused comprises a training sample image and a test sample image; preprocessing the training sample image and the test sample image and extracting image spectral features to obtain a training data set and a test data set; constructing a first deep convolutional neural network model; training the first deep convolutional neural network model according to the training data set to obtain a second deep convolutional neural network model; obtaining a first depth feature set according to the test data set and the second depth convolution neural network model; fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fused image; and performing feature positioning identification on the first fusion image by using a classifier to obtain a first navigation output result. The technical problems that in the prior art, the equal fusion of multi-source navigation images cannot effectively exert the complementary advantages of multi-source features, manual feature selection is needed in shallow learning, and the fusion precision is low are solved, the automatic selection of the multi-source to-be-fused positioning navigation image features is automatically realized by utilizing a deep learning method, the manual feature selection is not needed, time and labor are saved, the engineering application of multi-source positioning navigation image fusion is facilitated, the self characteristics of different source images can be more comprehensively and deeply expressed, the semantic representation of the images on multiple abstract levels is realized, and the technical effects of the precision of multi-source image fusion and classification are improved.
Those of ordinary skill in the art will understand that: various numbers of the first, second, etc. mentioned in this application are only for convenience of description and distinction, and are not used to limit the scope of the embodiments of this application, nor to indicate a sequence order. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system.
The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium; for example, the computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device including one or more integrated servers, data centers, and the like. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by general purpose processors, digital signal processors, application Specific Integrated Circuits (ASICs), field Programmable Gate Arrays (FPGAs) or other programmable logic systems, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing systems, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM, flash memory, ROM, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations may be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the present application and its equivalent technology, it is intended that the present application include such modifications and variations.

Claims (9)

1. A multi-source navigation image fusion method based on deep learning is characterized by comprising the following steps:
obtaining a first image to be fused, wherein the first image to be fused comprises a training sample image and a test sample image;
obtaining image category information of the first image to be fused, wherein the first image to be fused is an image obtained based on a multi-source sensor;
determining a first sensor and a second sensor according to the image category information;
respectively carrying out error factor analysis on the first sensor and the second sensor to obtain a first error factor and a second error factor, wherein the first error factor is a factor affecting the image quality of the first sensor, and the second error factor is a factor affecting the image quality of the second sensor;
performing coincidence degree analysis on the first error factor and the second error factor to obtain a first coincidence degree, wherein, when the coincidence degree of the first error factor and the second error factor is higher, the fusion effect is greatly affected; further, when the image quality is degraded by the influence of weather, the image cannot be accurately recognized and positioned after image fusion, and a sensing source that is less affected by weather needs to be used for fusion; when the coincidence degree of the influencing factors in the first error factor and the second error factor is not high, the images can be fused according to their own image characteristics, thereby obtaining better fusion quality;
predicting the image fusion performance effect according to the first coincidence degree, and outputting first prediction information;
preprocessing the training sample image and the test sample image and extracting image spectral features to obtain a training data set and a test data set;
constructing a first deep convolutional neural network model;
training the first deep convolutional neural network model according to the training data set to obtain a second deep convolutional neural network model;
obtaining a first depth feature set according to the test data set and the second depth convolution neural network model;
fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fused image;
and performing feature positioning identification on the first fusion image by using a classifier to obtain a first navigation output result.
2. The method of claim 1, wherein the training data set and the test data set are obtained by preprocessing the training sample image and the test sample image and extracting image spectral features, the method further comprising:
constructing a preprocessing model for the image to be fused and an image spectral feature extraction model;
preprocessing the first image to be fused according to the preprocessing model to obtain a second image to be fused;
and inputting the second image to be fused into the image spectral feature extraction model, and obtaining first output information according to the image spectral feature extraction model, wherein the first output information comprises the training data set and the test data set.
3. The method according to claim 2, wherein the formula for preprocessing the first image to be fused according to the preprocessing model is as follows:
x̂(s, i, j) = ( x(s, i, j) − min(s) ) / ( max(s) − min(s) )
wherein s represents the spectral band of the image to be fused, i and j represent the position coordinates in the image to be fused, x(s, i, j) and x̂(s, i, j) respectively represent the value of the pixel at position coordinate (i, j) of the s-th spectral band before and after the normalization processing, and min(s) and max(s) respectively represent the minimum value and the maximum value of the pixels in the s-th spectral band of the whole image to be fused.
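A minimal Python sketch of the per-band min-max normalization of claim 3, assuming the image to be fused is stored as a (bands, height, width) array; the function and variable names are illustrative and not taken from the patent.

import numpy as np

def normalize_bands(image: np.ndarray) -> np.ndarray:
    """Applies x̂(s, i, j) = (x(s, i, j) - min(s)) / (max(s) - min(s)) independently
    to each spectral band s of an array shaped (bands, height, width)."""
    image = image.astype(np.float64)
    mins = image.min(axis=(1, 2), keepdims=True)           # per-band minimum
    maxs = image.max(axis=(1, 2), keepdims=True)           # per-band maximum
    ranges = np.where(maxs - mins == 0, 1.0, maxs - mins)   # guard against flat bands
    return (image - mins) / ranges

# Example: a hypothetical 4-band image with 12-bit pixel values
sample = np.random.randint(0, 4096, size=(4, 64, 64))
normalized = normalize_bands(sample)
assert 0.0 <= normalized.min() and normalized.max() <= 1.0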
4. The method of claim 1, wherein a second depth feature set is obtained by normalizing the first depth feature set, wherein the normalization is formulated as:
[normalization formula given in the original as an image]
wherein the two quantities in the formula respectively represent the depth feature before and after the normalization processing.
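Claim 4 does not reproduce the normalization formula in the available text; the sketch below assumes, purely for illustration, a min-max scaling of the depth features analogous to the pixel normalization of claim 3.

import numpy as np

def normalize_features(features: np.ndarray) -> np.ndarray:
    """Maps each depth feature to a normalized value on a common scale.
    Min-max scaling is an assumption; the patented formula is not shown here."""
    f_min, f_max = features.min(), features.max()
    if f_max == f_min:
        return np.zeros_like(features, dtype=np.float64)
    return (features - f_min) / (f_max - f_min)

# Example with a hypothetical first depth feature set of shape (samples, feature_dim)
first_depth_feature_set = np.random.randn(10, 128)
second_depth_feature_set = normalize_features(first_depth_feature_set)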
5. The method of claim 1, wherein the first sensing image and the second sensing image in the test sample image are fused based on the first depth feature set to obtain the first fusion image, the method further comprising:
obtaining feature class information in the first depth feature set;
obtaining a first weight coefficient and a second weight coefficient according to the feature class information;
obtaining a preset weight coefficient;
and judging according to the first weight coefficient, the second weight coefficient and the preset weight coefficient, and obtaining the first fusion image according to a judgment result.
6. The method according to claim 5, wherein the judging is performed according to the first weight coefficient, the second weight coefficient and the preset weight coefficient, and the first fusion image is obtained according to the judgment result, the method further comprising:
comparing the first weight coefficient with the second weight coefficient, and outputting the larger of the two as a first output weight coefficient;
judging whether the first output weight coefficient is larger than the preset weight coefficient;
and if the first output weight coefficient is smaller than the preset weight coefficient, calculating the first depth feature set based on a weighting method to obtain the first fusion image.
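As a hedged illustration of the judgment logic of claims 5 and 6, the Python sketch below compares the two weight coefficients, takes the larger one as the first output weight coefficient, and fuses the depth features by weighted averaging when that coefficient is smaller than the preset coefficient; the weighted-average rule and the fallback branch are assumptions, since the claims only specify the comparison and the weighting step.

import numpy as np

def fuse_by_weight(feat_first: np.ndarray, feat_second: np.ndarray,
                   w_first: float, w_second: float, w_preset: float) -> np.ndarray:
    """Judgment according to the first, second and preset weight coefficients."""
    w_out = max(w_first, w_second)          # first output weight coefficient
    if w_out < w_preset:
        total = w_first + w_second
        if total == 0:
            return 0.5 * (feat_first + feat_second)
        # weighted combination of the two sensors' depth features (assumed rule)
        return (w_first * feat_first + w_second * feat_second) / total
    # otherwise keep the features of the sensor with the larger weight (assumed fallback)
    return feat_first if w_first >= w_second else feat_second

# Example with hypothetical 128-dimensional depth features of the two sensing images
feat_a, feat_b = np.random.rand(128), np.random.rand(128)
first_fusion_features = fuse_by_weight(feat_a, feat_b, w_first=0.4, w_second=0.3, w_preset=0.6)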
7. A deep learning-based multi-source navigation image fusion system, the system comprising:
a first obtaining unit, configured to obtain a first image to be fused, where the first image to be fused includes a training sample image and a test sample image;
a sixth obtaining unit, configured to obtain image category information of the first image to be fused, where the first image to be fused is an image obtained based on a multi-source sensor;
a first determination unit configured to determine a first sensor and a second sensor according to the image category information;
a seventh obtaining unit, configured to perform error factor analysis on the first sensor and the second sensor respectively to obtain a first error factor and a second error factor, where the first error factor is a factor affecting the image quality of the first sensor, and the second error factor is a factor affecting the image quality of the second sensor;
an eighth obtaining unit, configured to perform overlap ratio analysis on the first error factor and the second error factor to obtain a first overlap ratio, where when the overlap ratio of the influence factors in the first error factor and the second error factor is high, the fusion effect is strongly affected; in particular, when the image quality is degraded by weather, the fused image cannot be accurately identified and positioned, so a sensing source that is less affected by weather needs to be selected for fusion; when the overlap ratio of the influence factors in the first error factor and the second error factor is low, the images can be fused according to their own image features, so that better fusion quality is obtained;
a first prediction unit, configured to predict the image fusion performance according to the first overlap ratio and output first prediction information;
a second obtaining unit, configured to obtain a training data set and a test data set by performing preprocessing and image spectral feature extraction on the training sample image and the test sample image;
a first construction unit for constructing a first deep convolutional neural network model;
a third obtaining unit, configured to train the first deep convolutional neural network model according to the training data set, so as to obtain a second deep convolutional neural network model;
a fourth obtaining unit, configured to obtain a first depth feature set according to the test data set and the second depth convolutional neural network model;
the first fusion unit is used for fusing a first sensing image and a second sensing image in the test sample image based on the first depth feature set to obtain a first fusion image;
and the first recognition unit is used for carrying out feature positioning recognition on the first fusion image by utilizing a classifier to obtain a first navigation output result.
8. An electronic device, comprising: a processor coupled to a memory for storing a program, wherein the program, when executed by the processor, causes the electronic device to perform the steps of the method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202210237540.6A 2022-01-28 2022-03-10 Multi-source navigation image fusion method and system based on deep learning Active CN114612757B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210109444 2022-01-28
CN2022101094443 2022-01-28

Publications (2)

Publication Number Publication Date
CN114612757A CN114612757A (en) 2022-06-10
CN114612757B true CN114612757B (en) 2022-11-15

Family

ID=81863511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210237540.6A Active CN114612757B (en) 2022-01-28 2022-03-10 Multi-source navigation image fusion method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114612757B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793606A (en) * 2014-01-27 2014-05-14 中国电子科技集团公司第十研究所 Method for assessment of performance of multi-source sensor target synthetic recognition system
CN106295714A (en) * 2016-08-22 2017-01-04 中国科学院电子学研究所 A kind of multi-source Remote-sensing Image Fusion based on degree of depth study
CN112325879A (en) * 2020-11-03 2021-02-05 中国电子科技集团公司信息科学研究院 Bionic composite navigation time service microsystem based on multi-source sensor integration

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7821453B2 (en) * 2007-12-20 2010-10-26 Sarnoff Corporation Distributed iterative multimodal sensor fusion method for improved collaborative localization and navigation
US8589334B2 (en) * 2010-01-15 2013-11-19 Telcordia Technologies, Inc. Robust information fusion methods for decision making for multisource data
CN103530862B (en) * 2013-10-30 2016-06-22 重庆邮电大学 Infrared and low light image fusion method based on the neighborhood characteristic area of NSCT
CN105973619A (en) * 2016-04-27 2016-09-28 厦门大学 Bridge local damage identification method based on influence line under structure health monitoring system
CN105913402B (en) * 2016-05-20 2019-04-16 上海海洋大学 A kind of several remote sensing image fusion denoising methods based on DS evidence theory
CN106019973A (en) * 2016-07-30 2016-10-12 杨超坤 Smart home with emotion recognition function
CN108169722A (en) * 2017-11-30 2018-06-15 河南大学 A kind of unknown disturbances influence the system deviation method for registering of lower sensor
US11361470B2 (en) * 2019-05-09 2022-06-14 Sri International Semantically-aware image-based visual localization

Also Published As

Publication number Publication date
CN114612757A (en) 2022-06-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant