WO2019210737A1 - Object prediction method and apparatus, electronic device, and storage medium - Google Patents

Object prediction method and apparatus, electronic device, and storage medium

Info

Publication number
WO2019210737A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
result
predicted
intermediate prediction
fusion
Prior art date
Application number
PCT/CN2019/077152
Other languages
English (en)
French (fr)
Inventor
Dan Xu (徐旦)
Wanli Ouyang (欧阳万里)
Xiaogang Wang (王晓刚)
Nicu Sebe (尼库塞贝)
Original Assignee
Shanghai SenseTime Intelligent Technology Co., Ltd. (上海商汤智能科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai SenseTime Intelligent Technology Co., Ltd.
Priority to SG11202007158UA priority Critical patent/SG11202007158UA/en
Priority to KR1020207022191A priority patent/KR102406765B1/ko
Priority to JP2020540732A priority patent/JP7085632B2/ja
Publication of WO2019210737A1 publication Critical patent/WO2019210737A1/zh
Priority to US16/985,747 priority patent/US11593596B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the present application relates to the field of computer technologies, and in particular, to an object prediction method and apparatus, an electronic device, and a storage medium.
  • neural networks can be applied to various object prediction tasks.
  • in the related art, when a plurality of object prediction tasks are performed, the accuracy of the plurality of target prediction results obtained is low.
  • the present application proposes an object prediction technical solution.
  • according to an aspect of the present application, an object prediction method applied to a neural network is provided, the method comprising:
  • inputting the object to be predicted into a feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted; inputting the feature information into a first prediction network in the neural network for processing, to determine a plurality of intermediate prediction results for the object to be predicted; inputting the plurality of intermediate prediction results into a fusion network of the neural network for fusion processing, to obtain fusion information; respectively inputting the fusion information into a plurality of second prediction networks in the neural network for processing, to determine a plurality of target prediction results for the object to be predicted; determining a model loss of the neural network according to the plurality of intermediate prediction results, annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and annotation information of the plurality of target prediction results; and adjusting a network parameter value of the neural network according to the model loss.
  • an object prediction apparatus applied to a neural network comprising:
  • a feature extraction module configured to perform feature extraction processing on the object to be predicted, to obtain feature information of the object to be predicted
  • the intermediate prediction result determining module is configured to determine, according to the feature information, a plurality of intermediate prediction results for the object to be predicted;
  • a fusion module configured to perform fusion processing on the multiple intermediate prediction results to obtain fusion information
  • the target prediction result determining module is configured to determine a plurality of target prediction results for the object to be predicted according to the fusion information.
  • an object prediction apparatus applied to a neural network comprising:
  • a first information obtaining module configured to input the object to be predicted into a feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted;
  • a first result determining module configured to input the feature information into a first prediction network in the neural network for processing, and determine a plurality of intermediate prediction results for the object to be predicted;
  • a second information obtaining module configured to input the intermediate prediction result into a fusion network of the neural network for performing fusion processing to obtain the fusion information
  • a second result determining module configured to separately input the fusion information into a plurality of second prediction networks in the neural network for processing, and determine a plurality of target prediction results for the object to be predicted;
  • a model loss determining module configured to determine a model loss of the neural network according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results;
  • a parameter adjustment module configured to adjust a network parameter value of the neural network according to the model loss.
  • an electronic device comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute the object prediction method described above.
  • a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement the object prediction method described above.
  • a computer program comprising computer readable code which, when run in an electronic device, causes a processor in the electronic device to perform the object prediction method described above.
  • in this way, the feature information of the object to be predicted can be extracted, a plurality of intermediate prediction results for the object to be predicted can be determined according to the feature information, fusion information can be obtained by performing fusion processing on the plurality of intermediate prediction results, and a plurality of target prediction results for the object to be predicted can be determined according to the fusion information, which is beneficial to improving the accuracy of the plurality of target prediction results.
  • FIG. 1 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 2 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of an application scenario of an object prediction method, according to an exemplary embodiment.
  • FIG. 4 is a schematic diagram of an expanded convolution, according to an exemplary embodiment.
  • FIG. 5 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 6 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIGS. 7a, 7b, and 7c are schematic diagrams of application scenarios of an object prediction method, respectively, according to an exemplary embodiment.
  • FIG. 8 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 9 is a flowchart of training a neural network in an object prediction method, according to an exemplary embodiment.
  • FIG. 10 is a flowchart of training a neural network in an object prediction method, according to an exemplary embodiment.
  • FIG. 11 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 12 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 13 is a block diagram of an object prediction apparatus, according to an exemplary embodiment.
  • FIG. 14 is a block diagram of an object prediction apparatus, according to an exemplary embodiment.
  • FIG. 15 is a block diagram of an object prediction apparatus, according to an exemplary embodiment.
  • FIG. 16 is a block diagram of an object prediction apparatus, according to an exemplary embodiment.
  • FIG. 17 is a block diagram of an electronic device, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • the method can be applied to an electronic device.
  • the electronic device can be provided as a terminal, a server or other form of device.
  • the object prediction method according to an embodiment of the present application includes:
  • step S101 feature extraction processing is performed on the object to be predicted, and feature information of the object to be predicted is obtained;
  • step S102 determining, according to the feature information, a plurality of intermediate prediction results for the object to be predicted
  • step S103 performing fusion processing on the plurality of intermediate prediction results to obtain fusion information
  • step S104 a plurality of target prediction results for the object to be predicted are determined according to the fusion information.
  • in this way, the feature information of the object to be predicted can be extracted, a plurality of intermediate prediction results for the object to be predicted can be determined according to the feature information, fusion information can be obtained by performing fusion processing on the plurality of intermediate prediction results, and a plurality of target prediction results for the object to be predicted can be determined according to the fusion information, which is beneficial to improving the accuracy of the plurality of target prediction results.
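  • The four steps above (S101 to S104) can be sketched end-to-end as follows. This is a minimal NumPy illustration of the data flow only; the function bodies and names are stand-ins and assumptions, not the networks described in this application:

```python
import numpy as np

def extract_features(image):
    """S101: stand-in feature extractor (a real system would use a CNN)."""
    return image.mean(axis=-1)                      # collapse RGB channels

def intermediate_predictions(features):
    """S102: several intermediate results derived from the shared features."""
    return [features * s for s in (0.5, 1.0, 2.0, 4.0)]

def fuse(intermediates):
    """S103: superposition fusion of the intermediate results."""
    return np.sum(intermediates, axis=0)

def target_predictions(fusion):
    """S104: each target task reads the same fusion information."""
    return {"depth": fusion * 0.1, "segmentation": fusion > fusion.mean()}

image = np.random.rand(16, 16, 3)                   # the object to be predicted
feats = extract_features(image)
inter = intermediate_predictions(feats)
targets = target_predictions(fuse(inter))
print(sorted(targets))  # ['depth', 'segmentation']
```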
  • the deep learning technology can be used in various object prediction tasks, for example, a depth estimation prediction task (a depth estimation can provide three-dimensional information of a scene), a scene segmentation prediction task (a scene segmentation can generate a two-dimensional semantic of a scene), and the like.
  • Object prediction can be widely used in a variety of important application areas.
  • depth estimation prediction and scene segmentation prediction can be applied to applications such as intelligent video analysis, road scene modeling, and autonomous driving.
  • multiple target predictions may need to be performed simultaneously.
  • depth estimation and scene segmentation are performed simultaneously on an image or sequence under a single camera.
  • for example, depth estimation is a continuous regression problem, while scene segmentation is a discrete classification problem, so the two tasks differ in nature.
  • when multiple target predictions are performed simultaneously, the accuracy of the multiple target prediction results is often low and the prediction performance is poor; it can be seen that the complexity of performing multiple target predictions at the same time is very high. How to improve the accuracy of the multiple target prediction results when multiple target predictions are performed simultaneously therefore becomes an urgent problem to be solved.
  • feature extraction processing may be performed on the object to be predicted, and feature information of the object to be predicted is obtained, and a plurality of intermediate prediction results for the object to be predicted are determined according to the feature information.
  • the plurality of intermediate prediction results may be intermediate prediction results of multiple levels (for example, from a low level to a high level), thereby generating multimodal data, which may assist in determining the final plurality of target prediction results.
  • the fusion information is obtained by performing fusion processing on the plurality of intermediate prediction results, and determining, according to the fusion information, a plurality of target prediction results for the object to be predicted.
  • the embodiment of the present application assists in determining the final plurality of target prediction results by using a plurality of intermediate prediction results determined according to the object to be predicted, which is beneficial to improving the accuracy of the plurality of target prediction results.
  • the embodiments of the present application can be applied to various types of multi-task prediction, for example, RGB-D behavior recognition, multi-sensor intelligent video monitoring, and dual-task prediction of depth estimation and scene segmentation.
  • the neural network may be trained according to the object to be predicted.
  • the object to be predicted may be various types of images, for example, RGB images, etc., which is not limited in this application.
  • the plurality of intermediate prediction results of the object to be predicted may include the target prediction result, and may also be related or complementary to the plurality of target prediction results.
  • the present application does not limit the correspondence between a plurality of intermediate prediction results and a plurality of target prediction results, the number of intermediate prediction results, the number of target prediction results, and the like.
  • in the following, an example is described in which the object to be predicted is an RGB image, the intermediate prediction results include a depth estimation intermediate prediction result, a surface normal intermediate prediction result, a contour intermediate prediction result, and a semantic segmentation intermediate prediction result, and the target prediction results include a depth estimation result and a scene segmentation result.
  • a feature extraction process is performed on the object to be predicted (for example, a single RGB image) to obtain feature information of the object to be predicted.
  • the object to be predicted may be input into the feature extraction network in the neural network and subjected to feature extraction processing, to obtain feature information for the object to be predicted.
  • the feature extraction network may include various convolutional neural networks.
  • the feature extraction network may use one of an AlexNet network structure, a VGG network structure, and a ResNet network structure, which is not limited in this application.
  • FIG. 2 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIG. 3 is a schematic diagram of an application scenario of an object prediction method, according to an exemplary embodiment.
  • step S101 may include:
  • step S1011 feature extraction processing is performed on the object to be predicted to obtain features of a plurality of levels
  • step S1012 the features of the plurality of levels are aggregated to obtain feature information for the object to be predicted.
  • feature extraction processing is performed on the object to be predicted, for example, by a feature extraction network comprising a convolutional neural network.
  • the convolutional neural network may include a multi-level convolutional layer, for example, a first-level convolutional layer to an N-th convolutional layer, and each of the convolutional layers may include one or more convolution sub-layers.
  • multiple levels of features can be obtained (for example, the features of the last convolved sub-layer in each level of the convolutional layer are determined as features of each level). For example, as shown in FIG. 3, four levels of features can be obtained.
  • the receptive field of the convolution can be increased by expanded (dilated) convolution, so that the obtained features of the plurality of levels can contain a wider range of information.
  • the convolution structure of the last convolution sub-layer of the multi-level convolutional layers of the convolutional neural network can be an expanded convolution.
  • for example, the expanded convolution is one with a hole size of 1, and the size of the convolution kernel is 3×3.
  • the circled points in FIG. 4 are convolved with the 3×3 convolution kernel, and the remaining points (the holes) are skipped.
  • the expanded convolution can enlarge the receptive field of the convolution, so that after the feature extraction processing, the obtained features of the plurality of levels can contain a wider range of information.
  • the feature extraction processing of the object to be predicted, the manner of obtaining the features of the plurality of levels, the size of the hole of the expanded convolution, and the like are not limited.
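  • As a rough illustration of how a hole size of 1 enlarges the receptive field, the following NumPy sketch (an assumption for illustration, not the implementation in this application) applies a 3×3 kernel at dilated positions, so each output value draws on a 5×5 window of the input:

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=2):
    """Valid-mode 2D convolution with a dilated ("expanded") kernel.

    A 3x3 kernel with dilation 2 (hole size 1) samples a 5x5 window,
    skipping every other pixel, which enlarges the receptive field
    without adding parameters.
    """
    kh, kw = kernel.shape
    # Effective kernel extent once holes are inserted between taps.
    eh = dilation * (kh - 1) + 1
    ew = dilation * (kw - 1) + 1
    h, w = image.shape
    out = np.zeros((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input at the dilated positions only; holes are skipped.
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(49, dtype=float).reshape(7, 7)
k = np.ones((3, 3))
y = dilated_conv2d(img, k, dilation=2)
print(y.shape)  # a 3x3 kernel at dilation 2 has a 5x5 footprint -> (3, 3)
```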
  • multiple levels of features may be aggregated to obtain feature information for the object to be predicted.
  • the features of each level of the convolutional neural network may be aggregated; for example, the features of the first three shallower levels are aggregated with the features of the last-level convolutional layer (for example, by superimposed fusion), and the feature information for the object to be predicted is obtained after the aggregation processing.
  • the features of each shallower level may be downsampled by a convolution operation and processed by bilinear interpolation, so as to have the same resolution as the features of the last-level convolutional layer.
  • the resolution of features of each level is different, for example, the feature resolution of the shallowest level is the largest, and the resolution of the deepest level (for example, the feature of the last level convolutional layer) is the smallest.
  • the features of each shallower level can be downsampled by a convolution operation and brought to the same resolution as the features of the last-level convolutional layer by bilinear interpolation processing, and the aggregation processing can then be performed (for example, superimposing and merging the features of the plurality of levels having the same resolution to obtain the feature information of the object to be predicted).
  • the number of feature channels of the features of each shallower level can also be controlled by convolution operations, to make the aggregation processing more efficient.
  • feature information for the object to be predicted can be obtained, and the feature information can be used to better predict the intermediate prediction result.
  • the method for performing feature extraction processing on the object to be predicted and obtaining the feature information of the object to be predicted is not limited.
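  • The aggregation described above can be sketched as follows. For brevity, bilinear resizing stands in for both the strided-convolution downsampling and the interpolation step, and superimposed fusion is an element-wise sum; these simplifications are assumptions of the sketch, not the application's implementation:

```python
import numpy as np

def resize_bilinear(x, out_h, out_w):
    """Minimal bilinear resize for a 2D feature map."""
    h, w = x.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = x[np.ix_(y0, x0)] * (1 - wx) + x[np.ix_(y0, x1)] * wx
    bot = x[np.ix_(y1, x0)] * (1 - wx) + x[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def aggregate(levels):
    """Superimpose (sum) multi-level features at the deepest level's resolution."""
    th, tw = levels[-1].shape
    return sum(resize_bilinear(f, th, tw) for f in levels)

# Four levels: the shallowest has the largest resolution, the deepest the smallest.
feats = [np.ones((32, 32)), np.ones((16, 16)), np.ones((8, 8)), np.ones((4, 4))]
info = aggregate(feats)
print(info.shape)  # (4, 4)
```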
  • step S102 a plurality of intermediate prediction results for the object to be predicted are determined based on the feature information.
  • a plurality of intermediate prediction results for the object to be predicted may be determined according to the feature information of the object to be predicted.
  • the feature information of the object to be predicted may be reconstructed into different intermediate prediction tasks, and a plurality of intermediate prediction results for the object to be predicted may be determined.
  • the intermediate prediction results can be used to assist in determining the target prediction results.
  • FIG. 5 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • step S102 may include:
  • step S1021 the feature information is reconstructed to obtain a plurality of reconstructed features.
  • the feature information may be reconstructed.
  • the feature information may be deconvolved to obtain a plurality of reconstructed features.
  • the feature information may be subjected to a deconvolution operation to obtain four reconstructed features, respectively.
  • the resolution of the reconstructed feature is twice the resolution of the feature information.
  • step S1022 a plurality of intermediate prediction results for the object to be predicted are determined according to a plurality of reconstruction features.
  • a plurality of reconstructed features may be separately convoluted to obtain intermediate prediction results of the plurality of intermediate prediction tasks.
  • the convolution operation is performed on the multiple reconstructed features respectively to obtain intermediate information of the corresponding multiple intermediate prediction tasks, and the intermediate information of the multiple intermediate prediction tasks can be processed by bilinear interpolation to obtain multiple intermediate prediction results whose resolution is a quarter of the original resolution of the object to be predicted. For example, as shown in FIG. 3, a depth estimation intermediate prediction result, a surface normal intermediate prediction result, a contour intermediate prediction result, and a semantic segmentation intermediate prediction result for the object to be predicted may be determined.
  • a plurality of intermediate prediction results for the object to be predicted can be determined based on the feature information.
  • the plurality of intermediate prediction results can be used to assist in determining a plurality of target prediction results.
  • the application does not limit the manner in which a plurality of intermediate prediction results for the object to be predicted are determined according to the feature information.
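  • The deconvolution step that doubles the feature resolution can be sketched as a stride-2 transposed convolution. The single 2×2 kernel and the random feature map below are illustrative assumptions chosen for brevity:

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Stride-2 transposed convolution ("deconvolution"): each input pixel
    scatters a weighted copy of the kernel into the output, doubling the
    spatial resolution for a 2x2 kernel with stride 2."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (h - 1) + kh, stride * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

feat = np.random.rand(8, 8)          # aggregated feature information
kernel = np.full((2, 2), 0.25)       # one deconvolution kernel per task branch
recon = transposed_conv2d(feat, kernel, stride=2)
print(recon.shape)  # (16, 16): twice the input resolution
```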
  • step S103 the plurality of intermediate prediction results are subjected to fusion processing to obtain fusion information.
  • the multiple intermediate prediction results may be fused by multiple methods to obtain fusion information.
  • the fusion information can be used to determine a plurality of target prediction results for the object to be predicted.
  • the fusion information may be one or more, and when the fusion information is one, the fusion information may be used to separately determine a plurality of target prediction results for the object to be predicted.
  • the merging information may also be multiple. For example, a plurality of intermediate prediction results may be fused, and a plurality of fused information for determining each target prediction result are respectively obtained.
  • fusion information is obtained, which effectively combines more information from multiple related tasks (intermediate prediction results) to improve the accuracy of multiple target prediction results.
  • the present application does not limit the manner in which the fusion information is obtained, the number of pieces of fusion information, and the like.
  • the depth estimation intermediate prediction result, the surface normal intermediate prediction result, the contour intermediate prediction result, and the semantic segmentation intermediate prediction result are fused to obtain fusion information.
  • FIG. 6 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • FIGS. 7a, 7b, and 7c are schematic diagrams of application scenarios of an object prediction method, respectively, according to an exemplary embodiment.
  • step S103 may include:
  • step S1031 the plurality of intermediate prediction results are reprocessed to obtain reprocessing results of the plurality of intermediate prediction results.
  • multiple intermediate prediction results may be reprocessed.
  • for example, the multiple intermediate prediction results may be convolved to obtain reprocessing results of the multiple intermediate prediction results, so as to obtain richer information.
  • the reprocessing results of the obtained multiple intermediate prediction results may have the same size as the corresponding intermediate prediction results.
  • the intermediate prediction results (for example, the depth estimation intermediate prediction result, the surface normal intermediate prediction result, the contour intermediate prediction result, and the semantic segmentation intermediate prediction result) are reprocessed to obtain the four corresponding reprocessing results.
  • step S1032 the reprocessing result of the plurality of intermediate prediction results is subjected to fusion processing to obtain fusion information.
  • the reprocessing result of the multiple intermediate prediction results may be fused to obtain the fused information.
  • step S1032 may include performing superposition processing on the reprocessing results of the plurality of intermediate prediction results to obtain fusion information.
  • the fusion information can be used to determine a plurality of target prediction results for the object to be predicted; for example, as shown in FIG. 7a, the fusion information can be input into the depth estimation task branch and the scene segmentation task branch respectively, and the depth estimation result and the scene segmentation result for the object to be predicted are determined.
  • fusion information for determining a plurality of target prediction results can be obtained.
  • the present application does not limit the manner of superimposing the reprocessing results of the intermediate prediction results.
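  • The superposition fusion above amounts to an element-wise sum of the reprocessing results. A minimal sketch, assuming four reprocessing results already brought to a common shape (constant maps are used so the result is easy to check):

```python
import numpy as np

# Four reprocessing results (e.g. for depth estimation, surface normal,
# contour, and semantic segmentation), all of the same shape.
reprocessed = [np.full((4, 64, 64), float(i + 1)) for i in range(4)]

# Superposition fusion: element-wise sum over the intermediate tasks.
fusion = np.sum(reprocessed, axis=0)
print(fusion.shape)  # (4, 64, 64)
```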
  • the multiple intermediate prediction results include a first intermediate prediction result and a second intermediate prediction result, wherein the first intermediate prediction result has the highest correlation with the target prediction result.
  • the plurality of intermediate prediction results are a depth estimation intermediate prediction result, a surface normal intermediate prediction result, a contour intermediate prediction result, and a semantic segmentation intermediate prediction result, respectively.
  • the plurality of intermediate prediction results may be divided into a first intermediate prediction result and a second intermediate prediction result.
  • the first intermediate prediction result is a depth estimation intermediate prediction result having the highest correlation with the target prediction result (depth estimation result).
  • the other three intermediate prediction results may be the second intermediate prediction result.
  • step S1032 may further include:
  • processing the reprocessing result of the second intermediate prediction result to obtain a reference result, and superimposing the reprocessing result of the first intermediate prediction result with the reference result, so as to determine the target prediction results for the object to be predicted.
  • for example, the reprocessing result of the second intermediate prediction result is processed to obtain a reference result.
  • for example, convolution operations may be respectively performed on the reprocessing results of the second intermediate prediction results according to formula (1), to obtain three reference results.
  • the reprocessing result of the first intermediate prediction result is superimposed with the reference result, and the fusion information for the target prediction result may be obtained.
  • the reprocessing result of the first intermediate prediction result may be superimposed with the reference results according to formula (1) (for example, for each pixel, the information of that pixel in the reprocessing result of the first intermediate prediction result is superimposed with the information of the other three reference results), and the fusion information for the target prediction result can be obtained, for example, as shown in FIG. 7b. The fusion information for the first target prediction result can be used to determine the first target prediction result. It should be understood that the fusion information for the plurality of target prediction results may be respectively obtained according to formula (1).
  • in this manner, the fusion information for the plurality of target prediction results may be determined, and the fusion information may include more information from the reprocessing result of the first intermediate prediction result, which has the highest correlation with the target prediction result, so that multi-modal data fusion is performed more smoothly.
  • the specific method for superimposing the reprocessing result of the first intermediate prediction result and the reference result to obtain the fusion information for the target prediction result is not limited.
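  • The superposition described above can be illustrated with a minimal NumPy sketch. The channel count, array shapes, and the use of 1×1 convolutions (expressed as channel-mixing matrices) are illustrative assumptions, not the patent's actual network configuration:

```python
import numpy as np

def fuse_simple(first, seconds, weights):
    """Superimpose the reprocessing result of the first intermediate
    prediction result (C, H, W) with reference results obtained by
    applying a 1x1 convolution (here a (C, C) matrix) to each second
    intermediate prediction result's reprocessing result."""
    fused = first.copy()
    for s, w in zip(seconds, weights):
        fused += np.einsum('oc,chw->ohw', w, s)  # 1x1 conv -> reference result
    return fused

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
first = rng.standard_normal((C, H, W))                      # first intermediate result
seconds = [rng.standard_normal((C, H, W)) for _ in range(3)]  # three second results
weights = [rng.standard_normal((C, C)) for _ in range(3)]
fused = fuse_simple(first, seconds, weights)
print(fused.shape)  # (8, 4, 4)
```

With zero convolution weights the fusion degenerates to the first intermediate result alone, which shows why this scheme keeps the most information from the result most correlated with the target.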
  • step S1032 may further include: determining an attention coefficient according to a reprocessing result of the first intermediate prediction result, where the attention coefficient is a reference coefficient determined according to an attention mechanism;
  • in determining the fusion information for the kth target prediction result, the attention coefficient may be determined according to formula (2), which may take a form such as A^k = σ(W^k ⊛ F^k), where ⊛ represents a convolution operation, W^k represents the convolution parameters, F^k represents the reprocessing result of the kth intermediate prediction result (which is the first intermediate prediction result) in the process of determining the fusion information for the kth target prediction result, and σ represents the sigmoid function.
  • the attention coefficient may be determined according to the reprocessing result of the first intermediate prediction result, where the attention coefficient is a reference coefficient determined according to the attention mechanism.
  • for example, the attention coefficient may be determined according to the reprocessing result of the first intermediate prediction result.
  • the attention coefficient is a reference coefficient determined according to the attention mechanism, and can be used to filter the plurality of second intermediate prediction results and guide the transfer and fusion of information (for example, to pay more attention to, or to ignore, information from the second intermediate prediction results).
  • in formula (3), ⊙ denotes a point multiplication (element-wise multiplication) operation, k, t, and T are positive integers, t is a variable between 1 and T with t ≠ k, and ⊕ indicates that the terms on the right are superimposed to obtain the fusion information on the left; that is, formula (3) may take a form such as F̂^k = F^k ⊕ Σ_{t=1, t≠k}^{T} A^k ⊙ (W^t ⊛ F^t), where F^t denotes the reprocessing result of the t-th intermediate prediction result, W^t the convolution parameters, and A^k the attention coefficient.
  • the reprocessing result of the second intermediate prediction result is processed to obtain a reference result.
  • the reprocessing results of the second intermediate prediction results may be processed according to formula (3); for example, a convolution operation is performed on each of the three reprocessing results, respectively, to obtain three reference results.
  • the reference result and the attention coefficient are subjected to point multiplication processing to obtain attention content.
  • the attention coefficient can be determined according to formula (2).
  • the attention coefficient corresponding to each pixel can be obtained.
  • the reference result and the attention coefficient can be subjected to dot multiplication processing to obtain attention content.
  • the obtained reference results are each subjected to point multiplication processing with the attention coefficient to obtain their respective corresponding attention contents.
  • the reprocessing result of the first intermediate prediction result is superimposed with the attention content to obtain fusion information for the target prediction result.
  • the reprocessing result of the first intermediate prediction result may be superimposed with the plurality of attention contents according to formula (3): for example, for each pixel, the information of that pixel in the reprocessing result of the first intermediate prediction result is superimposed with the information of the three attention contents to obtain the fusion information for the target prediction result.
  • the fusion information is thereby obtained. The fusion information for the first target prediction result can be used to determine the first target prediction result. It should be understood that the fusion information for each of the plurality of target prediction results may be obtained separately according to formula (3).
  • in this manner, the fusion information for the plurality of target prediction results may be determined. The fusion information may include more information from the reprocessing result of the first intermediate prediction result, which has the highest correlation with the target prediction result. Determining an attention coefficient from the reprocessing result of the first intermediate prediction result makes it possible to filter the plurality of second intermediate prediction results and guide the transfer and fusion of information (for example, to pay more attention to, or to ignore, information from the second intermediate prediction results), thereby improving the relevance of the fusion information for the multiple target prediction results.
  • the present application does not limit the manner in which the attention coefficient is determined, the manner in which the reference result is determined, the manner in which the attention content is determined, and the manner in which the fusion information is determined.
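  • As a hedged NumPy sketch of this attention-guided fusion: the sigmoid-gated 1×1 convolutions, channel count, and shapes below are illustrative assumptions standing in for formulas (2) and (3), not the patent's actual parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_attention(first, seconds, weights, w_att):
    """Gate each reference result with a per-pixel attention coefficient
    derived from the first intermediate result, then superimpose."""
    # attention coefficients in (0, 1), one per channel and pixel
    att = sigmoid(np.einsum('oc,chw->ohw', w_att, first))
    fused = first.copy()
    for s, w in zip(seconds, weights):
        ref = np.einsum('oc,chw->ohw', w, s)  # reference result (1x1 conv)
        fused += att * ref                    # point multiplication -> attention content
    return fused

rng = np.random.default_rng(1)
C, H, W = 8, 4, 4
first = rng.standard_normal((C, H, W))
seconds = [rng.standard_normal((C, H, W)) for _ in range(3)]
weights = [rng.standard_normal((C, C)) for _ in range(3)]
w_att = rng.standard_normal((C, C))
fused = fuse_attention(first, seconds, weights, w_att)
print(fused.shape)  # (8, 4, 4)
```

Where the gate saturates near zero, information from the second intermediate results is effectively ignored; near one, it is passed through, which is the filtering behaviour the text describes.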
  • step S104 a plurality of target prediction results for the object to be predicted are determined based on the fusion information.
  • a plurality of target prediction results for the object to be predicted may be determined according to the fusion information.
  • the fusion information may be separately input into a plurality of branches for the target prediction task to determine a plurality of target prediction results.
  • the corresponding fusion information may be input into the corresponding target prediction task branch to determine a plurality of target prediction results.
  • multiple target prediction tasks may be implemented over a sub-network (eg, a second prediction network of neural networks).
  • the sub-network may include different branches, each of which may adopt various types of networks of different depths according to the complexity of the task, have different network parameters, and different designs.
  • the present application does not limit the manner of determining a plurality of target prediction results for the object to be predicted, the structure and design of the sub-networks of the plurality of target prediction tasks, according to the fusion information.
  • a depth estimation result for the object to be predicted and a scene segmentation result are determined.
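  • The two target branches can be sketched in NumPy as follows; the 1×1-convolution heads, channel count, and class count are hypothetical illustrations of per-task branches consuming their own fusion information, not the patent's actual second prediction networks:

```python
import numpy as np

rng = np.random.default_rng(2)
C, H, W, K = 8, 4, 4, 5  # feature channels, height, width, segmentation classes

# per-task fusion information (one tensor per target prediction task)
fusion_depth = rng.standard_normal((C, H, W))
fusion_seg = rng.standard_normal((C, H, W))

w_depth = rng.standard_normal(C)     # 1x1 conv to a single depth channel
w_seg = rng.standard_normal((K, C))  # 1x1 conv to K class logits

# depth branch: regress one value per pixel
depth = np.einsum('c,chw->hw', w_depth, fusion_depth)
# segmentation branch: per-pixel argmax over class logits
seg = np.einsum('kc,chw->khw', w_seg, fusion_seg).argmax(axis=0)
print(depth.shape, seg.shape)  # (4, 4) (4, 4)
```

Each branch consumes only the fusion information prepared for its own task, which matches the statement that different branches may have different depths, parameters, and designs.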
  • FIG. 8 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • step S104 may include:
  • step S1041 fusion information for a plurality of target prediction results is determined
  • step S1042 the fusion information is processed to obtain a target feature
  • step S1043 a plurality of target prediction results are determined according to the target feature.
  • fusion information for multiple target prediction results can be determined.
  • for example, the fusion information for the depth estimation result and the fusion information for the scene segmentation result are determined. The fusion information can be processed to obtain target features, and multiple target prediction results are determined according to the target features. Taking the determination of the depth estimation result as an example, the fusion information for the depth estimation result can be processed to obtain the target feature.
  • for example, the resolution of the multiple intermediate prediction results is one quarter of the original resolution of the object to be predicted; a target feature at the original resolution of the object to be predicted can then be obtained by two consecutive deconvolution operations (each with a magnification of two).
  • the target prediction result can be determined according to the target feature. For example, a convolution operation can be performed on the target feature to determine a target prediction result.
  • a plurality of target prediction results for the object to be predicted can be determined based on the fusion information.
  • the present application does not limit the manner in which a plurality of target prediction results for an object to be predicted are determined based on the fusion information.
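  • The resolution recovery step above can be illustrated with a NumPy sketch. Nearest-neighbour repetition is used here as a stand-in for a learned deconvolution (transposed convolution); the shapes are illustrative assumptions:

```python
import numpy as np

def upsample2x(x):
    """Stand-in for a deconvolution with magnification two:
    nearest-neighbour repetition along both spatial axes."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

# fusion information at one quarter of the original resolution
fusion = np.zeros((8, 16, 16))
# two consecutive x2 operations recover the original resolution
target_feature = upsample2x(upsample2x(fusion))
print(target_feature.shape)  # (8, 64, 64)
```

In a trained network the two upsampling steps would carry learned parameters (e.g. transposed-convolution kernels), but the resolution arithmetic, quarter resolution times 2 times 2, is the same.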
  • the foregoing method may be applicable to a scenario in which a plurality of target prediction results are determined by using a trained neural network, and may also be applied to a process of training a neural network, which is not limited in this embodiment of the present application.
  • the step of training the neural network according to the object to be predicted may be included.
  • FIG. 9 is a flowchart of training a neural network in an object prediction method, according to an exemplary embodiment.
  • the step of training the neural network according to the object to be predicted may include:
  • step S105 the object to be predicted is input into the feature extraction network in the neural network to perform feature extraction processing, to obtain feature information for the object to be predicted;
  • step S106 the feature information is input into a first prediction network in the neural network for processing, and a plurality of intermediate prediction results for the object to be predicted are determined;
  • step S107 the intermediate prediction result is input into the fusion network of the neural network to perform fusion processing, to obtain the fusion information;
  • step S108 the fusion information is separately input into a plurality of second prediction networks in the neural network for processing, and a plurality of target prediction results for the object to be predicted are determined;
  • step S109 determining a model loss of the neural network according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results;
  • step S110 the network parameter value of the neural network is adjusted according to the model loss.
  • the object to be predicted may be input into the feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted.
  • the feature information is input into a first prediction network in the neural network for processing, and a plurality of intermediate prediction results for the object to be predicted are determined. For example, determine 4 intermediate prediction results.
  • the intermediate prediction results are input into the fusion network of the neural network for fusion processing to obtain the fusion information, and the fusion information is separately input into a plurality of second prediction networks in the neural network for processing to determine a plurality of target prediction results for the object to be predicted. The model loss of the neural network is then determined according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results.
  • during training, the model loss of the neural network may be determined as the sum of the losses of six loss functions (including the respective losses of the four intermediate prediction tasks and the respective losses of the two target prediction tasks).
  • Each loss function may include different types.
  • for example, in the contour intermediate prediction task, the loss function may be a cross entropy loss function, and in the semantic segmentation intermediate prediction task (scene segmentation prediction task), the loss function may be a Softmax loss function.
  • the loss function may be a Euclidean distance loss function.
  • the loss weights of the loss functions may not be identical.
  • for example, the loss weights of the loss functions of the depth estimation intermediate prediction task, the depth estimation prediction task, the scene segmentation prediction task, and the semantic segmentation intermediate prediction task can be set to 1, and the loss weights of the loss functions of the surface normal intermediate prediction task and the contour intermediate prediction task can be set to 0.8.
  • the present application does not limit the type of the loss function, the loss weight of each loss function, and the like.
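  • The weighted sum of the six task losses can be sketched as plain Python; the numeric loss values are arbitrary placeholders, and only the 1 / 0.8 weighting scheme comes from the text:

```python
# illustrative per-task loss values (placeholders, not real measurements)
losses = {
    'depth_inter': 0.52, 'normal_inter': 0.31, 'contour_inter': 0.44,
    'semseg_inter': 0.87, 'depth_target': 0.49, 'seg_target': 0.91,
}
# weights as described: 1 for four tasks, 0.8 for surface normal and contour
weights = {
    'depth_inter': 1.0, 'normal_inter': 0.8, 'contour_inter': 0.8,
    'semseg_inter': 1.0, 'depth_target': 1.0, 'seg_target': 1.0,
}
model_loss = sum(weights[k] * losses[k] for k in losses)
print(round(model_loss, 6))  # 3.39
```

The resulting scalar is what the backward pass would minimize when adjusting the network parameter values.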
  • the network parameter value of the neural network may be adjusted according to the model loss.
  • for example, the network parameter values are adjusted using a back-propagation (reverse gradient) algorithm or the like. It should be understood that the network parameter values of the neural network may be adjusted in any suitable manner, which is not limited in this application.
  • when the training conditions are satisfied (for example, the model loss is less than or equal to a loss threshold), the current neural network may be determined as the final neural network, thus completing the training process of the neural network.
  • the training conditions and the loss threshold may be set by a person skilled in the art according to actual conditions, which is not limited in this application.
  • the feature information can be obtained by inputting an object to be predicted (for example, an RGB image), and a plurality of intermediate prediction results are obtained according to the feature information.
  • multiple intermediate prediction results can not only serve as supervision information for learning deeper features, but also provide richer multi-modal data to improve the final target prediction tasks, assisting in determining the final multiple target prediction results and simultaneously improving the generalization capability and prediction performance of the multiple target prediction tasks, thereby improving the accuracy of the multiple target prediction results.
  • in this way, the multiple target prediction tasks are not simply trained directly at the same time; instead, multiple intermediate prediction results are determined, and the multiple intermediate prediction results assist in determining the multiple target prediction results, thereby reducing the complexity of neural network training and ensuring better training efficiency and effectiveness.
  • FIG. 10 is a flowchart of training a neural network in an object prediction method, according to an exemplary embodiment.
  • the step of training the neural network according to the object to be predicted further includes:
  • step S111 before the object to be predicted is input into the feature extraction network in the neural network for feature extraction processing to obtain the feature information for the object to be predicted, the annotation information of the plurality of target prediction results is determined;
  • step S112 the annotation information of the plurality of intermediate prediction results is determined according to the annotation information of the plurality of target prediction results.
  • the target prediction result includes a depth estimation result and a scene segmentation result.
  • the annotation information of the two target prediction results can be determined, for example, by manual annotation or the like.
  • the annotation information of the plurality of intermediate prediction results may be determined according to the depth estimation result and the annotation information of the scene segmentation result.
  • the intermediate prediction results include a depth estimation intermediate prediction result, a surface normal intermediate prediction result, a contour intermediate prediction result, and a semantic segmentation intermediate prediction result.
  • the annotation information of the depth estimation result and the scene segmentation result may be respectively determined as the depth estimation intermediate prediction result and the annotation information of the semantic segmentation intermediate prediction result.
  • the annotation information of the contour intermediate prediction result can be obtained from the annotation information of the scene segmentation result, and the annotation information of the surface normal intermediate prediction result can be estimated from the annotation information of the depth estimation result.
  • the annotation information of the plurality of intermediate prediction results is determined according to the annotation information of the plurality of target prediction results, so that more annotation information can be used as supervision information for training the neural network without having to complete too many manual annotation tasks, thereby improving the efficiency of neural network training.
  • the method for determining the labeling information of the plurality of intermediate prediction results according to the labeling information of the plurality of target prediction results is not limited.
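  • One plausible (and explicitly hypothetical) way to derive the intermediate-task annotations described above: contour labels from class boundaries in the segmentation annotation, and surface normal labels from depth gradients. The neighbour-difference and gradient-based rules below are illustrative assumptions, since the patent does not fix a specific derivation method:

```python
import numpy as np

def contour_labels_from_seg(seg):
    """Mark a pixel as contour if its right or bottom neighbour
    belongs to a different semantic class."""
    edge = np.zeros(seg.shape, dtype=bool)
    edge[:, :-1] |= seg[:, :-1] != seg[:, 1:]   # horizontal class changes
    edge[:-1, :] |= seg[:-1, :] != seg[1:, :]   # vertical class changes
    return edge.astype(np.uint8)

def normal_labels_from_depth(depth):
    """Estimate unit surface normals from depth gradients."""
    dzdy, dzdx = np.gradient(depth.astype(float))
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth, dtype=float)])
    return n / np.linalg.norm(n, axis=0, keepdims=True)

seg = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
contour = contour_labels_from_seg(seg)
normals = normal_labels_from_depth(np.arange(9.0).reshape(3, 3))
print(contour.sum(), normals.shape)  # 4 (3, 3, 3)
```

Both derivations reuse the two manually annotated target labels, which is exactly why the intermediate tasks add supervision without adding annotation cost.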
  • FIG. 11 is a flowchart of an object prediction method, according to an exemplary embodiment.
  • the method can be applied to an electronic device.
  • the electronic device can be provided as a terminal, a server or other form of device.
  • the object prediction method according to an embodiment of the present application includes:
  • step S201 the object to be predicted is input into the feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted;
  • step S202 the feature information is input into a first prediction network in the neural network for processing, and a plurality of intermediate prediction results for the object to be predicted are determined;
  • step S203 the intermediate prediction result is input into the fusion network of the neural network to perform fusion processing, to obtain the fusion information;
  • step S204 the fusion information is separately input into a plurality of second prediction networks in the neural network for processing, and determining a plurality of target prediction results for the object to be predicted;
  • step S205 determining a model loss of the neural network according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results;
  • step S206 the network parameter value of the neural network is adjusted according to the model loss.
  • a neural network can be trained according to an object to be predicted, and the neural network can be used to determine a plurality of target prediction results for the object to be predicted.
  • a neural network can be trained and will not be described again.
  • FIG. 12 is a flowchart of an object prediction method, according to an exemplary embodiment. In a possible implementation manner, as shown in FIG. 12, the method further includes:
  • step S207 before the object to be predicted is input into the feature extraction network in the neural network to obtain the feature information for the object to be predicted, the annotation information of the plurality of target prediction results is determined.
  • step S208 the annotation information of the plurality of intermediate prediction results is determined according to the annotation information of the plurality of target prediction results.
  • FIG. 13 is a block diagram of an object prediction apparatus, according to an exemplary embodiment. As shown in FIG. 13, the object prediction apparatus includes:
  • the feature extraction module 301 is configured to perform feature extraction processing on the object to be predicted, and obtain feature information of the object to be predicted;
  • the intermediate prediction result determining module 302 is configured to determine, according to the feature information, a plurality of intermediate prediction results for the object to be predicted;
  • the fusion module 303 is configured to perform fusion processing on the multiple intermediate prediction results to obtain fusion information.
  • the target prediction result determining module 304 is configured to determine a plurality of target prediction results for the object to be predicted according to the fusion information.
  • FIG. 14 is a block diagram of an object prediction apparatus, according to an exemplary embodiment. As shown in FIG. 14, in a possible implementation manner, the feature extraction module 301 includes:
  • the feature obtaining sub-module 3011 is configured to perform feature extraction processing on the object to be predicted to obtain features of multiple levels;
  • the feature information obtaining sub-module 3012 is configured to perform aggregation processing on the features of the plurality of levels to obtain feature information for the object to be predicted.
  • the intermediate prediction result determining module 302 includes:
  • the reconstructed feature obtaining sub-module 3021 is configured to perform reconstruction processing on the feature information to obtain a plurality of reconstructed features
  • the intermediate prediction result obtaining sub-module 3022 is configured to determine a plurality of intermediate prediction results for the object to be predicted based on the plurality of reconstruction features.
  • the fusion module 303 includes:
  • the reprocessing result obtaining submodule 3031 is configured to reprocess the plurality of intermediate prediction results to obtain reprocessing results of the plurality of intermediate prediction results;
  • the fusion information obtaining sub-module 3032 is configured to perform fusion processing on the reprocessing results of the plurality of intermediate prediction results to obtain fusion information.
  • the fusion information obtaining submodule 3032 is configured to:
  • the multiple intermediate prediction results include a first intermediate prediction result and a second intermediate prediction result, wherein the first intermediate prediction result has the highest correlation with the target prediction result,
  • the fusion information obtaining submodule 3032 is configured to:
  • the multiple intermediate prediction results include a first intermediate prediction result and a second intermediate prediction result, wherein the first intermediate prediction result has the highest correlation with the target prediction result,
  • the fusion information obtaining submodule 3032 is configured to:
  • the target prediction result determining module 304 includes:
  • a fusion information determination sub-module 3041 configured to determine fusion information for a plurality of target prediction results
  • the target feature obtaining sub-module 3042 is configured to process the fusion information to obtain a target feature.
  • the target prediction result determination sub-module 3043 is configured to determine a plurality of target prediction results according to the target feature.
  • the neural network is trained according to the object to be predicted.
  • the device further includes:
  • the first obtaining module 305 is configured to input the object to be predicted into the feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted;
  • the first determining module 306 is configured to input the feature information into a first prediction network in the neural network for processing, and determine a plurality of intermediate prediction results for the object to be predicted;
  • a second obtaining module 307 configured to input the intermediate prediction result into a fusion network of the neural network for performing fusion processing to obtain the fusion information
  • the second determining module 308 is configured to separately input the fusion information into a plurality of second prediction networks in the neural network for processing, and determine a plurality of target prediction results for the object to be predicted;
  • the third determining module 309 is configured to determine the model loss of the neural network according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results. ;
  • the network parameter value adjustment module 310 is configured to adjust the network parameter value of the neural network according to the model loss.
  • the device further includes:
  • the annotation information determining module 311 is configured to determine the annotation information of the plurality of target prediction results before the object to be predicted is input into the feature extraction network in the neural network for feature extraction processing to obtain the feature information for the object to be predicted;
  • the intermediate annotation information determining module 312 is configured to determine the annotation information of the plurality of intermediate prediction results according to the annotation information of the plurality of target prediction results.
  • the intermediate prediction result determining module 302 includes:
  • the first determining submodule 3023 is configured to determine, according to the feature information, a depth estimation intermediate prediction result, a surface normal intermediate prediction result, a contour intermediate prediction result, and a semantic segmentation intermediate prediction result for the object to be predicted,
  • the fusion module 303 includes:
  • the obtaining sub-module 3033 is configured to perform fusion processing on the depth estimation intermediate prediction result, the surface normal intermediate prediction result, the contour intermediate prediction result, and the semantic segmentation intermediate prediction result to obtain fusion information,
  • the target prediction result determining module 304 includes:
  • the second determining submodule 3044 is configured to determine a depth estimation result and a scene segmentation result for the object to be predicted according to the fusion information.
  • FIG. 15 is a block diagram of an object prediction apparatus, according to an exemplary embodiment. As shown in FIG. 15, the object prediction apparatus includes:
  • the first information obtaining module 401 is configured to input the object to be predicted into the feature extraction network in the neural network for feature extraction processing, to obtain feature information for the object to be predicted;
  • the first result determining module 402 is configured to input the feature information into a first prediction network in the neural network for processing, and determine a plurality of intermediate prediction results for the object to be predicted;
  • the second information obtaining module 403 is configured to input the intermediate prediction result into a fusion network of the neural network to perform fusion processing, to obtain the fusion information;
  • the second result determining module 404 is configured to input the fusion information into a plurality of second prediction networks in the neural network for processing, and determine a plurality of target prediction results for the object to be predicted;
  • the model loss determination module 405 is configured to determine the model loss of the neural network according to the plurality of intermediate prediction results, the annotation information of the plurality of intermediate prediction results, the plurality of target prediction results, and the annotation information of the plurality of target prediction results. ;
  • the parameter adjustment module 406 is configured to adjust the network parameter value of the neural network according to the model loss.
  • FIG. 16 is a block diagram of an object prediction apparatus, according to an exemplary embodiment. As shown in FIG. 16, in a possible implementation, the device further includes:
  • the first information determining module 407 is configured to determine the annotation information of the plurality of target prediction results before the object to be predicted is input into the feature extraction network in the neural network for feature extraction processing to obtain the feature information for the object to be predicted;
  • the second information determining module 408 is configured to determine the labeling information of the plurality of intermediate prediction results according to the labeling information of the plurality of target prediction results.
  • FIG. 17 is a block diagram of an electronic device, according to an exemplary embodiment.
  • an electronic device can be provided as a terminal, a server, or other form of device.
  • device 1900 includes a processing component 1922 that further includes one or more processors, and memory resources represented by memory 1932 for storing instructions executable by processing component 1922, such as an application.
  • An application stored in memory 1932 can include one or more modules each corresponding to a set of instructions.
  • processing component 1922 is configured to execute instructions to perform the methods described above.
  • Device 1900 can also include a power component 1926 configured to perform power management of device 1900, a wired or wireless network interface 1950 configured to connect device 1900 to the network, and an input/output (I/O) interface 1958.
  • Device 1900 can operate based on an operating system stored in memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a computer readable storage medium such as a memory 1932 comprising computer program instructions executable by processing component 1922 of device 1900 to perform the above method.
  • a computer program comprising computer readable code; when the computer readable code runs in an electronic device, a processor in the electronic device executes the code to implement the method described above.
  • the application can be a system, method and/or computer program product.
  • the computer program product can comprise a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement various aspects of the present application.
  • the computer readable storage medium can be a tangible device that can hold and store the instructions used by the instruction execution device.
  • the computer readable storage medium can be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices (for example, punched cards or raised structures in grooves with instructions stored thereon), and any suitable combination of the above.
  • a computer readable storage medium as used herein is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
  • the computer readable program instructions described herein can be downloaded from a computer readable storage medium to various computing/processing devices or downloaded to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in each computing/processing device.
  • computer program instructions for performing the operations of the present application can be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions can execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or can be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • customized electronic circuits, such as programmable logic circuits, field programmable gate arrays (FPGA), or programmable logic arrays (PLA), can be personalized by utilizing state information of the computer readable program instructions, and these electronic circuits can execute the computer readable program instructions to implement various aspects of the present application.
  • the computer readable program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • the computer readable program instructions can also be stored in a computer readable storage medium that causes the computer, programmable data processing apparatus, and/or other devices to operate in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture that includes instructions implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • the computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams can represent a module, program segment, or portion of instructions that comprises one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks can also occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or acts, or by a combination of dedicated hardware and computer instructions.


Abstract

本申请涉及一种对象预测方法及装置、电子设备和存储介质。该方法应用于神经网络,方法包括:对待预测对象进行特征提取处理,得到待预测对象的特征信息;根据特征信息,确定针对待预测对象的多个中间预测结果;对多个中间预测结果进行融合处理,得到融合信息;根据融合信息,确定针对待预测对象的多个目标预测结果。根据本申请的实施例,能够提取到待预测对象的特征信息,根据特征信息确定针对所述待预测对象的多个中间预测结果,通过对所述多个中间预测结果进行融合处理,得到融合信息,并根据该融合信息,确定针对所述待预测对象的多个目标预测结果,有利于提高多个目标预测结果的准确度。

Description

对象预测方法及装置、电子设备和存储介质
本申请要求在2018年5月4日提交中国专利局、申请号为201810421005.X、申请名称为“对象预测方法及装置、电子设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种对象预测方法及装置、电子设备和存储介质。
背景技术
随着深度学习技术的快速发展,神经网络可应用于各类对象预测任务中。然而,相关技术中,同时进行多个目标预测时,得到的多个目标预测结果的准确度较低。
发明内容
有鉴于此,本申请提出了一种对象预测技术方案。
根据本申请的一方面,提供了一种对象预测方法,应用于神经网络,所述方法包括:
对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;对所述多个中间预测结果进行融合处理,得到融合信息;根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
根据本申请的另一方面,提供了一种对象预测方法,应用于神经网络,所述方法包括:
待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;根据所述模型损失,调整所述神经网络的网络参数值。
根据本申请的另一方面,提供了一种对象预测装置,应用于神经网络,所述装置包括:
特征提取模块,被配置为对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;
中间预测结果确定模块,被配置为根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;
融合模块,被配置为对所述多个中间预测结果进行融合处理,得到融合信息;
目标预测结果确定模块,被配置为根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
根据本申请的另一方面,提供了一种对象预测装置,应用于神经网络,所述装置包括:
第一信息获得模块,被配置为将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
第一结果确定模块,被配置为将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
第二信息获得模块,被配置为将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
第二结果确定模块,被配置为将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
模型损失确定模块,被配置为根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
参数调整模块,被配置为根据所述模型损失,调整所述神经网络的网络参数值。
根据本申请的另一方面,提供了一种电子设备,包括:处理器;用于存储处理器可执行指令的存储器;其中,所述处理器被配置为:执行上述对象预测方法。
根据本申请的另一方面,提供了一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述对象预测方法。
根据本申请的另一方面,提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述对象预测方法。
根据本申请的实施例,能够提取到待预测对象的特征信息,根据特征信息确定针对所述待预测对象的多个中间预测结果,通过对所述多个中间预测结果进行融合处理,得到融合信息,并根据该融合信息,确定针对所述待预测对象的多个目标预测结果,有利于提高多个目标预测结果的准确度。
根据下面参考附图对示例性实施例的详细说明,本申请的其它特征及方面将变得清楚。
附图说明
包含在说明书中并且构成说明书的一部分的附图与说明书一起示出了本申请的示例性实施例、特征和方面,并且用于解释本申请的原理。
图1是根据示例性实施例示出的一种对象预测方法的流程图。
图2是根据示例性实施例示出的一种对象预测方法的流程图。
图3是根据示例性实施例示出的一种对象预测方法的应用场景的示意图。
图4是根据示例性实施例示出的一种膨胀卷积的示意图。
图5是根据示例性实施例示出的一种对象预测方法的流程图。
图6是根据示例性实施例示出的一种对象预测方法的流程图。
图7a、图7b和图7c分别是根据示例性实施例示出的一种对象预测方法的应用场景的示意图。
图8是根据示例性实施例示出的一种对象预测方法的流程图。
图9是根据示例性实施例示出的一种对象预测方法中训练神经网络的流程图。
图10是根据示例性实施例示出的一种对象预测方法中训练神经网络的流程图。
图11是根据示例性实施例示出的一种对象预测方法的流程图。
图12是根据示例性实施例示出的一种对象预测方法的流程图。
图13是根据示例性实施例示出的一种对象预测装置的框图。
图14是根据示例性实施例示出的一种对象预测装置的框图。
图15是根据示例性实施例示出的一种对象预测装置的框图。
图16是根据示例性实施例示出的一种对象预测装置的框图。
图17是根据示例性实施例示出的一种电子设备的框图。
具体实施方式
以下将参考附图详细说明本申请的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。
另外,为了更好的说明本申请,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本申请同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本申请的主旨。
图1是根据示例性实施例示出的一种对象预测方法的流程图。该方法可应用于电子设备中。该电子设备可以被提供为一终端、一服务器或其它形态的设备。如图1所示,根据本申请实施例的对象预测方法包括:
在步骤S101中,对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;
在步骤S102中,根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;
在步骤S103中,对所述多个中间预测结果进行融合处理,得到融合信息;
在步骤S104中,根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
根据本申请的实施例,能够提取到待预测对象的特征信息,根据特征信息确定针对所述待预测对象的多个中间预测结果,通过对所述多个中间预测结果进行融合处理,得到融合信息,并根据该融合信息,确定针对所述待预测对象的多个目标预测结果,有利于提高多个目标预测结果的准确度。
相关技术中,深度学习技术可用于各类对象预测任务中,例如,深度估计预测任务(深度估计可以提供场景的三维信息)、场景分割预测任务(场景分割可以生成场景的二维语义)等。对象预测可被广泛应用于各类重要的应用领域,例如,深度估计预测和场景分割预测可应用于智能视频分析、道路场景建模以及自动驾驶等应用领域。
在实际使用过程中,可能需要同时进行多个目标预测。例如,对单一摄像头下的图像或序列同时进行深度估计和场景分割。然而,在同时进行多个目标预测的过程中,多个目标预测任务之间可能具有显著的差异,例如,深度估计是一个连续的回归问题,而场景分割是一个离散的分类问题,因此同时进行多个目标预测时,得到的多个目标预测结果的准确度往往较低,预测性能不佳。可见,同时进行多个目标预测的复杂度非常高。如何在同时进行多个目标预测时,提高多个目标预测结果的准确度,成为一个亟待解决的难题。
在本申请实施例中,可以对待预测对象进行特征提取处理,得到所述待预测对象的特征信息,根据所述特征信息,确定针对所述待预测对象的多个中间预测结果。其中,多个中间预测结果可以为多个层级(例如,从低层级到高层级)的中间预测结果,从而生成了多模态数据,这些多模态数据可以辅助确定最终的多个目标预测。通过对所述多个中间预测结果进行融合处理,得到融合信息,并根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。与直接确定待预测对象的多个目标预测结果,并仅在最终预测层通过交互改善多个目标预测结果或者通过利用联合优化目标函数训练得到的模型直接得到的多个目标预测结果的方式相比,本申请实施例利用根据待预测对象确定出的多个中间预测结果辅助指导确定最终的多个目标预测结果,有利于提高多个目标预测结果的准确度。
应理解,本申请实施例可应用于各类多任务预测中,例如,RGB-D行为识别、多传感器智能视频监控、深度估计和场景分割双任务预测等。其中,神经网络可以是根据待预测对象训练得到的。待预测对象可以为各类图像,例如,RGB图像等,本申请对此不作限制。待预测对象的多个中间预测结果可以包括目标预测结果,也可以与多个目标预测结果相关或互补。本申请对多个中间预测结果与多个目标预测结果的对应关系、中间预测结果的数量、目标预测结果的数量等不作限制。
以下为了便于说明,以待预测对象为RGB图像,中间预测结果包括深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果,目标预测结果包括深度估计结果以及场景分割结果为例进行说明。
举例来说,对待预测对象(例如,单一RGB图像)进行特征提取处理,得到所述待预测对象的特征信息。例如,可以将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对待预测对象的特征信息。其中,特征提取网络可以包括各类卷积神经网络。例如,特征提取网络可使用Alex Net网络结构、VGG网络结构以及ResNet网络结构中的一种,本申请对此不作限制。
图2是根据示例性实施例示出的一种对象预测方法的流程图。图3是根据示例性实施例示出的一种对象预测方法的应用场景的示意图。在一种可能的实现方式中,如图2所示,步骤S101可以包括:
在步骤S1011中,对待预测对象进行特征提取处理,得到多个层级的特征;
在步骤S1012中,对所述多个层级的特征进行聚集处理,得到针对所述待预测对象的特征信息。
举例来说,对待预测对象进行特征提取处理,例如,通过包括卷积神经网络的特征提取网络对待预测对象进行特征提取处理。其中,卷积神经网络可以包括多级卷积层,例如,第一级卷积层至第N级卷积层,每级卷积层可以包括一个或多个卷积子层。对待预测对象进行特征提取处理,可以得到多个层级的特征(例如,将每级卷积层中最后一个卷积子层的特征确定为各层级的特征)。例如,如图3所示,可以得到4个层级的特征。
在一种可能的实现方式中,通过包括卷积神经网络的特征提取网络对待预测对象进行特征提取处理时,可以通过膨胀卷积提高卷积的感受野,以使得到的多个层级的特征可以包含更大范围的信息。
举例来说,卷积神经网络多级卷积层的最后一个卷积子层的卷积结构可以为膨胀卷积。
图4是根据示例性实施例示出的一种膨胀卷积的示意图。在一种可能的实现方式中,如图4所示,该膨胀卷积为空洞大小为1的膨胀卷积,卷积核的大小为3*3。该卷积子层在特征提取过程中,图4中圆圈的点可以和3*3的卷积核进行卷积操作,其余的点(空洞)不进行卷积操作。可见,膨胀卷积可以提高卷积的感受野,从而使得进行特征提取处理后,得到的多个层级的特征可以包含更大范围的信息。本申请对对待预测对象进行特征提取处理,得到多个层级的特征的方式、膨胀卷积的空洞大小等不作限制。
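为直观说明膨胀卷积扩大感受野的效果,下面给出一个纯Python的示意性实现(其中卷积核与输入数据均为假设的示例,空洞大小为1对应常见框架中dilation=2,并非本申请的具体实现):

```python
def dilated_conv2d(x, kernel, dilation=1):
    """对二维列表x做无填充的膨胀卷积(示意实现)。"""
    k = len(kernel)
    span = dilation * (k - 1) + 1  # 膨胀后卷积核覆盖的感受野边长
    h, w = len(x), len(x[0])
    out = []
    for i in range(h - span + 1):
        row = []
        for j in range(w - span + 1):
            s = 0.0
            for a in range(k):
                for b in range(k):
                    # 只有"圆圈"位置参与卷积,空洞位置被跳过
                    s += kernel[a][b] * x[i + a * dilation][j + b * dilation]
            row.append(s)
        out.append(row)
    return out

# 3*3卷积核、空洞大小为1(dilation=2)时,感受野为5*5
kernel = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
x = [[1] * 5 for _ in range(5)]
y = dilated_conv2d(x, kernel, dilation=2)  # 5*5输入恰好被一个5*5感受野覆盖
```

可见,同样的3*3卷积核在dilation=2时覆盖了5*5的输入范围,但参与计算的仍只有9个采样点。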
在一种可能的实现方式中,可以对多个层级的特征进行聚集处理,得到针对所述待预测对象的特征信息。例如,可以将卷积神经网络每个层级的特征进行聚集处理,例如,将前面3个较浅层级的特征向最后一级卷积层的特征进行聚集处理(例如,叠加融合),聚集处理后得到针对所述待预测对象的特征信息。
在一种可能的实现方式中,在对多个层级的特征进行聚集处理时,可以对各较浅层级的特征通过卷积操作进行降低采样,并通过双线性插值处理得到与最后一级卷积层的特征相同分辨率的特征。
举例来说,各层级的特征的分辨率不同,例如,最浅层级的特征分辨率最大,最深层级(例如,最后一级卷积层的特征)的分辨率最小。在对多个层级的特征进行聚集处理时,可以对各较浅层级的特征通过卷积操作进行降低采样,并通过双线性插值处理得到与最后一级卷积层的特征相同分辨率的特征,以进行聚集处理(例如,将处理后分辨率相同的多个层级的特征进行叠加融合,得到待预测对象的特征信息)。应理解,对各较浅层级的特征进行卷积操作还可用于控制特征通道的数量,从而使得聚集处理在存储上更加高效。
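上述"经双线性插值对齐分辨率后叠加融合"的聚集处理,可用如下纯Python片段示意(采用align_corners式的双线性插值,函数与数据均为假设的简化示例,并非本申请的具体实现):

```python
def bilinear_resize(feat, out_h, out_w):
    """将二维特征feat双线性插值到(out_h, out_w)(示意实现)。"""
    in_h, in_w = len(feat), len(feat[0])
    sy = (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
    sx = (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
    out = []
    for i in range(out_h):
        y = i * sy
        y0 = min(int(y), in_h - 1)
        y1 = min(y0 + 1, in_h - 1)
        wy = y - y0
        row = []
        for j in range(out_w):
            x = j * sx
            x0 = min(int(x), in_w - 1)
            x1 = min(x0 + 1, in_w - 1)
            wx = x - x0
            top = feat[y0][x0] * (1 - wx) + feat[y0][x1] * wx
            bot = feat[y1][x0] * (1 - wx) + feat[y1][x1] * wx
            row.append(top * (1 - wy) + bot * wy)
        out.append(row)
    return out

def aggregate(features, out_h, out_w):
    """将多个层级的特征插值到同一分辨率后逐元素叠加融合。"""
    acc = [[0.0] * out_w for _ in range(out_h)]
    for f in features:
        r = bilinear_resize(f, out_h, out_w)
        for i in range(out_h):
            for j in range(out_w):
                acc[i][j] += r[i][j]
    return acc

r = bilinear_resize([[0, 1], [2, 3]], 3, 3)  # 2*2特征插值到3*3
agg = aggregate([[[0, 1], [2, 3]],
                 [[1, 1, 1], [1, 1, 1], [1, 1, 1]]], 3, 3)
```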
通过这种方式,可以得到针对所述待预测对象的特征信息,该特征信息可用于更好地进行中间预测结果的预测。本申请对对待预测对象进行特征提取处理,得到所述待预测对象的特征信息的方式不作限制。
如图1所示,在步骤S102中,根据所述特征信息,确定针对所述待预测对象的多个中间预测结果。
举例来说,可以根据待预测对象的特征信息,确定针对待预测对象的多个中间预测结果。例如,可以将待预测对象的特征信息重构为不同的中间预测任务,并确定针对所述待预测对象的多个中间预测结果。中间预测结果可用于辅助确定目标预测结果。
图5是根据示例性实施例示出的一种对象预测方法的流程图。在一种可能的实现方式中,如图5所示,步骤S102可以包括:
在步骤S1021中,对所述特征信息进行重构处理,得到多个重构特征。
举例来说,可以对特征信息进行重构处理,例如,可以对特征信息进行反卷积操作,得到多个重构特征。例如,如图3所示,可以对特征信息进行反卷积操作,分别得到4个重构特征。在对特征信息进行反卷积操作时,可以得到4个分辨率相同的重构特征,且重构特征的分辨率为特征信息的分辨率的2倍。
在步骤S1022中,根据多个重构特征,确定针对所述待预测对象的多个中间预测结果。
举例来说,可以分别对多个重构特征进行卷积操作,得到多个中间预测任务的中间预测结果。其中,分别对多个重构特征进行卷积操作,可以得到相应多个中间预测任务的中间信息,可将多个中间预测任务的中间信息通过双线性插值处理,得到分辨率为待预测对象原始分辨率四分之一的多个中间预测结果。例如,如图3所示,可以确定针对所述待预测对象的深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果。
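反卷积(转置卷积)可将特征分辨率放大,如上文中将特征信息重构为2倍分辨率特征的做法。下面以一维、步长为2的转置卷积给出一个纯Python示意(卷积核与输入均为假设的示例):

```python
def transposed_conv1d(x, kernel, stride=2):
    """一维转置卷积(示意):输出长度 = stride*(len(x)-1) + len(kernel)。"""
    out = [0.0] * (stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        for j, w in enumerate(kernel):
            out[i * stride + j] += v * w  # 每个输入元素"扩散"到输出的一段
    return out

# 核长2、步长2时,长度n的输入恰好被放大为长度2n
y = transposed_conv1d([1.0, 2.0, 3.0], [1.0, 1.0], stride=2)
```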
通过这种方式,可以根据所述特征信息,确定针对所述待预测对象的多个中间预测结果。该多个中间预测结果可用于辅助确定多个目标预测结果。本申请对根据所述特征信息,确定针对所述待预测对象的多个中间预测结果的方式不作限制。
如图1所示,在步骤S103中,对所述多个中间预测结果进行融合处理,得到融合信息。
举例来说,确定针对待预测对象的多个中间预测结果(多模态数据),可以通过多种方式对多个中间预测结果进行融合处理,得到融合信息。该融合信息可用于确定针对待预测对象的多个目标预测结果。其中,融合信息可以为一个或多个,融合信息为一个时,该融合信息可以用于分别确定针对待预测对象的多个目标预测结果。该融合信息还可以为多个,例如,可以对多个中间预测结果进行融合处理,分别得到用于确定各目标预测结果的多个融合信息。这样,通过对多个中间预测结果进行融合处理,得到融合信息,该融合信息有效地结合了来自多个相关任务(中间预测结果)的更多的信息,以提高多个目标预测结果的准确度。本申请对得到融合信息的方式、融合信息的数量等不作限制。
例如,对深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果进行融合处理,得到融合信息。
图6是根据示例性实施例示出的一种对象预测方法的流程图。图7a、图7b和图7c分别是根据示例性实施例示出的一种对象预测方法的应用场景的示意图。在一种可能的实现方式中,如图6所示,步骤S103可以包括:
在步骤S1031中,对所述多个中间预测结果进行再处理,得到多个中间预测结果的再处理结果。
举例来说,可以对多个中间预测结果进行再处理,例如,可以对多个中间预测结果进行卷积操作,得到多个中间预测结果的再处理结果,以得到更加丰富的信息,并缩小多个中间预测结果的差距。其中,得到的多个中间预测结果的再处理结果可以与中间预测结果大小相同。
例如,如图7a、7b以及7c所示,Y_1、Y_2、Y_3以及Y_4分别表示4个中间预测结果(例如,深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果)。对所述多个中间预测结果进行再处理,得到多个中间预测结果的再处理结果,例如,得到F^i_1、F^i_2、F^i_3以及F^i_4这4个相应的再处理结果。
在步骤S1032中,对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息。
举例来说,可以对多个中间预测结果的再处理结果进行融合处理,得到融合信息。
在一种可能的实现方式中,步骤S1032可以包括:对所述多个中间预测结果的再处理结果进行叠加处理,得到融合信息。
如图7a所示,将F^i_1、F^i_2、F^i_3以及F^i_4这4个相应的再处理结果进行叠加处理(例如,线性叠加等),得到融合信息F^o。该融合信息F^o可用于确定针对待预测对象的多个目标预测结果,例如,如图7a所示,可将该融合信息F^o分别输入深度估计任务分支以及场景分割任务分支,并确定针对所述待预测对象的深度估计结果以及场景分割结果。
这样,可以得到用于确定多个目标预测结果的融合信息。本申请对中间预测结果的再处理结果进行叠加处理的方式不作限制。
在一种可能的实现方式中,多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高。
举例来说,如前文所述,多个中间预测结果分别为深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果。现以目标预测结果为深度估计结果为例进行说明,可将多个中间预测结果分为第一中间预测结果以及第二中间预测结果。其中,第一中间预测结果为与目标预测结果(深度估计结果)的相关度最高的深度估计中间预测结果。其他三个中间预测结果可为第二中间预测结果。
在一种可能的实现方式中,步骤S1032还可以包括:
对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,得到针对所述目标预测结果的融合信息。
下面给出一个示例性的融合信息的确定公式(1):

F^o_k ← F^i_k + Σ_{t=1, t≠k}^{T} W_{t,k} ⊗ F^i_t    (1)

在公式(1)中,F^o_k表示针对第k个目标预测结果的融合信息,F^i_k表示确定针对第k个目标预测结果的融合信息过程中,第k个中间预测结果(为第一中间预测结果)的再处理结果,⊗表示卷积操作,F^i_t表示第t个中间预测结果(为第二中间预测结果)的再处理结果,W_{t,k}表示与第t个中间预测结果以及第k个中间预测结果相关的卷积核的参数,其中,k、t、T为正整数,t为变量,t的取值在1到T之间,t≠k。←表示由其右边部分经过叠加处理,可得到左边部分的融合信息。
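公式(1)的叠加式融合可用如下纯Python片段示意(此处以标量权重近似卷积核W_{t,k},特征取为一维列表,均为假设的简化示例,并非本申请的具体实现):

```python
def fuse_naive(feats, k, weights):
    """按公式(1)的思路融合:第k个再处理结果加上其余再处理结果的加权和。

    feats[t]: 第t个中间预测结果的再处理结果(此处简化为一维列表);
    weights[t]: 对应W_{t,k}的标量近似(真实实现中为卷积核)。
    """
    fused = list(feats[k])  # 第一中间预测结果的再处理结果
    for t, f in enumerate(feats):
        if t == k:
            continue
        for i, v in enumerate(f):
            fused[i] += weights[t] * v  # W_{t,k} ⊗ F^i_t 的标量近似
    return fused

feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
fused = fuse_naive(feats, k=0, weights=[0.0, 0.5, 0.5])
```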
举例来说,如前文所述,包括两个目标预测任务,确定针对待预测对象的两个目标预测结果。现以确定第1个目标预测结果(深度估计结果,k=1)的融合信息为例,进行说明。
例如,可以确定F^i_1(深度估计中间预测结果的再处理结果)、F^i_2、F^i_3以及F^i_4中,F^i_1为与第1个目标预测结果相关度最高的第一中间预测结果的再处理结果,F^i_2、F^i_3以及F^i_4分别为第二中间预测结果的再处理结果。
在一种可能的实现方式中,对所述第二中间预测结果的再处理结果进行处理,得到参考结果。
举例来说,可以根据公式(1),对所述第二中间预测结果的再处理结果进行处理,例如,分别对F^i_2、F^i_3以及F^i_4进行卷积操作,分别得到3个参考结果。
在一种可能的实现方式中,将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,可以得到针对所述目标预测结果的融合信息。
举例来说,可以根据公式(1),将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理(例如,针对每一个像素,将第一中间预测结果的再处理结果该像素的信息与另外3个参考结果该像素的信息进行叠加处理),可以得到针对所述目标预测结果的融合信息。例如,如图7b所示,得到融合信息F^o_1。该融合信息F^o_1为针对第1个目标预测结果的融合信息,可用于确定第1个目标预测结果。应理解,可以根据公式(1)分别得到针对多个目标预测结果的融合信息。
通过这种方式,可以确定针对多个目标预测结果的融合信息,该融合信息可包括与目标预测结果相关度最高的第一中间预测结果的再处理结果中的更多的信息,实现较平滑地进行多模数据融合。本申请对将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,得到针对所述目标预测结果的融合信息的具体方式不作限制。
在一种可能的实现方式中,步骤S1032还可以包括:根据所述第一中间预测结果的再处理结果,确定注意力系数,所述注意力系数是根据注意力机制确定的参考系数;
对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
将所述参考结果以及所述注意力系数进行点乘处理,得到注意力内容;
将所述第一中间预测结果的再处理结果与所述注意力内容进行叠加处理,得到针对所述目标预测结果的融合信息。
下面给出一个示例性的注意力系数的确定公式(2):

A^k = σ(W^a_k ⊗ F^i_k)    (2)

在公式(2)中,A^k表示确定针对第k个目标预测结果的融合信息过程中,根据第一中间预测结果的再处理结果,确定的注意力系数,⊗表示卷积操作,W^a_k表示卷积参数,F^i_k表示确定针对第k个目标预测结果的融合信息过程中,第k个中间预测结果(为第一中间预测结果)的再处理结果,σ表示sigmoid函数。
举例来说,可以根据所述第一中间预测结果的再处理结果,确定注意力系数,所述注意力系数是根据注意力机制确定的参考系数。现以确定第1个目标预测结果(深度估计结果,k=1)的融合信息为例,进行说明。
例如,如图7c所示,可以根据第一中间预测结果的再处理结果F^i_1,确定注意力系数A^1。该注意力系数是根据注意力机制确定的参考系数,可用于对多个第二中间预测结果进行过滤,指导信息传递融合(例如,用于更加关注或忽略来自第二中间预测结果的信息)。
下面给出一个示例性的融合信息的确定公式(3):

F^o_k ← F^i_k + Σ_{t=1, t≠k}^{T} A^k ⊙ (W_t ⊗ F^i_t)    (3)

在公式(3)中,F^o_k表示针对第k个目标预测结果的融合信息,F^i_k表示确定针对第k个目标预测结果的融合信息过程中,第k个中间预测结果(为第一中间预测结果)的再处理结果,⊗表示卷积操作,F^i_t表示第t个中间预测结果(为第二中间预测结果)的再处理结果,W_t表示与第t个中间预测结果相关的卷积核的参数,A^k表示确定针对第k个目标预测结果的融合信息过程中,根据第一中间预测结果的再处理结果,确定的注意力系数,⊙表示点乘处理,其中,k、t、T为正整数,t为变量,t的取值在1到T之间,t≠k。←表示由其右边部分经过叠加处理,可得到左边部分的融合信息。
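公式(2)与公式(3)所示的注意力引导融合可用如下纯Python片段示意(同样以标量乘法近似卷积操作,数值均为假设的简化示例,并非本申请的具体实现):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def fuse_attention(feats, k, w_a, weights):
    """feats[t]: 第t个中间预测结果的再处理结果(一维列表);
    w_a: 公式(2)中卷积参数的标量近似; weights[t]: W_t的标量近似。"""
    # 公式(2): 逐元素注意力系数 A^k = sigmoid(w_a * F^i_k)
    attn = [sigmoid(w_a * v) for v in feats[k]]
    # 公式(3): F^i_k + Σ_{t≠k} A^k ⊙ (W_t * F^i_t)
    fused = list(feats[k])
    for t, f in enumerate(feats):
        if t == k:
            continue
        for i, v in enumerate(f):
            fused[i] += attn[i] * weights[t] * v
    return fused

feats = [[0.0, 0.0], [2.0, 4.0]]
# F^i_1全为0时注意力系数为sigmoid(0)=0.5,来自第二中间预测结果的信息被减半
fused = fuse_attention(feats, k=0, w_a=1.0, weights=[0.0, 1.0])
```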
在一种可能的实现方式中,对所述第二中间预测结果的再处理结果进行处理,可以得到参考结果。
举例来说,可以根据公式(3),对所述第二中间预测结果的再处理结果进行处理,例如,分别对F^i_2、F^i_3以及F^i_4进行卷积操作,分别得到3个参考结果。
在一种可能的实现方式中,将所述参考结果以及所述注意力系数进行点乘处理,可以得到注意力内容。
举例来说,可以根据公式(2)确定得到注意力系数,例如,可以得到每个像素对应的注意力系数。可以将根据F^i_2、F^i_3以及F^i_4得到的参考结果分别与所述注意力系数进行点乘处理,得到各自对应的注意力内容。
在一种可能的实现方式中,将所述第一中间预测结果的再处理结果与所述注意力内容进行叠加处理,得到针对所述目标预测结果的融合信息。
举例来说,可以根据公式(3),将所述第一中间预测结果的再处理结果与所述多个注意力内容进行叠加处理(例如,针对每一个像素,将第一中间预测结果的再处理结果该像素的信息与另外3个注意力内容该像素的信息进行叠加处理),可以得到针对所述目标预测结果的融合信息。例如,如图7c所示,得到融合信息F^o_1。该融合信息F^o_1为针对第1个目标预测结果的融合信息,可用于确定第1个目标预测结果。应理解,可以根据公式(3)分别得到针对多个目标预测结果的融合信息。
通过这种方式,可以确定针对多个目标预测结果的融合信息,该融合信息可包括与目标预测结果相关度最高的第一中间预测结果的再处理结果中的更多的信息,通过第一中间预测结果的再处理结果确定注意力系数,可用于对多个第二中间预测结果进行过滤,指导信息传递融合(例如,用于更加关注或忽略来自第二中间预测结果的信息),从而提高针对多个目标预测结果的融合信息的针对性。本申请对注意力系数的确定方式、参考结果的确定方式、注意力内容的确定方式以及融合信息的确定方式不作限制。
如图1所示,在步骤S104中,根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
举例来说,可以根据融合信息,确定针对所述待预测对象的多个目标预测结果。例如,确定的融合信息为一个时,可将该融合信息分别输入用于目标预测任务的多个分支中,确定多个目标预测结果。当确定的融合信息为针对不同目标预测任务的不同融合信息时,可将相应的融合信息输入到对应的目标预测任务分支中,确定多个目标预测结果。应理解,多个目标预测任务可通过一个子网络实现(例如,神经网络的第二预测网络)。该子网络可以包括不同的分支,每个分支可根据任务的复杂度采用不同深度的各类网络、具有不同的网络参数以及不同的设计。本申请对根据所述融合信息,确定针对所述待预测对象的多个目标预测结果的方式、多个目标预测任务的子网络的结构和设计等不作限制。
例如,根据所述融合信息,确定针对所述待预测对象的深度估计结果以及场景分割结果。
图8是根据示例性实施例示出的一种对象预测方法的流程图。在一种可能的实现方式中,如图8所示,步骤S104可以包括:
在步骤S1041中,确定针对多个目标预测结果的融合信息;
在步骤S1042中,对所述融合信息进行处理,得到目标特征;
在步骤S1043中,根据所述目标特征,确定多个目标预测结果。
举例来说,可以确定针对多个目标预测结果的融合信息。例如,如图7b所示,确定针对深度估计结果的融合信息为F^o_1,确定针对场景分割结果的融合信息为F^o_2。
可以对融合信息进行处理,得到目标特征,并根据目标特征,确定多个目标预测结果。
现以确定深度估计结果为例进行说明。
举例来说,可以对针对深度估计结果的融合信息F^o_1进行处理,得到目标特征。例如,可以对融合信息F^o_1进行两次连续的反卷积操作。如前文所述,多个中间预测结果的分辨率为待预测对象原始分辨率的四分之一,可以通过两次连续的反卷积操作(每次放大为2倍),得到与待预测对象原始分辨率相同的目标特征。可以根据目标特征,确定目标预测结果。例如,可以对该目标特征进行卷积操作,确定得到目标预测结果。
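上述"四分之一分辨率经两次2倍反卷积恢复原始分辨率"的关系可用如下简单计算示意(分辨率数值为假设的示例):

```python
def restored_resolution(h, w, upsamples=2, factor=2):
    """融合信息经若干次factor倍上采样后的输出分辨率。"""
    for _ in range(upsamples):
        h, w = h * factor, w * factor
    return h, w

# 假设原始分辨率为480*640,融合信息分辨率为其四分之一(120*160)
h, w = restored_resolution(480 // 4, 640 // 4)
```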
通过这种方式,可以根据融合信息,确定针对待预测对象的多个目标预测结果。本申请对根据融合信息,确定针对待预测对象的多个目标预测结果的方式不作限制。
应理解,上述方法可以适用于使用训练好的神经网络确定多个目标预测结果的场景,也可以适用于训练神经网络的过程,本申请实施例对此不做限定。在一种可能的实现方式中,在使用训练好的神经网络确定多个目标预测结果之前,可包括根据待预测对象训练所述神经网络的步骤。
图9是根据示例性实施例示出的一种对象预测方法中训练神经网络的流程图。在一种可能的实现方式中,如图9所示,根据待预测对象训练所述神经网络的步骤可以包括:
在步骤S105中,将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
在步骤S106中,将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
在步骤S107中,将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
在步骤S108中,将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
在步骤S109中,根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
在步骤S110中,根据所述模型损失,调整所述神经网络的网络参数值。
举例来说,可以将待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息。所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果。例如,确定4个中间预测结果。
将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息,将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果。根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失。
例如,确定针对所述待预测对象的4个中间预测结果,并最终确定针对所述待预测对象的2个目标预测结果,在训练过程中,确定的神经网络的模型损失可以为6个损失函数的损失之和(包括4个中间预测结果的各自损失、2个目标预测结果的各自损失)。其中,各损失函数可包括不同的类型,例如,轮廓中间预测任务中,损失函数可为交叉熵损失函数,语义分割中间预测任务(场景分割预测任务)中,损失函数可为Softmax损失函数。深度估计中间预测任务(深度估计预测任务)、曲面法线中间预测任务中,损失函数可为欧式距离损失函数。在确定神经网络的模型损失时,各损失函数的损失权重可不完全相同。例如,深度估计中间预测任务、深度估计预测任务、场景分割预测任务以及语义分割中间预测任务的损失函数的损失权重可设为1,曲面法线中间预测任务以及轮廓中间预测任务的损失函数的损失权重可设置为0.8。本申请对损失函数的类型、各损失函数的损失权重等不作限制。
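上述按损失权重对6个损失函数求和以确定模型损失的方式,可示意如下(各损失数值为假设的示例,权重取上文给出的1与0.8):

```python
def model_loss(losses, weights):
    """模型损失 = 各任务损失按损失权重的加权和。"""
    return sum(w * l for w, l in zip(weights, losses))

# 顺序(假设):深度估计中间、语义分割中间、曲面法线中间、轮廓中间、深度估计、场景分割
losses = [1.0, 0.5, 2.0, 1.5, 1.0, 0.5]   # 假设的6个损失值
weights = [1.0, 1.0, 0.8, 0.8, 1.0, 1.0]  # 曲面法线与轮廓中间任务权重为0.8
total = model_loss(losses, weights)
```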
在一种可能的实现方式中,可以根据所述模型损失,调整所述神经网络的网络参数值。例如,采用反向梯度算法等调整网络参数值。应当理解,可采用合适的方式调整神经网络的网络参数值,本申请对此不作限制。
经过多次调整后,如果满足预先设定的训练条件,例如调整次数达到预先设定的训练次数阈值,或者模型损失小于或等于预先设定的损失阈值,则可以将当前的神经网络确定为最终的神经网络,从而完成了的神经网络的训练过程。应当理解,本领域技术人员可以根据实际情况设定训练条件以及损失阈值,本申请对此不作限制。
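上述训练终止条件(调整次数达到预先设定的训练次数阈值,或模型损失小于或等于损失阈值)可示意为如下训练循环骨架(其中update_step为假设的占位函数,以指数衰减模拟损失下降,并非真实的参数更新):

```python
def train(initial_loss, max_steps=100, loss_threshold=0.01, decay=0.5):
    """示意性训练循环:满足任一终止条件即停止,返回当前损失与调整次数。"""
    def update_step(loss):
        # 假设的参数更新:真实实现中为反向梯度算法等
        return loss * decay

    loss, steps = initial_loss, 0
    while steps < max_steps and loss > loss_threshold:
        loss = update_step(loss)
        steps += 1
    return loss, steps

loss, steps = train(1.0)  # 损失从1.0开始按0.5的比例衰减
```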
通过这种方式,可以训练得到能够准确地得到多个目标预测结果的神经网络。在训练过程中,通过输入待预测对象(例如,一个RGB图像)可以得到特征信息,并根据特征信息得到多个中间预测结果。多个中间预测结果不仅可以作为学习更深层次特征的监督信息,而且提供更加丰富的多模态数据来改善最终目标预测任务,辅助确定最终的多个目标预测结果,可同时提升多个目标预测任务的泛化能力和预测性能,从而提高多个目标预测结果的准确度。
另外,根据本申请实施例,输入待预测对象进行神经网络训练的过程中,并非直接使用不同的损失函数同时训练多个目标预测任务,而是通过确定多个中间预测结果,并通过多个中间预测结果辅助确定多个目标预测结果,从而使得神经网络训练的复杂度降低,保证了较好的训练效率和效果。
图10是根据示例性实施例示出的一种对象预测方法中训练神经网络的流程图。在一种可能的实现方式中,如图10所示,根据待预测对象训练所述神经网络的步骤还包括:
在步骤S111中,在将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
在步骤S112中,根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
举例来说,如前文所述,目标预测结果包括深度估计结果和场景分割结果。在训练神经网络过程中,可以确定这两个目标预测结果的标注信息。例如,通过人工标记等方式确定。可以根据深度估计结果和场景分割结果的标注信息,确定所述多个中间预测结果的标注信息。例如,中间预测结果包括深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果。其中,可以将深度估计结果和场景分割结果的标注信息分别确定为深度估计中间预测结果以及语义分割中间预测结果的标注信息。轮廓中间预测结果的标注信息可通过场景分割结果的标注信息推算得到,曲面法线中间预测结果的标注信息可通过深度估计结果的标注信息推算得到。
通过这种方式,神经网络训练过程中,根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息,使得用到较多的标注信息作为监督信息训练神经网络,而无需完成过多的标注任务,提高神经网络训练的效率。本申请对根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息的方式不作限制。
图11是根据示例性实施例示出的一种对象预测方法的流程图。该方法可应用于电子设备中。该电子设备可以被提供为一终端、一服务器或其它形态的设备。如图11所示,根据本申请实施例的对象预测方法包括:
在步骤S201中,将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
在步骤S202中,将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
在步骤S203中,将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
在步骤S204中,将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
在步骤S205中,根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
在步骤S206中,根据所述模型损失,调整所述神经网络的网络参数值。
根据本申请实施例,能够根据待预测对象训练得到神经网络,该神经网络可用于确定针对待预测对象的多个目标预测结果。
举例来说,如前文所述,可训练得到神经网络,在此不再赘述。
图12是根据示例性实施例示出的一种对象预测方法的流程图。在一种可能的实现方式中,如图12所示,所述方法还包括:
在步骤S207中,在将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
在步骤S208中,根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
举例来说,如前文所述,在此不再赘述。
以上已经描述了本申请的示例性实施例。应当理解,上述对示例性实施例的说明并不构成对本申请的限制,示例性实施例中的各个技术特征可以根据实际需要和逻辑进行任意组合、修改及变更,从而形成不同的技术方案,这些技术方案均属于本申请实施例的一部分。
图13是根据示例性实施例示出的一种对象预测装置的框图。如图13所示,所述对象预测装置包括:
特征提取模块301,被配置为对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;
中间预测结果确定模块302,被配置为根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;
融合模块303,被配置为对所述多个中间预测结果进行融合处理,得到融合信息;
目标预测结果确定模块304,被配置为根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
图14是根据示例性实施例示出的一种对象预测装置的框图。如图14所示,在一种可能的实现方式中,所述特征提取模块301包括:
特征获得子模块3011,被配置为对待预测对象进行特征提取处理,得到多个层级的特征;
特征信息获得子模块3012,被配置为对所述多个层级的特征进行聚集处理,得到针对所述待预测对象的特征信息。
如图14所示,在一种可能的实现方式中,所述中间预测结果确定模块302包括:
重构特征获得子模块3021,被配置为对所述特征信息进行重构处理,得到多个重构特征;
中间预测结果获得子模块3022,被配置为根据多个重构特征,确定针对所述待预测对象的多个中间预测结果。
如图14所示,在一种可能的实现方式中,所述融合模块303包括:
再处理结果获得子模块3031,被配置为对所述多个中间预测结果进行再处理,得到多个中间预测结果的再处理结果;
融合信息获得子模块3032,被配置为对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息。
在一种可能的实现方式中,所述融合信息获得子模块3032被配置为:
对所述多个中间预测结果的再处理结果进行叠加处理,得到融合信息。
在一种可能的实现方式中,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
其中,所述融合信息获得子模块3032被配置为:
对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,得到针对所述目标预测结果的融合信息。
在一种可能的实现方式中,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
其中,所述融合信息获得子模块3032被配置为:
根据所述第一中间预测结果的再处理结果,确定注意力系数,所述注意力系数是根据注意力机制确定的参考系数;
对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
将所述参考结果以及所述注意力系数进行点乘处理,得到注意力内容;
将所述第一中间预测结果的再处理结果与所述注意力内容进行叠加处理,得到针对所述目标预测结果的融合信息。
如图14所示,在一种可能的实现方式中,所述目标预测结果确定模块304包括:
融合信息确定子模块3041,被配置为确定针对多个目标预测结果的融合信息;
目标特征获得子模块3042,被配置为对所述融合信息进行处理,得到目标特征;
目标预测结果确定子模块3043,被配置为根据所述目标特征,确定多个目标预测结果。
在一种可能的实现方式中,所述神经网络根据待预测对象训练得到。
如图14所示,在一种可能的实现方式中,所述装置还包括:
第一获得模块305,被配置为将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
第一确定模块306,被配置为将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
第二获得模块307,被配置为将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
第二确定模块308,被配置为将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
第三确定模块309,被配置为根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
网络参数值调整模块310,被配置为根据所述模型损失,调整所述神经网络的网络参数值。
如图14所示,在一种可能的实现方式中,所述装置还包括:
标注信息确定模块311,被配置为在将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
中间标注信息确定模块312,被配置为根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
如图14所示,在一种可能的实现方式中,所述中间预测结果确定模块302包括:
第一确定子模块3023,被配置为根据所述特征信息,确定针对所述待预测对象的深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果,
其中,所述融合模块303包括:
获得子模块3033,被配置为对所述深度估计中间预测结果、所述曲面法线中间预测结果、所述轮廓中间预测结果以及所述语义分割中间预测结果进行融合处理,得到融合信息,
其中,所述目标预测结果确定模块304包括:
第二确定子模块3044,被配置为根据所述融合信息,确定针对所述待预测对象的深度估计结果以及场景分割结果。
图15是根据示例性实施例示出的一种对象预测装置的框图。如图15所示,所述对象预测装置包括:
第一信息获得模块401,被配置为将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
第一结果确定模块402,被配置为将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
第二信息获得模块403,被配置为将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
第二结果确定模块404,被配置为将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
模型损失确定模块405,被配置为根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
参数调整模块406,被配置为根据所述模型损失,调整所述神经网络的网络参数值。
图16是根据示例性实施例示出的一种对象预测装置的框图。如图16所示,在一种可能的实现方式中,所述装置还包括:
第一信息确定模块407,被配置为在将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
第二信息确定模块408,被配置为根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
图17是根据示例性实施例示出的一种电子设备的框图。例如,电子设备可以被提供为一终端、一服务器或其它形态的设备。参照图17,设备1900包括处理组件1922,其进一步包括一个或多个处理器,以及由存储器1932所代表的存储器资源,用于存储可由处理组件1922执行的指令,例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上的每一个对应于一组指令的模块。此外,处理组件1922被配置为执行指令,以执行上述方法。
设备1900还可以包括一个电源组件1926被配置为执行设备1900的电源管理,一个有线或无线网络接口1950被配置为将设备1900连接到网络,和一个输入输出(I/O)接口1958。设备1900可以操作基于存储在存储器1932的操作系统,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM或类似。
在示例性实施例中,还提供了一种计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由设备1900的处理组件1922执行以完成上述方法。
在示例性实施例中,还提供了一种计算机程序,包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现上述方法。
本申请可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本申请的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本申请操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本申请的各个方面。
这里参照根据本申请实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本申请的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本申请的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本申请的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。

Claims (31)

  1. 一种对象预测方法,其特征在于,应用于神经网络,所述方法包括:
    对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;
    根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;
    对所述多个中间预测结果进行融合处理,得到融合信息;
    根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
  2. 根据权利要求1所述的方法,其特征在于,对待预测对象进行特征提取处理,得到所述待预测对象的特征信息,包括:
    对待预测对象进行特征提取处理,得到多个层级的特征;
    对所述多个层级的特征进行聚集处理,得到针对所述待预测对象的特征信息。
  3. 根据权利要求1所述的方法,其特征在于,根据所述特征信息,确定针对所述待预测对象的多个中间预测结果,包括:
    对所述特征信息进行重构处理,得到多个重构特征;
    根据多个重构特征,确定针对所述待预测对象的多个中间预测结果。
  4. 根据权利要求1所述的方法,其特征在于,对所述多个中间预测结果进行融合处理,得到融合信息,包括:
    对所述多个中间预测结果进行再处理,得到多个中间预测结果的再处理结果;
    对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息。
  5. 根据权利要求4所述的方法,其特征在于,对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息,包括:
    对所述多个中间预测结果的再处理结果进行叠加处理,得到融合信息。
  6. 根据权利要求4所述的方法,其特征在于,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
    其中,对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息,包括:
    对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
    将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,得到针对所述目标预测结果的融合信息。
  7. 根据权利要求4所述的方法,其特征在于,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
    其中,对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息,包括:
    根据所述第一中间预测结果的再处理结果,确定注意力系数,所述注意力系数是根据注意力机制确定的参考系数;
    对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
    将所述参考结果以及所述注意力系数进行点乘处理,得到注意力内容;
    将所述第一中间预测结果的再处理结果与所述注意力内容进行叠加处理,得到针对所述目标预测结果的融合信息。
  8. 根据权利要求1所述的方法,其特征在于,根据所述融合信息,确定针对所述待预测对象的多个目标预测结果,包括:
    确定针对多个目标预测结果的融合信息;
    对所述融合信息进行处理,得到目标特征;
    根据所述目标特征,确定多个目标预测结果。
  9. 根据权利要求1所述的方法,其特征在于,所述神经网络根据待预测对象训练得到。
  10. 根据权利要求9所述的方法,其特征在于,根据待预测对象训练所述神经网络的步骤包括:
    将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
    将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
    将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
    将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
    根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
    根据所述模型损失,调整所述神经网络的网络参数值。
  11. 根据权利要求10所述的方法,其特征在于,根据待预测对象训练所述神经网络的步骤还包括:
    在将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
    根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
  12. 根据权利要求1所述的方法,其特征在于,根据所述特征信息,确定针对所述待预测对象的多个中间预测结果,包括:
    根据所述特征信息,确定针对所述待预测对象的深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果,
    其中,对所述多个中间预测结果进行融合处理,得到融合信息,包括:
    对所述深度估计中间预测结果、所述曲面法线中间预测结果、所述轮廓中间预测结果以及所述语义分割中间预测结果进行融合处理,得到融合信息,
    其中,根据所述融合信息,确定针对所述待预测对象的多个目标预测结果,包括:
    根据所述融合信息,确定针对所述待预测对象的深度估计结果以及场景分割结果。
  13. 一种对象预测方法,其特征在于,应用于神经网络,所述方法包括:
    将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
    将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
    将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
    将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
    根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
    根据所述模型损失,调整所述神经网络的网络参数值。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    在将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
    根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
  15. 一种对象预测装置,其特征在于,应用于神经网络,所述装置包括:
    特征提取模块,被配置为对待预测对象进行特征提取处理,得到所述待预测对象的特征信息;
    中间预测结果确定模块,被配置为根据所述特征信息,确定针对所述待预测对象的多个中间预测结果;
    融合模块,被配置为对所述多个中间预测结果进行融合处理,得到融合信息;
    目标预测结果确定模块,被配置为根据所述融合信息,确定针对所述待预测对象的多个目标预测结果。
  16. 根据权利要求15所述的装置,其特征在于,所述特征提取模块包括:
    特征获得子模块,被配置为对待预测对象进行特征提取处理,得到多个层级的特征;
    特征信息获得子模块,被配置为对所述多个层级的特征进行聚集处理,得到针对所述待预测对象的特征信息。
  17. 根据权利要求15所述的装置,其特征在于,所述中间预测结果确定模块包括:
    重构特征获得子模块,被配置为对所述特征信息进行重构处理,得到多个重构特征;
    中间预测结果获得子模块,被配置为根据多个重构特征,确定针对所述待预测对象的多个中间预测结果。
  18. 根据权利要求15所述的装置,其特征在于,所述融合模块包括:
    再处理结果获得子模块,被配置为对所述多个中间预测结果进行再处理,得到多个中间预测结果的再处理结果;
    融合信息获得子模块,被配置为对所述多个中间预测结果的再处理结果进行融合处理,得到融合信息。
  19. 根据权利要求18所述的装置,其特征在于,所述融合信息获得子模块被配置为:
    对所述多个中间预测结果的再处理结果进行叠加处理,得到融合信息。
  20. 根据权利要求18所述的装置,其特征在于,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
    其中,所述融合信息获得子模块被配置为:
    对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
    将所述第一中间预测结果的再处理结果与所述参考结果进行叠加处理,得到针对所述目标预测结果的融合信息。
  21. 根据权利要求18所述的装置,其特征在于,所述多个中间预测结果中包括第一中间预测结果以及第二中间预测结果,其中,所述第一中间预测结果与目标预测结果的相关度最高,
    其中,所述融合信息获得子模块被配置为:
    根据所述第一中间预测结果的再处理结果,确定注意力系数,所述注意力系数是根据注意力机制确定的参考系数;
    对所述第二中间预测结果的再处理结果进行处理,得到参考结果;
    将所述参考结果以及所述注意力系数进行点乘处理,得到注意力内容;
    将所述第一中间预测结果的再处理结果与所述注意力内容进行叠加处理,得到针对所述目标预测结果的融合信息。
  22. 根据权利要求15所述的装置,其特征在于,所述目标预测结果确定模块包括:
    融合信息确定子模块,被配置为确定针对多个目标预测结果的融合信息;
    目标特征获得子模块,被配置为对所述融合信息进行处理,得到目标特征;
    目标预测结果确定子模块,被配置为根据所述目标特征,确定多个目标预测结果。
  23. 根据权利要求15所述的装置,其特征在于,所述神经网络根据待预测对象训练得到。
  24. 根据权利要求23所述的装置,其特征在于,所述装置还包括:
    第一获得模块,被配置为将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
    第一确定模块,被配置为将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
    第二获得模块,被配置为将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
    第二确定模块,被配置为将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
    第三确定模块,被配置为根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
    网络参数值调整模块,被配置为根据所述模型损失,调整所述神经网络的网络参数值。
  25. 根据权利要求24所述的装置,其特征在于,所述装置还包括:
    标注信息确定模块,被配置为在将所述待预测对象输入所述神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息之前,确定所述多个目标预测结果的标注信息;
    中间标注信息确定模块,被配置为根据所述多个目标预测结果的标注信息,确定所述多个中间预测结果的标注信息。
  26. 根据权利要求15所述的装置,其特征在于,所述中间预测结果确定模块包括:
    第一确定子模块,被配置为根据所述特征信息,确定针对所述待预测对象的深度估计中间预测结果、曲面法线中间预测结果、轮廓中间预测结果以及语义分割中间预测结果,
    其中,所述融合模块包括:
    获得子模块,被配置为对所述深度估计中间预测结果、所述曲面法线中间预测结果、所述轮廓中间预测结果以及所述语义分割中间预测结果进行融合处理,得到融合信息,
    其中,所述目标预测结果确定模块包括:
    第二确定子模块,被配置为根据所述融合信息,确定针对所述待预测对象的深度估计结果以及场景分割结果。
  27. 一种对象预测装置,其特征在于,应用于神经网络,所述装置包括:
    第一信息获得模块,被配置为将待预测对象输入神经网络中的特征提取网络进行特征提取处理,得到针对所述待预测对象的特征信息;
    第一结果确定模块,被配置为将所述特征信息输入所述神经网络中的第一预测网络中进行处理,确定针对所述待预测对象的多个中间预测结果;
    第二信息获得模块,被配置为将所述中间预测结果输入所述神经网络的融合网络中进行融合处理,得到所述融合信息;
    第二结果确定模块,被配置为将所述融合信息分别输入所述神经网络中的多个第二预测网络中进行处理,确定针对所述待预测对象的多个目标预测结果;
    模型损失确定模块,被配置为根据所述多个中间预测结果、多个中间预测结果的标注信息、多个目标预测结果以及多个目标预测结果的标注信息,确定所述神经网络的模型损失;
    参数调整模块,被配置为根据所述模型损失,调整所述神经网络的网络参数值。
  28. The apparatus according to claim 27, wherein the apparatus further comprises:
    a first information determining module, configured to determine the annotation information of the plurality of target prediction results before the object to be predicted is input into the feature extraction network in the neural network for feature extraction processing to obtain the feature information for the object to be predicted;
    a second information determining module, configured to determine the annotation information of the plurality of intermediate prediction results according to the annotation information of the plurality of target prediction results.
  29. An electronic device, comprising:
    a processor; and
    a memory configured to store processor-executable instructions,
    wherein the processor is configured to execute the method according to any one of claims 1 to 14.
  30. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 14.
  31. A computer program comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the method according to any one of claims 1 to 14.
PCT/CN2019/077152 2018-05-04 2019-03-06 Object prediction method and apparatus, electronic device and storage medium WO2019210737A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
SG11202007158UA SG11202007158UA (en) 2018-05-04 2019-03-06 Object prediction method and apparatus, electronic device and storage medium
KR1020207022191A KR102406765B1 (ko) 2018-05-04 2019-03-06 Object prediction method and apparatus, electronic device and storage medium
JP2020540732A JP7085632B2 (ja) 2018-05-04 2019-03-06 Object estimation method and apparatus, electronic device and storage medium
US16/985,747 US11593596B2 (en) 2018-05-04 2020-08-05 Object prediction method and apparatus, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810421005.X 2018-05-04
CN201810421005.XA CN110443266B (zh) 2018-05-04 2018-05-04 Object prediction method and apparatus, electronic device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/985,747 Continuation US11593596B2 (en) 2018-05-04 2020-08-05 Object prediction method and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2019210737A1 true WO2019210737A1 (zh) 2019-11-07

Family

ID=68386249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077152 WO2019210737A1 (zh) 2018-05-04 2019-03-06 Object prediction method and apparatus, electronic device and storage medium

Country Status (6)

Country Link
US (1) US11593596B2 (zh)
JP (1) JP7085632B2 (zh)
KR (1) KR102406765B1 (zh)
CN (1) CN110443266B (zh)
SG (1) SG11202007158UA (zh)
WO (1) WO2019210737A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930386A * 2019-11-20 2020-03-27 Chongqing Jinshan Medical Technology Research Institute Co., Ltd. Image processing method, apparatus, device and storage medium
CN111767810A * 2020-06-18 2020-10-13 Harbin Engineering University D-LinkNet-based road extraction method for remote sensing images
CN114511452A * 2021-12-06 2022-05-17 Central South University Remote sensing image retrieval method fusing multi-scale dilated convolution and triplet attention

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4094199A1 (en) * 2020-07-14 2022-11-30 Google LLC Neural network models using peer-attention
US20220201317A1 (en) * 2020-12-22 2022-06-23 Ssimwave Inc. Video asset quality assessment and encoding optimization to achieve target quality requirement
KR20220125719A * 2021-04-28 2022-09-14 Beijing Baidu Netcom Science Technology Co., Ltd. Method and device for training a target object detection model, method and device for detecting a target object, electronic device, storage medium and computer program
CN113313511A * 2021-04-30 2021-08-27 Beijing QIYI Century Science & Technology Co., Ltd. Video traffic prediction method and apparatus, electronic device and medium
CN113947246B * 2021-10-21 2023-06-13 Tencent Technology (Shenzhen) Co., Ltd. Artificial-intelligence-based churn processing method and apparatus, and electronic device
CN114639070B * 2022-03-15 2024-06-04 Fuzhou University Crowd motion flow analysis method incorporating an attention mechanism
US20240037930A1 2022-07-29 2024-02-01 Rakuten Group, Inc. Online knowledge distillation for multi-task learning system, method, device, and program
CN117457101B * 2023-12-22 2024-03-26 Tobacco Research Institute of Chinese Academy of Agricultural Sciences (Qingzhou Tobacco Research Institute of China National Tobacco Corporation) Method, medium and system for predicting moisture content of flue-cured tobacco leaves
CN118133191B * 2024-05-08 2024-08-02 Hisense Group Holding Co., Ltd. Object detection method and apparatus for multi-modal data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701508A * 2016-01-12 2016-06-22 Xi'an Jiaotong University Global-local optimization model based on multi-level convolutional neural networks and saliency detection algorithm
CN106203318A * 2016-06-29 2016-12-07 Zhejiang Gongshang University Camera-network pedestrian recognition method based on multi-level deep feature fusion
CN106845549A * 2017-01-22 2017-06-13 Zhuhai Xiyue Information Technology Co., Ltd. Method and apparatus for scene and object recognition based on multi-task learning
US20170169315A1 * 2015-12-15 2017-06-15 Sighthound, Inc. Deeply learned convolutional neural networks (cnns) for object localization and classification
CN107958216A * 2017-11-27 2018-04-24 Shenyang Aerospace University Semi-supervised multi-modal deep learning classification method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7596243B2 (en) * 2005-09-16 2009-09-29 Sony Corporation Extracting a moving object boundary
CN101169827B * 2007-12-03 2010-06-02 Vimicro Corp. Method and apparatus for tracking feature points in an image
CN105981050B * 2013-11-30 2019-05-07 Beijing SenseTime Technology Development Co., Ltd. Method and system for extracting face features from data of face images
CN104217216B * 2014-09-01 2017-10-17 Huawei Technologies Co., Ltd. Method and device for generating a detection model, and method and device for detecting a target
WO2017015390A1 (en) * 2015-07-20 2017-01-26 University Of Maryland, College Park Deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition
US10417529B2 (en) * 2015-09-15 2019-09-17 Samsung Electronics Co., Ltd. Learning combinations of homogenous feature arrangements
KR20170050448A * 2015-10-30 2017-05-11 Samsung SDS Co., Ltd. Method and apparatus for detecting an object in an image
US10275684B2 (en) * 2015-11-04 2019-04-30 Samsung Electronics Co., Ltd. Authentication method and apparatus, and method and apparatus for training a recognizer
US10467459B2 (en) * 2016-09-09 2019-11-05 Microsoft Technology Licensing, Llc Object detection based on joint feature extraction
CN107704866B * 2017-06-15 2021-03-23 Tsinghua University Multi-task scene semantic understanding model based on a novel neural network and application thereof
CN110838124B * 2017-09-12 2021-06-18 Shenzhen Keya Medical Technology Co., Ltd. Method, system and medium for segmenting images of sparsely distributed objects
US11037032B2 * 2017-10-06 2021-06-15 Wisconsin Alumni Research Foundation Methods, systems, and media for detecting the presence of an analyte
CN108108657B * 2017-11-16 2020-10-30 Zhejiang University of Technology Modified locality-sensitive-hashing vehicle retrieval method based on multi-task deep learning
CN107967451B * 2017-11-23 2021-04-27 Changzhou University Method for crowd counting in still images
US10740654B2 (en) * 2018-01-22 2020-08-11 Qualcomm Incorporated Failure detection for a neural network object tracker


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110930386A * 2019-11-20 2020-03-27 Chongqing Jinshan Medical Technology Research Institute Co., Ltd. Image processing method, apparatus, device and storage medium
CN110930386B * 2019-11-20 2024-02-20 Chongqing Jinshan Medical Technology Research Institute Co., Ltd. Image processing method, apparatus, device and storage medium
CN111767810A * 2020-06-18 2020-10-13 Harbin Engineering University D-LinkNet-based road extraction method for remote sensing images
CN111767810B * 2020-06-18 2022-08-02 Harbin Engineering University D-LinkNet-based road extraction method for remote sensing images
CN114511452A * 2021-12-06 2022-05-17 Central South University Remote sensing image retrieval method fusing multi-scale dilated convolution and triplet attention
CN114511452B * 2021-12-06 2024-03-19 Central South University Remote sensing image retrieval method fusing multi-scale dilated convolution and triplet attention

Also Published As

Publication number Publication date
US11593596B2 (en) 2023-02-28
KR102406765B1 (ko) 2022-06-08
SG11202007158UA (en) 2020-08-28
KR20200105500A (ko) 2020-09-07
CN110443266A (zh) 2019-11-12
JP7085632B2 (ja) 2022-06-16
US20200364518A1 (en) 2020-11-19
CN110443266B (zh) 2022-06-24
JP2021512407A (ja) 2021-05-13

Similar Documents

Publication Publication Date Title
WO2019210737A1 (zh) Object prediction method and apparatus, electronic device and storage medium
Wang et al. Adaptive fusion for RGB-D salient object detection
CN109829433B (zh) Face image recognition method and apparatus, electronic device and storage medium
JP6755849B2 (ja) Class-based pruning of artificial neural networks
CN108629414B (zh) Deep hash learning method and apparatus
JP7286013B2 (ja) Video content recognition method, apparatus, program and computer device
Liu et al. Weakly supervised 3d scene segmentation with region-level boundary awareness and instance discrimination
Rafique et al. Deep fake detection and classification using error-level analysis and deep learning
Chen et al. Video salient object detection via contrastive features and attention modules
WO2023282847A1 (en) Detecting objects in a video using attention models
US11227197B2 (en) Semantic understanding of images based on vectorization
Tripathy et al. AMS-CNN: Attentive multi-stream CNN for video-based crowd counting
Idan et al. Fast shot boundary detection based on separable moments and support vector machine
Savchenko et al. Fast search of face recognition model for a mobile device based on neural architecture comparator
Roy et al. Predicting image aesthetics using objects in the scene
Wang et al. Fadnet++: Real-time and accurate disparity estimation with configurable networks
Ling et al. Graph neural networks: Graph matching
US11948090B2 (en) Method and apparatus for video coding
KR102239133B1 (ko) Machine-learning-based defect classification apparatus and method using image transformation
Wang et al. Shot boundary detection through multi-stage deep convolution neural network
CN111915703B (zh) Image generation method and apparatus
CN116561371A (zh) Multi-label video classification method based on multi-instance learning and a label relation graph
Yusiong et al. Multi-scale autoencoders in autoencoder for semantic image segmentation
Muthu et al. Unsupervised video object segmentation: an affinity and edge learning approach
WO2024120157A1 (zh) Object detection model training method, detection method and apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19795947

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020540732

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20207022191

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04/03/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19795947

Country of ref document: EP

Kind code of ref document: A1