CN111488938A - Image matching method based on two-step switchable normalization deep neural network - Google Patents

Image matching method based on two-step switchable normalization deep neural network Download PDF

Info

Publication number
CN111488938A
CN111488938A (publication number) / CN202010293080.XA (application number)
Authority
CN
China
Prior art keywords
normalization
data
network
layer
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010293080.XA
Other languages
Chinese (zh)
Other versions
CN111488938B (en)
Inventor
肖国宝
钟振
曾坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Jiuzhou Longteng Scientific And Technological Achievement Transformation Co ltd
Original Assignee
Minjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Minjiang University filed Critical Minjiang University
Priority to CN202010293080.XA priority Critical patent/CN111488938B/en
Publication of CN111488938A publication Critical patent/CN111488938A/en
Application granted granted Critical
Publication of CN111488938B publication Critical patent/CN111488938B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image matching method based on a two-step switchable normalization deep neural network. Specifically, given the correspondences of feature points in two views, the image feature matching problem is formulated as a binary classification problem. An end-to-end neural network framework is then constructed, in which a two-step switchable normalization block is designed to improve network performance: it combines the advantage of sparse switchable normalization, which adapts the normalizer to each convolution layer, with the robust global context information of context normalization. The image matching method based on the deep neural network mainly comprises the following steps: preparing a data set, feature enhancement, feature learning, and testing. The invention improves matching accuracy.

Description

Image matching method based on two-step switchable normalization deep neural network
Technical Field
The invention relates to the technical field of computer vision, in particular to an image matching method based on a two-step switchable normalization deep neural network.
Background
Image matching is an important research area in computer vision. It is widely used as a preprocessing step in many fields, such as three-dimensional reconstruction, simultaneous localization and mapping, panoramic stitching, and stereo matching. It essentially consists of two steps: constructing putative matching pairs and removing mismatches.
Many image matching methods currently exist. They can be classified into parametric methods, non-parametric methods, and learning-based methods. Parametric methods are a popular strategy for solving the matching problem, e.g., RANSAC and its variants PROSAC and USAC. Specifically, such a method first samples a random minimal subset to generate a homography or fundamental matrix, then verifies the matrix (i.e., whether the subset contains as few outliers as possible), and iterates these two steps. However, these methods have two basic disadvantages: 1) when the ratio of correct matches to total matches is low, they do not work efficiently; and 2) they cannot express complex models. Non-parametric methods instead mine local information for correspondence selection. They assume that, under viewpoint changes or non-rigid deformations, the spatial neighborhood relationships between feature points of an image pair of the same scene or object are similar. Based on this observation, researchers use spatial neighborhood relations to remove false matches. Some use superpixels to obtain the feature appearance for the feature matching problem and build the adjacency matrix of a graph, where nodes represent potential correspondences and the weights on links represent pairwise agreements between potential matches. These methods exploit compatibility information between matches, but they do not mine local information from compatible correspondences.
Deep learning-based methods have enjoyed tremendous success in a variety of computer vision tasks, and many researchers have attempted to solve the matching task with learning-based approaches. These can be broadly divided into two categories: those that construct sparse point correspondences from image pairs of the same or similar scenes with a deep learning architecture, and those that use a PointNet-like architecture. Although learning-based approaches have proven superior to parametric and non-parametric ones, the network model of Choy et al. still leaves a large number of false matches among the generated hypothesis matches. The network model of Moo Yi et al. captures global context information through context normalization and embeds this context in the nodes, but its context normalization is easily affected by other matching pairs. Moreover, while learning-based approaches achieve good results on various data sets, batch normalization in the network layers is limited by batch size, and applying the same normalizer to different convolution layers leads to poor performance; choosing how to switch normalizers flexibly is therefore the real challenge.
To deal effectively with these difficulties in the matching process, an end-to-end network is proposed. Given the correspondences of feature points in two views, existing deep learning-based methods express the feature matching problem as a binary classification problem. In these methods, normalization plays an important role in network performance. However, they employ the same normalizer in all normalization layers of the entire network, which degrades performance. To solve this problem, the present invention proposes a two-step switchable normalization block that combines the per-layer adaptive normalizers of switchable normalization with the robust global context information of context normalization. The invention thus avoids the difficulties mentioned above to a certain extent and ultimately improves matching accuracy. Experimental results show that the invention achieves state-of-the-art performance on the benchmark data set.
Disclosure of Invention
In view of this, the present invention provides an image matching method based on a two-step switchable normalization deep neural network, which improves matching accuracy.
The invention is realized by adopting the following scheme: an image matching method based on a two-step switchable normalization deep neural network, characterized in that:
the method comprises the following steps:
step S1: data set processing: given an image pair (I, I'), feature points kp_i and kp'_i are extracted from each image using a Hessian-based detector; the set of feature points extracted from image I is KP = {kp_i}_{i∈N}, and likewise the set KP' = {kp'_i}_{i∈N} is obtained from image I'; each correspondence (kp_i, kp'_i) generates one item of 4D data:

D = [d_1; d_2; d_3; ...; d_N],  d_i = [x_i, y_i, x'_i, y'_i]

where D represents the set of matches of the image pair, i.e., the input data; d_i represents one matching pair; and (x_i, y_i), (x'_i, y'_i) are the coordinates of the two feature points in that match;
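To make step S1 concrete: each putative correspondence simply contributes one 4D row of coordinates. A minimal Python sketch, assuming the feature points have already been extracted by a detector; the function name and sample coordinates are illustrative, not from the patent:

```python
def build_input_data(kps, kps_prime):
    """Stack N putative correspondences (kp_i, kp'_i) into the input
    matrix D, where each row is d_i = [x_i, y_i, x'_i, y'_i]."""
    assert len(kps) == len(kps_prime), "one-to-one correspondences expected"
    return [[x, y, xp, yp] for (x, y), (xp, yp) in zip(kps, kps_prime)]

# toy feature points from image I and their matches in image I'
kps = [(10.0, 20.0), (33.5, 7.2)]
kps_prime = [(11.2, 19.5), (30.1, 9.9)]
D = build_input_data(kps, kps_prime)
# D is the 2 x 4 input data: one 4D item per correspondence
```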
step S2: feature enhancement: a convolution layer with 1×1 kernels maps the 4D data processed in step S1 into 32-dimensional feature vectors, i.e., D^(1×N×4) → D^(1×N×32), which reduces the information loss caused by network feature learning; N is the number of feature points extracted from one picture;
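Since the kernel size is 1×1, the convolution in step S2 acts as one shared linear map from 4 to 32 dimensions applied independently to each correspondence. A pure-Python sketch of this equivalence; the random weights below stand in for the learned convolution parameters and are an assumption for illustration only:

```python
import random

def conv1x1(D, W, b):
    """Apply a shared linear map (what a 1x1 convolution computes) to
    every correspondence, lifting (N x 4) data to (N x 32) features."""
    return [[sum(wc * d[c] for c, wc in enumerate(W[k])) + b[k]
             for k in range(len(W))]
            for d in D]

random.seed(0)
W = [[random.gauss(0.0, 0.5) for _ in range(4)] for _ in range(32)]  # 32 x 4
b = [0.0] * 32
D = [[10.0, 20.0, 11.2, 19.5], [33.5, 7.2, 30.1, 9.9]]
F = conv1x1(D, W, b)  # each row of F is a 32-dimensional feature vector
```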
step S3: feature extraction: features are extracted from the enhanced features, i.e., the mapped feature vectors, using a residual network in which batch normalization is replaced with two-step sparse switchable normalization, so as to extract the global features of the enhanced data more robustly and output a preliminary prediction result;
step S4: in the testing phase, the output of the residual network is taken as the preliminary prediction result and processed with the activation functions tanh and relu, i.e., f_x = relu(tanh(x_out)), yielding a final result with predicted values in {0, 1}, where 0 represents a false match and 1 a correct match; during training of the whole network, a cross-entropy loss function guides the learning of the network, as shown in the formula:

L = -(1/N) Σ_{i=1}^{N} [ y_i log(y'_i) + (1 - y_i) log(1 - y'_i) ]

where y_i denotes the label and y'_i denotes the predicted value.
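The test-time activation and training loss of step S4 can be sketched as follows. Note that relu(tanh(x)) sends non-positive logits to exactly 0 and positive logits into (0, 1); the 0.5 decision threshold used below is an assumption, since the text only states that the final values are 0 or 1:

```python
import math

def relu_tanh(x):
    """f_x = relu(tanh(x_out)): non-positive logits -> 0.0,
    positive logits -> a confidence in (0, 1)."""
    return max(0.0, math.tanh(x))

def cross_entropy(labels, preds, eps=1e-7):
    """Binary cross-entropy between labels y_i and predictions y'_i."""
    n = len(labels)
    return -sum(y * math.log(max(p, eps))
                + (1 - y) * math.log(max(1.0 - p, eps))
                for y, p in zip(labels, preds)) / n

logits = [-2.3, 0.8, 4.1]                     # raw residual-network outputs
probs = [relu_tanh(z) for z in logits]
preds = [1 if p > 0.5 else 0 for p in probs]  # 1 = correct, 0 = false match
loss = cross_entropy([0, 1, 1], probs)
```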
Further, the step S3 specifically includes the following steps:
the two-step sparse switchable normalization is divided into two layers: the first layer is context normalization and the second layer is switchable normalization; context normalization embeds global context information into each data item; given input data x_i at layer l, context normalization is defined as follows:

x̂_i^l = (x_i^l − u^l) / o^l

where x̂_i^l is the output of context normalization, and u^l and o^l are the mean and standard deviation of the data at that network layer.

Context normalization embeds global information into the data of each feature point. In the second normalization layer, a differentiable feed-forward sparse learning algorithm selects the most appropriate normalizer among batch normalization, instance normalization and layer normalization, so as to reduce the influence of a fixed normalizer on the final result; the switchable normalization is defined as follows:

ŷ_i = λ · (x_i − Σ_{j∈Ψ} r_j u_j) / sqrt(Σ_{j∈Ψ} r'_j σ_j² + ε) + β

where ŷ_i is the output of the second normalization layer; λ and β are the scale and shift parameters, respectively; Ψ is the set of three normalizers (Layer Normalization, Batch Normalization, Instance Normalization); u_j and σ_j² are the mean and variance of the corresponding network layer data, with j = 1, 2, 3 indexing the three normalizers {LN, BN, IN}; and r_j and r'_j are the weighting coefficients applied to the mean and variance of their respective network layer statistics.
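For intuition, the two layers can be sketched numerically: the first standardizes each value using statistics over all the data (context normalization), and the second re-normalizes with a weighted blend of per-normalizer statistics. The weights r_j, r'_j and the toy (mean, variance) pairs below are hand-set placeholders; in the invention the statistics come from the feature maps and the weights are learned:

```python
import math

def context_norm(xs):
    """First layer: subtract the mean and divide by the standard
    deviation computed over all items, embedding global context."""
    n = len(xs)
    u = sum(xs) / n
    o = math.sqrt(sum((x - u) ** 2 for x in xs) / n) or 1.0
    return [(x - u) / o for x in xs]

def switchable_norm(xs, stats, r, r_var, lam=1.0, beta=0.0, eps=1e-5):
    """Second layer: normalize with means/variances of {LN, BN, IN}
    blended by importance weights r_j (mean) and r'_j (variance)."""
    u = sum(rj * uj for rj, (uj, _) in zip(r, stats))
    var = sum(rj * vj for rj, (_, vj) in zip(r_var, stats))
    return [lam * (x - u) / math.sqrt(var + eps) + beta for x in xs]

xs = [4.0, 8.0, 6.0, 2.0]
cn = context_norm(xs)                          # zero mean, unit variance
stats = [(0.0, 1.0), (0.1, 0.9), (-0.1, 1.1)]  # toy (mean, var) of LN/BN/IN
out = switchable_norm(cn, stats, r=[1.0, 0.0, 0.0], r_var=[1.0, 0.0, 0.0])
# with the weights selecting only the first normalizer, out ~= cn
```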
Compared with the prior art, the invention has the following beneficial effects:
the invention proposes a two-step switchable normalization block that combines the advantages of adaptive normalizers and context-normalized robust global context information for different convolution layers of switchable normalization. Therefore, the invention can finally improve the matching precision. Experimental results show that the invention achieves the most advanced performance on the basis of a data set.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
FIG. 2 is a diagram of a neural network architecture according to an embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in fig. 1, the present embodiment provides an image matching method based on a two-step switchable normalized deep neural network, which includes firstly performing data set processing on original data, secondly performing feature enhancement on the processed data, then extracting features from the enhanced features, and finally outputting a result in a test stage.
The method specifically comprises the following steps:
step S1: data set processing: given an image pair (I, I'), feature points kp_i and kp'_i are extracted from each image using a Hessian-based detector; the set of feature points extracted from image I is KP = {kp_i}_{i∈N}, and likewise the set KP' = {kp'_i}_{i∈N} is obtained from image I'; each correspondence (kp_i, kp'_i) generates one item of 4D data:

D = [d_1; d_2; d_3; ...; d_N],  d_i = [x_i, y_i, x'_i, y'_i]

where D represents the set of matches of the image pair, i.e., the input data; d_i represents one matching pair; and (x_i, y_i), (x'_i, y'_i) are the coordinates of the two feature points in that match;
step S2: feature enhancement: a convolution layer with 1×1 kernels maps the 4D data processed in step S1 into 32-dimensional feature vectors, i.e., D^(1×N×4) → D^(1×N×32), which reduces the information loss caused by network feature learning; N is the number of feature points extracted from one picture;
step S3: feature extraction: features are extracted from the enhanced features, i.e., the mapped feature vectors, using a residual network in which batch normalization is replaced with two-step sparse switchable normalization, so as to extract the global features of the enhanced data more robustly and output a preliminary prediction result;
step S4: in the testing phase, the output of the residual network is taken as the preliminary prediction result and processed with the activation functions tanh and relu, i.e., f_x = relu(tanh(x_out)), yielding a final result with predicted values in {0, 1}, where 0 represents a false match and 1 a correct match; during training of the whole network, a cross-entropy loss function guides the learning of the network, as shown in the formula:

L = -(1/N) Σ_{i=1}^{N} [ y_i log(y'_i) + (1 - y_i) log(1 - y'_i) ]

where y_i denotes the label and y'_i denotes the predicted value.
As shown in fig. 2, in this embodiment, the step S3 specifically includes the following steps:
the two-step sparse switchable normalization is divided into two layers: the first layer is context normalization and the second layer is switchable normalization; context normalization embeds global context information into each data item; given input data x_i at layer l, context normalization is defined as follows:

x̂_i^l = (x_i^l − u^l) / o^l

where x̂_i^l is the output of context normalization, and u^l and o^l are the mean and standard deviation of the data at that network layer.

Context normalization embeds global information into the data of each feature point. It is worth noting that conventional context normalization is susceptible to interference from other data in post-processing, because the context information of different data items is mixed by the subsequent batch normalization operation; we therefore adopt a switching strategy in the second step. Specifically, a differentiable feed-forward sparse learning algorithm (i.e., sparsestmax) is used in the second normalization layer to select the most appropriate normalizer among batch normalization, instance normalization and layer normalization, reducing the influence of a fixed normalizer on the final result; the switchable normalization is defined as follows:

ŷ_i = λ · (x_i − Σ_{j∈Ψ} r_j u_j) / sqrt(Σ_{j∈Ψ} r'_j σ_j² + ε) + β

where ŷ_i is the output of the second normalization layer; λ and β are the scale and shift parameters, respectively; Ψ is the set of three normalizers (Layer Normalization, Batch Normalization, Instance Normalization); u_j and σ_j² are the mean and variance of the corresponding network layer data, with j = 1, 2, 3 indexing the three normalizers {LN, BN, IN}; and r_j and r'_j are the weighting coefficients applied to the mean and variance of their respective network layer statistics.
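The sparsestmax algorithm mentioned above builds on the sparsemax projection of Martins and Astudillo (2016), which, unlike softmax, can assign exactly zero probability to some options; sparsestmax additionally drives the solution toward a one-hot choice during training. The following is a sketch of the plain sparsemax projection only, to show where the sparsity over the normalizer weights comes from; the toy logits are illustrative:

```python
def sparsemax(z):
    """Euclidean projection of logits z onto the probability simplex.
    Unlike softmax, components can come out exactly 0, so entire
    normalizers can be switched off."""
    zs = sorted(z, reverse=True)
    cum = 0.0
    k, cum_k = 1, zs[0]
    for i, v in enumerate(zs, start=1):
        cum += v
        if 1 + i * v > cum:      # support condition of the projection
            k, cum_k = i, cum
    tau = (cum_k - 1.0) / k      # threshold subtracted from every logit
    return [max(v - tau, 0.0) for v in z]

w = sparsemax([2.0, 0.5, -1.0])   # toy logits for the three normalizers
# one normalizer receives all the weight, the others exactly zero
```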
Preferably, in this embodiment, sparse switchable normalization (SSN) is introduced to learn a different combination of normalizers for each convolution layer of the deep network, so as to solve the feature matching problem. At the same time, a two-step switchable normalization block (TSSN block) is established that combines the per-layer adaptive normalizers of SSN with the robust global context information of context normalization (CN).
Preferably, this embodiment adaptively outputs matched pairs by analyzing the input features to be matched and then training a novel deep neural network. Specifically, given the correspondences of feature points in two views, i.e., the input data after data processing, the image feature matching problem is expressed as a binary classification problem: the network treats each putative match as a binary decision, with 1 representing a correct match and 0 a false match.
An end-to-end neural network framework is then constructed, i.e., from the input data the network of this embodiment directly produces the matched output (0 or 1) without any additional steps; the network diagram of this embodiment is shown in FIG. 2. Combining the per-layer adaptive normalizers of sparse switchable normalization with the robust global context information of context normalization, a two-step switchable normalization block is designed to improve network performance. The image matching method based on the deep neural network mainly comprises the following steps: preparing a data set, feature enhancement, feature learning, and testing.
Quantitative and qualitative comparisons between the method of this embodiment and current state-of-the-art matching methods were performed on a common data set (COLMAP); the results show that the method of this embodiment is significantly superior to the other algorithms.
Preferably, Table 1 shows a quantitative comparison of the F-measure, precision, and recall of this embodiment against several other matching algorithms on the COLMAP data set; the compared methods include RANSAC, LPM, Point-Net, Point-Net++, and LCG-Net.
TABLE 1

Method        F-measure   Precision   Recall
RANSAC        0.1914      0.2222      0.1879
LPM           0.2213      0.2415      0.2579
Point-Net     0.1683      0.1205      0.3847
Point-Net++   0.3298      0.2545      0.5668
LCG-Net       0.3953      0.3063      0.6839
TSSN-Net      0.4357      0.3733      0.5518
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (2)

1. An image matching method based on a two-step switchable normalization deep neural network, characterized in that:
the method comprises the following steps:
step S1: data set processing: given an image pair (I, I'), feature points kp_i and kp'_i are extracted from each image using a Hessian-based detector; the set of feature points extracted from image I is KP = {kp_i}_{i∈N}, and likewise the set KP' = {kp'_i}_{i∈N} is obtained from image I'; each correspondence (kp_i, kp'_i) generates one item of 4D data:

D = [d_1; d_2; d_3; ...; d_N],  d_i = [x_i, y_i, x'_i, y'_i]

where D represents the set of matches of the image pair, i.e., the input data; d_i represents one matching pair; and (x_i, y_i), (x'_i, y'_i) are the coordinates of the two feature points in that match;
step S2: feature enhancement: a convolution layer with 1×1 kernels maps the 4D data processed in step S1 into 32-dimensional feature vectors, i.e., D^(1×N×4) → D^(1×N×32), which reduces the information loss caused by network feature learning; N is the number of feature points extracted from one picture;
step S3: feature extraction: features are extracted from the enhanced features, i.e., the mapped feature vectors, using a residual network in which batch normalization is replaced by two-step sparse switchable normalization, so that the global features of the enhanced data are extracted more robustly and a preliminary prediction result is output;
step S4: in the testing phase, the output of the residual network is taken as the preliminary prediction result and processed with the activation functions tanh and relu, i.e., f_x = relu(tanh(x_out)), yielding a final result with predicted values in {0, 1}, where 0 represents a false match and 1 a correct match; during training of the whole network, a cross-entropy loss function guides the learning of the network, as shown in the formula:

L = -(1/N) Σ_{i=1}^{N} [ y_i log(y'_i) + (1 - y_i) log(1 - y'_i) ]

where y_i denotes the label and y'_i denotes the predicted value.
2. The image matching method based on the two-step switchable normalization deep neural network of claim 1, characterized in that step S3 specifically comprises the following steps:
the two-step sparse switchable normalization is divided into two layers: the first layer is context normalization and the second layer is switchable normalization; context normalization embeds global context information into each data item; given input data x_i at layer l, context normalization is defined as follows:

x̂_i^l = (x_i^l − u^l) / o^l

where x̂_i^l is the output of context normalization, and u^l and o^l are the mean and standard deviation of the data at that network layer;

context normalization embeds global information into the data of each feature point; in the second normalization layer, a differentiable feed-forward sparse learning algorithm selects the most appropriate normalizer among batch normalization, instance normalization and layer normalization, so as to reduce the influence of a fixed normalizer on the final result; the switchable normalization is defined as follows:

ŷ_i = λ · (x_i − Σ_{j∈Ψ} r_j u_j) / sqrt(Σ_{j∈Ψ} r'_j σ_j² + ε) + β

where ŷ_i is the output of the second normalization layer; λ and β are the scale and shift parameters, respectively; Ψ is the set of three normalizers (Layer Normalization, Batch Normalization, Instance Normalization); u_j and σ_j² are the mean and variance of the corresponding network layer data, with j = 1, 2, 3 indexing the three normalizers {LN, BN, IN}; and r_j and r'_j are the weighting coefficients applied to the mean and variance of their respective network layer statistics.
CN202010293080.XA 2020-04-15 2020-04-15 Image matching method based on two-step switchable normalization deep neural network Active CN111488938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010293080.XA CN111488938B (en) 2020-04-15 2020-04-15 Image matching method based on two-step switchable normalization deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010293080.XA CN111488938B (en) 2020-04-15 2020-04-15 Image matching method based on two-step switchable normalization deep neural network

Publications (2)

Publication Number Publication Date
CN111488938A true CN111488938A (en) 2020-08-04
CN111488938B CN111488938B (en) 2022-05-13

Family

ID=71794953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010293080.XA Active CN111488938B (en) 2020-04-15 2020-04-15 Image matching method based on two-step switchable normalization deep neural network

Country Status (1)

Country Link
CN (1) CN111488938B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164100A (en) * 2020-09-25 2021-01-01 闽江学院 Image registration method based on graph convolution neural network
CN112288011A (en) * 2020-10-30 2021-01-29 闽江学院 Image matching method based on self-attention deep neural network
CN112308128A (en) * 2020-10-28 2021-02-02 闽江学院 Image matching method based on attention mechanism neural network
CN112489098A (en) * 2020-12-09 2021-03-12 福建农林大学 Image matching method based on spatial channel attention mechanism neural network
CN113378911A (en) * 2021-06-08 2021-09-10 北京百度网讯科技有限公司 Image classification model training method, image classification method and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005398A (en) * 2018-07-27 2018-12-14 杭州电子科技大学 A kind of stereo image parallax matching process based on convolutional neural networks
CN109685145A (en) * 2018-12-26 2019-04-26 广东工业大学 A kind of small articles detection method based on deep learning and image procossing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005398A (en) * 2018-07-27 2018-12-14 杭州电子科技大学 A kind of stereo image parallax matching process based on convolutional neural networks
CN109685145A (en) * 2018-12-26 2019-04-26 广东工业大学 A kind of small articles detection method based on deep learning and image procossing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN ZHAO等: "NM-Net: Mining Reliable Neighbors for Robust Feature Correspondences", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164100A (en) * 2020-09-25 2021-01-01 闽江学院 Image registration method based on graph convolution neural network
CN112164100B (en) * 2020-09-25 2023-12-12 闽江学院 Image registration method based on graph convolution neural network
CN112308128A (en) * 2020-10-28 2021-02-02 闽江学院 Image matching method based on attention mechanism neural network
CN112308128B (en) * 2020-10-28 2024-01-05 闽江学院 Image matching method based on attention mechanism neural network
CN112288011A (en) * 2020-10-30 2021-01-29 闽江学院 Image matching method based on self-attention deep neural network
CN112288011B (en) * 2020-10-30 2022-05-13 闽江学院 Image matching method based on self-attention deep neural network
CN112489098A (en) * 2020-12-09 2021-03-12 福建农林大学 Image matching method based on spatial channel attention mechanism neural network
CN112489098B (en) * 2020-12-09 2024-04-09 福建农林大学 Image matching method based on spatial channel attention mechanism neural network
CN113378911A (en) * 2021-06-08 2021-09-10 北京百度网讯科技有限公司 Image classification model training method, image classification method and related device
CN113378911B (en) * 2021-06-08 2022-08-26 北京百度网讯科技有限公司 Image classification model training method, image classification method and related device

Also Published As

Publication number Publication date
CN111488938B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN111488938B (en) Image matching method based on two-step switchable normalization deep neural network
Sagong et al. Pepsi: Fast image inpainting with parallel decoding network
Parmar et al. Image transformer
CN111798400B (en) Non-reference low-illumination image enhancement method and system based on generation countermeasure network
Rafi et al. An Efficient Convolutional Network for Human Pose Estimation.
CN112288011B (en) Image matching method based on self-attention deep neural network
CN110796080B (en) Multi-pose pedestrian image synthesis algorithm based on generation countermeasure network
CN107038448B (en) Target detection model construction method
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
CN110163286B (en) Hybrid pooling-based domain adaptive image classification method
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN112308137B (en) Image matching method for aggregating neighborhood points and global features by using attention mechanism
Li et al. Learning face image super-resolution through facial semantic attribute transformation and self-attentive structure enhancement
CN110222717A (en) Image processing method and device
CN115619743A (en) Construction method and application of OLED novel display device surface defect detection model
CN112101262B (en) Multi-feature fusion sign language recognition method and network model
CN112308128B (en) Image matching method based on attention mechanism neural network
Zhao et al. NormalNet: Learning-based mesh normal denoising via local partition normalization
CN114612709A (en) Multi-scale target detection method guided by image pyramid characteristics
CN117237623B (en) Semantic segmentation method and system for remote sensing image of unmanned aerial vehicle
CN112949765A (en) Image matching method fusing local and global information
CN109035318B (en) Image style conversion method
CN112734655B (en) Low-light image enhancement method for enhancing CRM (customer relationship management) based on convolutional neural network image
CN115409159A (en) Object operation method and device, computer equipment and computer storage medium
CN112529081A (en) Real-time semantic segmentation method based on efficient attention calibration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240424

Address after: 230000 Room 203, building 2, phase I, e-commerce Park, Jinggang Road, Shushan Economic Development Zone, Hefei City, Anhui Province

Patentee after: Hefei Jiuzhou Longteng scientific and technological achievement transformation Co.,Ltd.

Country or region after: China

Address before: 200 xiyuangong Road, Shangjie Town, Minhou County, Fuzhou City, Fujian Province

Patentee before: MINJIANG University

Country or region before: China

TR01 Transfer of patent right