CN111078916A - Cross-domain three-dimensional model retrieval method based on multi-level feature alignment network - Google Patents
- Publication number: CN111078916A (application CN201911061497.7A)
- Authority
- CN
- China
- Prior art keywords
- dimensional model
- domain
- image
- network
- alignment network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/532 — Information retrieval of still image data; query formulation, e.g. graphical querying
- G06F16/583 — Retrieval characterised by using metadata automatically derived from the content
- G06T7/66 — Image analysis; analysis of geometric attributes of image moments or centre of gravity
Abstract
The invention discloses a cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network, comprising the following steps: photographing each three-dimensional model in a three-dimensional model database with a virtual camera to generate multi-view data; constructing a multi-level feature alignment network; performing domain-level alignment of image features and three-dimensional model features through the alignment network and a discriminator; computing, with the alignment network, the centroid of all features of each class in both the image domain and the three-dimensional model domain, taking the distance between same-class centroids as part of the loss function, and minimizing the loss function by the back-propagation algorithm; and, when the loss function is minimized, extracting image and three-dimensional model features with the trained multi-level feature alignment network and performing cross-domain three-dimensional model retrieval. The invention provides a new network framework: the multi-level feature alignment network aligns the features of the image domain and the three-dimensional model domain at both the domain level and the class level, improving the precision of cross-domain three-dimensional model retrieval.
Description
Technical Field
The invention relates to the fields of feature alignment, cross-domain learning and model retrieval, in particular to a cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network.
Background
With the development of technology, three-dimensional models are applied ever more widely: virtual reality, augmented reality and 3D modeling [1] mature by the day, three-dimensional model data grows explosively, and efficient, fast retrieval and classification of three-dimensional models has become a very important research problem [2]. Previous research and methods, however, take a three-dimensional model as the query target. As user demand grows, retrieval modes diversify: the query target is no longer limited to a three-dimensional model, and queries in the form of an image appear far more frequently. Yet simply extracting features directly from the image and the model yields poor retrieval results, because image features and three-dimensional model features are distributed differently in the feature space, and this cannot satisfy users' retrieval requirements. This situation strongly motivates research on three-dimensional model retrieval via cross-domain learning. Meanwhile, the publicly available three-dimensional model datasets, ModelNet40 [3] and ShapeNetCore55 [4], are representative and comparatively large, yet remain dwarfed by public image databases such as ImageNet, whose data volumes run to millions. It is well known that the amount of training data directly affects an algorithm's performance. Therefore, using the abundant public datasets of the image field, via cross-domain learning, to bring large amounts of label information and prior knowledge into the field of three-dimensional model retrieval is a rather pioneering attempt.
In the field of three-dimensional model retrieval, prior work falls mainly into two categories: model-based methods and view-based methods. Model-based methods mainly use the structural information and raw data of the model itself, such as voxels and point clouds [5]; such methods suffer from complex computation and sparse, unordered data, but directly reflect the model's own characteristics. View-based methods photograph the model with a virtual camera to obtain a set of views of the three-dimensional model [6] as its representation; since views cannot represent the topological structure of the target three-dimensional model, view-based three-dimensional model retrieval has certain limitations.
In the field of cross-domain learning, methods likewise fall mainly into two categories: traditional transfer learning and deep transfer learning. Traditional transfer learning trains a learner using the data distributions of different domains [7], while deep transfer learning uses deep learning to acquire prior knowledge from other fields [8].
Although much work has been done in the fields of three-dimensional model retrieval and cross-domain learning, little research combines the two. Given this situation, the invention provides a cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network. The challenges currently faced are mainly two-fold:
1. how to extract features of the different domains (images and three-dimensional models) that are both discriminative and domain-invariant;
2. how to fully explore the semantic information of each category in cross-domain learning and better improve retrieval precision.
Disclosure of Invention
The invention provides a cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network, built on a new network framework: the multi-level feature alignment network aligns the features of the image domain and the three-dimensional model domain at both the domain level and the class level, improving the precision of cross-domain three-dimensional model retrieval. Details are described below:
a cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network comprises the following steps:
photographing the three-dimensional models in a three-dimensional model database with a virtual camera to generate multi-view data;
constructing a multi-level feature alignment network;
performing domain-level alignment of image features and three-dimensional model features through the alignment network and a discriminator;
computing, with the alignment network, the centroid of all features of each class in both the image domain and the three-dimensional model domain, taking the distance between same-class centroids as part of the loss function, and minimizing the loss function by the back-propagation algorithm;
and, when the loss function is minimized, extracting image and three-dimensional model features with the trained multi-level feature alignment network and performing cross-domain three-dimensional model retrieval.
Wherein,
the feature extraction part of the multi-level feature alignment network adopts a CNN as the base network, and the two CNN branches share parameters when extracting image features and the multi-view features of the three-dimensional model;
and a pooling layer aggregates the features of all views into a 3D descriptor as the feature representation of the three-dimensional model.
Further, the loss function is specifically:
the sum of the classification loss, the discrepancy between the image domain and the three-dimensional model domain, and the distances between same-class centroids of the two domains;
wherein the domain discrepancy and the same-class centroid distance are each multiplied by a hyper-parameter.
The technical scheme provided by the invention has the following beneficial effects:
1. through domain-level feature alignment, the distributions of image features and three-dimensional model features become closer in the feature space, so that the extracted features are both discriminative and domain-invariant, which effectively improves the precision of three-dimensional model retrieval;
2. through class-level feature alignment, the distributions of same-class features of the image and the three-dimensional model become closer in the feature space, the semantic information of each class is better explored, and the retrieval precision of the three-dimensional model is effectively improved.
Drawings
FIG. 1 is a flow chart of the cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network;
FIG. 2 is a schematic diagram of generating multiple views of a three-dimensional model with a virtual camera;
FIG. 3 is a flow chart of the multi-level feature alignment network;
FIG. 4 is a visualization of the multi-level feature alignment network aligning features from different domains.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
Example 1
A cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network is disclosed; referring to FIG. 1, the method comprises the following steps:
101: first, photograph the models in the three-dimensional model database with a virtual camera to generate multi-view data;
102: a new network framework was designed: the multi-level feature alignment network, which aligns the features of the image and of the three-dimensional model at the domain level and the class level.
The feature extraction part of the multi-level feature alignment network adopts a classic CNN (AlexNet, etc.) as the base network structure. When extracting image features and the multi-view features of the three-dimensional model, the two CNN branches share parameters; a pooling layer then aggregates the features of all views in the three-dimensional model's multi-view set into a single 3D descriptor as the feature representation of the three-dimensional model.
103: Multi-level feature alignment network. To better extract discriminative, domain-invariant features from the different domains (images and three-dimensional models), the features of the image and of the three-dimensional model are aligned at the domain level through a discriminator. To fully explore the semantic information of each category and align features at the class level, the embodiment of the invention further provides a new algorithm: compute the centroid of all features of each class in both the image domain and the three-dimensional model domain, take the distance between same-class centroids as part of the loss function, and minimize the loss function by the back-propagation algorithm.
104: and when the loss function is minimized, respectively extracting the characteristics of the image and the three-dimensional model by using the trained multi-level characteristic alignment network, and performing cross-domain three-dimensional model retrieval.
In summary, for cross-domain retrieval of three-dimensional models, the embodiment of the present invention designs a brand-new network structure based on multi-level feature alignment; by aligning the distributions of image features and three-dimensional model features in the feature space, it improves the precision of three-dimensional model retrieval.
Example 2
The scheme in example 1 is further described below with reference to specific examples and calculation formulas, which are described in detail below:
201: firstly, a virtual camera is used for virtually photographing a model in a three-dimensional model database to generate multi-view data;
wherein, the step 201 mainly comprises:
A set of viewpoints is predefined, a viewpoint being a position from which the target object is observed, and M being the number of predefined viewpoints. In the embodiment of the invention M = 12, i.e., virtual cameras are placed around the centroid of the three-dimensional model at 30° intervals, so the viewpoints are distributed completely uniformly around the target object. By selecting different interval angles, different sets of views of the model can be obtained.
All objects in the three-dimensional model database are projected; each object yields a set of views, and the views of all objects form the multi-view model database.
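The viewpoint placement described above can be sketched as follows; the patent gives no code, so the function name and the camera-ring radius are illustrative assumptions:

```python
import math

def ring_viewpoints(center, radius=2.0, m=12):
    """Place m virtual cameras on a ring around the model centroid,
    one every 360/m degrees (30 degrees when m = 12)."""
    cx, cy, cz = center
    views = []
    for k in range(m):
        theta = 2.0 * math.pi * k / m              # azimuth of the k-th camera
        views.append((cx + radius * math.cos(theta),
                      cy + radius * math.sin(theta),
                      cz))                         # every camera faces the centroid
    return views

cams = ring_viewpoints((0.0, 0.0, 0.0))            # 12 uniformly spaced viewpoints
```

Choosing a different m changes the interval angle and therefore, as noted above, yields a different set of views of the model.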
202: a new network framework was designed: the multi-level feature alignment network.
Feature alignment is performed between the image and the three-dimensional model at the domain level and the class level. The feature extraction part of the multi-level feature alignment network adopts a classic CNN (AlexNet, etc.) as the base network structure. When extracting image features and the multi-view features of the three-dimensional model, the two CNN branches share parameters; a pooling layer then aggregates the features of all views into a single 3D descriptor as the feature representation of the three-dimensional model (see FIG. 3);
wherein, the step 202 mainly includes:
1. The classic CNN: here AlexNet is taken as the base network structure.
2. The pooling layer: either an average pooling layer or a max pooling layer may be used.
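The view-pooling step above can be sketched as follows (a minimal illustration assuming per-view CNN features are already extracted; the 4096 feature dimensionality is illustrative):

```python
import numpy as np

def view_pool(view_features, mode="max"):
    """Aggregate per-view CNN features (M views x D dims) into a single
    3D descriptor; either max pooling or average pooling may be used."""
    f = np.asarray(view_features, dtype=float)     # shape (M, D)
    return f.max(axis=0) if mode == "max" else f.mean(axis=0)

feats = np.random.rand(12, 4096)                   # 12 views of one model
desc = view_pool(feats)                            # one descriptor, shape (4096,)
```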
203: To better extract discriminative, domain-invariant features from the different domains (images and three-dimensional models), the features of the image and of the three-dimensional model are aligned at the domain level through a discriminator.
To fully explore the semantic information of each category and align features at the class level, the embodiment of the invention further provides a new algorithm: compute the centroid of all features of each class in both the image domain and the three-dimensional model domain, take the distance between same-class centroids as part of the loss function, and minimize the loss function by the back-propagation algorithm.
The loss function is specifically defined as:

F(X_S, Y_S, X_T) = F_C(X_S, Y_S) + α·F_DC(X_S, X_T) + β·F_SM(X_S, Y_S, X_T)

where F_C(X_S, Y_S) denotes the classification loss, F_DC(X_S, X_T) denotes the discrepancy between the image domain and the three-dimensional model domain, F_SM(X_S, Y_S, X_T) denotes the distance between same-class centroids of the two domains, and α and β are hyper-parameters that balance the different loss terms.

The classification loss F_C(X_S, Y_S) and the domain-discrepancy loss F_DC(X_S, X_T) combine as:

F_C(X_S, Y_S) + α·F_DC(X_S, X_T) = E_{(x_s, y_s)∼D_S}[ℓ(x_s, y_s)] + α·φ(X_S, X_T)

where D_S, X_S, Y_S and X_T denote the source (image) domain, the image sample set, the image sample labels and the three-dimensional model sample set, respectively; ℓ(·) and φ(·) denote the cross-entropy loss function and the domain-discrepancy metric function, respectively; α balances the classification loss F_C against the domain discrepancy F_DC; and E denotes the expected value.

The same-class centroid distance of the two domains, F_SM(X_S, Y_S, X_T), is defined as:

F_SM(X_S, Y_S, X_T) = Σ_{n=1}^{N} d(C_S^n, C_T^n)

where C_S^n and C_T^n denote the centroids of the n-th class in the image domain and the three-dimensional model domain, respectively; N is the total number of classes; and d(·) is a distance evaluation function.
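The class-level term F_SM can be sketched as follows; the Euclidean norm stands in for d(·), which is an assumption, since the patent only requires some distance evaluation function:

```python
import numpy as np

def centroid_alignment_loss(img_feats, img_labels, mdl_feats, mdl_labels, num_classes):
    """F_SM: for each class n, compute the feature centroid in the image
    domain and in the 3D-model domain, then sum the centroid distances."""
    img_feats, img_labels = np.asarray(img_feats, dtype=float), np.asarray(img_labels)
    mdl_feats, mdl_labels = np.asarray(mdl_feats, dtype=float), np.asarray(mdl_labels)
    loss = 0.0
    for n in range(num_classes):
        c_img = img_feats[img_labels == n].mean(axis=0)   # centroid C_S^n
        c_mdl = mdl_feats[mdl_labels == n].mean(axis=0)   # centroid C_T^n
        loss += np.linalg.norm(c_img - c_mdl)             # d(C_S^n, C_T^n)
    return loss
```

In training, this term would be computed on mini-batch features and minimized by back propagation together with the other loss terms.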
Referring to FIG. 4 (dark and light regions represent the features of different domains), note that after domain-level feature alignment, the feature distributions of the different domains gradually approach each other in the feature space. Previous cross-domain algorithms focus mainly on domain-level alignment; even when the features of the different domains are aligned, retrieval precision may still not improve, because features of the same class are not handled well.
Therefore, the embodiment of the invention designs a multi-level feature alignment network that, after domain-level alignment, aligns the features again at the class level, and proposes a new algorithm to solve class-level alignment. The visualization of the feature-alignment process verifies that, after passing through the multi-level feature alignment network, the features of the different domains are drawn closely together and the same-class features of the different domains are also well aligned. This fully demonstrates that the proposed multi-level feature alignment network can handle the challenges of cross-domain model retrieval, and confirms the correctness and effectiveness of the proposed class-level alignment algorithm.
204: When the loss function is minimized, extract the features of the image and of the three-dimensional model with the trained multi-level feature alignment network, and perform cross-domain three-dimensional model retrieval.
During training, the loss function F(X_S, Y_S, X_T) is minimized; when F(X_S, Y_S, X_T) reaches its minimum, the trained network is used to extract the features of images and three-dimensional models and to retrieve three-dimensional models.
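Once features are extracted, the retrieval step reduces to nearest-neighbour ranking in the shared feature space; a minimal sketch (Euclidean distance is an assumption, as the patent does not fix the similarity measure):

```python
import numpy as np

def retrieve(query_feat, model_feats, top_k=5):
    """Rank 3D-model descriptors by distance to a query-image feature
    and return the indices of the top_k closest models."""
    d = np.linalg.norm(np.asarray(model_feats, dtype=float)
                       - np.asarray(query_feat, dtype=float), axis=1)
    return np.argsort(d)[:top_k]

# toy 2-D descriptors: model 1 is closest to the query, then model 0
ranked = retrieve([0.9, 0.0], [[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]], top_k=2)
```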
Example 3
Experiments were performed to verify the feasibility of the schemes in Examples 1 and 2, as detailed below:
Six evaluation indexes common in three-dimensional model retrieval [9] are adopted to measure the performance of the method: NN, FT, ST, F, DCG and ANMRR.
The six evaluation criteria are computed as follows:
NN: the percentage of queries whose nearest match belongs to the query's class;
FT (first tier): the recall over the first K relevant matches, where K is the cardinality of the query's class;
ST (second tier): the recall over the first 2K relevant matches;
F: a combined measure of the precision and recall of a fixed number of retrieval results;
DCG: a statistical measure that assigns higher weights to three-dimensional models relevant to the query image and lower weights to irrelevant ones;
ANMRR: measures ranking performance with a weighted ranking list, taking into account the rank information of the relevant models among the top retrieved models.
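Two of these indexes can be sketched as follows (a simplified illustration; the ranked list is assumed to exclude the query itself):

```python
def nn_ft(ranked_labels, query_label, class_size):
    """NN: 1.0 if the nearest match belongs to the query's class, else 0.0.
    FT: recall within the first K results, K being the class cardinality."""
    nn = 1.0 if ranked_labels[0] == query_label else 0.0
    ft = sum(1 for lbl in ranked_labels[:class_size] if lbl == query_label) / class_size
    return nn, ft

# nearest match is a chair, and 2 of the first 3 results are chairs
nn, ft = nn_ft(["chair", "chair", "table", "chair"], "chair", 3)
```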
The higher the first five values, the better the method performs; for the sixth (ANMRR), lower is better. The embodiment of the invention uses ModelNet40 [3] together with images collected from the Internet as the dataset for evaluating performance. The table below compares this method (Ours) with a representative model retrieval method (AlexNet) and with cross-domain methods (MEDA, JGSA, JAN and RevGrad):
| Method | NN | FT | ST | F | DCG | ANMRR |
|---|---|---|---|---|---|---|
| AlexNet | 0.518 | 0.355 | 0.488 | 0.355 | 0.383 | 0.629 |
| MEDA | 0.570 | 0.392 | 0.523 | 0.392 | 0.425 | 0.590 |
| JGSA | 0.585 | 0.405 | 0.533 | 0.405 | 0.433 | 0.577 |
| JAN | 0.608 | 0.501 | 0.646 | 0.501 | 0.527 | 0.484 |
| RevGrad | 0.623 | 0.467 | 0.614 | 0.467 | 0.503 | 0.514 |
| Ours | 0.700 | 0.555 | 0.681 | 0.555 | 0.593 | 0.424 |
Compared with existing methods, the cross-domain model retrieval based on the multi-level feature alignment network proposed by the invention outperforms the current mainstream methods and handles the challenges of current cross-domain model retrieval well.
References:
[1] S. Jeannin. MPEG-7 Visual Part of eXperimentation Model Version 7 [J]. ISO/IEC JTC1/SC29/WG11 N3914, 2001.
[2] Zhang Fei. Research and Implementation of 3D Model Feature Extraction and Relevance Feedback Algorithms [D]. Northwest University, 2010.
[3] Wu Z, Song S, Khosla A, Yu F, Zhang L, Tang X, Xiao J. 3D ShapeNets: A deep representation for volumetric shapes [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[4] Savva M, Yu F, Su H, et al. Large-scale 3D shape retrieval from ShapeNet Core55 [C]// Eurographics Workshop on 3D Object Retrieval. Eurographics Association, 2016.
[5] Qi C R, Su H, Mo K, et al. PointNet: Deep learning on point sets for 3D classification and segmentation [J]. 2016.
[6] Su H, Maji S, Kalogerakis E, et al. Multi-view convolutional neural networks for 3D shape recognition [J]. 2015.
[7] Sugiyama M, Suzuki T, Nakajima S, et al. Direct importance estimation for covariate shift adaptation [J]. Annals of the Institute of Statistical Mathematics, 2008, 60(4): 699-746.
[8] Long M, Cao Z, Wang J, et al. Conditional adversarial domain adaptation [J]. 2017.
[9] Liu A, Nie W, Gao Y, et al. View-based 3-D model retrieval: A benchmark [J]. IEEE Transactions on Cybernetics, 2018.
In the embodiments of the present invention, except where specifically described, the models of the devices are not limited, as long as the devices can perform the functions described above.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the embodiments described above are numbered for description only and do not indicate relative merit.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalents, improvements and the like made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (3)
1. A cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network, characterized by comprising the following steps:
photographing the three-dimensional models in a three-dimensional model database with a virtual camera to generate multi-view data;
constructing a multi-level feature alignment network;
performing domain-level alignment of image features and three-dimensional model features through the alignment network and a discriminator;
computing, with the alignment network, the centroid of all features of each class in both the image domain and the three-dimensional model domain, taking the distance between same-class centroids as part of the loss function, and minimizing the loss function by the back-propagation algorithm;
and, when the loss function is minimized, extracting image and three-dimensional model features with the trained multi-level feature alignment network and performing cross-domain three-dimensional model retrieval.
2. The cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network as claimed in claim 1, wherein
the feature extraction part of the multi-level feature alignment network adopts a CNN as the base network, and the two CNN branches share parameters when extracting image features and the multi-view features of the three-dimensional model;
and a pooling layer aggregates the features of all views into a 3D descriptor as the feature representation of the three-dimensional model.
3. The cross-domain three-dimensional model retrieval method based on a multi-level feature alignment network as claimed in claim 1, wherein the loss function is specifically:
the sum of the classification loss, the discrepancy between the image domain and the three-dimensional model domain, and the distances between same-class centroids of the two domains;
wherein the domain discrepancy and the same-class centroid distance are each multiplied by a hyper-parameter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911061497.7A CN111078916A (en) | 2019-11-01 | 2019-11-01 | Cross-domain three-dimensional model retrieval method based on multi-level feature alignment network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111078916A true CN111078916A (en) | 2020-04-28 |
Family
ID=70310610
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911061497.7A Pending CN111078916A (en) | 2019-11-01 | 2019-11-01 | Cross-domain three-dimensional model retrieval method based on multi-level feature alignment network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111078916A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111914697A (en) * | 2020-07-16 | 2020-11-10 | 天津大学 | Multi-view target identification method based on view semantic information and sequence context information |
CN111914912A (en) * | 2020-07-16 | 2020-11-10 | 天津大学 | Cross-domain multi-view target identification method based on twin conditional countermeasure network |
CN113515657A (en) * | 2021-07-06 | 2021-10-19 | 天津大学 | Cross-modal multi-view target retrieval method and device |
CN116188830A (en) * | 2022-11-01 | 2023-05-30 | 青岛柯锐思德电子科技有限公司 | Hyperspectral image cross-domain classification method based on multi-level feature alignment |
CN116934970A (en) * | 2023-07-24 | 2023-10-24 | 天津大学 | Medical single view three-dimensional reconstruction device based on priori knowledge guidance |
CN117390210A (en) * | 2023-12-07 | 2024-01-12 | 山东建筑大学 | Building indoor positioning method, positioning system, storage medium and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308486A (en) * | 2018-08-03 | 2019-02-05 | 天津大学 | Multi-source image fusion and feature extraction algorithm based on deep learning |
CN110188228A (en) * | 2019-05-28 | 2019-08-30 | 北方民族大学 | Cross-module state search method based on Sketch Searching threedimensional model |
- 2019-11-01 — CN application CN201911061497.7A filed; published as CN111078916A; status: Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109308486A (en) * | 2018-08-03 | 2019-02-05 | 天津大学 | Multi-source image fusion and feature extraction algorithm based on deep learning |
CN110188228A (en) * | 2019-05-28 | 2019-08-30 | 北方民族大学 | Cross-module state search method based on Sketch Searching threedimensional model |
Non-Patent Citations (1)
Title |
---|
Heyu Zhou, An-An Liu, Weizhi Nie: "Dual-level Embedding Alignment Network for 2D Image-Based 3D Object Retrieval", Proceedings of the 27th ACM International Conference on Multimedia * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111914697A (en) * | 2020-07-16 | 2020-11-10 | 天津大学 | Multi-view target identification method based on view semantic information and sequence context information |
CN111914912A (en) * | 2020-07-16 | 2020-11-10 | 天津大学 | Cross-domain multi-view target identification method based on Siamese conditional adversarial network |
CN111914912B (en) * | 2020-07-16 | 2023-06-13 | 天津大学 | Cross-domain multi-view target identification method based on Siamese conditional adversarial network |
CN113515657A (en) * | 2021-07-06 | 2021-10-19 | 天津大学 | Cross-modal multi-view target retrieval method and device |
CN113515657B (en) * | 2021-07-06 | 2022-06-14 | 天津大学 | Cross-modal multi-view target retrieval method and device |
CN116188830A (en) * | 2022-11-01 | 2023-05-30 | 青岛柯锐思德电子科技有限公司 | Hyperspectral image cross-domain classification method based on multi-level feature alignment |
CN116188830B (en) * | 2022-11-01 | 2023-09-29 | 青岛柯锐思德电子科技有限公司 | Hyperspectral image cross-domain classification method based on multi-level feature alignment |
CN116934970A (en) * | 2023-07-24 | 2023-10-24 | 天津大学 | Medical single view three-dimensional reconstruction device based on priori knowledge guidance |
CN117390210A (en) * | 2023-12-07 | 2024-01-12 | 山东建筑大学 | Building indoor positioning method, positioning system, storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111078916A (en) | Cross-domain three-dimensional model retrieval method based on multi-level feature alignment network | |
CN110069656B (en) | Method for retrieving three-dimensional models from two-dimensional images based on a generative adversarial network | |
CN100456300C (en) | Method for searching 3D model based on 2D sketch | |
CN101477529B (en) | Three-dimensional object retrieval method and apparatus | |
CN102254015B (en) | Image retrieval method based on visual phrases | |
CN110188225B (en) | Image retrieval method based on sequencing learning and multivariate loss | |
CN105205135B (en) | 3D model retrieval method and retrieval device based on topic model | |
CN110851645A (en) | Image retrieval method based on similarity maintenance under depth metric learning | |
CN113255895B (en) | Structure diagram alignment method and multi-diagram joint data mining method based on diagram neural network representation learning | |
CN102024036B (en) | Three-dimensional object retrieval method and device based on hypergraphs | |
CN111914912B (en) | Cross-domain multi-view target identification method based on Siamese conditional adversarial network | |
CN109308486A (en) | Multi-source image fusion and feature extraction algorithm based on deep learning | |
CN105868706A (en) | Method for identifying 3D model based on sparse coding | |
CN111027140B (en) | Airplane standard part model rapid reconstruction method based on multi-view point cloud data | |
CN105678590B (en) | Cloud model-based topN recommendation method for social network | |
Tam et al. | Deformable model retrieval based on topological and geometric signatures | |
CN105893573B (en) | Location-based multi-modal media data topic extraction model | |
CN107145519B (en) | Image retrieval and annotation method based on hypergraph | |
CN111274332A (en) | Intelligent patent retrieval method and system based on knowledge graph | |
CN104462365A (en) | Multi-view target searching method based on probability model | |
CN106844620A (en) | View-based feature matching method for three-dimensional model retrieval | |
CN109472712A (en) | Efficient Markov random field community discovery method based on structural feature enhancement | |
CN101599077A (en) | Method for retrieving three-dimensional objects | |
CN101894267B (en) | Three-dimensional object characteristic view selection method | |
CN115795069A (en) | Two-stage three-dimensional model sketch retrieval method based on feature transfer |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2020-04-28 |