CN114647753B - Fine-grained sketch retrieval three-dimensional model method with multi-region space alignment - Google Patents
- Publication number: CN114647753B (application CN202210561621.1A)
- Authority
- CN
- China
- Prior art keywords
- dimensional model
- sketch
- depth map
- dimensional
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F16/532—Query formulation, e.g. graphical querying
- G06F16/583—Retrieval characterised by using metadata automatically derived from the content
- G06N3/045—Combinations of networks
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention relates to the field of sketch-based fine-grained retrieval of three-dimensional models and provides a multi-region spatially aligned fine-grained sketch-based three-dimensional model retrieval method comprising the following steps: render depth maps of every three-dimensional model under 3 viewing angles, and apply spatial-alignment preprocessing to the depth maps and the sketch data; construct a multi-region feature extraction network that extracts features from sketches and three-dimensional model depth maps simultaneously, and train it under the joint supervision of an identity consistency loss, a region consistency similarity loss and a batch-hard triplet loss to obtain trained feature extraction networks for sketches and depth maps; extract features of the query sketch and of the depth maps of the three-dimensional models to be retrieved with the trained networks, and rank the candidates by cosine distance of the features to obtain the retrieved three-dimensional model result. The method fully accounts for the multi-region differences between a sketch and the rendered views of the same instance and effectively improves retrieval precision.
Description
Technical Field
The invention relates to the field of sketch-based fine-grained retrieval of three-dimensional models, and in particular to a multi-region spatially aligned fine-grained sketch-based three-dimensional model retrieval method.
Background
Instance-level three-dimensional model retrieval can effectively serve fields such as VR, AR and 3D printing. Compared with text, a sketch is better suited to conveying detail. Existing sketch-based three-dimensional model retrieval mainly addresses category-level retrieval; sketch-based fine-grained retrieval instead addresses instance-level retrieval, i.e., finding the individual three-dimensional model that matches the sketch.
The main challenges in sketch-based fine-grained retrieval of three-dimensional models are: 1) the domain gap between a two-dimensional sketch and a three-dimensional model: a sketch expresses only the contour under one viewing angle, while a three-dimensional model carries depth information and covers all viewing angles; 2) viewing-angle differences: a sketch and a three-dimensional model rendered under different viewing angles differ greatly in appearance; 3) differences between a sketch and a three-dimensional model under the same viewing angle are spread over several regions and are hard to distinguish through global features alone.
Owing to the lack of data sets, work on sketch-based fine-grained retrieval of three-dimensional models remains scarce. Qi et al. published the first data set for fine-grained sketch-based three-dimensional model retrieval and benchmarked existing sketch-based retrieval methods on it. On the fine-grained task, projection-based methods clearly outperform non-projection methods and effectively reduce the domain gap. Qi et al. focus on matching the projection viewing angles of the sketch and the three-dimensional model, however, and leave the multi-region differences under the same viewing angle unaddressed.
Disclosure of Invention
Aiming at the multi-region differences between a sketch and the projected images of a three-dimensional model, the invention provides a multi-region spatially aligned fine-grained sketch-based three-dimensional model retrieval method that effectively improves retrieval precision.
The method provided by the invention comprises the following steps:
Step 1: render depth maps of all three-dimensional models under 3 viewing angles, and apply spatial-alignment preprocessing to the depth maps and the sketch data.
Step 2: construct a multi-region feature extraction network that extracts features from sketches and three-dimensional model depth maps simultaneously, train it under the joint supervision of an identity consistency loss, a region consistency similarity loss and a batch-hard triplet loss (Batch Hard Triplet Loss), and update the network parameters by back-propagation to obtain the trained multi-region feature extraction networks for sketches and depth maps.
Step 3: extract the features of the query sketch and of the depth maps of the three-dimensional models to be retrieved with the networks trained in step 2, and rank the candidates by cosine distance of the features to obtain the retrieved three-dimensional model result.
In the above technical solution, step 1 comprises the following sub-steps:
Step 1.1: using the mesh_to_sdf library, render three depth maps of the three-dimensional model at azimuth angles of 0, 45 and 90 degrees.
Step 1.2: read the sketch image or depth-map image with the OpenCV library and convert the RGB image to gray scale.
Step 1.3: traverse all pixels of the gray-scale image, find all pixels with a gray value below 250, and record the minimum and maximum row and column indices among them.
Step 1.4: crop the original image with the OpenCV library according to the recorded minimum and maximum row and column indices to obtain a cropped image.
Step 1.5: compute the aspect ratio of the cropped image, scale the larger of its width and height to 250 pixels, and scale the smaller of its width and height by the same factor to obtain a scaled image.
Step 1.6: pad the scaled image with a white border using the OpenCV library so that the final image size is 256 x 256.
In the above technical solution, step 2 comprises the following sub-steps:
Step 2.1: construct two network branches with the same structure but unshared parameters for extracting sketch features and three-dimensional model depth-map features; each branch uses Resnet50 as the backbone, with an adaptive pooling layer of output size (3, 1) added after layer4 of Resnet50, yielding pooled upper, middle and lower parts.
Step 2.2: after each of the upper, middle and lower regions of the sketch and of the three-dimensional model depth map, add a 1x1 convolution layer, a BatchNorm layer and a LeakyReLU layer to obtain 256-dimensional feature vectors.
Step 2.3: connect the 256-dimensional feature vector of the upper region of the sketch and that of the upper region of the depth map to the same classification layer; likewise connect the middle and lower regions of the sketch and the depth map to their respective classification layers, giving 3 classification layers in total.
Step 2.4: train under the supervision of the identity consistency loss, the region consistency similarity loss and the batch-hard triplet loss to obtain the trained sketch and depth-map feature extraction networks.
In the above technical solution, step 3 comprises the following sub-steps:
Step 3.1: render all three-dimensional models to be retrieved into depth maps under 3 viewing angles as in step 1 and apply spatial-alignment processing; apply the same spatial-alignment processing to the query sketch.
Step 3.2: feed all spatially aligned candidate depth maps into the trained depth-map branch to extract the merged features of their upper, middle and lower regions, and feed the spatially aligned query sketch into the trained sketch branch to extract the merged features of its upper, middle and lower regions.
Step 3.3: compute the cosine distances between the query-sketch features and the features of all candidate depth maps, sort from large to small, take the maximum cosine distance over a model's three views as the similarity between that model and the sketch, and de-duplicate the sorted list to obtain the retrieval result.
Compared with the prior art, the invention has the following beneficial effects:
(1) Spatial-alignment preprocessing of the sketch and the depth maps rendered from the three-dimensional model facilitates the subsequent extraction of spatially aligned multi-region local features.
(2) Existing sketch-based fine-grained retrieval of three-dimensional models compares similarity with global features, yet the differences between individuals of the same category are usually spread over several local regions, and highly similar instances are hard to distinguish through global features alone. The invention designs a multi-region local feature extraction network and, through the joint supervision of the identity consistency loss, the region consistency similarity loss and the batch-hard triplet loss, enlarges the feature distance between classes, reduces the cross-domain feature distance of the same instance, and effectively improves retrieval precision.
Drawings
FIG. 1 is a network architecture diagram of a multi-region spatially aligned fine-grained sketch retrieval three-dimensional model according to the present invention.
FIG. 2 is a flow chart of an embodiment of the present invention.
Detailed Description
As shown in fig. 2, the multi-region spatially aligned fine-grained sketch-based three-dimensional model retrieval method provided in the embodiment of the present invention mainly comprises the following steps:
Step 1: render depth maps of all three-dimensional models under 3 viewing angles, and apply spatial-alignment preprocessing to the sketches and the rendered depth maps.
Step 2: construct the multi-region feature extraction network for sketches and three-dimensional model depth maps, and train it under the joint supervision of the identity consistency loss, the region consistency similarity loss and the batch-hard triplet loss to obtain the trained network.
Step 3: with the networks trained in step 2, extract the features of the query sketch and of the depth maps of the candidate three-dimensional models under 3 viewing angles, compute the cosine distances between the query-sketch features and all candidate depth-map features under the 3 viewing angles, sort from large to small, and take the maximum distance over the 3 viewing angles as the similarity between a model and the query sketch to obtain the retrieval result.
Fig. 1 is a network architecture diagram of a multi-region spatially aligned fine-grained sketch retrieval three-dimensional model according to the present invention, which includes: (1) inputting a multi-view sketch and a multi-view three-dimensional model depth map; (2) a sketch and a three-dimensional model depth map multi-region feature extraction network; (3) identity consistency loss, region consistency similarity loss, and batch internal difficult sample loss.
Next, the method will be described in detail.
Step 1 specifically comprises:
Step 1.1: using the mesh_to_sdf library, set the pitch angle of the camera facing the three-dimensional model to 0 degrees and render one depth map at each azimuth angle of 0, 45 and 90 degrees, obtaining three depth maps.
Step 1.2: read the sketch image or depth-map image with the OpenCV library and convert the RGB image to gray scale.
Step 1.3: traverse the gray-scale image row by row and column by column; for every pixel with a gray value below 250, compare its row and column indices with the current minimum and maximum row and column indices and update them accordingly.
Step 1.4: crop the original image with the OpenCV library according to the obtained minimum and maximum row and column indices to obtain a cropped image.
Step 1.5: compute the aspect ratio of the cropped image, scale the larger of its width and height to 250 pixels, and scale the smaller of its width and height by the same factor to obtain a scaled image.
Step 1.6: pad the scaled image with a white border using the OpenCV library so that the final image size is 256 x 256.
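Steps 1.2 to 1.6 can be sketched as follows. This is an illustrative NumPy re-implementation, not the authors' code: the nearest-neighbour resize and centred white padding stand in for the OpenCV calls named above, and the function name spatial_align is invented for illustration.

```python
import numpy as np

def spatial_align(gray, content_thresh=250, target=250, canvas=256):
    """Crop a gray-scale sketch/depth image to its content, scale the longer
    side to `target` pixels, and pad with white to a `canvas` x `canvas` image."""
    # Step 1.3: locate all pixels darker than the threshold (the content).
    rows, cols = np.where(gray < content_thresh)
    # Step 1.4: crop to the content's bounding box.
    crop = gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    h, w = crop.shape
    # Step 1.5: scale so the longer side becomes `target`, keeping aspect ratio.
    scale = target / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    ri = (np.arange(nh) * h / nh).astype(int)   # nearest-neighbour resize,
    ci = (np.arange(nw) * w / nw).astype(int)   # standing in for cv2.resize
    resized = crop[ri][:, ci]
    # Step 1.6: pad with white so the final image is canvas x canvas.
    out = np.full((canvas, canvas), 255, dtype=gray.dtype)
    top, left = (canvas - nh) // 2, (canvas - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```
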
Step 2 specifically comprises:
Step 2.1: construct two network branches with the same structure but unshared parameters for extracting sketch features and three-dimensional model depth-map features; each branch uses Resnet50 as the backbone, with an adaptive pooling layer of output size (3, 1) added after layer4 of Resnet50, which yields 2048-dimensional feature vectors for the upper, middle and lower parts split along the horizontal direction.
Step 2.2: for the upper, middle and lower regions of the sketch and of the three-dimensional model depth map, add a 1x1 convolution layer, a BatchNorm layer and a LeakyReLU layer after each region's feature vector to obtain 256-dimensional feature vectors.
Step 2.3: connect the 256-dimensional feature vector of the upper region of the sketch and that of the upper region of the depth map to the same classification layer; likewise connect the middle and lower regions to their respective classification layers, so the sketch and the depth map share 3 classification layers in total.
Step 2.4: train under the supervision of the identity consistency loss, the region consistency similarity loss and the batch-hard triplet loss to obtain the trained sketch and depth-map feature extraction networks.
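A minimal NumPy sketch of the region split in steps 2.1 and 2.2: the adaptive (3, 1) pooling averages the backbone feature map into upper, middle and lower vectors, and a 1x1 convolution on a (3, 1) map is just a per-region linear projection. The shapes follow the patent, but the omission of BatchNorm, the LeakyReLU slope of 0.1, and the function name are illustrative simplifications:

```python
import numpy as np

def multi_region_features(fmap, proj):
    """fmap: (C, H, W) backbone feature map; proj: (256, C) weights of a
    1x1 convolution. Returns a (3, 256) array: upper/middle/lower features."""
    C, H, W = fmap.shape
    assert H % 3 == 0, "illustrative pooling assumes H divisible by 3"
    # Adaptive average pooling to (3, 1): mean over each horizontal stripe.
    stripes = fmap.reshape(C, 3, H // 3, W).mean(axis=(2, 3))   # (C, 3)
    # A 1x1 convolution on a (3, 1) map is a per-region linear projection;
    # BatchNorm is omitted here and LeakyReLU (slope 0.1) is applied directly.
    z = proj @ stripes                                          # (256, 3)
    return np.where(z > 0, z, 0.1 * z).T                        # (3, 256)
```
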
Further, the identity consistency loss function is defined over the 256-dimensional features of the upper, middle and lower regions of the image, where w_j denotes the weight of the j-th class in the classification layer and r is a magnification scale.
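A plausible concrete form of this loss, assuming a scale-r softmax cross-entropy over the three per-region classification layers (an assumption for illustration; the notation f_i, w_{j,i}, y and C is introduced here, not taken from the source):

```latex
\mathcal{L}_{id} = -\frac{1}{3}\sum_{i\in\{\mathrm{up},\mathrm{mid},\mathrm{low}\}}
\log\frac{\exp\!\big(r\, w_{y,i}^{\top} f_i\big)}
         {\sum_{j=1}^{C}\exp\!\big(r\, w_{j,i}^{\top} f_i\big)}
```

with f_i the 256-dimensional feature of region i, w_{j,i} the classification-layer weight of class j for region i, y the ground-truth instance label, and C the number of training instances.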
The region consistency similarity loss function relates the 256-dimensional feature vectors of the upper, middle and lower regions of the sketch to the corresponding 256-dimensional feature vectors of the upper, middle and lower regions of the three-dimensional model depth map.
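A form consistent with this description, assuming the loss simply pulls corresponding sketch and depth-map region features together by cosine similarity (the symbols s_i and v_i are introduced for illustration and are not the patent's verbatim notation):

```latex
\mathcal{L}_{rc} = \sum_{i\in\{\mathrm{up},\mathrm{mid},\mathrm{low}\}}
\big(1 - \cos(s_i,\, v_i)\big)
```

where s_i and v_i are the 256-dimensional features of region i of the sketch and the depth map respectively.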
For the batch-hard triplet loss: P sketch identities and the corresponding P three-dimensional models are randomly selected in each mini-batch; for each identity, sketches hand-drawn under K viewing angles and depth maps rendered under K viewing angles are selected, and D denotes the cosine distance.
Step 3 specifically comprises:
Step 3.1: render all three-dimensional models to be retrieved into depth maps under 3 viewing angles as in step 1 and apply spatial-alignment processing; apply the same processing to the query sketch.
Step 3.2: feed all spatially aligned candidate depth maps into the depth-map branch trained in step 2 to extract the merged features of their upper, middle and lower regions, and feed the spatially aligned query sketch into the sketch branch trained in step 2 to extract the merged features of its upper, middle and lower regions.
Step 3.3: compute the cosine distances between the query-sketch features and the features of all candidate depth maps, sort from large to small, take the maximum cosine distance over a model's 3 viewing angles as that model's similarity, and de-duplicate the sorted list to obtain the retrieval result.
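Step 3.3 can be sketched as follows, reading the patent's "cosine distance sorted from large to small" as cosine similarity. This is an illustrative NumPy version, not the authors' code; the function name and the per-model (3, d) feature layout are assumptions:

```python
import numpy as np

def rank_models(query_feat, view_feats):
    """query_feat: (d,) sketch feature; view_feats: (n_models, 3, d),
    one feature per rendered view. Returns model indices, best first."""
    q = query_feat / np.linalg.norm(query_feat)
    v = view_feats / np.linalg.norm(view_feats, axis=-1, keepdims=True)
    # Cosine similarity of the query to every view, then max over the 3 views.
    scores = (v @ q).max(axis=1)            # (n_models,)
    # Sort models by descending similarity; there is one score per model,
    # so no further de-duplication is needed at this granularity.
    return np.argsort(-scores), scores
```
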
To verify the performance of the proposed multi-region spatially aligned fine-grained sketch-based three-dimensional model retrieval method, it was evaluated on the Chair data set, the sketch-based fine-grained three-dimensional model retrieval data set used in prior work. The Chair data set, published in 2021, consists of 1005 three-dimensional chair models; each model has three hand-drawn sketches, drawn at a pitch angle of 20 degrees and azimuth angles of 0, 30 and 75 degrees respectively. The training set contains 804 chair models and 2412 hand-drawn sketches; the test set contains 201 chair models and 603 hand-drawn sketches. Rank-1 and Rank-5 are used as evaluation metrics; the comparison with other state-of-the-art methods is shown in Table 1.
TABLE 1
Wherein:
SBSVSR (reference: Q. Yu, F. Liu, Y.-Z. Song, T. Xiang, T. M. Hospedales, and C. C. Loy, "Sketch me that shoe," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 799-.)
FG-T-M (reference: H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller, "Multi-view convolutional neural networks for 3D shape recognition," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 945-953.)
FG-T-A-M (reference: J. Song, Q. Yu, Y.-Z. Song, T. Xiang, and T. M. Hospedales, "Deep spatial-semantic attention for fine-grained sketch-based image retrieval," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Oct. 2017, pp. 5551-.)
FG-T-V (reference: X. He, T. Huang, S. Bai, and X. Bai, "View N-gram network for 3D object retrieval," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 7515-.)
FG-T-P (reference: R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 652-.)
FG-T-S (reference: C. Esteves, C. Allen-Blanchette, A. Makadia, and K. Daniilidis, "Learning SO(3) equivariant representations with spherical CNNs," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 52-68.)
Method proposed by Qi (reference: A. Qi et al., "Toward Fine-Grained Sketch-Based 3D Shape Retrieval," IEEE Trans. Image Process., vol. 30, pp. 8595-, 2021.)
Through experimental comparison, the method for searching the three-dimensional model by the spatially aligned fine-grained sketch has excellent performance.
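The Rank-1 and Rank-5 metrics used above measure the fraction of query sketches whose ground-truth model appears among the top 1 or top 5 ranked candidates. A small illustrative implementation (function and variable names are assumptions):

```python
import numpy as np

def rank_k_accuracy(rankings, gt, k):
    """rankings: (n_queries, n_models) array of model indices, best first;
    gt: (n_queries,) ground-truth model index per query.
    Returns the fraction of queries whose ground truth is in the top k."""
    hits = [g in r[:k] for r, g in zip(rankings, gt)]
    return float(np.mean(hits))
```
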
Details not described in the present specification belong to the prior art known to those skilled in the art.
Although the present invention has been described in detail with reference to embodiments, the invention is not limited to these embodiments; those skilled in the art can make various modifications according to the principle of the invention and apply parts of the method to other systems. Modifications made in accordance with the principle of the present invention should therefore be understood to fall within its scope.
Claims (2)
1. A method for searching a three-dimensional model by a multi-region spatially aligned fine-grained sketch is characterized by comprising the following steps:
rendering depth maps of all three-dimensional models projected under 3 visual angles, and performing spatial alignment preprocessing on the depth maps and sketch data;
constructing a multi-region feature extraction network for simultaneously extracting the sketch features and the three-dimensional model depth map, performing joint supervision training by using identity consistency loss, region consistency similarity loss and difficult sample loss in batches, and updating network parameters through back propagation to obtain the trained sketch and three-dimensional model depth map multi-region feature extraction network;
respectively extracting the characteristics of the query sketch and the three-dimensional model depth map to be retrieved by utilizing the trained sketch and the three-dimensional model depth map multi-region characteristic extraction network, and performing characteristic similarity sorting by adopting cosine distance to obtain a retrieved three-dimensional model result;
wherein, the step of rendering the depth maps under the projection of 3 visual angles of all the three-dimensional models and performing spatial alignment preprocessing on the depth maps and the sketch data comprises the following steps,
rendering three depth maps of the three-dimensional model at azimuth angles of 0, 45 and 90 degrees by using a mesh_to_sdf library;
reading a sketch image or a depth-map image by using an OpenCV library, and converting the RGB image into a gray-scale image;
traversing all pixel values of the gray-scale image, finding all pixels with a gray value smaller than 250, and recording the minimum and maximum row and column indices among them;
cropping the original image by using the OpenCV library according to the obtained minimum and maximum row and column indices to obtain a cropped image;
calculating the aspect ratio of the cropped image, scaling the larger of its width and height to 250 pixels, and scaling the smaller of its width and height by the same factor to obtain a scaled image;
padding the scaled image with a white border by using the OpenCV library so that the final image size is 256 x 256;
the construction of the multi-region feature extraction network for simultaneously extracting the sketch features and the three-dimensional model depth map, the joint supervision training of identity consistency loss, region consistency similarity loss and difficult sample loss in batches and the updating of network parameters through back propagation comprises the following steps,
constructing two network branches with the same structure and different parameters for extracting sketch features and a three-dimensional model depth map, wherein each branch takes Resnet50 as a reference network, an adaptive pooling layer is added behind layer4 of Resnet50, the parameters of the pooling layer are (3, 1), and the upper part, the middle part and the lower part after pooling are obtained;
adding a 1x1 convolution layer, a BatchNorm layer and a LeakyReLU layer behind each part of the upper, middle and lower regions of the sketch and the upper, middle and lower regions of the three-dimensional model depth map to obtain 256-dimensional feature vectors;
the 256-dimensional characteristic vectors of the upper area of the sketch and the 256-dimensional characteristic vectors of the upper area of the three-dimensional model depth map are connected with the same classification layer, and similarly, the middle area and the lower area of the sketch and the three-dimensional model depth map are respectively connected with the classification layers to form 3 classification layers;
and jointly supervising and training by using identity consistency loss, similarity consistency loss and batch internal difficult sample loss to obtain a trained sketch and a three-dimensional model depth map feature extraction network.
2. The method of claim 1 for retrieving a three-dimensional model from a multi-region spatially aligned fine-grained sketch, characterized in that: the method comprises the following steps of respectively extracting the characteristics of a query sketch and a three-dimensional model depth map to be retrieved by utilizing a multi-region characteristic extraction network of a sketch and a three-dimensional model depth map which are finished by training, and sequencing the characteristic similarity by adopting cosine distance to obtain a retrieved three-dimensional model result,
rendering all three-dimensional models to be retrieved into depth maps under 3 visual angles, performing spatial alignment processing, and performing spatial alignment processing on query sketches;
inputting all three-dimensional model depth maps to be retrieved after spatial alignment processing into a trained three-dimensional model depth map extraction network to extract the merging characteristics of the upper, middle and lower regions of the three-dimensional model depth maps, and inputting the query sketch after spatial alignment processing into the trained sketch feature extraction network to extract the merging characteristics of the upper, middle and lower regions of the query sketch;
calculating cosine distances of the characteristics of the query sketch and the characteristics of all depth maps of the three-dimensional model to be retrieved, sorting according to the cosine distances from large to small, taking the maximum value of the cosine distances in the depth maps under three visual angles of the three-dimensional model as the similarity between the three-dimensional model and the sketch, and carrying out duplicate removal treatment on the sorting result to obtain the retrieval result of the three-dimensional model.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202210561621.1A | 2022-05-23 | 2022-05-23 | Fine-grained sketch retrieval three-dimensional model method with multi-region space alignment |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN114647753A | 2022-06-21 |
| CN114647753B | 2022-08-12 |
Family
ID=81996511
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202210561621.1A | Fine-grained sketch retrieval three-dimensional model method with multi-region space alignment | 2022-05-23 | 2022-05-23 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN114647753B (en) |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6351350B2 (en) * | 2014-04-04 | 2018-07-04 | Toyohashi University of Technology | 3D model retrieval system and 3D model retrieval method |
CN110188228B (en) * | 2019-05-28 | 2021-07-02 | 北方民族大学 | Cross-modal retrieval method based on sketch retrieval three-dimensional model |
US11361505B2 (en) * | 2019-06-06 | 2022-06-14 | Qualcomm Technologies, Inc. | Model retrieval for objects in images using field descriptors |
CN111488474B (en) * | 2020-03-21 | 2022-03-18 | 复旦大学 | Fine-grained freehand sketch image retrieval method based on attention enhancement |
CN112069336B (en) * | 2020-08-04 | 2022-10-14 | 中国科学院软件研究所 | Fine-grained image retrieval method and system based on scene sketch |
CN112085072B (en) * | 2020-08-24 | 2022-04-29 | 北方民族大学 | Cross-modal retrieval method of sketch retrieval three-dimensional model based on space-time characteristic information |
CN113392244A (en) * | 2021-06-10 | 2021-09-14 | 北京印刷学院 | Three-dimensional model retrieval method and system based on depth measurement learning |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |