CN105930382A - Method for searching for 3D model with 2D pictures - Google Patents
Method for searching for 3D model with 2D pictures Download PDFInfo
- Publication number: CN105930382A
- Application number: CN201610230860.3A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5838—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The invention relates to the field of data search and provides a cross-modal method for retrieving 3D models with 2D pictures. The method comprises the following steps: 1) building a 3D model library; 2) extracting feature vectors; 3) training a convolutional neural network; 4) inputting the feature vectors; 5) completing matching and retrieval. The method projects 3D objects and 2D pictures into a new space in which the similarity of 3D and 2D data can be measured, thereby solving the problem that the similarity of 3D and 2D data cannot be measured or retrieved because of their different data formats. At the same time, the invention provides an end-to-end solution that is more efficient and offers better real-time performance than traditional frameworks.
Description
Technical field
The present invention relates to the field of data search, and specifically to a cross-modal method for retrieving 3D models with 2D pictures.
Background art
3D mathematical models are widely used in daily life, for example in 3D printing, computer numerical control (CNC) manufacturing, 3D film and television, virtual reality (VR), and finite element analysis (FEA). Many other applications exist as well, such as building a 3D scene model from 2D images.

Among existing methods for building a 3D scene model from 2D images, the most traditional is to model each object in 3D separately and then place the 3D models in the given scene. However, 3D modeling usually takes a long time, especially when a complete 3D scene must be built, so it is difficult to finish the whole task. Another approach is to search for 3D models based on the 2D information of particular objects. However, directly matching different data types is very difficult, so this approach is hard to realize.
Summary of the invention
The object of the present invention is to provide a method, based on deep learning, that associates the 3D shape of an object with 2D pictures so that retrieval can be performed directly across the different formats, achieving accelerated multi-domain object retrieval.
To achieve this goal, the present invention adopts the following technical scheme:
A method for retrieving 3D models with 2D pictures comprises the following steps:

S1. Build a 3D model library. The 3D model library contains a 3D model set for each of multiple objects. The 3D model set of each object comprises a basic model set and a training model set: the basic model set contains multiple different 3D basic models of the object obtained by collection, and the training model set contains 3D training models with different textures and viewing angles, produced by rendering the 3D basic models of the basic model set.

S2. Align all basic models and training models of each object in the 3D model library by a rigid transformation of their whole; produce multiple 2D pictures by projection from different viewing angles and against different backgrounds; and extract the feature vectors of these 2D pictures to form the feature vector set Pi of the object.

S3. Build a convolutional neural network comprising an input layer, several unit modules, and an output layer. Each unit module contains a convolutional layer and a pooling layer, and the output layer is a Euclidean distance loss layer used to compute the similarity between a 2D picture and the corresponding 3D model.

S4. Take the 2D picture to be matched, of arbitrary size; transform it to fixed dimensions with image processing techniques; extract its feature vector Fi; and input it into the convolutional neural network. At the same time, input the feature vector set Pi of each object in the 3D model library into the convolutional neural network.

S5. The convolutional neural network fits the features of the 2D picture to be matched against the 3D models of the objects in the 3D model library and computes similarities. Based on the similarity results, the 2D picture to be matched is matched with an object model in the 3D model library, completing the retrieval.
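The matching in step S5 ultimately reduces to a nearest-neighbour search in the shared feature space. As a minimal sketch (not the patent's trained network — the toy library, feature dimensions, and `retrieve` helper below are illustrative assumptions):

```python
import numpy as np

def retrieve(query_vec, model_feature_sets):
    """Return the library model whose stored view-feature set lies
    closest (smallest Euclidean distance over all views) to the
    feature vector of the query 2D picture."""
    best_key, best_dist = None, float("inf")
    for key, feats in model_feature_sets.items():
        # feats: (n_views, dim) array of per-view feature vectors Pi
        d = float(np.linalg.norm(feats - query_vec, axis=1).min())
        if d < best_dist:
            best_key, best_dist = key, d
    return best_key, best_dist

# Toy library of two objects, each with two rendered-view features
library = {
    "chair": np.array([[1.0, 0.0], [0.9, 0.1]]),
    "table": np.array([[0.0, 1.0], [0.1, 0.9]]),
}
Fi = np.array([0.95, 0.05])      # feature vector of the picture to match
match, dist = retrieve(Fi, library)
```

Here the query is closest to the "chair" features, so that model would be returned.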
Further, in step S2, the different viewing angles comprise 10 to 50 angles, and the different backgrounds comprise different lighting conditions and different background features.
Further, in step S2, the methods for extracting the feature vectors of the 2D pictures include the scale-invariant feature transform (SIFT), the histogram of oriented gradients (HOG), and local binary patterns (LBP).
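A toy illustration of the HOG idea — a histogram of gradient orientations weighted by gradient magnitude, computed here over the whole image rather than per cell as in the full descriptor (the test image and bin count below are illustrative assumptions):

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Minimal HOG-style descriptor: histogram of unsigned gradient
    orientations over the whole image, weighted by gradient magnitude
    and L2-normalised."""
    gy, gx = np.gradient(img.astype(float))       # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # fold into [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

ramp = np.tile(np.arange(8.0), (8, 1))   # horizontal ramp: purely horizontal gradient
h = orientation_histogram(ramp)          # energy concentrates in one bin
```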
Further, in step S2, when extracting the feature vectors of the 2D pictures, principal component analysis, linear discriminant analysis, orthogonal Laplacianfaces analysis, or marginal Fisher analysis is used to reduce the feature dimension and improve matching efficiency.
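The PCA-based dimension reduction mentioned here can be sketched with a plain SVD; the data shapes and the `pca_reduce` helper are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X (n_samples, dim) onto the top-k principal
    components of the centred data, shrinking the feature dimension
    while preserving as much variance as possible."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# 100 feature vectors that actually live on a 2-D subspace of R^10
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 10))
Z = pca_reduce(X, 2)   # 10-D features reduced to 2-D with no variance lost here
```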
Further, before the feature vector Fi is extracted, the 2D picture to be matched is also processed by denoising and an illumination equalization algorithm.
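The illumination equalization algorithm is not specified further; a common choice is histogram equalization, sketched below for an 8-bit grayscale image (the `equalize` helper and the test image are assumptions):

```python
import numpy as np

def equalize(img):
    """Histogram-equalise an 8-bit grayscale image so its intensities
    spread over the full 0..255 range, reducing the influence of
    uneven illumination before features are extracted."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()              # cdf at the darkest present level
    scale = 255.0 / (cdf[-1] - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(1)
dark = np.clip(rng.normal(40, 5, (32, 32)), 0, 255).astype(np.uint8)
flat = equalize(dark)                         # now spans the full intensity range
```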
Further, in step S5, while computing similarities, the convolutional neural network iteratively updates its parameters based on the Euclidean distance residuals of the different models computed in each pass, making the results more accurate.
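This iterative update can be illustrated on a toy linear projection: each pass computes the Euclidean residual of a matched pair and nudges the parameters along its gradient. All sizes, the unit-norm feature, and the learning rate below are illustrative assumptions, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # hypothetical projection parameters
x = rng.normal(size=8)               # feature of the 2D picture
x /= np.linalg.norm(x)               # unit-norm feature keeps the step stable
y = rng.normal(size=4)               # feature of the paired 3D model
lr = 0.5
residuals = []
for _ in range(200):
    r = W @ x - y                    # Euclidean residual of this pair
    residuals.append(float(np.linalg.norm(r)))
    W -= lr * np.outer(r, x)         # gradient of 0.5 * ||W x - y||^2 w.r.t. W
```

The recorded residual norms shrink geometrically, mirroring how repeated residual-driven updates sharpen the fit.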
In the method for retrieving 3D models with 2D pictures provided by the present invention, a newly built convolutional neural network performs the feature fitting of 2D pictures and 3D models and computes their similarity, so that the 2D picture to be matched is matched with a model in the 3D model library and the retrieval is completed. The inputs of the convolutional neural network are the feature vectors of the two-dimensional image and the three-dimensional models; each intermediate basic module contains a convolutional layer and a pooling layer, and the output layer is a Euclidean distance loss layer that replaces the commonly used Softmax layer. Conventional deep learning networks are essentially built for classification, and Softmax is the usual classifier model; but the primary problem the present invention must solve is measuring the difference between a 2D picture and a 3D model, and Softmax (a classifier) cannot quantify that difference accurately. Therefore, in the convolutional neural network of the present invention, a Euclidean distance loss layer replaces the Softmax layer to compute the difference and complete the similarity calculation between the 2D picture and the corresponding 3D model.
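The role of the Euclidean distance loss layer can be sketched as follows: unlike a softmax classifier output, it is a graded distance that directly quantifies how different two feature vectors are (the vectors below are illustrative):

```python
import numpy as np

def euclidean_loss(pred, target):
    """Forward pass of a Euclidean distance loss layer: half the squared
    L2 distance between the network output and the 3D model feature.
    Returns the scalar loss and its gradient w.r.t. pred."""
    diff = pred - target
    return 0.5 * float(diff @ diff), diff

target     = np.array([0.9, 0.1, 0.0])   # feature of the true 3D model
close_pred = np.array([1.0, 0.0, 0.0])   # feature of a similar picture
far_pred   = np.array([0.0, 1.0, 1.0])   # feature of a dissimilar picture
loss_close, _ = euclidean_loss(close_pred, target)
loss_far, _   = euclidean_loss(far_pred, target)
# smaller loss = higher similarity: a graded score a class-probability
# softmax output cannot provide
```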
In the method for retrieving 3D models with 2D pictures of the present invention, to increase search accuracy and extend the search scope, the collected 3D basic models of the objects are extended for training: by rendering them from different viewing angles and against different backgrounds, 3D training models with multiple textures and viewing angles are added, improving the generality of the model library.
In the method for retrieving 3D models with 2D pictures of the present invention, during matching the convolutional neural network iteratively updates its parameters based on the Euclidean distance residuals of the different models computed in each pass, further improving matching accuracy.
The benefits of the invention are as follows. The 3D models and 2D pictures are projected into a new space in which the similarity of 3D and 2D data can be measured, which solves the problem that similarity between 2D and 3D data cannot be measured or retrieved because of their different formats. At the same time, the solution of the present invention is more efficient than other traditional frameworks and offers better real-time performance. The training model of the method corresponds one to one with the model deployed in the final system; once deep learning is complete, the whole system can be put into real-time use.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for retrieving 3D models with 2D pictures of the present invention;
Fig. 2 is a schematic diagram of the structure of the convolutional neural network in the method of the present invention;
Fig. 3 is a schematic diagram of the basic composition of a unit module of the convolutional neural network in Fig. 2.
Detailed description of the invention
To aid further understanding of the present invention, preferred embodiments are described below with reference to examples. It should be understood that these descriptions merely further illustrate the features and advantages of the present invention and do not limit its claims.
In the method for retrieving 3D models with 2D pictures of the present invention, the overall flow is to project 3D objects and 2D pictures into a new space in which the similarity of 3D and 2D data can be measured. This approach has several advantages: first, it solves the problem that similarity between 2D and 3D data cannot be measured or retrieved because of their different formats; second, it proposes an end-to-end solution that is more efficient than other traditional frameworks and offers better real-time performance.
Specifically, as shown in Fig. 1, the method of the present invention comprises the following steps: building the 3D model library, extracting feature vectors, training the convolutional neural network, and inputting feature vectors to complete matching and retrieval.
1) In the stage of building the 3D model library, for each object the present invention collects multiple different 3D models and uses these shapes to construct the model library for testing the similarity between shapes and pictures. Creating the library from 3D models has many advantages. From a series of model libraries containing many 3D shapes, an embedded space that powerfully and richly represents the 3D models can be obtained. Unlike 2D pictures, 3D models are relatively stable under rotation, so they are less easily disturbed by the external environment and matching between them is more reliable. Moreover, a 3D model is a truer and more complete representation of an object, from which various feature information can more easily be extracted globally and locally from multiple angles; the 2D images obtained from it can therefore provide more accurate background information for the subsequent matching of 2D pictures, and the pairwise comparison between 2D pictures also carries more information and accuracy.
2) Extracting the feature vector set. To fully express the information of a given 3D shape, the present invention first applies a rigid transformation to all instances in the model library as a whole. A rigid transformation changes only the position (translation) and orientation (rotation) of an object, leaving its shape unchanged. The instances are then aligned, and pictures are produced by projection from k viewpoints. For each object, this process can be expressed as Ii = {Ii,v}, v = 1, ..., k, where i indexes the picture set and v indexes the viewing direction.
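The k-viewpoint projection can be sketched by spacing camera directions evenly around the aligned model; this minimal version samples only azimuths on one ring (covering the full viewing sphere would add elevations), and the `view_directions` helper name is an assumption:

```python
import numpy as np

def view_directions(k):
    """k unit camera directions spaced evenly in azimuth around the
    aligned model (one ring; full-sphere coverage would add elevation)."""
    az = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    return np.stack([np.cos(az), np.sin(az), np.zeros(k)], axis=1)

dirs = view_directions(12)   # e.g. one rendering every 30 degrees
```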
Preferably, k can be set to about 10-50. Alternatively, the viewing angles can be distributed evenly around the object so as to cover the whole viewing sphere. For each Ii a feature vector can be extracted. Methods for extracting feature vectors include the scale-invariant feature transform (SIFT), the histogram of oriented gradients (HOG), and local binary patterns (LBP). Given the current state of deep neural networks, feature values can also be obtained with a deep neural network. To further improve the stability of the overall model against size changes, feature values can also be extracted at multiple scales (different sizes). The feature values extracted from different angles and at different scales are concatenated to represent the given 3D model. To improve matching efficiency while preserving the original data distribution, the present invention applies PCA to reduce the feature dimension. Besides PCA, other machine learning methods can also be applied, such as linear discriminant analysis (LDA), orthogonal Laplacianfaces (OLPP), and marginal Fisher analysis (MFA).
3) Since real-life 2D images can be associated with the corresponding 3D models through a convolutional neural network (CNN), the present invention uses a CNN as the overall framework to associate real-world pictures with 3D models. The output layer of this convolutional neural network is a Euclidean distance loss layer used to compute the similarity between a 2D picture and the corresponding 3D model. A CNN is a variant of the neural network. Besides basic properties of neural networks such as fault tolerance, a CNN has features of its own: local connectivity, weight sharing, and down-sampling. In the present invention, the CNN consists of multiple layers of regularly interconnected neurons, including an input layer, convolutional layers, pooling layers, and an output layer. The input layer usually takes a grayscale image, though an RGB color image can also be used. A convolutional layer convolves the input image with a trainable filter and then adds a bias; after the convolution, a sub-sampling procedure reduces the scale of the image, which is the function of the pooling layer. Finally, a Euclidean distance loss unit forms the output layer. This part mainly computes the Euclidean distance between the input vector and a parameter vector; the larger the gap between them, the larger the output. In a CNN, each neuron receives its input from a local receptive field of the previous layer, so local features can be extracted while their approximate positions relative to other features are retained. The neurons of each intermediate layer are organized as feature maps, and several feature maps together constitute one hidden layer. The neuron nodes within one feature map share a common convolution kernel, i.e. share weights; this structure not only preserves translation invariance well but also reduces the number of weights to be trained.
As shown in Fig. 2, the basic parameters of a concrete CNN model are:
Input: picture of size 224 × 224, 3 channels;
First convolutional layer: 96 convolution kernels of size 5 × 5; first pooling layer: 2 × 2 kernel;
Second convolutional layer: 256 convolution kernels of size 3 × 3; second pooling layer: 2 × 2 kernel;
Third convolutional layer: 384 convolution kernels of size 3 × 3; third pooling layer: 2 × 2 kernel. The 4096-dimensional output of the third pooling stage serves as the input of the embedded space.
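Taking these figures at face value, the spatial sizes through the stack can be checked with the standard convolution output-size formula. Assuming unpadded ("valid") convolutions with stride 1 and 2 × 2 pooling with stride 2 (the patent does not state padding or strides), the third pooling output is 26 × 26 × 384, so the stated 4096-dimensional output presumably involves an additional fully connected projection:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 224
for kernel in (5, 3, 3):                # the three convolution stages
    size = conv_out(size, kernel)       # unpadded convolution, stride 1
    size = conv_out(size, 2, stride=2)  # 2x2 pooling, stride 2
# size: 224 -> 220 -> 110 -> 108 -> 54 -> 52 -> 26
```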
As shown in Fig. 3, each unit module includes a convolutional layer and a pooling layer. The operation of the convolutional layer enhances the original 2D picture and reduces picture noise, while the pooling layer exploits the local correlation of the image to sub-sample it, reducing the amount of data to be processed while retaining useful information.
In the overall framework of the above model, a new Euclidean distance loss layer replaces the softmax layer, and with it the distance between the feature vectors of real-life 2D pictures and the 3D models is computed. It serves as a criterion relating the original image to the object it contains: by stripping away interfering factors such as lighting, viewpoint, and background features, the image is projected into an embedded space relative to the object, which accelerates the comparison between images and shapes and between different shape figures. Transforming images into the embedded space and performing all comparisons there is essentially simulating a comparison between pure 3D shapes. This part mainly computes the Euclidean distance between the input vector and a parameter vector: the larger the gap between them, the larger the output; the smaller the output value, the higher the similarity or relatedness between the 3D model and the 2D picture.
In the framework of the present invention, the role of the convolutional neural network (CNN) includes learning the projections of the different data formats and recognizing object information. One of its biggest advantages is that it is data-driven, unlike previous methods based on manually designed, rule-based pipelines; this benefits greatly from today's big data and the progress of GPU technology. With more training data, a more accurate and powerful network can be obtained, and all of this happens automatically.
4) The end user uploads a picture of arbitrary size, and basic image processing converts the picture into fixed dimensions. For the given picture, the feature vector Fi is extracted and matched against the feature vector sets Pi of the different objects in the 3D feature space; based on the similarity results, the corresponding 3D model can be found and the whole search completed.
The above description of the embodiments is only intended to help in understanding the method of the present invention and its core idea. It should be pointed out that those skilled in the art can make improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the present invention.
Claims (6)
1. A method for retrieving 3D models with 2D pictures, characterized by comprising the following steps:
S1. Build a 3D model library, the 3D model library containing a 3D model set for each of multiple objects, wherein the 3D model set of each object comprises a basic model set and a training model set, the basic model set containing multiple different 3D basic models of the object obtained by collection, and the training model set containing 3D training models with different textures and viewing angles produced by rendering the 3D basic models of the basic model set;
S2. Align all basic models and training models of each object in the 3D model library by a rigid transformation of their whole, produce multiple 2D pictures by projection from different viewing angles and against different backgrounds, and extract the feature vectors of the 2D pictures to form the feature vector set Pi of the object;
S3. Build a convolutional neural network comprising an input layer, several unit modules, and an output layer, each unit module containing a convolutional layer and a pooling layer, the output layer being a Euclidean distance loss layer for computing the similarity between a 2D picture and the corresponding 3D model;
S4. Transform the 2D picture to be matched, of arbitrary size, into fixed dimensions with image processing techniques, extract its feature vector Fi, and input it into the convolutional neural network; at the same time, input the feature vector set Pi of each object in the 3D model library into the convolutional neural network;
S5. Perform, in the convolutional neural network, the feature fitting of the 2D picture to be matched against the 3D models of the objects in the 3D model library and compute similarities; based on the similarity results, match the 2D picture to be matched with an object model in the 3D model library, completing the retrieval.
2. The method for retrieving 3D models with 2D pictures according to claim 1, characterized in that in step S2 the different viewing angles comprise 10 to 50 angles, and the different backgrounds comprise different lighting conditions and different background features.
3. The method for retrieving 3D models with 2D pictures according to claim 1, characterized in that in step S2 the methods for extracting the feature vectors of the 2D pictures include the scale-invariant feature transform, the histogram of oriented gradients, and local binary patterns.
4. The method for retrieving 3D models with 2D pictures according to claim 3, characterized in that in step S2, when extracting the feature vectors of the 2D pictures, principal component analysis, linear discriminant analysis, orthogonal Laplacianfaces analysis, or marginal Fisher analysis is used to reduce the feature dimension and improve matching efficiency.
5. The method for retrieving 3D models with 2D pictures according to claim 1, characterized in that in step S4 the 2D picture to be matched is also processed by denoising and an illumination equalization algorithm before the feature vector Fi is extracted.
6. The method for retrieving 3D models with 2D pictures according to any one of claims 1-5, characterized in that in step S5, while computing similarities, the convolutional neural network iteratively updates its parameters based on the Euclidean distance residuals of the different models computed in each pass, so that the results are more accurate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610230860.3A CN105930382A (en) | 2016-04-14 | 2016-04-14 | Method for searching for 3D model with 2D pictures |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105930382A true CN105930382A (en) | 2016-09-07 |
Family
ID=56838164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610230860.3A Pending CN105930382A (en) | 2016-04-14 | 2016-04-14 | Method for searching for 3D model with 2D pictures |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105930382A (en) |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447707A (en) * | 2016-09-08 | 2017-02-22 | 华中科技大学 | Image real-time registration method and system |
CN106503143A (en) * | 2016-10-21 | 2017-03-15 | 广东工业大学 | A kind of image search method and device |
CN106649665A (en) * | 2016-12-14 | 2017-05-10 | 大连理工大学 | Object-level depth feature aggregation method for image retrieval |
CN106778856A (en) * | 2016-12-08 | 2017-05-31 | 深圳大学 | A kind of object identification method and device |
CN107066559A (en) * | 2017-03-30 | 2017-08-18 | 天津大学 | A kind of method for searching three-dimension model based on deep learning |
CN107315995A (en) * | 2017-05-18 | 2017-11-03 | 中国科学院上海微系统与信息技术研究所 | A kind of face identification method based on Laplce's logarithm face and convolutional neural networks |
CN107369204A (en) * | 2017-07-27 | 2017-11-21 | 北京航空航天大学 | A kind of method for recovering the basic three-dimensional structure of scene from single width photo based on deep learning |
CN107507126A (en) * | 2017-07-27 | 2017-12-22 | 大连和创懒人科技有限公司 | A kind of method that 3D scenes are reduced using RGB image |
CN108334627A (en) * | 2018-02-12 | 2018-07-27 | 北京百度网讯科技有限公司 | Searching method, device and the computer equipment of new media content |
CN108829784A (en) * | 2018-05-31 | 2018-11-16 | 百度在线网络技术(北京)有限公司 | Panorama recommended method, device, equipment and computer-readable medium |
CN108875080A (en) * | 2018-07-12 | 2018-11-23 | 百度在线网络技术(北京)有限公司 | A kind of image search method, device, server and storage medium |
CN108960288A (en) * | 2018-06-07 | 2018-12-07 | 山东师范大学 | Threedimensional model classification method and system based on convolutional neural networks |
CN109049716A (en) * | 2018-10-29 | 2018-12-21 | 北京航空航天大学 | Generation method, device, electronic equipment and the storage medium of 3 D-printing illustraton of model |
CN109344786A (en) * | 2018-10-11 | 2019-02-15 | 深圳步智造科技有限公司 | Target identification method, device and computer readable storage medium |
CN109684499A (en) * | 2018-12-26 | 2019-04-26 | 清华大学 | A kind of the solid object search method and system of free-viewing angle |
CN109711472A (en) * | 2018-12-29 | 2019-05-03 | 北京沃东天骏信息技术有限公司 | Training data generation method and device |
CN110019914A (en) * | 2018-07-18 | 2019-07-16 | 王斌 | A kind of three-dimensional modeling data storehouse search method for supporting three-dimensional scenic interaction |
CN110291358A (en) * | 2017-02-20 | 2019-09-27 | 欧姆龙株式会社 | Shape estimation device |
CN110458946A (en) * | 2019-08-09 | 2019-11-15 | 长沙眸瑞网络科技有限公司 | It constructs 3D model eigenvectors and 3D model method is searched for according to characteristics of image |
CN110879863A (en) * | 2018-08-31 | 2020-03-13 | 阿里巴巴集团控股有限公司 | Cross-domain search method and cross-domain search device |
CN112149691A (en) * | 2020-10-10 | 2020-12-29 | 上海鹰瞳医疗科技有限公司 | Neural network searching method and device for binocular vision matching |
CN112950786A (en) * | 2021-03-01 | 2021-06-11 | 哈尔滨理工大学 | Vehicle three-dimensional reconstruction method based on neural network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101350016A (en) * | 2007-07-20 | 2009-01-21 | 富士通株式会社 | Device and method for searching three-dimensional model |
CN102375831A (en) * | 2010-08-13 | 2012-03-14 | 富士通株式会社 | Three-dimensional model search device and method thereof and model base generation device and method thereof |
US20130317901A1 (en) * | 2012-05-23 | 2013-11-28 | Xiao Yong Wang | Methods and Apparatuses for Displaying the 3D Image of a Product |
Non-Patent Citations (1)
Title |
---|
FANG WANG et al.: "Sketch-based 3D shape retrieval using convolutional neural networks", Conference on Computer Vision and Pattern Recognition * |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106447707B (en) * | 2016-09-08 | 2018-11-16 | Huazhong University of Science and Technology | Image real-time registration method and system |
CN106447707A (en) * | 2016-09-08 | 2017-02-22 | Huazhong University of Science and Technology | Image real-time registration method and system |
CN106503143A (en) * | 2016-10-21 | 2017-03-15 | Guangdong University of Technology | Image retrieval method and device |
CN106778856A (en) * | 2016-12-08 | 2017-05-31 | Shenzhen University | Object recognition method and device |
US10417526B2 (en) | 2016-12-08 | 2019-09-17 | Shenzhen University | Object recognition method and device |
CN106649665A (en) * | 2016-12-14 | 2017-05-10 | Dalian University of Technology | Object-level depth feature aggregation method for image retrieval |
CN110291358B (en) * | 2017-02-20 | 2022-04-05 | Omron Corporation | Shape estimating device |
CN110291358A (en) * | 2017-02-20 | 2019-09-27 | Omron Corporation | Shape estimation device |
US11036965B2 (en) | 2017-02-20 | 2021-06-15 | Omron Corporation | Shape estimating apparatus |
CN107066559A (en) * | 2017-03-30 | 2017-08-18 | Tianjin University | Three-dimensional model retrieval method based on deep learning |
CN107066559B (en) * | 2017-03-30 | 2019-12-27 | Tianjin University | Three-dimensional model retrieval method based on deep learning |
CN107315995B (en) * | 2017-05-18 | 2020-07-31 | Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences | Face recognition method based on Laplace logarithmic face and convolutional neural network |
CN107315995A (en) * | 2017-05-18 | 2017-11-03 | Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences | Face recognition method based on Laplace logarithmic face and convolutional neural network |
CN107507126A (en) * | 2017-07-27 | 2017-12-22 | Hechuang Lanren (Dalian) Technology Co., Ltd. | Method for restoring 3D scene by using RGB images |
CN107369204A (en) * | 2017-07-27 | 2017-11-21 | Beihang University | Method for recovering basic three-dimensional structure of scene from single photo based on deep learning |
CN107507126B (en) * | 2017-07-27 | 2020-09-18 | Hechuang Lanren (Dalian) Technology Co., Ltd. | Method for restoring 3D scene by using RGB image |
CN107369204B (en) * | 2017-07-27 | 2020-01-07 | Beihang University | Method for recovering basic three-dimensional structure of scene from single photo |
CN108334627A (en) * | 2018-02-12 | 2018-07-27 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Search method, device and computer equipment for new media content |
CN108829784A (en) * | 2018-05-31 | 2018-11-16 | Baidu Online Network Technology (Beijing) Co., Ltd. | Panorama recommendation method, device, equipment and computer-readable medium |
CN108960288A (en) * | 2018-06-07 | 2018-12-07 | Shandong Normal University | Three-dimensional model classification method and system based on convolutional neural networks |
CN108875080A (en) * | 2018-07-12 | 2018-11-23 | Baidu Online Network Technology (Beijing) Co., Ltd. | Image retrieval method, device, server and storage medium |
CN110019914B (en) * | 2018-07-18 | 2023-06-30 | Wang Bin | Three-dimensional model database retrieval method supporting three-dimensional scene interaction |
CN110019914A (en) * | 2018-07-18 | 2019-07-16 | Wang Bin | Three-dimensional model database retrieval method supporting three-dimensional scene interaction |
CN110879863B (en) * | 2018-08-31 | 2023-04-18 | Alibaba Group Holding Ltd. | Cross-domain search method and cross-domain search device |
CN110879863A (en) * | 2018-08-31 | 2020-03-13 | Alibaba Group Holding Ltd. | Cross-domain search method and cross-domain search device |
CN109344786A (en) * | 2018-10-11 | 2019-02-15 | Shenzhen Buzhizao Technology Co., Ltd. | Target recognition method, device and computer-readable storage medium |
CN109049716A (en) * | 2018-10-29 | 2018-12-21 | Beihang University | Method, device, electronic equipment and storage medium for generating 3D-printing model diagrams |
CN109684499B (en) * | 2018-12-26 | 2020-11-06 | Tsinghua University | Free-view three-dimensional object retrieval method and system |
CN109684499A (en) * | 2018-12-26 | 2019-04-26 | Tsinghua University | Free-view three-dimensional object retrieval method and system |
CN109711472A (en) * | 2018-12-29 | 2019-05-03 | Beijing Wodong Tianjun Information Technology Co., Ltd. | Training data generation method and device |
CN110458946B (en) * | 2019-08-09 | 2022-11-04 | Changsha Mourui Network Technology Co., Ltd. | Method for constructing feature vector of 3D model and searching 3D model according to image features |
CN110458946A (en) * | 2019-08-09 | 2019-11-15 | Changsha Mourui Network Technology Co., Ltd. | Method for constructing 3D model feature vectors and searching 3D models according to image features |
CN112149691A (en) * | 2020-10-10 | 2020-12-29 | Shanghai Airdoc Medical Technology Co., Ltd. | Neural network searching method and device for binocular vision matching |
CN112149691B (en) * | 2020-10-10 | 2021-10-15 | Beijing Airdoc Technology Co., Ltd. | Neural network searching method and device for binocular vision matching |
CN112950786A (en) * | 2021-03-01 | 2021-06-11 | Harbin University of Science and Technology | Vehicle three-dimensional reconstruction method based on neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105930382A (en) | Method for searching for 3D model with 2D pictures | |
He et al. | Towards fast and accurate real-world depth super-resolution: Benchmark dataset and baseline | |
Song et al. | Region-based quality estimation network for large-scale person re-identification | |
Wu et al. | 3D ShapeNets for 2.5D object recognition and next-best-view prediction | |
Wu et al. | 3D ShapeNets: A deep representation for volumetric shapes | |
CN108038906B (en) | Three-dimensional quadrilateral mesh model reconstruction method based on image | |
CN112927354B (en) | Three-dimensional reconstruction method, system, storage medium and terminal based on example segmentation | |
CN110458957A (en) | Neural-network-based three-dimensional image model construction method and device | |
CN107229757A (en) | Video retrieval method based on deep learning and hash encoding | |
CN101477529B (en) | Three-dimensional object retrieval method and apparatus | |
CN111242841A (en) | Image background style migration method based on semantic segmentation and deep learning | |
CN109359534B (en) | Method and system for extracting geometric features of three-dimensional object | |
CN111967533B (en) | Sketch image translation method based on scene recognition | |
CN112085835B (en) | Three-dimensional cartoon face generation method and device, electronic equipment and storage medium | |
CN112862949B (en) | Object 3D shape reconstruction method based on multiple views | |
CN112949647A (en) | Three-dimensional scene description method and device, electronic equipment and storage medium | |
CN113392244A (en) | Three-dimensional model retrieval method and system based on depth measurement learning | |
CN114782417A (en) | Real-time detection method for digital twin characteristics of fan based on edge enhanced image segmentation | |
CN110197226B (en) | Unsupervised image translation method and system | |
CN117011465A (en) | Tree three-dimensional reconstruction method and device, electronic equipment and storage medium | |
CN113011506B (en) | Texture image classification method based on deep fractal spectrum network | |
CN115690497A (en) | Pollen image classification method based on attention mechanism and convolutional neural network | |
CN107133284A (en) | View-based three-dimensional model retrieval method based on manifold learning | |
Tan et al. | Local features and manifold ranking coupled method for sketch-based 3D model retrieval | |
He et al. | Minimum spanning tree based stereo matching using image edge and brightness information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2017-04-11
Address after: Rooms 305-306, Building 6, No. 970-1 Higher Education Road, Wuchang Street, Yuhang District, Hangzhou, Zhejiang
Applicant after: Hangzhou Link Technology Co., Ltd.
Address before: 5th Floor, Unit 2, Building 3, Tianrun Jiayuan, West Yan'an Road District, Jinchang, Gansu 737100
Applicant before: Yan Jinlong
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2016-09-07