CN104751463A - Three-dimensional model optimal visual angle selection method based on sketch outline features


Info

Publication number: CN104751463A
Application number: CN201510145279.7A
Authority: CN (China)
Prior art keywords: contour, three-dimensional model
Legal status: Granted; currently Active
Other languages: Chinese (zh)
Other versions: CN104751463B (en)
Inventors: 梁爽, 赵龙, 贾金原
Current Assignee: Tongji University
Original Assignee: Individual
Application filed by Individual; priority to CN201510145279.7A
Publication of CN104751463A; application granted; publication of CN104751463B

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional model optimal view selection method based on sketch outline features. The method uses a feature matching algorithm based on contour line context to map given hand-drawn sketches to the views of the corresponding three-dimensional models; based on how frequently each three-dimensional model view is mapped to by the hand-drawn sketches, it selects positive and negative training samples of the model's potential optimal views; it builds feature vectors for the three-dimensional models with a bag-of-words model and, based on the positive and negative training samples, learns a classifier of the potential optimal views with a support vector machine; finally, it introduces the diversity of the three-dimensional model views into the ranking algorithm and selects a given number of optimal views for each three-dimensional model. With this method, the selection results better match human visual perception, and the method adapts better to different models.

Description

Three-dimensional model optimal visual angle selection method based on sketch outline characteristics
Technical Field
The invention relates to the field of image processing and computer graphics, in particular to a three-dimensional model optimal visual angle selection method based on sketch outline characteristics.
Background
In recent years, three-dimensional computer graphics technology has developed considerably and has become an indispensable part of daily life. In particular, three-dimensional models, as an essential element of three-dimensional computer graphics, play an increasingly important role. To obtain better results in real applications, the analysis and modeling algorithms related to three-dimensional models are required to be highly accurate. Automatically choosing the best viewing angle for a three-dimensional model is one of the most important of these algorithms, and it often serves as a pre-processing step for other algorithms related to three-dimensional models.
Three-dimensional model optimal view selection algorithms have been widely used in various three-dimensional computer graphics applications, including virtual reality, three-dimensional model retrieval, computer-aided design (CAD), and three-dimensional multimedia. A so-called optimal view selection algorithm takes any given three-dimensional model and computes a given number of observation views for it, such that these views best match human visual perception.
In current research, a number of different optimal view selection algorithms have been proposed. Many of them explore the geometric features of a three-dimensional model, such as the structural relationships between the vertices and patches that make up the model, and their connection to the human visual system, including the saliency of the model (mesh saliency) and the entropy of a view of the model (viewpoint entropy). Their goal is to solve the optimal view selection problem by analyzing which part of the three-dimensional model is most interesting for human observation. However, modeling this problem is very difficult, since an accurate structural analysis of a three-dimensional model is itself a very challenging task.
Disclosure of Invention
The invention aims to provide a three-dimensional model optimal view selection method based on sketch outline features, which learns, from related hand-drawn sketches, information that reflects how the human visual system habitually observes objects, and computes the corresponding optimal views of a three-dimensional model.
The technical scheme of the invention is as follows:
a three-dimensional model optimal visual angle selection method based on sketch outline characteristics comprises the following specific steps:
Step A: using a feature matching algorithm based on the contour line context, calculate the similarity between the sketches and the perspective projections of the three-dimensional models, so as to map all given freehand sketches to views of the corresponding three-dimensional models;
Step B: according to the measured similarity between the sketches and the three-dimensional model views, obtain the probability that each model view is the one a sketch was hand-drawn from; set a constraint condition on this mapping probability and select the positive and negative training samples of the model's potential optimal views;
Step C: construct a feature vector for each three-dimensional model with a bag-of-words model, and train a classifier of the potential optimal views of the three-dimensional model with a support vector machine based on the positive and negative samples;
Step D: introduce the diversity of the three-dimensional model views into a view ranking algorithm, and select the top N (a given number of) optimal views for each three-dimensional model.
Before directly comparing the similarity between a sketch and a three-dimensional model view, the optimal view selection method further performs the following operations: first, the projection of each three-dimensional model view is converted into a contour map similar to a hand-drawn sketch; then, the pixels on the contour lines of every contour map are formed into contour groups, and the contour groups are merged according to their correlation; then, the similarity between pairs of contour groups is compared; finally, the similarity of the contour maps is computed from the similarity of the contour groups.
In the optimal view selection method, the correlation between two contour groups g_i and g_j is compared with the formula:

a(g_i, g_j) = |\cos(\theta_i - \theta_{ij}) \cdot \cos(\theta_j - \theta_{ij})|^2

where g_i is an initial contour group, x_i is its average position on the contour map and \theta_i its average edge direction; g_j is the contour group it is compared with, x_j is its average position on the contour map and \theta_j its average edge direction; and \theta_{ij} is the included angle between x_i and x_j.
In the optimal view selection method, the shape similarity between two contour groups g_i and g_j is:

d_{app}(g_i, g_j) = \exp\!\left(-\frac{d_{spa}(g_i, g_j)^2}{2\sigma_{spa}^2}\right) \cdot \cos(\theta_i - \theta_j),

where \theta_x is the edge direction of contour group g_x, d_{spa}(g_i, g_j) is the Euclidean distance between the average positions of the two contour groups after normalization in their respective contour maps, and \sigma_{spa} is a constant.
In the optimal view selection method, the context information of two contour groups g_i and g_j is added to the similarity computation as follows: an undirected graph is first constructed for each contour map; the undirected graph is then used to compute the similarity of any two paths of contour groups; finally, the context similarity of the contour groups is computed from the path similarities. Specifically, an undirected graph G = (V, E) is constructed for each contour map, where V is the set of nodes and E the set of edges; W_i^n denotes an ordered path of length n starting from node (contour group) g_i, and the similarity of any two such paths is:

d_{walk}^{n}(W_i^n, W_j^n) = \frac{1}{n+1} \sum_{k=1}^{n+1} d_{app}(w_i^k, w_j^k);

where w_i^k is the k-th node on path W_i^n. The context similarity of the two contour groups g_i and g_j is then:

d_{cont}^{n}(g_i, g_j) = \frac{1}{|P_i^n|} \sum_{a \in P_i^n} \max_{b \in P_j^n} d_{walk}^{n}(a, b);

where P_i^n is the set of all ordered paths of length n starting from node g_i, and |P_i^n| is the number of paths in P_i^n.
In the optimal view selection method, the context information similarity between two contour maps c_i and c_j is further obtained from the similarity of their contour groups, specifically:

S_{cont}^{n}(c_i, c_j) = \frac{1}{|c_i|} \sum_{g_i^x \in c_i} \max_{g_j^y \in c_j} d_{cont}^{n}(g_i^x, g_j^y)

where \{g_i^x \mid g_i^x \in c_i\} denotes the set of contour groups contained in contour map c_i, and |c_i| is the number of contour groups in c_i.
In the optimal view selection algorithm, the context information similarity of contour maps is further obtained from the similarity of two contour groups g_i and g_j restricted to the key points at the corners of the contours, specifically:

S_{key}(c_i, c_j) = \frac{1}{|K_i|} \sum_{g_i^x \in K_i} \max_{g_j^y \in K_j} d_{cont}^{4}(g_i^x, g_j^y)

where K_x is the set of contour groups containing only key points, and the paths P_i^n whose nodes are all contained in K_x form the key context information descriptor of contour map c_x.
In the optimal view selection algorithm, the probability that a sketch s_i^j is drawn from the three-dimensional model view v_i^k is obtained as follows: for each pair,

p(s_i^j, v_i^k) = \frac{S_{key}(s_i^j, c_i^k) - \min_m S_{key}(s_i^j, c_i^m)}{\max_m S_{key}(s_i^j, c_i^m)}

where c_i^k denotes the contour map computed from the three-dimensional model view v_i^k.
In the optimal view selection algorithm, positive and negative samples are obtained from the mapping probability as follows: when p(s_i^j, v_i^k) = 1, the sketch s_i^j is mapped to the three-dimensional model view v_i^k, and all model views satisfying this condition are used as positive samples in training; the average of p(s_i^m, v_i^k) over all sketches in the set S_i is computed, and when this average for a model view v_i^k is smaller than a fixed threshold, the view is used as a negative sample. The decision function of the whole sampling strategy is:

\theta(v_i^k) = \begin{cases} 1, & \text{if } \exists\, s_i^m \in S_i,\ p(s_i^m, v_i^k) = 1; \\ 0, & \text{if } \forall\, s_i^m \in S_i,\ \frac{1}{m}\sum_m p(s_i^m, v_i^k) < \xi; \\ \text{null}, & \text{otherwise,} \end{cases}

where \theta(v_i^k) = 1 means the view is taken as a positive sample, \theta(v_i^k) = 0 means it is taken as a negative sample, and \xi is a set threshold.
In the optimal view selection algorithm, an evaluation function t_i is used when ranking the three-dimensional model views:

t_i = s_i + \alpha(\Phi(v_i)),

where \Phi(v_i) is a penalty function and \alpha(\cdot) is a monotonically decreasing function.
The invention has the following beneficial effects. The invention discloses a novel piece of prior knowledge: when people frequently draw a three-dimensional model from a certain viewing angle, that viewing angle is one of the potentially best views of the model. When mapping sketches to three-dimensional model views, the similarity is measured with the context information of the sketch contours, which effectively handles the large amount of deformation noise contained in sketches. Compared with other optimal view selection methods, this machine-learning-based method performs more stably and generalizes across different three-dimensional models; the selected views better match the perception of the human visual system, and the method is particularly suitable for three-dimensional model retrieval tasks.
Drawings
Fig. 1 is a schematic workflow diagram of the overall framework of the invention.
Fig. 2a, 2b, 2c, 2d, 2e, 2f are schematic diagrams of the segmentation of the profile map into profile groups.
Fig. 3a, 3b, 3c are schematic diagrams of extracting key contour groups in contour maps.
Fig. 4a, 4b, 4c are schematic diagrams comparing perspective similarity of two three-dimensional models using IoU.
FIG. 5 is a schematic diagram of mapping a freehand sketch onto a corresponding three-dimensional model perspective using keypoint-based contour context information similarity.
FIG. 6 shows the results of various three-dimensional model optimal view selection algorithms on the same models.
FIG. 7 is a flow chart of a method provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples.
The main purpose of the invention is to learn, from related hand-drawn sketches, information that reflects how the human visual system habitually observes objects. Given a three-dimensional model m_i, the invention trains an optimal-view classifier with a support vector machine based on the context information of the contours in these data. With this classifier, a number of optimal views can be selected from the three-dimensional view space V_i uniformly distributed on the sphere surrounding the model m_i; these views reflect the viewing positions that humans tend to choose when hand-drawing the object.
Referring to fig. 7, the method for selecting the optimal viewing angle of the three-dimensional model based on the sketch outline features mainly comprises the following four steps:
Step A: using a feature matching algorithm based on the contour line context, calculate the similarity between the sketches and the perspective projections of the three-dimensional models, so as to map all given freehand sketches to views of the corresponding three-dimensional models;
Step B: according to the measured similarity between the sketches and the three-dimensional model views, obtain the probability that each model view is the one a sketch was hand-drawn from; set a constraint condition on this mapping probability and select the positive and negative training samples of the model's potential optimal views;
Step C: construct a feature vector for each three-dimensional model with a bag-of-words model, and train a classifier of the potential optimal views of the three-dimensional model with a support vector machine based on the positive and negative samples;
Step D: introduce the diversity of the three-dimensional model views into a view ranking algorithm, and select the top N (a given number of) optimal views for each three-dimensional model.
Reference is made to fig. 1 for a description of the steps of the method described above, which shows a schematic workflow according to the invention. The present method will be described in detail in the following sections of the specification as well.
The goal of step A of the method is to use the context information of contours to measure the similarity between a sketch and the projected contour map of a three-dimensional model view. Before directly comparing the similarity between the sketch and the three-dimensional model view, the following operations are carried out first:
firstly, the projection view of the three-dimensional model is converted into a contour map similar to a hand-drawn sketch.
Then, the pixel points in the contour lines on all the contour maps form a contour line group, which is called a contour group for short, and the contour groups are combined according to the correlation.
Then, the similarity between the two contour groups is compared, and finally, the similarity of the contour map is calculated according to the similarity of the contour groups. The specific explanation is as follows:
the invention utilizes the outer contour lines, closed curves, heuristic contour lines and model boundary lines formed by the three-dimensional model at each view angle to generate a final contour projection diagram. As shown in fig. 2(a) and (b), an example of a contour projection view calculated using this method is shown. The specific method comprises the following steps: given any one contour map, firstly, the edge sparse operation is used on the map to change the contour line on the sketch into one pixel point width, as shown in fig. 2 (c). Then, the gradient direction on each pixel point is calculated by using a Sobel operator. A very sparse line profile can be generated through the operation result, each line has only one pixel point width, and each pixel point p has an edge direction thetapAnd (4) degree. This process can significantly reduce the amount of noise contained in the initial contour map while having high computational efficiency.
The contour groups are formed as follows: some pixels are uniformly sampled on every line of the preprocessed contour map as seeds; starting from these seed pixels, a greedy algorithm iteratively merges the neighboring pixels in the eight surrounding directions until the sum of the included angles between their edge directions exceeds a threshold (90 degrees in this embodiment), which yields the initial contour groups of the contour map. For each initial contour group g_i, let x_i be its average position on the contour map and θ_i its average edge direction; for a comparison contour group g_j, let x_j be its average position and θ_j its average edge direction. The correlation between any two contour groups is then defined as:

a(g_i, g_j) = |\cos(\theta_i - \theta_{ij}) \cdot \cos(\theta_j - \theta_{ij})|^2    (Equation 1)

where θ_ij is the included angle between x_i and x_j. This formula divides a contour map into a number of structurally meaningful contour groups. Equation 1 is used to further merge contour groups until the correlation between any two adjacent groups falls below a given threshold; in this embodiment the threshold is set to 0.8, and no merging is performed when the correlation is below 0.8. Intuitively, the method further merges contour groups whose directions form small angles; fig. 2(d) shows the resulting final contour groups. The contour grouping method is computationally simple and also very efficient for the subsequent computation of contour context information.
Using the contextual information of contours to measure the similarity of different parts of an object has proven very effective in many three-dimensional model analysis research efforts. The context information of a contour describes how a given contour line connects with its surrounding contours, and this feature often provides rich similarity information.
In this embodiment, two contour groups are considered more similar when both their own shapes and their context information agree. A graph-theoretic model is used to explain the similarity computation for contour groups. The shape similarity d_app(g_i, g_j) between two contour groups g_i and g_j is defined as:

d_{app}(g_i, g_j) = \exp\!\left(-\frac{d_{spa}(g_i, g_j)^2}{2\sigma_{spa}^2}\right) \cdot \cos(\theta_i - \theta_j)    (Equation 2)

where θ_x is the edge direction of contour group g_x, d_spa(g_i, g_j) is the Euclidean distance between the average positions of the two contour groups after normalization within their respective contour maps, and σ_spa is a fixed constant, set to 0.2 in this embodiment. The equation expresses the assumption that two contour groups are similar if and only if their positions in the contour map are similar and their edge directions are similar as well.
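A direct transcription of Equation 2, assuming the group positions have already been normalized within their contour maps and that groups are the dictionaries used in the sketch above; sigma_spa = 0.2 as in this embodiment.

```python
import numpy as np

def d_app(g_i, g_j, sigma_spa=0.2):
    """Shape similarity of two contour groups (Equation 2)."""
    d_spa = np.linalg.norm(np.asarray(g_i['x'], float) - np.asarray(g_j['x'], float))
    return np.exp(-d_spa ** 2 / (2.0 * sigma_spa ** 2)) * np.cos(g_i['theta'] - g_j['theta'])
```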
In order to add the context information of the contour group into the similarity matching algorithm, the method specifically comprises the following steps:
an undirected graph is first constructed for each contour map.
The undirected graph is then used to calculate the similarity of any two paths in the contour set.
And finally, calculating the context similarity of the contour group according to the similarity of any two paths.
The concrete explanation is as follows. An undirected graph G = (V, E) is constructed for each contour map, where V is the set of nodes and E the set of edges. Each node in V represents a contour group, and two contour groups are connected by an edge e ∈ E if they are physically adjacent on the contour map. Let W_i^n denote an ordered path of length n starting from node (contour group) g_i; the similarity of any two such paths is:

d_{walk}^{n}(W_i^n, W_j^n) = \frac{1}{n+1} \sum_{k=1}^{n+1} d_{app}(w_i^k, w_j^k)    (Equation 3)

where w_i^k is the k-th node on path W_i^n. A path W_i^n is an ordered sequence of nodes starting from g_i, and such a sequence describes the local context structure around g_i. The most similar matching pairs can therefore be searched among all ordered node paths of g_i and g_j, and the sum of the similarities of these matching pairs is taken as the context similarity of g_i and g_j, computed as:

d_{cont}^{n}(g_i, g_j) = \frac{1}{|P_i^n|} \sum_{a \in P_i^n} \max_{b \in P_j^n} d_{walk}^{n}(a, b)    (Equation 4)

where P_i^n is the set of all ordered paths of length n starting from node g_i, and |P_i^n| is the number of paths in P_i^n. Since P_i^n effectively describes all the context information around contour group g_i, P_i^n is called the context information feature descriptor of g_i. Clearly, when n = 0, Equation 4 degenerates into Equation 2, which compares only the shapes of the contour groups. With the similarity defined by Equation 4, the context-based similarity between two contour maps can be compared directly.
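The path-based context term can be sketched as follows; node indices stand for contour groups, `adj` is the adjacency of the undirected graph, and `sim(a, b)` is assumed to wrap the d_app of the previous sketch over node indices.

```python
def d_walk(path_i, path_j, sim):
    """Equation 3: average node-wise shape similarity of two equal-length paths."""
    return sum(sim(a, b) for a, b in zip(path_i, path_j)) / len(path_i)

def enumerate_paths(adj, start, n):
    """All ordered paths with n edges (n + 1 nodes) starting at node index `start`."""
    paths = [[start]]
    for _ in range(n):
        paths = [p + [q] for p in paths for q in adj[p[-1]] if q not in p]
    return paths

def d_cont(adj, i, j, sim, n=4):
    """Equation 4: each path from g_i is matched to its best counterpart from g_j."""
    P_i, P_j = enumerate_paths(adj, i, n), enumerate_paths(adj, j, n)
    if not P_i or not P_j:
        return 0.0
    return sum(max(d_walk(a, b, sim) for b in P_j) for a in P_i) / len(P_i)
```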
Given any two contour maps, the corresponding undirected graphs are first constructed with the method described above; then, for every contour group in one map, the best-matching contour group in the other map is found with Equation 4, and the sum of these similarities is taken as the similarity of the two contour maps. The context information similarity between two contour maps c_i and c_j is therefore computed as:

S_{cont}^{n}(c_i, c_j) = \frac{1}{|c_i|} \sum_{g_i^x \in c_i} \max_{g_j^y \in c_j} d_{cont}^{n}(g_i^x, g_j^y)    (Equation 5)

where {g_i^x | g_i^x ∈ c_i} denotes the set of contour groups contained in contour map c_i, and |c_i| is the number of contour groups in c_i. Clearly, when n = 0, Equation 5 degenerates into the case that considers only the shape similarity of the contour groups themselves:

S_{app}(c_i, c_j) = \frac{1}{|c_i|} \sum_{g_i^x \in c_i} \max_{g_j^y \in c_j} d_{app}(g_i^x, g_j^y)    (Equation 6)

In the context-based contour similarity of Equation 5, the maximum path length n must be specified. When n = 0, the contour maps are compared using only the shape similarity of the contour groups. A larger n describes more global structural information, while a smaller n carries less information but computes faster. Experiments show that the most stable results are obtained when n is between 3 and 5, so n is fixed at 4 in the invention. Fig. 2(e) and (f) show two different paths generated from the same contour group when n = 4.
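Equation 5 then reduces to a best-match average over the two contour maps, as in this short sketch; `groups_i` and `groups_j` are the node index lists of the two maps, and `d_cont_ij(x, y)` is assumed to evaluate d_cont^n across the two pre-built graphs.

```python
def s_cont(groups_i, groups_j, d_cont_ij):
    """Equation 5 (and Equation 6 when d_cont_ij is replaced by d_app)."""
    if not groups_i or not groups_j:
        return 0.0
    return sum(max(d_cont_ij(x, y) for y in groups_j) for x in groups_i) / len(groups_i)
```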
The similarity matching algorithm for given contour maps is one of the core algorithms of the invention, because it is used frequently in both the later sampling and training phases and must therefore be computationally efficient. However, in Equation 5 the optimal match for each contour group pair has to be found in a large search space, which is a very expensive computation, so the method needs to be accelerated further.
In a contour map, most of the context information is often contained in the corners of the contour lines, compared to a single straight line segment, which contains little useful information. Therefore, when finding the optimal matching solution of the contour group, it is not necessary to search in all the contour groups, but rather only the contour groups at the corners of the contour. Therefore, the invention further provides a contour map context information matching method based on key points, and the specific algorithm is as follows:
the corner points of a given contour map can be efficiently calculated using detectors such as the multi-scale gaussian operator (Difference of gaussian), the Hessian operator and the Harris-Laplace operator, which are defined as the keypoints of the contour map. Therefore, the method for calculating the similarity of the contour map based on the key points comprises the following steps: for each contour map cxFirstly, calculating all key points in the graph by using a Harris-Laplace operator; then, according to the position of each contour group, a contour group set K containing only the key points can be obtainedxFIG. 3 shows the calculation process, wherein FIG. 3(b) shows the keypoints calculated using the Harris-Laplace operator, and FIG. 3(c) shows the set of further calculated key contour sets; then, the similarity of the context information of the contour map based on the key points is defined as:
<math> <mrow> <msub> <mi>S</mi> <mi>key</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>c</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>c</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mo>|</mo> <msub> <mi>K</mi> <mi>i</mi> </msub> <mo>|</mo> </mrow> </mfrac> <munder> <mi>&Sigma;</mi> <mrow> <mo>{</mo> <msubsup> <mi>g</mi> <mi>i</mi> <mi>x</mi> </msubsup> <mo>|</mo> <msubsup> <mi>g</mi> <mi>i</mi> <mi>x</mi> </msubsup> <mo>&Element;</mo> <msub> <mi>K</mi> <mi>i</mi> </msub> <mo>}</mo> </mrow> </munder> <munder> <mi>max</mi> <mrow> <mo>{</mo> <msubsup> <mi>g</mi> <mi>j</mi> <mi>y</mi> </msubsup> <mo>|</mo> <msubsup> <mi>g</mi> <mi>j</mi> <mi>y</mi> </msubsup> <mo>&Element;</mo> <msub> <mi>K</mi> <mi>j</mi> </msub> <mo>}</mo> </mrow> </munder> <msubsup> <mi>d</mi> <mi>app</mi> <mn>4</mn> </msubsup> <mrow> <mo>(</mo> <msubsup> <mi>g</mi> <mi>i</mi> <mi>x</mi> </msubsup> <mo>,</mo> <msubsup> <mi>g</mi> <mi>j</mi> <mi>y</mi> </msubsup> <mo>)</mo> </mrow> </mrow> </math> the system of equation 7
And, define all the contents contained in the set KxP in (1)i n(equation 4) is the profile cxThe key context information of (1) describes an operator.
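A hedged sketch of the key-group selection: plain cv2.cornerHarris stands in for the Harris-Laplace detector named in the text, a contour group is kept in K_x when one of its pixels lies near a detected corner, and S_key reuses the best-match average of Equation 7 over the reduced sets (group pixels are assumed to carry (x, y) positions).

```python
import numpy as np
import cv2

def key_contour_groups(contour_img, groups, radius=5, quality=0.01):
    """Keep only the contour groups that touch a corner of the contour map."""
    response = cv2.cornerHarris(np.float32(contour_img), blockSize=3, ksize=3, k=0.04)
    ys, xs = np.nonzero(response > quality * response.max())
    corners = np.stack([xs, ys], axis=1).astype(float)

    def near_corner(px):
        return bool(np.any(np.linalg.norm(corners - np.asarray(px, float)[:2], axis=1) <= radius))

    return [g for g in groups if any(near_corner(p) for p in g['pixels'])]

def s_key(K_i, K_j, sim):
    """Equation 7 over key contour groups; `sim` is the context similarity with n = 4."""
    if not K_i or not K_j:
        return 0.0
    return sum(max(sim(a, b) for b in K_j) for a in K_i) / len(K_i)
```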
After the contour map similarity matching algorithm has been defined, the learning algorithm for the optimal-view classifier of the three-dimensional model is described. Before describing the learning algorithm, the training set used by the classifier is selected as follows. During training, a set M of three-dimensional models is given, together with a related hand-drawn sketch set S_i for each model m_i ∈ M. In the experiments, the Princeton Shape Benchmark (PSB model library for short) and its corresponding sketch library (provided by the paper "Sketch-based Shape Retrieval") are used as the dataset. The PSB library is divided into two parts, each containing 907 models of various kinds, which are used as the training and test sets respectively. For each class of three-dimensional model it provides a hand-drawn sketch set of the same class. Each sketch in the set consists of closed freehand curves and a closed outline, which basically meets the experimental requirements of the invention. The optimal-view classifier is trained on the training set provided by the PSB, and the computational performance of the invention is verified on the test set.
As with other machine learning problems, positive and negative samples required for training are provided in order to learn an effective classifier that can be used for perspective classification. However, the original PSB database does not indicate information related to the perspective of the three-dimensional model, and therefore, the present invention adopts the following method to sample positive and negative samples.
All sketches are mapped to the corresponding three-dimensional model views with the key-point-based contour context information similarity matching algorithm, and this relation is then used to screen the best and worst views of each model as positive and negative samples. The method comprises the following steps:
First, for each three-dimensional model m_i ∈ M in the dataset, K views are chosen uniformly on its bounding sphere, and the contour map c_i^k is computed for each view v_i^k. A value of K = 300 gives the most stable results in the invention.
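The patent only requires the K = 300 candidate views to be uniform on the bounding sphere; one common way to obtain such a placement (an assumption of this sketch, not prescribed by the patent) is the Fibonacci spiral:

```python
import numpy as np

def fibonacci_sphere_views(k=300, radius=1.0):
    """k roughly uniform camera positions on a sphere of the given radius."""
    i = np.arange(k)
    golden = (1.0 + 5.0 ** 0.5) / 2.0
    z = 1.0 - 2.0 * (i + 0.5) / k        # uniform in z
    phi = 2.0 * np.pi * i / golden       # golden-angle longitude
    r = np.sqrt(1.0 - z ** 2)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```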
Then, for each sketch s_i^j belonging to the three-dimensional model m_i, the key-point-based context information similarity of contour maps is used to compute the similarity of every pair (s_i^j, c_i^k), which yields the probability that sketch s_i^j is drawn from model view v_i^k: for each pair,

p(s_i^j, v_i^k) = \frac{S_{key}(s_i^j, c_i^k) - \min_m S_{key}(s_i^j, c_i^m)}{\max_m S_{key}(s_i^j, c_i^m)}    (Equation 8)

where c_i^k denotes the contour map computed from the three-dimensional model view v_i^k. Clearly, when p(s_i^j, v_i^k) = 1, the sketch s_i^j is mapped to the model view v_i^k, and every model view satisfying this condition is taken as a positive sample in training. To collect negative samples, the average of p(s_i^m, v_i^k) over all sketches in the set S_i is computed; when this average for a model view v_i^k is below a fixed threshold, the view is taken as a negative sample. The intuition behind this negative sampling strategy is that if humans almost never draw a three-dimensional model from a certain view, that view is considered a poor one.
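The sampling rule just described, which Equation 9 below formalizes, can be sketched as follows; `p` is assumed to be an (n_sketches, n_views) array holding the mapping probabilities p(s_i^m, v_i^k) of Equation 8 for one three-dimensional model.

```python
import numpy as np

def sample_views(p, xi=0.05):
    """Return (positive_view_ids, negative_view_ids) for one three-dimensional model."""
    # Positive: some sketch maps to this view with probability 1.
    positives = np.where(np.isclose(p, 1.0).any(axis=0))[0]
    # Negative: averaged over all sketches, the view is almost never drawn from.
    negatives = np.where(p.mean(axis=0) < xi)[0]
    negatives = np.setdiff1d(negatives, positives)  # the positive case takes precedence
    return positives.tolist(), negatives.tolist()
```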
Finally, positive and negative sample data are acquired with this strategy for every three-dimensional model contained in the PSB database. For any three-dimensional model view v_i^k, the whole sampling strategy can be summarized as the following decision function:

\theta(v_i^k) = \begin{cases} 1, & \text{if } \exists\, s_i^m \in S_i,\ p(s_i^m, v_i^k) = 1; \\ 0, & \text{if } \forall\, s_i^m \in S_i,\ \frac{1}{m}\sum_m p(s_i^m, v_i^k) < \xi; \\ \text{null}, & \text{otherwise,} \end{cases}    (Equation 9)

where θ(v_i^k) = 1 means the view is taken as a positive sample, θ(v_i^k) = 0 means it is taken as a negative sample, and the threshold ξ is set to 0.05 through experimentation. All these results are computed off-line in advance to save the computational overhead of the sampling phase. After the required positive and negative training samples have been acquired, the optimal-view classifier of the three-dimensional model is trained as follows.
a bag of words model is first used to compute feature vectors for the perspective of each three-dimensional model. In the training data set, in order to cover enough various feature descriptors, one million key context information descriptors in total need to be selected from positive and negative samples respectively and randomly; then, constructing a context information descriptor vocabulary from the descriptors by using a k-medoids clustering algorithmTable (7). In the clustering result, all clustering centers W ═ WiForm an entire vocabulary, wiThe feature vectors of the ith descriptor vocabulary are represented, the perspective features of each three-dimensional model can be represented as the frequency of occurrence of each feature vocabulary in the context information descriptor vocabulary of the corresponding outline diagram. The size | W | of the vocabulary directly affects the precision of the subsequent classification result and is a very important parameter, so the invention adopts the parameter optimization framework proposed by the paper "Sketch-based Shape Retrieval" to obtain the optimal value thereof, and finally fixes the value of | W | at 800.
Let h_i^k denote the feature vector of three-dimensional model view v_i^k computed with the bag-of-words model. The goal is to learn an evaluation function Score(h_i^k) that predicts, for each candidate view of the three-dimensional model, the likelihood that a human would hand-draw the model from that view. The classifier is trained with a support vector machine, and the evaluation function is:

\mathrm{Score}(h_i^k) = t \cdot h_i^k - b    (Equation 10)

where t and b are the weight and bias coefficients, respectively, learned during training. Since most feature vectors in this problem are sparse, and LIBLINEAR is an SVM library particularly optimized for sparse features, LIBLINEAR is used to train the classifier. Note that, to balance the numbers of positive and negative samples used in training, five thousand samples are selected equally from the pre-computed positive and negative sample sets, respectively, to train the classifier.
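A sketch of the classifier stage: LinearSVC from scikit-learn wraps LIBLINEAR, matching the solver named above, and its decision_function value plays the role of Score(h) = t · h − b in Equation 10; `X_pos` and `X_neg` are assumed bag-of-words feature matrices of the sampled positive and negative views.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_view_classifier(X_pos, X_neg, per_class=5000, seed=0):
    """Balance the two classes, then fit a linear SVM on the view features."""
    rng = np.random.default_rng(seed)
    X_pos = X_pos[rng.choice(len(X_pos), min(per_class, len(X_pos)), replace=False)]
    X_neg = X_neg[rng.choice(len(X_neg), min(per_class, len(X_neg)), replace=False)]
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    return LinearSVC(C=1.0).fit(X, y)

def view_score(clf, h):
    """Initial score s_i of one candidate view with feature vector h (Equation 10)."""
    return float(clf.decision_function(h.reshape(1, -1))[0])
```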
After every candidate view v_i of a three-dimensional model m has been scored with the evaluation function of Equation 10, the highest-scoring views could simply be selected as the best views by sorting all scores from high to low. However, because the views v_i are uniformly distributed on the sphere surrounding the model, nearby views have similar projected contour maps and are therefore very similar to each other. Simply choosing the top N highest-scoring views as the optimal views would concentrate them in a local area on one side of the model, which is not useful in practical applications. In order for the results returned by the algorithm to include as many of the different optimal views of the model as possible, the diversity between views must be taken into account during ranking. The ranking algorithm used by the invention, described in detail below, therefore encourages the higher-ranked views to be distributed over different locations around the three-dimensional model.
Let s_i denote the initial score of three-dimensional model view v_i computed with Equation 10. The invention introduces a new evaluation function t_i, defined as:

t_i = s_i + \alpha(\Phi(v_i))    (Equation 11)

where Φ(v_i) is a penalty function that suppresses the scores of similar three-dimensional model views in the results, and α(·) is a monotonically decreasing function that controls the strength of the penalty. The function α(·) does not need to be chosen specially, as long as it decreases quickly to 0; the value of σ used is 0.2. The penalty function can then be written as:

\Phi(v_i) = \max_{\{v_j \mid v_j \in T\}} \mathrm{IoU}(v_i, v_j)    (Equation 12)

where T is the set of views already ranked higher than the given view v_i. IoU (Intersection over Union) measures whether two three-dimensional model views are similar, and is defined as the intersection of the projected areas of the two views divided by their union. Fig. 4 shows an example of using IoU to measure the similarity between two views: the similarity between views (a) and (b) is 0.87, and the similarity between views (a) and (c) is 0.43, which agrees with human observation. During ranking, the penalty function Φ(v_i) penalizes candidate views that are very similar to views already ranked near the top, suppressing similar views and letting the new evaluation function t_i take the diversity of the views into account. Without loss of generality, s_i = t_i when T is the empty set. In addition, during ranking, a mean-shift algorithm is used to search for the local optimum of the score around each selected view as the final result, which gives more stable view positions and effectively reduces errors caused by uniform sampling. The complete view ranking algorithm of the invention is as follows:
Input: the initial score s_i of each three-dimensional model view v_i ∈ V.
Output: the set T containing the top N best views of the three-dimensional model.
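A sketch of the greedy, diversity-aware selection loop described above (the greedy formulation and the silhouette-mask IoU are assumptions of this sketch, and the mean-shift refinement is omitted); alpha is a Gaussian decay with sigma = 0.2, standing in for the rapidly decreasing function required by Equation 11.

```python
import numpy as np

def iou(mask_a, mask_b):
    """IoU of two binary silhouette masks rendered from two candidate views."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def alpha(x, sigma=0.2):
    return float(np.exp(-x ** 2 / (2.0 * sigma ** 2)))

def rank_views(masks, scores, top_n):
    """masks: one silhouette mask per view; scores: initial scores s_i (Equation 10)."""
    T, remaining = [], list(range(len(masks)))
    while remaining and len(T) < top_n:
        best, best_t = None, -np.inf
        for i in remaining:
            if T:
                phi = max(iou(masks[i], masks[j]) for j in T)  # Equation 12
                t_i = scores[i] + alpha(phi)                   # Equation 11
            else:
                t_i = scores[i]                                # s_i = t_i when T is empty
            if t_i > best_t:
                best, best_t = i, t_i
        T.append(best)
        remaining.remove(best)
    return T  # indices of the selected views, in rank order
```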
In order to demonstrate the effectiveness of the proposed optimal view selection algorithm, the performance of the various contour map similarity comparison methods proposed above is compared first, showing that the key-point-based contour context matching is the most effective; second, the effectiveness of the invention is demonstrated by comparing it with other state-of-the-art optimal view selection algorithms on a three-dimensional model retrieval task. In the experiments, the invention was trained and validated on the PSB three-dimensional model database, as described above.
Comparing the similarity of two given contour maps is a very important algorithm in the invention, because it strongly affects the accuracy of the trained classifier and of the final selection result. It is therefore necessary to compare all the proposed contour map similarity computation methods. First, one hundred three-dimensional models and their associated freehand sketches were randomly sampled from the test dataset. Ten users were invited to manually label, for every relevant hand-drawn sketch, the view of the three-dimensional model from which it was drawn, and these manually labelled data were used as the ground truth. The sketches were then mapped to the model views with each of the contour map similarity matching methods described above, and the accuracy of each similarity matching algorithm is defined as:

\mathrm{Accuracy}(v_i) = \frac{1}{n} \sum_{k=1}^{n} \mathrm{IoU}(v_i, t_i^k)    (Equation 13)

where t_i^k is the view selected by the k-th user for the three-dimensional model and n is the number of users; the similarity between two model views in Equation 13 is computed with the IoU measure described above. Finally, the average mapping accuracy of each contour map similarity method over all one hundred models is used as the evaluation standard. All the contour map similarity methods are listed in Table 1, including the method based on contour shape (Equation 6), the method based on contour context information (Equation 5), and the method based on contour context information at key points (Equation 7).
TABLE 1 accuracy of similarity calculation method for different contour graphs
Table 1 shows that taking the context information of contours into account when comparing contour map similarity significantly increases the matching accuracy, but the computational cost also grows considerably. Adding the key-point detection technique to the matching algorithm saves a large amount of computation at the cost of a small loss of matching accuracy. The key-point-based contour context information similarity matching algorithm is therefore the best choice after balancing accuracy and speed. Fig. 5 shows some results of mapping freehand sketches to their corresponding three-dimensional model views with the proposed key-point-based contour context information similarity matching algorithm.
Since selecting the views of a three-dimensional model is a subjective task, the performance of an automatic optimal view selection algorithm cannot be evaluated directly by its accuracy on a fixed database. In the invention, the performance of optimal view selection algorithms is compared indirectly by applying them to a three-dimensional model retrieval task and comparing the final retrieval accuracy. The retrieval task was set up according to the method described in the paper "Sketch-based Shape Retrieval", including data sampling, parameter settings, and the use of GALIF as the model feature extraction algorithm. The area under the precision-recall curve (AUC) is used as the index for evaluating retrieval performance.
In the whole retrieval process, different optimal view selection algorithms are used to select candidate views for each model as input to the subsequent retrieval task. The compared methods include uniformly distributed views (N views sampled uniformly on the bounding sphere of each model), a method based on model saliency (proposed in the paper "Mesh saliency"), a best-view classifier (proposed in the paper "Sketch-based Shape Retrieval"), an internet-image-based method (proposed in the paper "Web-image driven best views of 3D shapes"), and the method of the invention. For each method, the number of selected optimal views is adjusted by tuning its parameters until the AUC of every method reaches approximately 0.23; obviously, a view selection algorithm that reaches this retrieval performance with fewer three-dimensional model views performs better. Detailed results are shown in Table 2.
TABLE 2 Performance of the optimal perspective selection algorithm for different three-dimensional models when used for three-dimensional model retrieval
As shown in Table 2, the method of the invention reaches the target AUC with the smallest number of views; in particular, it uses at least about half as many views as the other methods and about one sixth as many as the uniformly distributed view selection, which demonstrates the very high accuracy of its view selection. Fig. 6 shows, from top to bottom, the results obtained by the different optimal view selection algorithms on the same models: the present invention, the internet-image-based method, the best-view classifier, and the model-saliency-based method.
The present invention exploits another kind of novel prior knowledge with which the optimal views of a three-dimensional model can be selected effectively. The underlying assumption is that when people frequently draw a three-dimensional model from a certain view, that view is one of the model's potential best views. With the rapid development of sketch-based three-dimensional model retrieval, many model repositories containing corresponding hand-drawn sketches have been established. These databases create a meaningful connection between freehand sketches and three-dimensional models, and this connection is what the algorithm proposed by the present invention builds on. However, a hand-drawn sketch is a more particular carrier of human visual information than a traditional image: it usually contains only plain black contour lines and lacks color, and because the lines are drawn by hand they are full of deformation and noise, which makes feature matching and similarity measurement considerably harder. Accurately mapping a freehand sketch to the view of the corresponding three-dimensional model is therefore a very challenging problem. To solve it, the present invention proposes a similarity measure based on the context information of sketch contours. The contours of a sketch are the sets of edge pixels forming its straight and curved line segments; the context information of a contour describes how these contour segments combine with the contours around them, and such combinations often correspond to meaningful sub-parts of the sketch (for example, the outline of an animal's tail or leg). The context information of sketch contours captures rich characteristics of a sketch and can be used effectively to measure similarity between sketches. Experimental results show that this similarity measure markedly reduces the influence of sketch deformation and other noise and performs stably.
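To make the contour-group similarity idea concrete, here is a minimal sketch of an appearance-level similarity of the exponential-spatial-distance-times-orientation form given later in claim 4; the function name, the position normalization, and the value of sigma_spa are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np

def group_appearance_similarity(pos_i, theta_i, pos_j, theta_j, sigma_spa=0.2):
    """Appearance similarity of two contour groups.

    pos_*     : mean (x, y) position of the group, normalized to [0, 1]^2
    theta_*   : mean edge orientation of the group, in radians
    sigma_spa : placeholder spatial bandwidth
    The spatial term decays with the Euclidean distance between the mean
    positions; the orientation term rewards parallel edge directions.
    """
    d_spa = np.linalg.norm(np.asarray(pos_i) - np.asarray(pos_j))
    spatial_term = np.exp(-d_spa ** 2 / (2.0 * sigma_spa ** 2))
    orientation_term = np.cos(theta_i - theta_j)
    return spatial_term * orientation_term

# Example: two nearby, nearly parallel contour groups score close to 1.
print(group_appearance_similarity((0.30, 0.40), 0.10, (0.32, 0.43), 0.15))
```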
In addition, the context information of sketch contours contains common features that reflect the habits of the human visual system. For example, people tend to draw the four legs of an animal or of a table at the bottom of a sketch, while an animal's tail is usually drawn at one of the horizontal sides. Based on this observation, the present invention further provides a machine-learning method that uses the context information of sketch contours to learn a general optimal view classifier, which can automatically select optimal views for three-dimensional models of different categories. Experiments show that this method compares favorably with other optimal view selection methods and is particularly well suited to three-dimensional model retrieval tasks.
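A compressed sketch of the learning step recited in claim 1 (bag-of-words feature construction followed by a support vector machine) appears below; the descriptor dimensionality, vocabulary size, kernel choice, and all data are placeholders rather than values taken from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bag_of_words_histograms(descriptor_sets, vocabulary_size=64, random_state=0):
    """Quantize per-view contour descriptors into fixed-length histograms."""
    all_descriptors = np.vstack(descriptor_sets)
    codebook = KMeans(n_clusters=vocabulary_size, n_init=10,
                      random_state=random_state).fit(all_descriptors)
    histograms = []
    for descriptors in descriptor_sets:
        words = codebook.predict(descriptors)
        hist, _ = np.histogram(words, bins=np.arange(vocabulary_size + 1))
        histograms.append(hist / max(hist.sum(), 1))  # L1-normalize each histogram
    return np.array(histograms), codebook

# Stand-in data: descriptor_sets[i] is an (n_i, d) array of contour-context
# descriptors for candidate view i; labels mark positive (1) and negative (0) views.
descriptor_sets = [np.random.rand(50, 16) for _ in range(40)]
labels = np.array([1] * 20 + [0] * 20)

X, _ = bag_of_words_histograms(descriptor_sets)
view_classifier = SVC(kernel="rbf", probability=True).fit(X, labels)
print(view_classifier.predict_proba(X[:3]))
```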
Recently, Liu et al. also proposed a novel three-dimensional model optimal view selection algorithm in the paper "Web-image driven best views of 3D shapes", which introduces a completely new way of attacking the problem. Rather than directly analyzing the connection between a three-dimensional model and the human visual system, that work estimates the potentially best views of a model from a medium that already carries information about how humans view objects: existing internet images, which reflect the views people choose when photographing an object. Experiments show that this approach performs very well in selecting optimal views for three-dimensional models.
However, the present invention differs from the method of Liu in three main ways. First, the present invention uses a different medium that reflects how humans prefer to observe objects, namely hand-drawn sketches; a sketch directly expresses the view from which a person chose to draw a three-dimensional object, so compared with internet pictures it models the optimal view selection problem more directly. Second, the visual information contained in pictures and in sketches differs fundamentally: pictures usually have rich color and texture, whereas sketches contain only single-color contour lines; the present invention therefore uses a feature extraction scheme entirely different from that of Liu and proposes a feature matching algorithm based on the contour-line context to compute the similarity between a sketch and a perspective projection of a three-dimensional model. Third, the Liu method depends heavily on category information when selecting views: the category of a three-dimensional model must be specified before its optimal views can be computed, so views cannot be selected for models of unknown category; the present invention removes this limitation by providing a general machine-learning-based view selection method that learns what optimal views have in common and can therefore select optimal views for models of unknown category.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A three-dimensional model optimal visual angle selection method based on sketch outline characteristics is characterized by comprising the following specific steps:
Step A: computing the similarity between a sketch and the perspective projections of a three-dimensional model with a feature matching algorithm based on the contour-line context, so as to map every given freehand sketch to a view of the corresponding three-dimensional model;
Step B: obtaining, from the measured similarity between a sketch and the three-dimensional model views, the probability that each view is the one the hand-drawn sketch was drawn from, setting constraint conditions on this mapping probability, and selecting positive and negative training samples of the model's potential optimal views;
Step C: constructing a feature vector for each three-dimensional model with a bag-of-words model and, based on the positive and negative samples, training a classifier of the potential optimal views of three-dimensional models with a support vector machine;
Step D: introducing the diversity of three-dimensional model views into a view ranking algorithm and selecting a given number N of top optimal views for each three-dimensional model.
2. The method of claim 1, further comprising, before directly comparing the similarity between a sketch and a view of the three-dimensional model, the following steps: first, converting the projection of each three-dimensional model view into a contour map similar to a hand-drawn sketch; then grouping the contour-line pixels of every contour map into contour groups and merging contour groups according to their correlation; then comparing the similarity between contour groups; and finally computing the similarity of the contour maps from the similarity of their contour groups.
3. The method of claim 2, wherein the correlation between every two contour groups g_i and g_j is compared using the formula:
a(g_i, g_j) = \left| \cos(\theta_i - \theta_{ij}) \cdot \cos(\theta_j - \theta_{ij}) \right|^2
where g_i is the initial contour group, x_i is its average position on the contour map and \theta_i its average edge direction; g_j is the contour group being compared, x_j is its average position on the contour map and \theta_j its average edge direction; and \theta_{ij} is the angle between x_i and x_j.
4. The method of claim 2, wherein the shape similarity between two contour groups g_i and g_j is:
d_{app}(g_i, g_j) = \exp\left(-\frac{d_{spa}(g_i, g_j)^2}{2\sigma_{spa}^2}\right) \cdot \cos(\theta_i - \theta_j),
where \theta_x is the average edge direction of contour group g_x, d_{spa}(g_i, g_j) is the Euclidean distance between the average positions of the two contour groups in their respective normalized contour maps, and \sigma_{spa} is a constant.
5. The method of claim 4, wherein the context information between two contour groups g_i and g_j is added to the similarity calculation as follows: an undirected graph is first constructed for each contour map; the similarity of any two paths over the contour groups is then computed on this graph; and finally the context similarity of the contour groups is computed from the path similarities. Specifically, an undirected graph G = (V, E) is constructed for each contour map, where V is the set of nodes (contour groups) and E is the set of edges; a path of length n starting from contour group g_i is denoted W_i^n, and the similarity of any two paths is defined as:
d_{walk}^{n}(W_i^n, W_j^n) = \frac{1}{n+1} \sum_{k=1}^{n+1} d_{app}(w_i^k, w_j^k);
where w_i^k is the k-th node on path W_i^n; the context similarity of the two contour groups g_i and g_j is then:
d_{cont}^{n}(g_i, g_j) = \frac{1}{|P_i^n|} \sum_{\{a \mid a \in P_i^n\}} \max_{\{b \mid b \in P_j^n\}} d_{walk}^{n}(a, b);
where P_i^n is the set of all ordered paths of length n starting from node g_i, and |P_i^n| is the number of paths contained in P_i^n.
6. The method of claim 5, wherein the context information similarity between two contour maps c_i and c_j is further obtained from the similarity between their contour groups as:
S_{cont}^{n}(c_i, c_j) = \frac{1}{|c_i|} \sum_{\{g_i^x \mid g_i^x \in c_i\}} \max_{\{g_j^y \mid g_j^y \in c_j\}} d_{cont}^{n}(g_i^x, g_j^y)
where \{g_i^x \mid g_i^x \in c_i\} denotes the set of contour groups contained in contour map c_i, and |c_i| is the number of contour groups contained in c_i.
7. The method of claim 6, wherein, based on the shape similarity between two contour groups g_i and g_j, the contour context information similarity based on the key points at contour corners is further obtained as:
S_{key}(c_i, c_j) = \frac{1}{|K_i|} \sum_{\{g_i^x \mid g_i^x \in K_i\}} \max_{\{g_j^y \mid g_j^y \in K_j\}} d_{app}^{4}(g_i^x, g_j^y)
where K_x is the set of contour groups that contain only key points, and all P_i^n contained in the set K_x form the key context descriptors of contour map c_x.
8. The method of claim 7, wherein the probability that a sketch s_i^j is drawn from the three-dimensional model view v_i^k is obtained as follows: for each view v_i^k,
p(s_i^j, v_i^k) = \frac{S_{key}(s_i^j, c_i^k) - \min_m S_{key}(s_i^j, c_i^m)}{\max_m S_{key}(s_i^j, c_i^m)}
where c_i^k denotes the contour map computed from the three-dimensional model view v_i^k.
9. The method of claim 8, wherein positive and negative samples are obtained from the mapping probability as follows: when p(s_i^j, v_i^k) = 1, the sketch s_i^j is mapped to the three-dimensional model view v_i^k, and all views satisfying this constraint are used as positive training samples; p(s_i^m, v_i^k) is averaged over the whole sketch set S_i, and when the average for a view v_i^k is less than a fixed threshold, that view is used as a negative sample; the decision function of the whole sampling strategy is:
\theta(v_i^k) = \begin{cases} 1, & \text{if } \exists\, s_i^m \in S_i,\ p(s_i^m, v_i^k) = 1; \\ 0, & \text{if } \forall\, s_i^m \in S_i,\ \frac{1}{m}\sum_m p(s_i^m, v_i^k) < \zeta; \\ \text{null}, & \text{otherwise}. \end{cases}
where \theta(v_i^k) = 1 indicates that v_i^k is taken as a positive sample, 0 indicates that it is taken as a negative sample, and \zeta is a preset threshold.
10. The method of claim 9, wherein the three-dimensional model views are ranked using the evaluation function
t_i = s_i + \alpha(\Phi(v_i)),
where \Phi(v_i) is a penalty function and \alpha(\cdot) is a monotonically decreasing function.
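Purely as a reading aid for the formulas in claims 8–10, and not as the authoritative implementation, the following sketch normalizes per-view similarities into mapping probabilities, applies the positive/negative sampling rule with a placeholder threshold zeta, and ranks views with a hypothetical diversity penalty; the (max − min) denominator and the dot-product penalty are interpretive choices made for the example.

```python
import numpy as np

def mapping_probabilities(similarities):
    """Normalize the S_key similarities of one sketch against every candidate
    view of its model into mapping probabilities.  The denominator is taken
    here as (max - min) so that the best-matching view gets p = 1, which is
    the condition tested by the sampling rule below; this is an interpretation,
    not a verbatim transcription of the claimed formula."""
    s = np.asarray(similarities, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.ones_like(s)

def label_views(prob_matrix, zeta=0.15):
    """Sampling rule in the spirit of claim 9.  prob_matrix[m, k] = p(sketch m, view k).
    Returns 1 (positive), 0 (negative) or None (unused) per view; zeta is a placeholder."""
    labels = []
    for k in range(prob_matrix.shape[1]):
        column = prob_matrix[:, k]
        if np.any(np.isclose(column, 1.0)):
            labels.append(1)            # some sketch maps exactly to this view
        elif column.mean() < zeta:
            labels.append(0)            # consistently unlikely view
        else:
            labels.append(None)
    return labels

def rank_views(scores, positions, alpha=0.5):
    """Greedy diversity-aware ranking in the spirit of t_i = s_i + alpha(Phi(v_i)):
    each view's classifier score is discounted by its closeness to already
    selected views (the penalty Phi and the weight alpha are illustrative)."""
    remaining = list(range(len(scores)))
    selected = []
    while remaining:
        def adjusted(i):
            if not selected:
                return scores[i]
            penalty = max(float(np.dot(positions[i], positions[j])) for j in selected)
            return scores[i] - alpha * penalty
        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with 3 sketches and 4 candidate views of one model.
S = np.array([[0.2, 0.9, 0.4, 0.1],
              [0.3, 0.8, 0.5, 0.2],
              [0.1, 0.7, 0.6, 0.1]])
P = np.vstack([mapping_probabilities(row) for row in S])
print(label_views(P))
```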
CN201510145279.7A 2015-03-31 2015-03-31 A kind of threedimensional model optimal viewing angle choosing method based on sketch outline feature Active CN104751463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510145279.7A CN104751463B (en) 2015-03-31 2015-03-31 A kind of threedimensional model optimal viewing angle choosing method based on sketch outline feature

Publications (2)

Publication Number Publication Date
CN104751463A true CN104751463A (en) 2015-07-01
CN104751463B CN104751463B (en) 2017-10-13

Family

ID=53591082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510145279.7A Active CN104751463B (en) 2015-03-31 2015-03-31 A kind of threedimensional model optimal viewing angle choosing method based on sketch outline feature

Country Status (1)

Country Link
CN (1) CN104751463B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130218530A1 (en) * 2010-06-29 2013-08-22 3Shape A/S 2d image arrangement
CN102163343A (en) * 2011-04-11 2011-08-24 西安交通大学 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image
CN102831638A (en) * 2012-07-06 2012-12-19 南京大学 Three-dimensional human body multi-gesture modeling method by adopting free-hand sketches
CN103295025A (en) * 2013-05-03 2013-09-11 南京大学 Automatic selecting method of three-dimensional model optimal view

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017124713A1 (en) * 2016-01-18 2017-07-27 华为技术有限公司 Data model determination method and apparatus
CN108133218A (en) * 2017-12-14 2018-06-08 内蒙古科技大学 Infrared target detection method, equipment and medium
CN108170823A (en) * 2018-01-04 2018-06-15 江西师范大学 Hand-drawn interactive three-dimensional model retrieval method based on high-level semantic attribute understanding
CN109165313A (en) * 2018-07-11 2019-01-08 山东师范大学 A kind of threedimensional model bilayer search method and device based on Feature Descriptor
CN109213884A (en) * 2018-11-26 2019-01-15 北方民族大学 A kind of cross-module state search method based on Sketch Searching threedimensional model
CN109213884B (en) * 2018-11-26 2021-10-19 北方民族大学 Cross-modal retrieval method based on sketch retrieval three-dimensional model
CN113126132A (en) * 2021-04-09 2021-07-16 内蒙古科电数据服务有限公司 Method and system for calibrating and analyzing track in mobile inspection
CN113126132B (en) * 2021-04-09 2022-11-25 内蒙古科电数据服务有限公司 Method and system for calibrating and analyzing track in mobile inspection

Also Published As

Publication number Publication date
CN104751463B (en) 2017-10-13

Similar Documents

Publication Publication Date Title
Elad et al. On bending invariant signatures for surfaces
Hoiem et al. Putting objects in perspective
EP3179407B1 (en) Recognition of a 3d modeled object from a 2d image
CN104751463B (en) A kind of threedimensional model optimal viewing angle choosing method based on sketch outline feature
US8429174B2 (en) Methods, systems, and data structures for performing searches on three dimensional objects
Glover et al. Monte carlo pose estimation with quaternion kernels and the bingham distribution
Yang et al. Sketch-based modeling of parameterized objects.
US9183467B2 (en) Sketch segmentation
CN104200240B (en) A kind of Sketch Searching method based on content-adaptive Hash coding
CN104637090B (en) A kind of indoor scene modeling method based on single picture
Varley Automatic creation of boundary-representation models from single line drawings
Cho et al. Mode-seeking on graphs via random walks
CN105894047A (en) Human face classification system based on three-dimensional data
CN105512674B (en) RGB-D object identification method and device based on the adaptive similarity measurement of dense Stereo Matching
Mei et al. Scene-adaptive off-road detection using a monocular camera
Li et al. Hierarchical semantic parsing for object pose estimation in densely cluttered scenes
CN110147841A (en) The fine grit classification method for being detected and being divided based on Weakly supervised and unsupervised component
Joo et al. Globally optimal inlier set maximization for Atlanta world understanding
Le et al. DeepSafeDrive: A grammar-aware driver parsing approach to Driver Behavioral Situational Awareness (DB-SAW)
Zhao et al. Learning best views of 3D shapes from sketch contour
CN111325237A (en) Image identification method based on attention interaction mechanism
Huang et al. Tracking-by-detection of 3d human shapes: from surfaces to volumes
Delaye et al. Fuzzy relative positioning templates for symbol recognition
US7146048B2 (en) Representation of shapes for similarity measuring and indexing
CN111709269B (en) Human hand segmentation method and device based on two-dimensional joint information in depth image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: TONGJI UNIVERSITY

Free format text: FORMER OWNER: LIANG SHUANG

Effective date: 20150722

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150722

Address after: 201800 Siping Road 1239, Shanghai, Yangpu District

Applicant after: Tongji University

Address before: 201800 Shanghai city Jiading District Tongji University good building 314

Applicant before: Liang Shuang

GR01 Patent grant
GR01 Patent grant