CN102163343B - Three-dimensional model optimal viewpoint automatic obtaining method based on internet image - Google Patents

Three-dimensional model optimal viewpoint automatic obtaining method based on internet image

Info

Publication number
CN102163343B
CN102163343B · CN201110089940A · CN 201110089940
Authority
CN
China
Prior art keywords
image
similarity
dimensional model
network
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110089940
Other languages
Chinese (zh)
Other versions
CN102163343A (en)
Inventor
黄华
张磊
刘洪�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN 201110089940 priority Critical patent/CN102163343B/en
Publication of CN102163343A publication Critical patent/CN102163343A/en
Application granted granted Critical
Publication of CN102163343B publication Critical patent/CN102163343B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for automatically obtaining the optimal viewpoint of a three-dimensional model based on Internet images. The method makes full use of the Internet as a massive resource library: it measures the similarity between the projections of the model and the network images by combining technical indicators such as contour similarity, shape similarity and the degree of coincidence of detail features, counts the number of network images corresponding to the projection at each viewpoint, and selects the most commonly used viewpoint as the optimal viewpoint of the three-dimensional model. Rather than reducing a good viewpoint to a single parameter such as visible area or curvature entropy, the method fully incorporates human viewing habits, so the optimal viewpoint it obtains is more readily accepted by people.

Description

Method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images
Technical field
The present invention relates to a computer image processing method, and in particular to a method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images.
Background technology
Given a three-dimensional model, observing it from different angles may reveal completely different appearances, because each direction carries different visual information about the model. The essence of seeking the optimal viewing angle is to find the viewpoint that carries the most information; such a viewpoint helps people observe and understand the given model more deeply. In recent years the optimal-viewing-angle problem has been studied extensively in academia and applied to many practical problems, such as shape recognition and classification, three-dimensional model view editing, image-based rendering, and three-dimensional model retrieval.
There is still no authoritative definition of what kind of viewing angle is optimal. When studying the problem, researchers usually define it according to the practical application they face. Studying the psychology of computer graphics, Blanz et al. proposed four attributes that determine the optimal viewing angle: ease of recognition, familiarity, functional representability and aesthetic standards, and observed that the optimal viewing angle is largely affected by the geometric properties of the model (Blanz, V., et al., What object attributes determine canonical views? Perception, 1999. 28: p. 575-600). Building on these results, the optimal viewing angle is often defined as the one that provides the most visual information about the model. The visual information can be expressed through descriptors such as curvature, topology or silhouette entropy, and the optimal viewing angle is the one that makes these descriptors as visible as possible. Although such descriptors represent the features of some models well, there is no evidence that they fully capture human perception. Moreover, in everyday life these descriptors often fail: for a television set, for example, the curvature features of its back are clearly richer than those of its front, yet for us the best view lies in front of it.
The optimal viewing angle should be the one that most people tend to choose when facing an object. Clearly, for a given model it is impossible to ask every person for their preferred viewing angle. The Internet, however, provides a platform on which people share their own photographs. The rapid development of software and hardware has brought the Internet deep into daily life, and as people keep sharing the photos they take, it has become a massive image database. On the well-known image sharing website Flickr, for example, more than 4 billion images have been shared, and more than 10 billion images can be retrieved through Google. When photographing an object of interest, people usually choose an angle that presents it well, i.e. an optimal viewing angle, and the images they choose to share on the network are the works whose viewing angles satisfy them most. Because the amount of network data is huge, images containing any given model can easily be found in sufficient quantity through the network.
Summary of the invention
The object of the present invention is to provide a method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images.
To achieve the above object, the technical solution adopted by the present invention comprises the following steps:
1) using an image feature extraction method based on color contrast, compute the feature maps of the downloaded network images;
2) for the input three-dimensional model, compute the curvature of each vertex of the model surface as the shape feature characterizing the surface, and obtain the projection views of the surface shape features of the three-dimensional model;
3) based on the registration of the outer contours of the projected images of the three-dimensional model and the network images, which measures the degree of contour coincidence between two images, compute the energy function characterizing the contour similarity between the images by maximizing the contour coincidence between them;
4) based on the similarity of the contour shapes of the projected images of the three-dimensional model and the network images, which characterizes the consistency of the shapes of the two images, obtain from this similarity the energy function characterizing shape similarity;
5) based on the goodness of fit between the detail features of the projection views of the surface shape features of the three-dimensional model and the feature maps of the downloaded network images, which measures how well the detail features of two images coincide, obtain the corresponding feature energy function;
6) determine the optimal viewing angles statistically from the similarity energy function between the projection views of the surface shape features of the three-dimensional model and the feature maps of the downloaded network images: a network image similar to a projected image is regarded as showing the same viewing angle of the model as that projection, so a threshold D is set and, using a sampling statistical method for deciding whether an image I_i in the network image set I and an image P_j in the projection atlas P belong to the same viewing angle of the model, the number of network images whose similarity energy d_{i,j} with P_j exceeds the threshold D is counted for each viewing angle; these counts are sorted in descending order and the three viewing angles with the largest counts are the three best projection views sought.
The similarity energy function is the superposition of the energy function of the contour similarity, the energy function of the shape similarity and the energy function of the detail-feature goodness of fit.
2. In the method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images described above, the concrete execution steps are as follows:
Step 1: For the input three-dimensional model M, perform parallel projections of it uniformly over different viewing angles to obtain its projection atlas P;
Step 2: Download related images from the network using keywords of the input model M, segment the foreground of each network image with a graph cut algorithm, and form the network atlas I;
Step 3: Obtain the mask map of each image in the projection atlas P and the network atlas I;
Step 4: Using the mask maps obtained in step 3, first place the targets of the projected-image mask and the network-image mask in the same coordinate system and adjust their coordinates with a principal component analysis algorithm; then define the contour similarity of two images I_i and P_j as:
[formula one]
A_{i,j} = \exp\!\left( -\,\frac{\mathrm{Area}(I_i - P_j) + \mathrm{Area}(P_j - I_i)}{2 \cdot \mathrm{Area}(I_i \cap P_j)} \right)
where I_i - P_j denotes the region contained in I_i but not in P_j, P_j - I_i denotes the region contained in P_j but not in I_i, I_i ∩ P_j denotes the part contained in both I_i and P_j, and Area(·) denotes the area of the corresponding region;
Step 5: For the mask maps obtained in step 3, use the Canny operator to obtain the edge maps U and V of the projected-image mask and the network-image mask respectively, and sample the edge maps U and V to obtain the point sets {p_i} and {q_i}; given the point set {p_i}, the structural information of each sampled point is defined as:
[formula two]
h_i(k) = \#\{\, q \neq p_i : (q - p_i) \in \mathrm{bin}(k) \,\}
where bin(k) is the k-th region of a disk whose radial bins are normalized in log space, so that h_i(k) counts the number of points falling in that region, and q ranges over the points of the edge image that satisfy the distance relation (q - p_i) ∈ bin(k) with respect to p_i;
Given a point pair {p_i, q_j}, the shape inconsistency of the two points is defined as:
[formula three]
c_{i,j} = \frac{1}{2} \sum_{k=1}^{K} \frac{[h_i(k) - h_j(k)]^2}{h_i(k) + h_j(k)}
where K is the number of regions into which the disk is divided when accumulating the structural information of a point, and h_i(k) and h_j(k) are the structural information of points p_i and q_j as defined in [formula two];
Given the sampled point sets {p_i} and {q_i} of the edges of two images, the shape similarity of the two images is defined as:
[formula four]
C_{i,j} = \exp\!\left( -\sum_i c_{i,\pi(i)} \right)
where π is the permutation that minimizes the total shape inconsistency of [formula three];
Step 6: For a network image I, in each of its three color channels R, G and B, the eigenvalue of every point x_k in the image is defined as:
[formula five]
\mathrm{Att}(x_k) = \sum_{n=0}^{255} f_n\, D(m, n)
where f_n is the frequency with which pixel value a_n occurs in the image, a_m is the pixel value of x_k, and D(m, n) is the distance between pixel values;
To further extract the feature points that clearly differ from the other points in the image, the eigenvalue of point x_k is modified to:
[formula six]
\mathrm{SalI} = \| \nabla \mathrm{Att} \|
where ∇ denotes the gradient and ‖·‖ denotes the norm;
Step 7: For the given three-dimensional model M, compute the curvature of each vertex of the model and define it as the eigenvalue SalM of that vertex;
Step 8: Given two images I_i and P_j, the goodness of fit of the detail features between them is defined as:
[formula seven]
S_{i,j} = \exp\!\left( -\sum_{x_k \in I_i \cap P_j} \| \mathrm{SalI}(x_k) - \mathrm{SalM}(x_k) \| \right)
The detail goodness of fit is computed with [formula seven] between the color feature map obtained in step 6 and the curvature feature map obtained in step 7;
Step 9: Combining the contour similarity, shape similarity and detail-feature goodness of fit obtained above, the similarity energy function between two images is defined as:
[formula eight]
d_{i,j} = w_1 A(I_i, P_j) + w_2 C(I_i, P_j) + w_3 S(I_i, P_j)
where w_1, w_2 and w_3 are the weights of the three energy terms, and A, C and S are respectively the contour similarity, shape similarity and detail-feature goodness of fit of the two images;
Step 10: For each image in the projection atlas P, compare it with the images in the network image set I and obtain the similarity energy d_{i,j} between them; set a threshold D and, using a sampling statistical method for deciding whether an image I_i in the network image set I and an image P_j in the projection atlas P belong to the same viewing angle of the model, count for each viewing angle the number of network images whose similarity energy d_{i,j} with P_j exceeds the threshold D; sort these counts in descending order and take the three viewing angles with the largest counts as the three best projection views;
Step 11: From the three optimal projection views obtained above and the coordinate information of the viewing angles set when the projection atlas P was generated, finally confirm the three optimal viewing angles of the model.
The present invention evenly arranges viewpoints on a sphere surrounding the given three-dimensional model and obtains the corresponding parallel projection views from these viewpoints. Then, according to keywords of the model, images corresponding to the model are obtained from the network. The similarity between each projection view and the network images is measured with technical indicators such as contour similarity, shape similarity and the detail-feature goodness of fit; the number of network images corresponding to the projection view of each viewing angle is counted; and the viewing angle people use most often is selected and confirmed as the optimal viewing angle of the model.
The given three-dimensional model is projected in parallel uniformly over different viewing angles to form a projection atlas; at the same time, related images are downloaded from the network using keywords of the model and segmented with graph cut to form the network atlas. In this process the user is allowed to screen the images and to interact with the segmentation in order to improve its quality. Next, the mask map of each image in both atlases is obtained; principal component analysis computes the coordinate relation between two images and their coordinates are adjusted accordingly, after which the inclusion relation between the contours at corresponding positions yields the contour similarity between the two images. Then the Canny operator extracts the edges of the mask images, the edges are sampled, and the shape similarity of the two images is computed from their edge sample point sets. Next, the feature information of the vertices of the three-dimensional model and the color feature information of the network image are compared to compute the detail-feature goodness of fit between the two images. Finally, suitable weights are assigned to the contour similarity, the shape similarity and the detail-feature goodness of fit, giving the similarity energy function between two images. This energy function is used to count the number of network images corresponding to each viewpoint, and the viewpoint with the largest count is defined as the optimal viewing angle. During shape matching, shape context information (Belongie, S., J. Malik, and J. Puzicha, Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002: p. 509-522; Ling, H. and D. W. Jacobs, Shape classification using the inner-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007: p. 286-299) provides the structural information of each sampled edge point, and the shape inconsistency of two images is determined by comparing the shape information of corresponding points. When comparing detail feature information, the feature value of each pixel (Zhai, Y. and M. Shah, Visual attention detection in video sequences using spatiotemporal cues. ACM Multimedia, 2006) gives the feature information of each pixel, and the detail inconsistency between two images is determined by comparing the feature differences of corresponding pixels.
Description of drawings
Fig. 1 is the flow chart of the automatic optimal-viewing-angle acquisition algorithm for three-dimensional models based on Internet images;
Fig. 2 is a schematic diagram of the present invention;
Fig. 3 is a schematic diagram of obtaining parallel projection views from different viewing angles;
Fig. 4 is a schematic diagram of the network atlas;
Fig. 5 is a schematic diagram of the outer contours;
Fig. 6 is a schematic diagram of shape matching;
Fig. 7 shows the detail-feature maps of the three-dimensional model and a network image;
Fig. 8 shows the optimal-viewing-angle projection views of some models obtained with the method of the invention (the three best viewing angles of each model are shown, together with the proportion of network images corresponding to each viewing angle).
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is the flow chart of the present invention. The method is divided into 11 steps.
Referring to Fig. 1 and Fig. 2:
Step 1: For the input three-dimensional model M, first sample the viewing angles uniformly on the surrounding sphere, then perform a parallel projection of the model at each sampled viewing angle (Fig. 3), finally obtaining its projection atlas P.
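The patent does not fix a particular sampling scheme for step 1 beyond uniform placement on the surrounding sphere. The sketch below is a minimal illustration assuming the model is available as a NumPy vertex array: Fibonacci-sphere sampling is used as one common approximation of uniform placement, the projection is a plain orthographic (parallel) one, and `load_model_vertices` is a hypothetical loader.

```python
# Minimal sketch of step 1: uniform-ish viewpoints plus parallel projection.
import numpy as np

def fibonacci_sphere(n_views: int) -> np.ndarray:
    """Return n_views roughly uniform unit view directions on the sphere."""
    i = np.arange(n_views)
    phi = np.pi * (3.0 - np.sqrt(5.0))            # golden angle
    z = 1.0 - 2.0 * (i + 0.5) / n_views
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi * i), r * np.sin(phi * i), z], axis=1)

def parallel_project(vertices: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Orthographically project 3D vertices onto the plane orthogonal to view_dir."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    helper = np.array([0.0, 0.0, 1.0])
    if abs(view_dir @ helper) > 0.9:              # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(view_dir, helper); u /= np.linalg.norm(u)
    v = np.cross(view_dir, u)
    return np.stack([vertices @ u, vertices @ v], axis=1)

# Usage (hypothetical loader): build the projection atlas P for a model M.
# vertices = load_model_vertices("model.obj")
# atlas_P = [parallel_project(vertices, d) for d in fibonacci_sphere(162)]
```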
Step 2: Download related images from the network using keywords of the input model M, segment the foreground of each network image with a graph cut algorithm (Fig. 4), and form the network atlas I. Before segmentation, manual screening can be carried out to reject images that do not meet the requirements; after segmentation, interactive means can be used to correct images whose segmentation result is unsatisfactory.
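Step 2 relies on graph-cut foreground segmentation. Below is a minimal sketch using OpenCV's GrabCut as one graph-cut-based segmenter; the full-frame-minus-border initial rectangle is an assumption, since the description instead foresees manual screening and interactive correction.

```python
# Minimal GrabCut-based foreground extraction for a downloaded image (step 2).
import cv2
import numpy as np

def segment_foreground(image_bgr: np.ndarray, iters: int = 5) -> np.ndarray:
    """Return a binary foreground mask (1 = foreground) using OpenCV GrabCut."""
    h, w = image_bgr.shape[:2]                    # image must be 8-bit, 3-channel
    rect = (10, 10, w - 20, h - 20)               # assumed initial bounding box
    mask = np.zeros((h, w), np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_RECT)
    # Definite/probable foreground pixels become 1, everything else 0.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    1, 0).astype(np.uint8)

# mask_I = segment_foreground(cv2.imread("network_image.jpg"))
```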
Step 3: Obtain the mask map of each image in the projection atlas P and the network atlas I.
Step 4: For the mask maps obtained in step 3, the relative position and orientation of the two images must be adjusted in order to eliminate the influence of camera angle and scale on the comparison. First, the targets of the projected-image mask and the network-image mask are placed in the same coordinate system and their geometric centers are moved to the origin. Then a principal component analysis algorithm gives the principal directions of the two images, from which the scaling and rotation between them are computed and their coordinates adjusted. Finally, the coincidence relation between the two images in the common coordinate system is analyzed, and the contour similarity of two images I_i and P_j is defined as:
[formula one]
A_{i,j} = \exp\!\left( -\,\frac{\mathrm{Area}(I_i - P_j) + \mathrm{Area}(P_j - I_i)}{2 \cdot \mathrm{Area}(I_i \cap P_j)} \right)
where I_i - P_j denotes the region contained in I_i but not in P_j, P_j - I_i denotes the region contained in P_j but not in I_i, I_i ∩ P_j denotes the part contained in both I_i and P_j, and Area(·) denotes the area of the corresponding region (Fig. 5).
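A small sketch of step 4 and [formula one]: each mask is centred, PCA-rotated and rescaled onto a common grid, and the area-ratio similarity is evaluated. The crude rasterisation onto a fixed grid (which may leave small holes) is an added simplification; the description only prescribes the PCA-based coordinate adjustment and the area formula.

```python
# Sketch of step 4 + [formula one]: PCA alignment and area-based contour similarity.
import numpy as np

def normalize_mask(mask: np.ndarray, size: int = 256) -> np.ndarray:
    """Centre, PCA-rotate and rescale a binary mask onto a size x size grid."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)                        # geometric centre to the origin
    _, vecs = np.linalg.eigh(np.cov(pts.T))        # principal directions
    pts = pts @ vecs                               # rotate onto the principal axes
    pts /= np.abs(pts).max() + 1e-12               # normalise the scale
    grid = np.zeros((size, size), dtype=bool)      # crude rasterisation (may leave holes)
    ij = np.clip(((pts + 1.0) * 0.5 * (size - 1)).astype(int), 0, size - 1)
    grid[ij[:, 1], ij[:, 0]] = True
    return grid

def contour_similarity(mask_i: np.ndarray, mask_j: np.ndarray) -> float:
    """A_{i,j} = exp(-(Area(I_i-P_j)+Area(P_j-I_i)) / (2*Area(I_i ∩ P_j)))."""
    a, b = normalize_mask(mask_i), normalize_mask(mask_j)
    only_a = np.logical_and(a, ~b).sum()           # Area(I_i - P_j)
    only_b = np.logical_and(b, ~a).sum()           # Area(P_j - I_i)
    overlap = np.logical_and(a, b).sum()           # Area(I_i ∩ P_j)
    if overlap == 0:                               # disjoint shapes -> no similarity
        return 0.0
    return float(np.exp(-(only_a + only_b) / (2.0 * overlap)))
```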
Step 5: For the mask maps of the atlases I and P obtained in step 3, first use the Canny operator to obtain their edge maps U and V. Then sample the edge maps to obtain the point sets {p_i} and {q_i}. Shape context information (Belongie, S., J. Malik, and J. Puzicha, Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002: p. 509-522) provides the structural information of each sampled edge point (Fig. 6). Given the point set {p_i}, the structural information of each sampled point is defined as:
[formula two]
h_i(k) = \#\{\, q \neq p_i : (q - p_i) \in \mathrm{bin}(k) \,\}
where bin(k) is the k-th region of a disk whose radial bins are normalized in log space, so that h_i(k) counts the number of points falling in that region, and q ranges over the points of the edge image that satisfy the distance relation (q - p_i) ∈ bin(k) with respect to p_i. Moreover, to obtain the structural information of each point more accurately, the distance used in the formula is the inner-distance (Ling, H. and D. W. Jacobs, Shape classification using the inner-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007: p. 286-299).
Comparing the structural information of two pixels in the given images yields the difference between their structures. Given a point pair {p_i, q_j}, the shape inconsistency of the two points is defined as:
[formula three]
c_{i,j} = \frac{1}{2} \sum_{k=1}^{K} \frac{[h_i(k) - h_j(k)]^2}{h_i(k) + h_j(k)}
where K is the number of regions into which the disk is divided when accumulating the structural information of a point, and h_i(k) and h_j(k) are the structural information of points p_i and q_j as defined in [formula two].
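A sketch of the shape-context structural information of [formula two], assuming the sampled edge points are given as an (N, 2) array. Plain Euclidean distances are used instead of the inner-distance recommended above, and the histograms are normalised so that point sets of different sizes compare fairly; both are simplifications of the description.

```python
# Sketch of [formula two]: log-polar shape-context histograms for edge points.
import numpy as np

def shape_context_histograms(points: np.ndarray,
                             n_r: int = 5, n_theta: int = 12) -> np.ndarray:
    """Return an (N, n_r * n_theta) array; row i is h_i(k) for point p_i."""
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]        # diff[i, q] = q - p_i
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])
    mean_d = dist[dist > 0].mean()                         # scale normalisation
    # Radial bin edges spaced in log space, as in the shape-context descriptor.
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_d
    r_bin = np.clip(np.searchsorted(r_edges, dist) - 1, 0, n_r - 1)
    t_bin = ((angle + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        for q in range(n):
            if q == i or dist[i, q] == 0:                  # skip the point itself
                continue
            hists[i, r_bin[i, q] * n_theta + t_bin[i, q]] += 1
    # Normalise each histogram (a small deviation from the raw counts above).
    return hists / np.maximum(hists.sum(axis=1, keepdims=True), 1)
```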
By comparing the shape information differences of corresponding points between two images, the shape inconsistency of the two images can be determined. Given two images and their edge sample point sets {p_i} and {q_i}, the shape similarity of the two images is defined as:
[formula four]
C_{i,j} = \exp\!\left( -\sum_i c_{i,\pi(i)} \right)
where π is the permutation that minimizes the total shape inconsistency of [formula three].
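A sketch of [formula three] and [formula four]: the chi-square inconsistency between shape-context histograms and the shape similarity under the cost-minimising permutation π, computed here with the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`). It reuses the `shape_context_histograms` sketch above; the patent itself does not prescribe a particular solver for π.

```python
# Sketch of [formula three] and [formula four]: chi-square cost + optimal matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def chi_square_cost(h_p: np.ndarray, h_q: np.ndarray) -> np.ndarray:
    """c_{i,j} = 0.5 * sum_k (h_i(k) - h_j(k))^2 / (h_i(k) + h_j(k))."""
    num = (h_p[:, None, :] - h_q[None, :, :]) ** 2
    den = h_p[:, None, :] + h_q[None, :, :]
    return 0.5 * np.where(den > 0, num / np.maximum(den, 1e-12), 0.0).sum(axis=2)

def shape_similarity(points_p: np.ndarray, points_q: np.ndarray) -> float:
    """C = exp(-sum_i c_{i, pi(i)}) for the cost-minimising assignment pi."""
    cost = chi_square_cost(shape_context_histograms(points_p),
                           shape_context_histograms(points_q))
    rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
    return float(np.exp(-cost[rows, cols].sum()))
```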
Step 6: The detail features of an image are described by the color feature value of each pixel (Zhai, Y. and M. Shah, Visual attention detection in video sequences using spatiotemporal cues. ACM Multimedia, 2006). Given a color image I, in each of its three color channels R, G and B, the eigenvalue of each pixel I_k in I is defined as:
\mathrm{SalS}(I_k) = \sum_{\forall I_i \in I} \| I_k - I_i \|
where ‖·‖ denotes the distance between pixel values.
This formula can be rewritten as:
\mathrm{SalS}(I_k) = \| I_k - I_1 \| + \| I_k - I_2 \| + \dots + \| I_k - I_N \|
where N is the total number of pixels in image I. Writing the pixel value of I_k as a_m, the formula can further be rewritten as:
\mathrm{SalS}(I_k) = \| a_m - a_0 \| + \| a_m - a_1 \| + \dots
Since in each of the R, G and B color channels a pixel can only take a value between 0 and 255, the eigenvalue of every point x_k in the image can therefore be defined as:
[formula five]
\mathrm{Att}(x_k) = \sum_{n=0}^{255} f_n\, D(m, n)
where f_n is the frequency with which pixel value a_n occurs in the image, a_m is the pixel value of x_k, and D(m, n) is the distance between pixel values.
To further extract the feature points that clearly differ from the other points in the image, the eigenvalue of point x_k is modified to:
[formula six]
\mathrm{SalI} = \| \nabla \mathrm{Att} \|
where ∇ denotes the gradient and ‖·‖ denotes the norm (Fig. 7).
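A sketch of [formula five] and [formula six] per colour channel: Att is computed from the 256-bin histogram and the saliency is its gradient magnitude. Summing the three channels is an assumption; the description defines the value per channel but does not fix how R, G and B are combined.

```python
# Sketch of [formula five] and [formula six]: histogram-based contrast saliency.
import numpy as np

def channel_attention(channel: np.ndarray) -> np.ndarray:
    """Att(x_k) = sum_n f_n * D(m, n), with D the absolute-value distance.
    `channel` is expected to be a uint8 single-channel image."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    freq = hist / hist.sum()
    values = np.arange(256, dtype=np.float64)
    # Pre-compute Att for every possible pixel value m, then look it up per pixel.
    att_per_value = np.abs(values[:, None] - values[None, :]) @ freq
    return att_per_value[channel]

def color_saliency(image_rgb: np.ndarray) -> np.ndarray:
    """SalI = ||grad Att||, summed over the three colour channels (an assumption)."""
    sal = np.zeros(image_rgb.shape[:2], dtype=np.float64)
    for c in range(3):
        att = channel_attention(image_rgb[..., c])
        gy, gx = np.gradient(att)                  # gradients along rows and columns
        sal += np.hypot(gx, gy)                    # gradient magnitude
    return sal
```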
Step 7: For a given three-dimensional model M, its surface feature information is mainly embodied in the curvature of each vertex of the surface. Compute the curvature of each vertex of the model and define it as the eigenvalue SalM of that vertex (Fig. 7).
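Step 7 only requires the curvature of each vertex without fixing an estimator. The sketch below uses the angle deficit (a discrete Gaussian-curvature estimate) as one simple stand-in, assuming the mesh is given as vertex and face arrays; the patent does not commit to this particular estimator.

```python
# Rough sketch of step 7: a per-vertex discrete curvature estimate (angle deficit).
import numpy as np

def vertex_angle_deficit(vertices: np.ndarray, faces: np.ndarray) -> np.ndarray:
    """SalM(v) ~ 2*pi minus the sum of triangle angles incident to vertex v."""
    angle_sum = np.zeros(len(vertices))
    for tri in faces:                                   # tri = (i0, i1, i2)
        pts = vertices[tri]
        for k in range(3):
            a, b, c = pts[k], pts[(k + 1) % 3], pts[(k + 2) % 3]
            u, v = b - a, c - a
            cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            angle_sum[tri[k]] += np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return 2.0 * np.pi - angle_sum
```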
Step 8: Given two images I_i and P_j, the goodness of fit of the detail features between them is defined as:
[formula seven]
S_{i,j} = \exp\!\left( -\sum_{x_k \in I_i \cap P_j} \| \mathrm{SalI}(x_k) - \mathrm{SalM}(x_k) \| \right)
The detail goodness of fit is computed with [formula seven] between the color feature map obtained in step 6 and the curvature feature map obtained in step 7.
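A sketch of [formula seven], assuming the colour-saliency map of the network image and the rendered curvature map of the projection have already been aligned to the same grid; the max-normalisation of the two maps is an added assumption that keeps the exponent in a comparable range, something the description leaves unspecified.

```python
# Sketch of [formula seven]: detail goodness of fit over the overlapping region.
import numpy as np

def detail_fit(sal_image: np.ndarray, sal_model: np.ndarray,
               mask_i: np.ndarray, mask_j: np.ndarray) -> float:
    """S_{i,j} = exp(-sum_{x in I_i ∩ P_j} |SalI(x) - SalM(x)|)."""
    overlap = np.logical_and(mask_i.astype(bool), mask_j.astype(bool))
    # Scale both maps to [0, 1] (an assumption) so the sum stays comparable.
    si = sal_image / (sal_image.max() + 1e-12)
    sm = sal_model / (sal_model.max() + 1e-12)
    return float(np.exp(-np.abs(si[overlap] - sm[overlap]).sum()))
```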
Step 9: Combining the contour similarity, shape similarity and detail-feature goodness of fit obtained above, the similarity energy function between two images is defined as:
[formula eight]
d_{i,j} = w_1 A(I_i, P_j) + w_2 C(I_i, P_j) + w_3 S(I_i, P_j)
where w_1, w_2 and w_3 are the weights of the three energy terms, and A, C and S are respectively the contour similarity, shape similarity and detail-feature goodness of fit of the two images.
Step 10: For each image in the projection atlas P, compare it with the images in the network image set I and obtain the similarity energy d_{i,j} between them. A network image similar to a projected image is regarded as showing the same viewing angle of the model as that projection, so a threshold D is set; using a sampling statistical method for deciding whether an image I_i in the network image set I and an image P_j in the projection atlas P belong to the same viewing angle of the model, the number of network images whose similarity energy d_{i,j} with P_j exceeds the threshold D is counted for each viewing angle. These counts are sorted in descending order and the three viewing angles with the largest counts are the three best projection views sought (Fig. 8).
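A sketch of steps 9 and 10 operating on whole similarity matrices: [formula eight] is evaluated for all image pairs at once, the network images exceeding the threshold D are counted per viewing angle, and the three best-voted viewing angles are returned. The weight values and the threshold value are placeholders; the patent leaves them to the sampling/statistics stage.

```python
# Sketch of step 9 + step 10: combine the energies, threshold, and vote per view.
import numpy as np

def best_viewpoints(A: np.ndarray, C: np.ndarray, S: np.ndarray,
                    w=(1.0, 1.0, 1.0), D: float = 1.5, top: int = 3):
    """A, C, S are (num_network_images, num_projections) similarity matrices."""
    d = w[0] * A + w[1] * C + w[2] * S          # [formula eight] for all pairs at once
    votes = (d > D).sum(axis=0)                 # network images matching each viewpoint
    order = np.argsort(votes)[::-1]             # viewpoints sorted by descending votes
    return order[:top], votes[order[:top]]

# view_ids, counts = best_viewpoints(A_mat, C_mat, S_mat)
```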
Step 11: From the three optimal projection views obtained above and the coordinate information of the viewing angles set when the projection atlas P was generated, finally confirm the three optimal viewing angles of the model.
As described above, the present invention proposes a method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images. It makes full use of the Internet as a massive resource library, measures the similarity between each projection view and the network images by combining technical indicators such as contour similarity, shape similarity and the detail-feature goodness of fit, counts the number of network images corresponding to the projection view of each viewing angle, and selects the viewing angle people use most often as the optimal viewing angle of the model. Rather than reducing a good viewing angle to a few parameters such as visible area or curvature entropy, the method fully incorporates human viewing habits, so the optimal viewing angle it obtains is easier for people to accept.
Although the present invention has been explained and described with reference to the accompanying drawings, those skilled in the art should appreciate that various other changes, additions and deletions can be made without departing from the spirit and scope of the present invention.

Claims (2)

1. A method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images, characterized by comprising the following steps:
1) using an image feature extraction method based on color contrast, computing the feature maps of the downloaded network images;
2) for the input three-dimensional model, computing the curvature of each vertex of the model surface as the shape feature characterizing the surface, and obtaining the projection views of the surface shape features of the three-dimensional model;
3) based on the registration of the outer contours of the projected images of the three-dimensional model and the network images, which measures the degree of contour coincidence between two images, computing the energy function characterizing the contour similarity between the images by maximizing the contour coincidence between them;
4) based on the similarity of the contour shapes of the projected images of the three-dimensional model and the network images, which characterizes the consistency of the shapes of the two images, obtaining from this similarity the energy function characterizing shape similarity;
5) based on the goodness of fit between the detail features of the projection views of the surface shape features of the three-dimensional model and the feature maps of the downloaded network images, which measures how well the detail features of two images coincide, obtaining the corresponding feature energy function;
6) determining the optimal viewing angles statistically from the similarity energy function between the projection views of the surface shape features of the three-dimensional model and the feature maps of the downloaded network images, wherein a network image similar to a projected image is regarded as showing the same viewing angle of the model as that projection, a threshold D is set and, using a sampling statistical method for deciding whether an image I_i in the network image set I and an image P_j in the projection atlas P belong to the same viewing angle of the model, the number of network images whose similarity energy d_{i,j} with P_j exceeds the threshold D is counted for each viewing angle; these counts are sorted in descending order and the three viewing angles with the largest counts are the three best projection views sought;
the similarity energy function being the superposition of the energy function of the contour similarity, the energy function of the shape similarity and the energy function of the detail-feature goodness of fit.
2. The method for automatically obtaining the optimal viewing angle of a three-dimensional model based on Internet images as claimed in claim 1, wherein the concrete execution steps are as follows:
Step 1: for the input three-dimensional model M, perform parallel projections of it uniformly over different viewing angles to obtain its projection atlas P;
Step 2: download related images from the network using keywords of the input model M, segment the foreground of each network image with a graph cut algorithm, and form the network atlas I;
Step 3: obtain the mask map of each image in the projection atlas P and the network atlas I;
Step 4: using the mask maps obtained in step 3, first place the targets of the projected-image mask and the network-image mask in the same coordinate system and adjust their coordinates with a principal component analysis algorithm, then define the contour similarity of two images I_i and P_j as:
[formula one]
A_{i,j} = \exp\!\left( -\,\frac{\mathrm{Area}(I_i - P_j) + \mathrm{Area}(P_j - I_i)}{2 \cdot \mathrm{Area}(I_i \cap P_j)} \right)
where I_i - P_j denotes the region contained in I_i but not in P_j, P_j - I_i denotes the region contained in P_j but not in I_i, I_i ∩ P_j denotes the part contained in both I_i and P_j, and Area(·) denotes the area of the corresponding region;
Step 5: for the mask maps obtained in step 3, use the Canny operator to obtain the edge maps U and V of the projected-image mask and the network-image mask respectively, and sample the edge maps U and V to obtain the point sets {p_i} and {q_i}; given the point set {p_i}, the structural information of each sampled point is defined as:
[formula two]
h_i(k) = \#\{\, q \neq p_i : (q - p_i) \in \mathrm{bin}(k) \,\}
where bin(k) is the k-th region of a disk whose radial bins are normalized in log space, so that h_i(k) counts the number of points falling in that region, and q ranges over the points of the edge image that satisfy the distance relation (q - p_i) ∈ bin(k) with respect to p_i;
given a point pair {p_i, q_j}, the shape inconsistency of the two points is defined as:
[formula three]
c_{i,j} = \frac{1}{2} \sum_{k=1}^{K} \frac{[h_i(k) - h_j(k)]^2}{h_i(k) + h_j(k)}
where K is the number of regions into which the disk is divided when accumulating the structural information of a point, and h_i(k) and h_j(k) are the structural information of points p_i and q_j as defined in [formula two];
given the sampled point sets {p_i} and {q_i} of the edges of two images, the shape similarity of the two images is defined as:
[formula four]
C_{i,j} = \exp\!\left( -\sum_i c_{i,\pi(i)} \right)
where π is the permutation that minimizes the total shape inconsistency of [formula three];
Step 6: for a network image I, in each of its three color channels R, G and B, the eigenvalue of every point x_k in the image is defined as:
[formula five]
\mathrm{Att}(x_k) = \sum_{n=0}^{255} f_n\, D(m, n)
where f_n is the frequency with which pixel value a_n occurs in the image, a_m is the pixel value of x_k, and D(m, n) is the distance between pixel values;
to further extract the feature points that clearly differ from the other points in the image, the eigenvalue of point x_k is modified to:
[formula six]
\mathrm{SalI} = \| \nabla \mathrm{Att} \|
where ∇ denotes the gradient and ‖·‖ denotes the norm;
Step 7: for the given three-dimensional model M, compute the curvature of each vertex of the model and define it as the eigenvalue SalM of that vertex;
Step 8: given two images I_i and P_j, the goodness of fit of the detail features between them is defined as:
[formula seven]
S_{i,j} = \exp\!\left( -\sum_{x_k \in I_i \cap P_j} \| \mathrm{SalI}(x_k) - \mathrm{SalM}(x_k) \| \right)
the detail goodness of fit being computed with [formula seven] between the color feature map obtained in step 6 and the curvature feature map obtained in step 7;
Step 9: combining the contour similarity, shape similarity and detail-feature goodness of fit obtained above, the similarity energy function between two images is defined as:
[formula eight]
d_{i,j} = w_1 A(I_i, P_j) + w_2 C(I_i, P_j) + w_3 S(I_i, P_j)
where w_1, w_2 and w_3 are the weights of the three energy terms, and A, C and S are respectively the contour similarity, shape similarity and detail-feature goodness of fit of the two images;
Step 10: for each image in the projection atlas P, compare it with the images in the network image set I and obtain the similarity energy d_{i,j} between them; set a threshold D and, using a sampling statistical method for deciding whether an image I_i in the network image set I and an image P_j in the projection atlas P belong to the same viewing angle of the model, count for each viewing angle the number of network images whose similarity energy d_{i,j} with P_j exceeds the threshold D; sort these counts in descending order and take the three viewing angles with the largest counts as the three best projection views;
Step 11: from the three optimal projection views obtained above and the coordinate information of the viewing angles set when the projection atlas P was generated, finally confirm the three optimal viewing angles of the model.
CN 201110089940 2011-04-11 2011-04-11 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image Expired - Fee Related CN102163343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110089940 CN102163343B (en) 2011-04-11 2011-04-11 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110089940 CN102163343B (en) 2011-04-11 2011-04-11 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image

Publications (2)

Publication Number Publication Date
CN102163343A CN102163343A (en) 2011-08-24
CN102163343B true CN102163343B (en) 2013-11-06

Family

ID=44464553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110089940 Expired - Fee Related CN102163343B (en) 2011-04-11 2011-04-11 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image

Country Status (1)

Country Link
CN (1) CN102163343B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182765B (en) * 2014-08-21 2017-03-22 南京大学 Internet image driven automatic selection method of optimal view of three-dimensional model
CN104751463B (en) * 2015-03-31 2017-10-13 同济大学 A kind of threedimensional model optimal viewing angle choosing method based on sketch outline feature
CN107346529A (en) * 2016-05-07 2017-11-14 浙江大学 A kind of digital picture quality evaluation method and device
CN113457161B (en) * 2021-07-16 2024-02-13 深圳市腾讯网络信息技术有限公司 Picture display method, information generation method, device, equipment and storage medium
CN114554294A (en) * 2022-03-04 2022-05-27 天比高零售管理(深圳)有限公司 Live broadcast content filtering and prompting method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101004748A (en) * 2006-10-27 2007-07-25 北京航空航天大学 Method for searching 3D model based on 2D sketch
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101004748A (en) * 2006-10-27 2007-07-25 北京航空航天大学 Method for searching 3D model based on 2D sketch
CN101271469A (en) * 2008-05-10 2008-09-24 深圳先进技术研究院 Two-dimension image recognition based on three-dimensional model warehouse and object reconstruction method

Also Published As

Publication number Publication date
CN102163343A (en) 2011-08-24

Similar Documents

Publication Publication Date Title
WO2020215985A1 (en) Medical image segmentation method and device, electronic device and storage medium
WO2020151489A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
Goferman et al. Context-aware saliency detection
CN102163343B (en) Three-dimensional model optimal viewpoint automatic obtaining method based on internet image
CN104850822B (en) Leaf identification method under simple background based on multi-feature fusion
CN107862698A (en) Light field foreground segmentation method and device based on K mean cluster
CN102567993B (en) Fingerprint image quality evaluation method based on main component analysis
CN104182765A (en) Internet image driven automatic selection method of optimal view of three-dimensional model
CN102663391A (en) Image multifeature extraction and fusion method and system
CN102508917B (en) Multi-dimensional object robust high-speed retrieval and positioning method for some feature images
Behrisch et al. Magnostics: Image-based search of interesting matrix views for guided network exploration
CN110490238A (en) A kind of image processing method, device and storage medium
CN103473551A (en) Station logo recognition method and system based on SIFT operators
Richardson et al. Extracting scar and ridge features from 3D-scanned lithic artifacts
CN103336835B (en) Image retrieval method based on weight color-sift characteristic dictionary
CN106484692A (en) A kind of method for searching three-dimension model
CN107341813A (en) SAR image segmentation method based on structure learning and sketch characteristic inference network
CN107305691A (en) Foreground segmentation method and device based on images match
CN103383700A (en) Image retrieval method based on margin directional error histogram
CN103399863B (en) Image search method based on the poor characteristic bag of edge direction
CN105808665A (en) Novel hand-drawn sketch based image retrieval method
Bagchi et al. A robust analysis, detection and recognition of facial features in 2.5 D images
Rangkuti et al. Batik image retrieval based on similarity of shape and texture characteristics
Li et al. The research on traffic sign recognition based on deep learning
CN106415606A (en) Edge-based recognition, systems and methods

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131106

Termination date: 20160411