CN106997617A - Mixed reality virtual presentation method and device - Google Patents
Mixed reality virtual presentation method and device
- Publication number
- CN106997617A CN106997617A CN201710140525.9A CN201710140525A CN106997617A CN 106997617 A CN106997617 A CN 106997617A CN 201710140525 A CN201710140525 A CN 201710140525A CN 106997617 A CN106997617 A CN 106997617A
- Authority
- CN
- China
- Prior art keywords
- image
- point
- plane
- characteristic point
- mixed reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiments of the invention disclose a mixed reality virtual presentation method and device. The method includes: performing moving capture with a camera to obtain at least two plane-determination images matching the user's field of view; drawing, from the at least two plane-determination images, the spatial grid planes contained in the user's field of view; determining, from the spatial grid planes, a virtual scene corresponding to a 3D object to be displayed; and outputting the virtual scene to mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses. The embodiments of the invention determine the spatial planes corresponding to the real scene from images acquired by the camera, and blend the 3D object with those spatial planes to form a virtual image. The virtual image thus formed has a strong sense of reality and to some extent achieves the purpose of deceiving the human brain, making it hard for people to distinguish the real scene from the virtual scene, while reducing implementation cost.
Description
Technical field
The embodiments of the present invention relate to the field of mixed reality technology, and in particular to a mixed reality virtual presentation method and device.
Background technology
MR (Mixed Reality) is a technology that superimposes and presents real-world information and virtual-world information together. It involves emerging techniques and means such as multimedia processing, three-dimensional modeling, real-time video display and control, real-time tracking, and scene fusion. MR systems have three prominent characteristics: (1) integration of real-world and virtual-world information; (2) real-time interactivity; (3) positioning of virtual objects in three-dimensional space. MR technology can be widely applied in fields such as the military, medicine, architecture, education, engineering, film and television, and entertainment.
To achieve a mixed reality effect, current common presentation approaches, such as Google Glass and Microsoft HoloLens, directly superimpose a virtually constructed display image onto the real-world image for mixed display. However, when the display content is complex, sophisticated algorithms are needed to compute the superposition position of the virtual image relative to the real image, and the virtual image must be scaled, rotated, and otherwise processed before the superimposed mixed image can achieve the greatest possible sense of reality. The drawbacks of the prior art are therefore that MR systems are technically complex and costly to implement, and the sense of reality is poor.
Summary of the invention
The embodiments of the present invention provide a mixed reality virtual presentation method and device, which can realize an effective virtual presentation of mixed reality while reducing implementation cost.
In a first aspect, an embodiment of the invention provides a mixed reality virtual presentation method, including:
performing moving capture with a camera to obtain at least two plane-determination images matching the user's field of view, wherein the camera is mounted on the mixed reality glasses worn by the user;
drawing, from the at least two plane-determination images, the spatial grid planes contained in the user's field of view;
determining, from the spatial grid planes, a virtual scene corresponding to a 3D object to be displayed;
outputting the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses.
In a second aspect, an embodiment of the invention further provides a mixed reality virtual presentation device, including:
a plane-determination-image acquisition module, configured to perform moving capture with a camera to obtain at least two plane-determination images matching the user's field of view, wherein the camera is mounted on the mixed reality glasses worn by the user;
a spatial-grid-plane drawing module, configured to draw, from the at least two plane-determination images, the spatial grid planes contained in the user's field of view;
a virtual-scene determining module, configured to determine, from the spatial grid planes, a virtual scene corresponding to a 3D object to be displayed;
a virtual-scene output module, configured to output the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses.
In the embodiments of the invention, moving capture is performed with a camera to obtain at least two plane-determination images matching the user's field of view; the spatial grid planes contained in the user's field of view are drawn from the at least two plane-determination images; a virtual scene corresponding to a 3D object to be displayed is determined from the spatial grid planes; and the virtual scene is output to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses. The virtual image thus formed has a strong sense of reality and to some extent deceives the human brain, making it hard for people to distinguish the real scene from the virtual scene; compared with existing devices such as Google Glass and Microsoft HoloLens, the implementation cost is low.
Brief description of the drawings
Fig. 1 is a flowchart of a mixed reality virtual presentation method provided by Embodiment 1 of the invention;
Fig. 2a is a flowchart of a mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 2b is a schematic diagram of the feature-point position distribution at time t-1 in the mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 2c is a schematic diagram of the feature-point position distribution at time t in the mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 2d is a schematic diagram of the distribution of extracted image feature points in the mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 2e is a schematic diagram of a spatial grid plane determined in the mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 2f is a schematic diagram of another spatial grid plane determined in the mixed reality virtual presentation method provided by Embodiment 2 of the invention;
Fig. 3a is a flowchart of a mixed reality virtual presentation method provided by Embodiment 3 of the invention;
Fig. 3b is a schematic diagram of a 3D object to be displayed superimposed on a spatial grid plane, provided by Embodiment 3 of the invention;
Fig. 3c is a schematic diagram of outputting the virtual scene through two channels to the different lenses of the mixed reality glasses, in the mixed reality virtual presentation method provided by Embodiment 3 of the invention;
Fig. 4 is a structural diagram of a mixed reality virtual presentation device provided by Embodiment 4 of the invention.
Detailed description of the embodiments
The invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention rather than the entire structure.
First, to facilitate the explanation that follows, the inventive concept of the invention is briefly described.
In the embodiments of the invention, mixed reality glasses are used. The glasses project light by reflection onto translucent lenses, achieving a wide-angle acquisition of a scene almost identical to the real scene. The mixed reality glasses carry a camera that simulates the human eye acquiring the scenery in the surrounding environment; the scenery captured by the camera is not used to generate the virtual scene or the real scene, but only to determine the spatial grid planes contained in the field of view of the user wearing the glasses. Once the spatial grid planes are determined, the 3D object to be displayed can be superimposed on them to generate a virtual scene. Finally, the virtual scene can be output to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses.
Embodiment 1
Fig. 1 is a flowchart of a mixed reality virtual presentation method provided by Embodiment 1 of the invention. This embodiment is applicable to situations where a real scene is fused with a virtual scene. The method can be performed by the mixed reality virtual presentation device provided by the embodiments of the invention, which can be implemented in software and/or hardware and can typically be integrated into the mixed reality glasses used for displaying the mixed reality scene. As shown in Fig. 1, the method of this embodiment specifically includes:
S110: performing moving capture with a camera to obtain at least two plane-determination images matching the user's field of view.
In this embodiment, the camera is mounted on the mixed reality glasses worn by the user and simulates the user's eyes acquiring the scenery in the surrounding environment, so that the spatial grid planes contained in the user's field of view can be accurately determined.
A plane-determination image is an image captured by the camera that is used for determining the spatial grid planes contained in the user's field of view. The at least two plane-determination images are shot continuously by the camera for the same scene, with the camera moved short distances to different positions during shooting, so that the resulting images partially overlap; that is, different plane-determination images contain one or more identical scene elements.
S120: drawing, from the at least two plane-determination images, the spatial grid planes contained in the user's field of view.
In this embodiment, after the at least two plane-determination images are acquired, the feature points in each plane-determination image can first be extracted. A feature point is a point in the image that embodies the features of the scene; such points are unaffected by the size and rotation of the scenery, are highly tolerant to changes in lighting, noise, and small changes in viewing angle, and make the scenery easy to recognize with few misidentifications. After the feature points in the different plane-determination images are extracted, the same feature points contained in the different plane-determination images are obtained by feature-point matching. The corresponding spatial grid attribute structure can then be obtained from the motion trajectories of these same feature points across the different plane-determination images, and the corresponding spatial grid planes can be drawn from the spatial grid attribute structure. The spatial grid planes characterize at least one plane in the space, and each plane can be represented in the form of connected grids.
Typically, the feature points contained in a plane can be obtained by the SIFT (Scale-Invariant Feature Transform) algorithm.
S130: determining, from the spatial grid planes, a virtual scene corresponding to a 3D object to be displayed.
The 3D object to be displayed is a virtual 3D object that needs to be fused with the real scene acquired through the mixed reality glasses and then displayed. The inventors found that displaying the 3D object directly, without considering the content observable in the user's current field of view, yields a mixed reality scene with low realism and a poor user experience. Accordingly, in this embodiment, the spatial grid planes contained in the user's field of view are obtained first, and the positional relationship between the 3D object to be displayed and each plane can then be taken into account, so that the corresponding virtual scene is generated after superimposing the 3D object on the spatial grid planes, enhancing the sense of reality of the virtual image (i.e., the 3D object to be displayed).
S140: outputting the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses.
To finally obtain the mixed reality scene fusing the real scene and the virtual scene, the virtual scene must be output to the mixed reality glasses.
In the embodiment of the invention, moving capture is performed with a camera to obtain at least two plane-determination images matching the user's field of view; the spatial grid planes contained in the user's field of view are drawn from the at least two plane-determination images; a virtual scene corresponding to a 3D object to be displayed is determined from the spatial grid planes; and the virtual scene is output to the mixed reality glasses, achieving the effect that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses. The virtual image thus formed has a strong sense of reality, to some extent deceiving the human brain so that people find it hard to distinguish the real scene from the virtual scene; compared with existing devices such as Google Glass and Microsoft HoloLens, the implementation cost is low.
Embodiment 2
Fig. 2a is a flowchart of a mixed reality virtual presentation method provided by Embodiment 2 of the invention. This embodiment is further optimized on the basis of the above embodiment. In this embodiment, drawing the spatial grid planes contained in the user's field of view from the at least two plane-determination images is specifically refined as: extracting feature points from each of the at least two plane-determination images, and marking, in the at least two plane-determination images, the positions of the same feature points; obtaining the motion feature parameters of the same feature points from their positions in the different plane-determination images; and drawing, from the motion feature parameters, the spatial grid planes contained in the user's field of view.
Accordingly, the method of this embodiment specifically includes:
S210: performing moving capture with a camera to obtain at least two plane-determination images matching the user's field of view, wherein the camera is mounted on the mixed reality glasses worn by the user.
S220: extracting feature points from each of the at least two plane-determination images, and marking the positions of the same feature points in the at least two plane-determination images.
In an optional implementation of this embodiment, extracting feature points from a plane-determination image can specifically include the following steps:
S2201: generating a difference-of-Gaussians image from at least two scale-space images corresponding to the plane-determination image, and detecting extreme points in the difference-of-Gaussians image as feature points.
In this step, each plane-determination image I(x, y) can be converted to multiple scales according to the formula L(x, y, σ) = G(x, y, σ) * I(x, y), generating at least two scale-space images L(x, y, σ); here (x, y) are the two-dimensional coordinates of a pixel in the image, I(x, y) is the plane-determination image, σ is the scale-space factor, and the Gaussian function G(x, y, σ) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)) generates a different scale-space image for each σ.
Then, adjacent scale-space images are subtracted from each other to give the difference-of-Gaussians image D(x, y); different difference-of-Gaussians images correspond to different layers of the difference-of-Gaussians pyramid.
To aid understanding of what follows, the concept of the image pyramid is introduced first. Each group of the image pyramid has several layers. The first layer of the first group is the original image (i.e., I(x, y)); Gaussian smoothing (Gaussian convolution, Gaussian blur) is then applied to the original image to obtain L(x, y, σ). Gaussian smoothing has a parameter σ, taken as 1.6 in SIFT; σ is then multiplied by a scale coefficient k to give a new smoothing factor used to smooth the second layer of the first group, yielding the third layer of the first group. Repeating this several times, the L layers of the first group correspond to the smoothing parameters 0, σ, kσ, k²σ, … (in the embodiments of the invention, these parameters are the "different σ" above). The last image of the first group is then down-sampled by a scale factor of 2 to obtain the first layer of the second group; Gaussian smoothing with parameter σ is applied to the first layer of the second group to obtain its second layer, smoothing with kσ is applied to its second layer to obtain its third layer, and so on, until the image pyramid is finally obtained.
The difference-of-Gaussians (DoG) pyramid is constructed from the above image pyramid: the first-layer image of the first group of the DoG pyramid is obtained by subtracting the first-layer image of the first group of the image pyramid from its second-layer image; the second-layer image of the first group of the DoG pyramid is obtained by subtracting the second-layer image of the first group of the image pyramid from its third-layer image, and so on. After every group of images in the image pyramid has been processed in this way, each difference-of-Gaussians image D(x, y) contained in the DoG pyramid has been generated.
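As an illustration of the octave construction just described, the following NumPy sketch builds one octave of Gaussian-smoothed images and their adjacent differences. The default values (σ = 1.6, layers smoothed with σ, kσ, k²σ, …) follow the text, while the function names, the choice of k = √2, and blurring the original image directly at each scale are assumptions of the sketch, not prescribed by it.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    # Normalized 1-D Gaussian kernel
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Separable Gaussian smoothing L(x, y, sigma): filter rows, then columns
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def dog_octave(img, sigma=1.6, k=2 ** 0.5, layers=4):
    # One octave: the original image plus successively smoothed versions
    # (smoothing parameters sigma, k*sigma, k^2*sigma, ...), then the
    # adjacent differences D(x, y) that form one group of the DoG pyramid.
    gaussians = [img.astype(float)]
    s = sigma
    for _ in range(layers - 1):
        gaussians.append(gaussian_blur(img.astype(float), s))
        s *= k
    return [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]
```

Down-sampling the last image of the octave by 2 and repeating would yield the next group of the pyramid.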
After the difference-of-Gaussians image of each layer of the DoG pyramid has been generated, the generated difference-of-Gaussians image can be differentiated, and the extreme point x̂ at which the derivative equals 0 is taken as a candidate feature point, where D_l(x, y) denotes the difference-of-Gaussians image of the l-th layer of the DoG pyramid.
Then D_l(x̂)/D_{l-1}(x̂) is calculated; if the value obtained is not less than 0.7, the corresponding candidate feature point x̂ is retained as a feature point. Here 0.7 is a default threshold, which can also be adjusted to suit different requirements as the basis for selecting feature points; D(x̂) denotes the difference-of-Gaussians value at the feature point x̂, D_l(x̂) the value of the l-th layer difference-of-Gaussians image of the DoG pyramid at x̂, and D_{l-1}(x̂) the value of the (l-1)-th layer difference-of-Gaussians image at x̂.
S2202: determining the orientation parameter corresponding to each feature point, and determining, from the orientation parameter, the feature descriptor corresponding to each feature point.
In this step, after each feature point has been obtained, a 16 × 16 window is taken centered on the feature point, and each pixel contained in the window is taken as a key point associated with the feature point.
According to the formulas m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) and θ(x, y) = arctan((L(x, y+1) − L(x, y−1))/(L(x+1, y) − L(x−1, y))), the gradient magnitude m(x, y) and gradient direction θ(x, y) at each key point (x, y) are calculated, where L is the scale-space image corresponding to the key point (x, y) (obtained by the calculation in S2201).
Going clockwise with a span of 45 degrees, 8 direction values are chosen (typically 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4). From the computed gradient magnitude and gradient direction at each key point (the direction being restricted to the 8 chosen values), a histogram is constructed, and the gradient direction corresponding to the peak of the histogram is taken as the principal direction of the feature point.
The coordinate axes are rotated to the principal direction of the feature point, and the feature descriptor F of the feature point is generated from the gradient magnitude and gradient direction at each key point associated with the feature point;
wherein f(n) ∈ F, and f(n), the feature description of the n-th key point associated with the feature point, is determined from the preset weights a₁, a₂, a₃ together with θ_n, the gradient direction at the n-th key point; m_n, the gradient magnitude at the n-th key point; and p_n, the relative position between the n-th key point and the feature point. Here n ∈ [1, N], with N the total number of key points associated with the feature point.
Through the operation of S2202, each feature point can be characterized by a feature descriptor F composed of the gradient directions and gradient magnitudes of the multiple key points around it.
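The gradient computation, 45-degree quantization, and histogram peak described above can be sketched as follows; the 16 × 16 window and the 8 direction bins follow the text, while the function name and the use of `arctan2` (to obtain a full-circle direction rather than the half-circle arctan) are assumptions of the sketch.

```python
import numpy as np

def principal_direction(L, cx, cy, win=16):
    # Gradient magnitude/direction over a win x win window centered on
    # (cx, cy) in the scale-space image L, accumulated into 8 bins of 45
    # degrees; the peak bin gives the principal direction of the feature point.
    half = win // 2
    hist = np.zeros(8)
    for y in range(cy - half, cy + half):
        for x in range(cx - half, cx + half):
            dx = L[y, x + 1] - L[y, x - 1]
            dy = L[y + 1, x] - L[y - 1, x]
            m = np.hypot(dx, dy)                       # gradient magnitude m(x, y)
            theta = np.arctan2(dy, dx) % (2 * np.pi)   # direction in [0, 2*pi)
            hist[int(theta // (np.pi / 4)) % 8] += m   # magnitude-weighted 45-degree bins
    return (np.pi / 4) * np.argmax(hist)
```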
Through S2201 and S2202 above, the feature descriptors corresponding to each plane-determination image can be generated (typically, the feature descriptor corresponding to one feature point is a 128-dimensional vector). The feature descriptors in two plane-determination images with adjacent shooting times can then be matched (typically by computing Euclidean distance); if a preset matching condition is met, the two plane-determination images contain the same feature point, which can then be marked in each of the two plane-determination images.
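A minimal sketch of the Euclidean-distance matching step: the mutual-nearest-neighbour check and the `max_dist` cutoff below stand in for the "preset matching condition", which the text leaves unspecified.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, max_dist=0.5):
    # Match feature descriptors between two plane-determination images
    # by Euclidean distance; a pair (i, j) is accepted when each
    # descriptor is the other's nearest neighbour and their distance is
    # below max_dist (an assumed stand-in for the preset condition).
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i in range(len(desc_a)):
        j = int(np.argmin(d[i]))
        if d[i, j] < max_dist and int(np.argmin(d[:, j])) == i:
            matches.append((i, j))
    return matches
```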
S230: obtaining the motion feature parameters of the same feature points from their positions in the different plane-determination images.
In this embodiment, the motion feature parameters can include the movement direction, movement speed, and displacement of the same feature point.
Accordingly, in an optional implementation of this embodiment, obtaining the motion feature parameters of a same feature point from its positions in the different plane-determination images can include:
calculating the movement direction and displacement value of the same feature point from its positions in the different plane-determination images; and
calculating the movement speed of the same feature point from the shooting times of the different plane-determination images and the displacement value.
Fig. 2b shows a schematic diagram of the positions of the feature points (feature point 1, feature point 2, and feature point 3) at time t-1 in the mixed reality virtual presentation method provided by Embodiment 2 of the invention, and Fig. 2c shows a schematic diagram of the feature-point positions at time t.
Taking feature point 1 as an example: its position in the plane-determination image acquired at time t-1 and its position in the plane-determination image acquired at time t are determined; the movement direction and displacement value of feature point 1 are determined from the difference between the two positions; and its movement speed is then calculated from the displacement value and the time difference Δt between t-1 and t.
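The direction/displacement/speed computation for one matched feature point can be illustrated as below; the function name and the convention of reporting direction in image-coordinate radians are the sketch's own.

```python
import math

def motion_parameters(p_prev, p_curr, dt):
    # Movement direction, displacement value, and movement speed of one
    # matched feature point between two plane-determination images shot
    # dt seconds apart (positions as (x, y) pixel coordinates).
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    displacement = math.hypot(dx, dy)
    direction = math.atan2(dy, dx)    # radians, in image coordinates
    speed = displacement / dt
    return direction, displacement, speed
```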
S240: drawing, from the motion feature parameters, the spatial grid planes contained in the user's field of view.
In an optional implementation of this embodiment, drawing the spatial grid planes contained in the user's field of view from the motion feature parameters can include:
calculating the spatial grid attribute structure p_{t+1} at time t+1 from F_t, the feature descriptor of the feature point at time t; q_t, the confidence weight of the feature point at time t, determined from the similarity distance of the feature point in the at least two plane-determination images; v_t, the movement speed of the feature point at time t; w_t, its movement direction; n_t, its displacement value; and Δt, the time increment between times t+1 and t, where (v_t, n_t)Δt is the feature-descriptor increment function and ((w_t, n_t)Δt) is the confidence value increase function; and
drawing, from the spatial grid attribute structure at time t+1, the spatial grid planes contained in the user's field of view at time t+1.
It should be noted that, to draw the spatial grid planes of the user at time t+1, the spatial grid attribute structure p_{t+1} must first be generated from the feature descriptors of one or more feature points of the user at time t, together with the movement directions, displacement values, and movement speeds of those feature points.
When the same feature point contained in two plane-determination images is calculated, it is determined from the similarity distance between the feature points contained in the two plane-determination images. In fact, a same feature point determined by similarity distance may be correct or may be wrong. Therefore, when the spatial grid attribute structure is determined, the concept of the confidence weight of a feature point is introduced: when a same feature point is obtained, the closer the similarity distance between the two feature points, the higher the confidence weight of the feature point.
(v_t, n_t)Δt is the feature-descriptor increment function and ((w_t, n_t)Δt) is the confidence value increase function; both can be defined to suit the actual situation. The feature-descriptor increment function is determined by the three parameters v_t, n_t, and Δt; the confidence value increase function is determined by the three parameters w_t, n_t, and Δt.
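Since the text leaves both increment functions user-defined, one update step of the attribute structure can only be sketched with the functions passed in; the `GridPoint` container, the additive update, and the example functions in the usage below are all hypothetical illustrations, not the method's own definitions.

```python
from dataclasses import dataclass

@dataclass
class GridPoint:
    descriptor: tuple   # feature descriptor F_t
    confidence: float   # confidence weight q_t

def update_grid_point(p, v_t, w_t, n_t, dt,
                      descriptor_increment, confidence_increase):
    # One step of the attribute-structure update from time t to t+1:
    # the descriptor grows by the (v_t, n_t, dt) increment function and
    # the confidence by the (w_t, n_t, dt) increase function.
    new_desc = tuple(f + descriptor_increment(v_t, n_t, dt)
                     for f in p.descriptor)
    new_conf = p.confidence + confidence_increase(w_t, n_t, dt)
    return GridPoint(new_desc, new_conf)
```

For example, with placeholder functions `lambda v, n, dt: v * n * dt` and `lambda w, n, dt: 0.1 * dt`, a point with descriptor (1.0, 2.0) and confidence 0.5 advances by one Δt step.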
S250: determining, from the spatial grid planes, a virtual scene corresponding to a 3D object to be displayed.
S260: outputting the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene acquired through the mixed reality glasses.
Fig. 2d shows a schematic diagram of the distribution of the extracted image feature points in the mixed reality virtual presentation method provided by Embodiment 2 of the invention; Fig. 2e shows a schematic diagram of a spatial grid plane determined in that method; and Fig. 2f shows a schematic diagram of another spatial grid plane determined in that method.
Traditional mixed reality devices are costly, technically complex, or lacking in realism, which hinders large-scale adoption. Compared with the prior art, the technical scheme of this embodiment is a low-cost, highly realistic mixed reality virtual presentation method: spatial planes are built from the camera and fused with the virtual image, and the strong sense of reality of the virtual image to some extent deceives the human brain, making real and virtual scenes hard to distinguish, so that the experience is better in industry applications such as education and games. The low cost of the scheme also favors popular adoption.
Embodiment 3
Fig. 3a is a flowchart of a mixed reality virtual presentation method provided by Embodiment 3 of the invention. This embodiment is further optimized on the basis of the above embodiments. In this embodiment, determining the virtual scene corresponding to the 3D object to be displayed from the spatial grid planes is further refined;
before the virtual scene is output to the mixed reality glasses, the method further preferably includes: masking the video captured by the camera and the spatial grid planes drawn;
meanwhile, outputting the virtual scene to the mixed reality glasses is specifically refined as: decomposing the virtual scene into two-channel video content, and outputting the two-channel video content to the different lenses of the mixed reality glasses.
Accordingly, the method of this embodiment specifically includes the following steps:
S310: performing moving capture with a camera to obtain at least two plane-determination images matching the user's field of view, wherein the camera is mounted on the mixed reality glasses worn by the user.
S320: drawing, from the at least two plane-determination images, the spatial grid planes contained in the user's field of view.
S330: According to the size data of the 3D object to be displayed and the size of the space lattice plane, control the scaling of the 3D object to be displayed, superimpose the scaled 3D object on the space lattice plane, and obtain the space plane coordinates corresponding to the 3D object to be displayed.
Specifically, the scaling of the 3D object to be displayed is controlled according to its length data and the size of the drawn space lattice plane; the 3D object is then superimposed on the space lattice plane, and collision detection is performed on it, to prevent the 3D object from penetrating the space planes of the real scene or nesting into other objects in the real scene, which would impair the authenticity of the mixed reality scene.
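As a concrete illustration of the scaling and collision check in S330, the sketch below fits an object's bounding box onto the grid plane and tests axis-aligned boxes for penetration. The function names, the `margin` fraction, and the use of axis-aligned bounding boxes are illustrative assumptions rather than the embodiment's exact procedure.

```python
import numpy as np

def fit_object_to_plane(obj_size, plane_size, margin=0.8):
    """Uniform scale factor so the object's footprint fits the grid plane.

    obj_size:   (w, h, d) bounding-box size of the 3D object to display
    plane_size: (W, D) extent of the space lattice plane
    margin:     fraction of the plane the object may occupy (assumed value)
    """
    w, h, d = obj_size
    W, D = plane_size
    # Uniform scaling preserves the object's shape while fitting its footprint.
    return min(margin * W / w, margin * D / d)

def aabb_overlaps(box_a, box_b):
    """Axis-aligned bounding-box test: True if the scaled object's box would
    penetrate another box (a real-scene object or plane)."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return bool(np.all(np.asarray(amax) >= np.asarray(bmin)) and
                np.all(np.asarray(bmax) >= np.asarray(amin)))
```

A positive overlap result would trigger repositioning or re-scaling of the object before it is composited.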
S340: According to the mapping relationship between space plane coordinates and virtual coordinates, and the space plane coordinates corresponding to the 3D object to be displayed, determine the virtual coordinates of the 3D object to be displayed, so as to generate the virtual scene corresponding to the 3D object to be displayed.
Fig. 3b shows a schematic diagram of a 3D object to be displayed after it is superimposed on the space lattice plane.
Specifically, let the virtual coordinates of a point P of the virtual scene in the virtual coordinate system be (X, Y, Z); the transformation between these and the point's coordinates (X_O, Y_O, Z_O) in the space plane coordinate system satisfies:
(X, Y, Z)^T = l·R(φ, γ, k)·(X_O, Y_O, Z_O)^T + (X_T, Y_T, Z_T)^T
where X_T, Y_T, Z_T and φ, γ, k denote the position and orientation angles of the camera in real space, R(φ, γ, k) is the corresponding rotation matrix, and l is the zoom scale. In the virtual coordinate system, the imaging of the virtual scene is transformed by driving the position and angle of the virtual camera, so as to guarantee that the perspective projection relationships match when the virtual scene image is composited with the real scene image. Further, the relation between the coordinate value M of a 3D scene point P and the coordinate value m of its perspective projection point P1 on the image plane is:
m = C·[R | T]·M
where C is the camera parameter matrix containing the camera intrinsics, namely the camera focal coordinates (f_x, f_y) and the optical centre coordinates (c_x, c_y); R is the camera rotation matrix; and T is the camera translation matrix.
Through the above formulas, the virtual coordinates of the 3D object to be displayed can be determined from the space plane coordinates corresponding to it, and the virtual scene corresponding to the 3D object to be displayed can then be generated.
S350: Shield the video shot by the camera and the drawn space lattice plane.
S360: Decompose the virtual scene into two channels of video content, and output the two channels of video content to the different lenses of the mixed reality glasses.
Fig. 3c is a schematic diagram, in the mixed reality virtual rendering method provided by Embodiment Three of the present invention, of outputting the virtual scene through two channels to the different lenses of the mixed reality glasses.
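The two-channel decomposition of S360 can be caricatured with a horizontal-shift stereo pair. A real implementation would render the scene from two virtual camera positions; the `baseline_px` disparity here is an assumed placeholder for that offset.

```python
import numpy as np

def decompose_stereo(scene, baseline_px=8):
    """Split one rendered virtual-scene frame into a left/right channel pair
    by shifting horizontally. `scene` is an HxWxC array; baseline_px is the
    assumed total pixel disparity between the two eyes."""
    half = baseline_px // 2
    left = np.roll(scene, half, axis=1)     # left-eye channel
    right = np.roll(scene, -half, axis=1)   # right-eye channel
    return left, right
```

Each returned channel is then sent to the corresponding lens of the glasses.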
The present embodiment determines the space planes corresponding to the real scene from the images acquired by the camera, and blends the 3D object with the space planes to form a virtual image. The virtual image thus formed has a strong sense of reality, achieving to a certain extent the purpose of deceiving the human brain: it becomes difficult for people to distinguish the real scene from the virtual scene, and the cost of implementation can be reduced.
Embodiment Four
Fig. 4 is a structural schematic diagram of a mixed reality virtual presentation device provided by Embodiment Four of the present invention. As shown in Fig. 4, the device includes:
a plane determination image acquisition module 410, configured to move and shoot with a camera to obtain at least two plane determination images matching the user's field of view, where the camera is arranged on the mixed reality glasses worn by the user;
a space lattice plane drawing module 420, configured to draw the space lattice plane contained in the user's field of view according to the at least two plane determination images;
a virtual scene determination module 430, configured to determine, according to the space lattice plane, the virtual scene corresponding to the 3D object to be displayed;
a virtual scene output module 440, configured to output the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene obtained through the mixed reality glasses.
In the embodiment of the present invention, at least two plane determination images matching the user's field of view are obtained by moving and shooting with the camera; the space lattice plane contained in the user's field of view is drawn according to the at least two plane determination images; the virtual scene corresponding to the 3D object to be displayed is determined according to the space lattice plane; and the virtual scene is output to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene obtained through the mixed reality glasses. The virtual image thus formed has a strong sense of reality, achieving to a certain extent the purpose of deceiving the human brain and making it difficult for people to distinguish the real scene from the virtual scene; compared with existing devices such as Google Glass and Microsoft HoloLens, the cost of implementation is low.
On the basis of the above embodiments, the space lattice plane drawing module may include:
a feature point determination unit, configured to extract feature points from each of the at least two plane determination images, and to mark the positions of the same feature points in the at least two plane determination images;
a motion feature parameter acquisition unit, configured to obtain the motion feature parameters of the same feature points according to their positions in different plane determination images;
a space lattice plane drawing unit, configured to draw the space lattice plane contained in the user's field of view according to the motion feature parameters.
On the basis of the above embodiments, the feature point determination unit may specifically include:
a difference-of-Gaussian image generation subunit, configured to generate difference-of-Gaussian images from at least two scale-space images corresponding to the plane determination image, and to detect extreme points in the difference-of-Gaussian images as feature points;
a feature descriptor determination subunit, configured to determine the direction parameter corresponding to each feature point and, according to the direction parameter, determine the feature descriptor corresponding to each feature point.
On the basis of the above embodiments, the difference-of-Gaussian image generation subunit may specifically be configured to:
according to the formulas L(x, y, σ) = G(x, y, σ) * I(x, y) and G(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)), perform multi-scale space conversion on the plane determination image I(x, y) to generate at least two scale-space images L(x, y, σ), where (x, y) are the two-dimensional coordinates of a pixel in the image, σ is the scale-space factor, and Gaussian functions G(x, y, σ) with different σ generate different scale-space images;
subtract adjacent scale-space images to obtain the difference-of-Gaussian images D(x, y), with the different difference-of-Gaussian images forming the corresponding layers of a difference-of-Gaussian pyramid;
according to the formula D_l(X) = D_l + (∂D_lᵀ/∂X)·X + (1/2)·Xᵀ·(∂²D_l/∂X²)·X, take the derivative of the generated difference-of-Gaussian images, and take the extreme point X̂ = −(∂²D_l/∂X²)⁻¹·(∂D_l/∂X) at which the derivative equals 0 as an alternative feature point, where D_l(x, y) denotes the difference-of-Gaussian image at layer l of the difference-of-Gaussian pyramid;
calculate D_l(X̂) = D_l + (1/2)·(∂D_lᵀ/∂X)·X̂; if the calculated |D_l(X̂)| exceeds a preset threshold, retain the corresponding alternative feature point X̂ as a feature point.
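A minimal sketch of the scale-space and difference-of-Gaussian steps above follows. The Gaussian kernel radius, the 3 × 3 neighbourhood extremum test, and the contrast threshold value are assumptions, and the subpixel refinement of the subunit is omitted.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian G(x, sigma), normalised; applied separably for 2-D blur."""
    if radius is None:
        radius = int(3 * sigma)            # assumed truncation radius
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y) via separable convolution."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def dog_pyramid(img, sigmas):
    """Difference-of-Gaussian layers D = L(sigma_{i+1}) - L(sigma_i)."""
    levels = [blur(img, s) for s in sigmas]
    return [levels[i + 1] - levels[i] for i in range(len(levels) - 1)]

def local_extrema(D, thresh=0.01):
    """Candidate feature points: pixels that are extrema of a DoG layer over
    their 3x3 spatial neighbourhood, with |D| above an assumed threshold."""
    pts = []
    for y in range(1, D.shape[0] - 1):
        for x in range(1, D.shape[1] - 1):
            patch = D[y - 1:y + 2, x - 1:x + 2]
            v = D[y, x]
            if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                pts.append((x, y))
    return pts
```

A full implementation would also compare each candidate against the layers above and below in the pyramid before the subpixel step.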
On the basis of the above embodiments, the feature descriptor determination subunit may specifically be configured to:
take a 16 × 16 window centred on the feature point, and take each pixel contained in the window as a key point associated with the feature point;
according to the formulas m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) and θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1))/(L(x+1, y) − L(x−1, y))), calculate the gradient modulus m(x, y) and gradient direction θ(x, y) at each key point (x, y), where L is the scale-space image corresponding to the key point (x, y);
in a clockwise direction, with 45-degree angles as the span, choose 8 direction values; construct a histogram from the calculated gradient modulus and gradient direction at each key point, and take the gradient direction corresponding to the peak of the histogram as the principal direction of the feature point;
rotate the coordinate axes to the principal direction of the feature point, and generate the feature descriptor F of the feature point from the gradient modulus and gradient direction at each key point associated with the feature point;
where f(n) ∈ F, f(n) = a₁·θ_n + a₂·m_n + a₃·p_n, f(n) is the feature description of the n-th key point associated with the feature point; a₁, a₂, a₃ are preset weights; θ_n is the gradient direction at the n-th key point, m_n is the gradient modulus at the n-th key point, and p_n is the relative position between the n-th key point and the feature point.
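The gradient and principal-direction computation described above can be sketched as follows; the bin-centre convention for the returned angle is an assumption, and `atan2` is used so the full circle of directions is resolved.

```python
import numpy as np

def grad_mag_dir(L, x, y):
    """Gradient modulus m(x, y) and direction theta(x, y) from central
    differences of the scale-space image L, as in the formulas above."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    return np.hypot(dx, dy), np.arctan2(dy, dx)

def principal_direction(L, x, y, half=8):
    """8-bin (45-degree) orientation histogram over a 16x16 window centred on
    the feature point; the peak bin gives the principal direction (degrees)."""
    hist = np.zeros(8)
    for j in range(y - half, y + half):
        for i in range(x - half, x + half):
            m, theta = grad_mag_dir(L, i, j)
            bin_ = int(((theta + np.pi) / (2 * np.pi)) * 8) % 8
            hist[bin_] += m                     # magnitude-weighted votes
    return hist.argmax() * 45.0 - 180.0 + 22.5  # assumed bin-centre convention
```

The descriptor itself would then be assembled from these per-key-point values after rotating the window to the principal direction.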
On the basis of the above embodiments, the motion feature parameter acquisition unit may specifically be configured to:
calculate the moving direction and shift value of the same feature point according to its positions in different plane determination images;
calculate the moving speed of the same feature point according to the shooting times of the different plane determination images and the shift value.
On the basis of the above embodiments, the space lattice plane drawing unit may specifically be configured to:
according to the formula p_{t+1} = (F_t + (v_t, n_t)·Δt, q_t + (w_t, n_t)·Δt), calculate the space lattice attribute structure p_{t+1} at time t+1;
where F_t is the feature descriptor of the feature point at time t; q_t is the confidence weight of the feature point at time t, the confidence weight being determined according to the similarity distance of the feature point in the at least two plane determination images; v_t is the moving speed of the feature point at time t; w_t is the moving direction of the feature point at time t; n_t is the shift value of the feature point at time t; Δt is the time increment between time t+1 and time t; (v_t, n_t)·Δt is the descriptor increment function; and (w_t, n_t)·Δt is the confidence value increase function;
draw, according to the space lattice attribute structure at time t+1, the space lattice plane contained in the user's field of view at time t+1.
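The motion-feature computation and one step of the grid-attribute recursion can be sketched as follows. Because the text does not fully specify the increment functions, linear placeholder forms with assumed gains `k_f` and `k_q` are used here.

```python
import numpy as np

def motion_features(p1, p2, t1, t2):
    """Moving direction w_t, shift value n_t, and speed v_t of the same
    feature point seen at p1 (time t1) and p2 (time t2) in two images."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    shift = float(np.linalg.norm(d))             # n_t
    direction = float(np.arctan2(d[1], d[0]))    # w_t
    speed = shift / (t2 - t1)                    # v_t
    return direction, shift, speed

def update_grid_attribute(F_t, q_t, v_t, w_t, n_t, dt, k_f=1.0, k_q=1.0):
    """One step of p_{t+1} = (F_t + increment, q_t + increment); the linear
    forms and gains k_f, k_q are illustrative placeholders."""
    F_next = F_t + k_f * v_t * n_t * dt          # descriptor increment
    q_next = q_t + k_q * w_t * n_t * dt          # confidence increase
    return F_next, q_next
```

Iterating the update over successive frames refines the grid attributes from which the plane is drawn.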
On the basis of the above embodiments, the virtual scene determination module may specifically be configured to:
control the scaling of the 3D object to be displayed according to its size data and the size of the space lattice plane, superimpose the scaled 3D object on the space lattice plane, and obtain the space plane coordinates corresponding to the 3D object to be displayed;
determine the virtual coordinates of the 3D object to be displayed according to the mapping relationship between space plane coordinates and virtual coordinates and the space plane coordinates corresponding to the 3D object, so as to generate the virtual scene corresponding to the 3D object to be displayed.
On the basis of the above embodiments, the device may further include a shielding module, configured to shield the video shot by the camera and the drawn space lattice plane before the virtual scene is output to the mixed reality glasses.
On the basis of the above embodiments, the virtual scene output module may specifically be configured to decompose the virtual scene into two channels of video content and output the two channels of video content to the different lenses of the mixed reality glasses.
The mixed reality virtual presentation device provided in the above embodiment can execute the mixed reality virtual rendering method provided by any embodiment of the present invention, and possesses the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiment, reference may be made to the mixed reality virtual rendering method provided by any embodiment of the present invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus the necessary common hardware; it can of course also be implemented purely in hardware, but in many cases the former is the better implementation. Based on this understanding, the technical scheme of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a floppy disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), flash memory (FLASH), hard disk, or optical disc of a computer, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not restricted to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them and, without departing from the inventive concept, may include other equivalent embodiments; the scope of the present invention is determined by the scope of the appended claims.
Claims (11)
1. A mixed reality virtual rendering method, characterised by including:
moving and shooting with a camera to obtain at least two plane determination images matching the user's field of view, where the camera is arranged on the mixed reality glasses worn by the user;
drawing, according to the at least two plane determination images, the space lattice plane contained in the user's field of view;
determining, according to the space lattice plane, the virtual scene corresponding to the 3D object to be displayed;
outputting the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene obtained through the mixed reality glasses.
2. The method according to claim 1, characterised in that drawing, according to the at least two plane determination images, the space lattice plane contained in the user's field of view includes:
extracting feature points from each of the at least two plane determination images, and marking the positions of the same feature points in the at least two plane determination images;
obtaining the motion feature parameters of the same feature points according to their positions in different plane determination images;
drawing, according to the motion feature parameters, the space lattice plane contained in the user's field of view.
3. The method according to claim 2, characterised in that extracting feature points from a plane determination image specifically includes:
generating difference-of-Gaussian images from at least two scale-space images corresponding to the plane determination image, and detecting extreme points in the difference-of-Gaussian images as feature points;
determining the direction parameter corresponding to each feature point, and determining, according to the direction parameter, the feature descriptor corresponding to each feature point.
4. The method according to claim 3, characterised in that generating difference-of-Gaussian images from at least two scale-space images corresponding to the plane determination image and detecting extreme points in the difference-of-Gaussian images as feature points includes:
according to the formulas L(x, y, σ) = G(x, y, σ) * I(x, y) and G(x, y, σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)), performing multi-scale space conversion on the plane determination image I(x, y) to generate at least two scale-space images L(x, y, σ), where (x, y) are the two-dimensional coordinates of a pixel in the image, σ is the scale-space factor, and Gaussian functions G(x, y, σ) with different σ generate different scale-space images;
subtracting adjacent scale-space images to obtain the difference-of-Gaussian images D(x, y), with the different difference-of-Gaussian images corresponding to the different layers of a difference-of-Gaussian pyramid;
according to the formula D_l(X) = D_l + (∂D_lᵀ/∂X)·X + (1/2)·Xᵀ·(∂²D_l/∂X²)·X, taking the derivative of the generated difference-of-Gaussian images, and taking the extreme point X̂ at which the derivative equals 0 as an alternative feature point, where D_l(x, y) denotes the difference-of-Gaussian image at layer l of the difference-of-Gaussian pyramid;
calculating D_l(X̂); if the calculated |D_l(X̂)| exceeds a preset threshold, retaining the corresponding alternative feature point X̂ as a feature point;
where D(X̂) denotes the difference-of-Gaussian image at the feature point X̂; D_l(X̂) denotes the difference-of-Gaussian image at layer l of the difference-of-Gaussian pyramid at the feature point X̂; and D_{l−1}(X̂) denotes the difference-of-Gaussian image at layer l−1 of the difference-of-Gaussian pyramid at the feature point X̂.
5. The method according to claim 3 or 4, characterised in that determining the direction parameter corresponding to each feature point and determining, according to the direction parameter, the feature descriptor corresponding to each feature point includes:
taking a 16 × 16 window centred on the feature point, and taking each pixel contained in the window as a key point associated with the feature point;
according to the formulas m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) and θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1))/(L(x+1, y) − L(x−1, y))), calculating the gradient modulus m(x, y) and gradient direction θ(x, y) at each key point (x, y), where L is the scale-space image corresponding to the key point (x, y);
in a clockwise direction, with 45-degree angles as the span, choosing 8 direction values; constructing a histogram from the calculated gradient modulus and gradient direction at each key point, and taking the gradient direction corresponding to the peak of the histogram as the principal direction of the feature point;
rotating the coordinate axes to the principal direction of the feature point, and generating the feature descriptor F of the feature point from the gradient modulus and gradient direction at each key point associated with the feature point;
where f(n) ∈ F, f(n) = a₁·θ_n + a₂·m_n + a₃·p_n, f(n) is the feature description of the n-th key point associated with the feature point; a₁, a₂, a₃ are preset weights; θ_n is the gradient direction at the n-th key point, m_n is the gradient modulus at the n-th key point, and p_n is the relative position between the n-th key point and the feature point.
6. The method according to claim 2, characterised in that obtaining the motion feature parameters of the same feature points according to their positions in different plane determination images includes:
calculating the moving direction and shift value of the same feature point according to its positions in different plane determination images;
calculating the moving speed of the same feature point according to the shooting times of the different plane determination images and the shift value.
7. The method according to claim 6, characterised in that drawing, according to the motion feature parameters, the space lattice plane contained in the user's field of view includes:
according to the formula p_{t+1} = (F_t + (v_t, n_t)·Δt, q_t + (w_t, n_t)·Δt), calculating the space lattice attribute structure p_{t+1} at time t+1;
where F_t is the feature descriptor of the feature point at time t; q_t is the confidence weight of the feature point at time t, the confidence weight being determined according to the similarity distance of the feature point in the at least two plane determination images; v_t is the moving speed of the feature point at time t; w_t is the moving direction of the feature point at time t; n_t is the shift value of the feature point at time t; Δt is the time increment between time t+1 and time t; (v_t, n_t)·Δt is the descriptor increment function; and (w_t, n_t)·Δt is the confidence value increase function;
drawing, according to the space lattice attribute structure at time t+1, the space lattice plane contained in the user's field of view at time t+1.
8. The method according to claim 1, characterised in that determining, according to the space lattice plane, the virtual scene corresponding to the 3D object to be displayed includes:
controlling the scaling of the 3D object to be displayed according to its size data and the size of the space lattice plane, superimposing the scaled 3D object on the space lattice plane, and obtaining the space plane coordinates corresponding to the 3D object to be displayed;
determining the virtual coordinates of the 3D object to be displayed according to the mapping relationship between space plane coordinates and virtual coordinates and the space plane coordinates corresponding to the 3D object, so as to generate the virtual scene corresponding to the 3D object to be displayed.
9. The method according to claim 1, characterised in that, before the virtual scene is output to the mixed reality glasses, the method further includes:
shielding the video shot by the camera and the drawn space lattice plane.
10. The method according to claim 9, characterised in that outputting the virtual scene to the mixed reality glasses includes:
decomposing the virtual scene into two channels of video content, and outputting the two channels of video content to the different lenses of the mixed reality glasses.
11. A mixed reality virtual presentation device, characterised by including:
a plane determination image acquisition module, configured to move and shoot with a camera to obtain at least two plane determination images matching the user's field of view, where the camera is arranged on the mixed reality glasses worn by the user;
a space lattice plane drawing module, configured to draw the space lattice plane contained in the user's field of view according to the at least two plane determination images;
a virtual scene determination module, configured to determine, according to the space lattice plane, the virtual scene corresponding to the 3D object to be displayed;
a virtual scene output module, configured to output the virtual scene to the mixed reality glasses, so that the user observes the mixed reality scene formed by fusing the virtual scene with the real scene obtained through the mixed reality glasses.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710140525.9A CN106997617A (en) | 2017-03-10 | 2017-03-10 | The virtual rendering method of mixed reality and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106997617A true CN106997617A (en) | 2017-08-01 |
Family
ID=59431413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710140525.9A Pending CN106997617A (en) | 2017-03-10 | 2017-03-10 | The virtual rendering method of mixed reality and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106997617A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109427094A (en) * | 2017-08-28 | 2019-03-05 | 福建天晴数码有限公司 | A kind of method and system obtaining mixed reality scene |
CN109427094B (en) * | 2017-08-28 | 2022-10-21 | 福建天晴数码有限公司 | Method and system for acquiring mixed reality scene |
CN108227920A (en) * | 2017-12-26 | 2018-06-29 | 中国人民解放军陆军航空兵学院 | Move enclosure space method for tracing and tracing system |
CN108227920B (en) * | 2017-12-26 | 2021-05-11 | 中国人民解放军陆军航空兵学院 | Motion closed space tracking method and system |
CN110276836A (en) * | 2018-03-13 | 2019-09-24 | 幻视互动(北京)科技有限公司 | A kind of method and MR mixed reality intelligent glasses accelerating characteristic point detection |
CN112150885B (en) * | 2019-06-27 | 2022-05-17 | 统域机器人(深圳)有限公司 | Cockpit system based on mixed reality and scene construction method |
CN112150885A (en) * | 2019-06-27 | 2020-12-29 | 统域机器人(深圳)有限公司 | Cockpit system based on mixed reality and scene construction method |
CN111915736A (en) * | 2020-08-06 | 2020-11-10 | 黄得锋 | AR interaction control system, device and application |
CN112233172A (en) * | 2020-09-30 | 2021-01-15 | 北京零境科技有限公司 | Video penetration type mixed reality method, system, readable storage medium and electronic equipment |
CN113842227A (en) * | 2021-09-03 | 2021-12-28 | 上海涞秋医疗科技有限责任公司 | Medical auxiliary three-dimensional model positioning matching method, system, equipment and medium |
CN113842227B (en) * | 2021-09-03 | 2024-04-05 | 上海涞秋医疗科技有限责任公司 | Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium |
RU211700U1 (en) * | 2021-10-12 | 2022-06-17 | Общество с ограниченной ответственностью "Хайтек-Склад" | Mixed Reality Formation Helmet (MR Helmet) |
CN118070554A (en) * | 2024-04-16 | 2024-05-24 | 北京天创凯睿科技有限公司 | Flight simulation mixed reality display system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170801 |