CN113205584A - Multi-view projection method based on Bezier curve - Google Patents
Multi-view projection method based on Bezier curve
- Publication number
- CN113205584A (application number CN202110413937.1A)
- Authority
- CN
- China
- Prior art keywords
- projection
- user
- point
- curve
- view
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Mathematical Analysis (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Algebra (AREA)
- Computer Hardware Design (AREA)
- Mathematical Optimization (AREA)
- Mathematical Physics (AREA)
- Pure & Applied Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a Bezier-curve-based multi-view projection method which allows two users in a virtual scene to share a consistent view. The method comprises the following steps. First, a target point is selected as the working-area center O, the positions of the two users are taken as projection viewpoints V1 and V2, two endpoints P1 and P2 of the Bezier curve are selected on V1O and V2O, control points C1 and C2 are computed in combination with the working-area center O, and a cubic Bezier curve is constructed from P1, C1, C2, P2. The control points C1 and C2 of the Bezier curve are then optimized within the user's view volume so that the curve is as smooth as possible. Finally, the collaborator's position V2 is taken as the projection point of a far projection plane with P2 as its visual center, and the user's position V1 as the projection point of a near projection plane with P1 as its center; the scene is divided into three areas, namely a working area, a transition area and a user area, a new projection algorithm is designed, and the collaborator's view of the working area is provided to the user with only very slight error.
Description
Technical Field
The invention relates to the field of images, in particular to a multi-view projection method based on a Bezier curve.
Background
Consider an interactive virtual-reality scenario in which a user works side by side with a collaborator. The collaborator must be able to indicate a workspace location to the user so that both can see it while working together. Because the two have different viewpoints, the collaborator may see a part of the scene that the user cannot see due to occlusion, which blocks communication: the collaborator points at that part of the scene with a virtual pointer, but the pointed-at location is not visible to the user. One possible solution is to let the user move freely until the virtual pointing point becomes visible, but this takes time and effort and makes the cooperation inefficient. Another possible solution is to let the user switch between his own view and the collaborator's view; however, this has the disadvantage that the user does not see the collaborator's avatar while using the collaborator's view (for example, gestures made by the collaborator), so the user must switch back and forth between the two views at the right moments, and the communication becomes more complicated.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a novel multi-view projection camera model based on a Bezier curve, which improves cooperation efficiency by reducing the difference between what the user and the collaborator observe in a virtual-reality scene. With this method, a multi-view projection of the virtual-reality scene is presented to the user, and the user's view transitions gradually into the collaborator's view. The Bezier-curve-based multi-view projection camera model divides the virtual scene into a working area, a transition area and a user area. The working area is the region where the user and the collaborator work together; the user area is the part closest to the user and includes the collaborator's avatar; the transition area connects the working area and the user area. In this way, no matter where the collaborator's virtual indication point lies in the working area, as long as the collaborator can see it, the user can also see it after the scene is projectively transformed by the proposed camera model; furthermore, the user can view the collaborator as needed by rotating the head, for example to observe the collaborator's posture. The Bezier-curve-based camera model is a new type of camera model with curved light rays that stitches the user's and collaborator's views together to generate the multi-view projection. The virtual scene is first warped according to this camera, and the warped scene is then rendered for each eye of the user by a traditional projection method.
The technical scheme adopted by the invention is as follows: a multi-view projection method based on a Bezier curve, wherein the Bezier-curve-based multi-view projection camera model is called an exchange camera; the method mainly comprises the following three steps:
step (1), calculating the parameters for constructing the exchange camera: the view provider selects the center O of the workspace; the viewpoint position V2 of the view provider and the viewpoint position V1 of the view receiver, at the same height, form a projection horizontal plane π; P2 is selected between V2 and O at a fixed proportion, and P1 is selected between V1 and O at a fixed proportion; P2 and P1 are the two endpoints of the Bezier curve;
step (2), optimizing the Bezier curve: control point C2 lies on the ray from P2 toward V2, and control point C1 lies on the ray from P1 toward O; 150 candidate positions for C2 are taken equidistantly along the direction P2V2, and 100 candidate positions for C1 along P1O; combining them pairwise yields 150 × 100 different Bezier curves. Each curve is then evaluated, and the curve with the best score is selected as the projection curve of the exchange camera. To evaluate a curve, the view volume with V1 as the viewpoint is computed; starting from P2 and ending at P1, a certain number of sample points are taken in order along the Bezier curve, and the normal nk through each sample point (i.e. the straight line perpendicular to the tangent) is computed in turn. For each normal, its intersections with the user's view volume are computed and the number of erroneous intersections ek is counted; the score of the evaluated curve is computed from the sum of all erroneous intersections. Finally, the curve with the highest score is selected as the optimal solution;
step (3), projecting the input vertices with the exchange camera: for each input vertex Q of the geometric patches to be rendered, the region to which Q belongs is determined, and a different projection strategy is adopted for each region. Specifically:
for a vertex in the user area, no change is needed: using the user's camera parameters, with V1 as the viewpoint, it is projected directly by perspective projection to obtain the projection point Qp;
for a vertex in the working area, likewise no change is needed: using the collaborator's camera parameters, with V2 as the projection viewpoint, it is projected in the traditional way to obtain the projection point Qp;
for a vertex in the transition region, projection must rely on the Bezier curve in order to produce a continuous change between the working area and the user area. Denote the input vertex by Q. First, the vertical plane containing the point Q is found on the Bezier curve by binary search over the curve parameter t; a value of t satisfying the condition is computed, giving the vertical plane πt together with the corresponding point Pt on the Bezier curve and the normal nt. Then the coordinates (xt, yt) of Q are computed in the plane πt, in the coordinate system with Pt as origin, and (xt, yt) is scaled down to obtain the coordinates (x1, y1) of Q1 on π1, in the coordinate system with P1 as origin. The scale factor f is computed as:
f = |V1P1| / (|V2P2| + (|V1P1| - |V2P2|)·t), where |V1P1| and |V2P2| denote the lengths of the segments V1P1 and V2P2;
in the plane π1, with P1 as the origin of the coordinate system, the position of Q1 is computed from its coordinates (x1, y1); the final projection point Qp is then computed in the traditional projection manner using the projection parameters of the user's camera;
for all three types of vertex Q, the projection point Qp takes the depth z of the vertex Q under the user's view as the depth of the back-projection point Q'; finally, Qp is back-projected with the projection parameters of the user's camera to obtain the vertex-transformation result Q'.
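The scale computation of step (3) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; |V1P1| and |V2P2| are assumed to be passed in as precomputed segment lengths.

```python
def scale_factor(len_v1p1: float, len_v2p2: float, t: float) -> float:
    """f = |V1P1| / (|V2P2| + (|V1P1| - |V2P2|) * t); f = 1 at t = 1."""
    return len_v1p1 / (len_v2p2 + (len_v1p1 - len_v2p2) * t)

def map_to_near_plane(xt: float, yt: float,
                      len_v1p1: float, len_v2p2: float, t: float):
    """Scale the in-plane coordinates (xt, yt) of Q on the vertical plane
    pi_t down to the coordinates (x1, y1) of Q1 on the near plane pi_1."""
    f = scale_factor(len_v1p1, len_v2p2, t)
    return f * xt, f * yt
```

At t = 0 the factor reduces to |V1P1| / |V2P2|, the similar-triangles ratio between the near and far viewing distances; at t = 1 it is 1, so points already on π1 are left unchanged.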
The principle of the invention is as follows:
(1) In a conventional projection method, the far plane and the near plane of the view volume are connected by the straight line through the center points of the two planes, which requires the planes to be parallel; this straight line is the central ray of the light. The improvement is to use a cubic Bezier curve as the propagation path of light connecting the far plane and the near plane, so that the two planes no longer need to be parallel; at its two endpoints (P2, P1) the curve is perpendicular to the far plane and the near plane, respectively. Each point in space can therefore be considered to be projected first onto the far plane, then along the Bezier curve onto the near plane, and finally onto the user's viewing plane by a conventional projection method.
(2) A Bezier curve with as little torsion as possible within the visible scene is used as the central projection line, so that the virtual scene in the transition region is distorted less and the scene retains better fidelity.
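The perpendicularity in principle (1) follows directly from the endpoint derivatives of a cubic Bezier curve: B'(0) = 3(C1 - P1) and B'(1) = 3(P2 - C2). The small sketch below (assumed Python, with hypothetical plane normals n1 and n2) illustrates that placing each control point on the normal ray through its endpoint makes the curve meet that plane at a right angle:

```python
import numpy as np

def endpoint_tangents(p1, c1, c2, p2):
    """Endpoint derivatives of a cubic Bezier: B'(0) = 3(c1 - p1),
    B'(1) = 3(p2 - c2); each tangent points along its control leg."""
    return 3.0 * (c1 - p1), 3.0 * (p2 - c2)

# Hypothetical example: control points placed on the plane normals.
n1 = np.array([0.0, 0.0, 1.0])   # assumed near-plane normal at P1
n2 = np.array([0.0, 1.0, 0.0])   # assumed far-plane normal at P2
P1 = np.array([0.0, 0.0, 0.0])
P2 = np.array([0.0, 3.0, 5.0])
C1 = P1 + 0.8 * n1               # control point on the near-plane normal ray
C2 = P2 - 0.8 * n2               # control point on the far-plane normal ray

tan0, tan1 = endpoint_tangents(P1, C1, C2, P2)
# tan0 is parallel to n1 and tan1 to n2, so the curve is perpendicular
# to the near and far planes at its endpoints.
```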
Compared with the prior art, the invention has the advantages that:
1. Compared with the Graph Camera method proposed by Voicu's team, the present method uses a similar-triangles projection algorithm to minimize the difference between the user's and the collaborator's observations of the working-area content; it obtains a projection image of the working area with zero error, so that the user and the collaborator have the same understanding of the scene.
2. Compared with the traditional free movement method, the method can reduce the burden of the user for completing the task and has higher efficiency.
3. Compared with the method for switching the views, the method can enable the user to observe the content of the working area and the content of the user area at the same time without frequently switching the views.
4. The present invention proposes a camera model based on bezier curves that allows the user to smoothly transition to collaborators' views.
Drawings
FIG. 1 is a flow chart of the construction and use of an exchange camera;
FIG. 2 is a schematic diagram of the distribution of parameters of the exchange camera;
FIG. 3 is an exchange camera build pseudo code;
FIG. 4 is a Bezier curve with false intersections;
FIG. 5 is a Bezier curve without false intersections;
FIG. 6 is Bezier curve evaluation pseudocode;
FIG. 7 is an exchange camera projection algorithm pseudo-code;
FIG. 8 is a schematic projection diagram of a transition region;
fig. 9 is a diagram of the actual effect of the method.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only a part of the embodiments of the present invention, rather than all of them; all other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow chart of the construction and use of the exchange camera, and fig. 2 is a schematic diagram of the distribution of parameters of the exchange camera, and the invention is further explained below with reference to other figures and embodiments.
1. Construction method of exchange camera
The flow of the camera construction method is shown in FIG. 3. The input parameters of the construction algorithm include (1) the user's main view, formed by the user viewpoint position V1 and view volume F1; (2) the collaborator's main view, formed by the collaborator viewpoint position V2 and view volume F2; (3) the center point O of the working area; (4) the distance d1 from the user to the near plane; and (5) the distance d2 from the collaborator to the far plane. The output of the algorithm is the constructed camera.
Step 1: V1, V2 and O form a plane π, on which all construction parameters are restricted to lie;
Step 3: compute the Bezier curve from P1 to P2, which is determined by the control points P1, C1, C2, P2. With P1 and P2 fixed, selecting different C1 and C2 yields different Bezier curves;
Step 4: in order to find a curve satisfying the conditions among this large number of candidates, an enumeration algorithm is designed: each curve is enumerated and evaluated, and the curve with the highest score is selected.
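A minimal sketch of the construction loop described above (illustrative Python, not the patented implementation; the ratio value, the candidate counts, and the `score` callable are assumptions standing in for the evaluation algorithm of FIG. 6):

```python
import numpy as np

def point_at_ratio(a, b, r):
    """Point at fixed ratio r along the segment from a to b (r=0 -> a)."""
    return a + r * (b - a)

def bezier3(p1, c1, c2, p2, t):
    """Point B(t) on the cubic Bezier curve for t in [0, 1]."""
    s = 1.0 - t
    return (s**3)*p1 + 3*(s**2)*t*c1 + 3*s*(t**2)*c2 + (t**3)*p2

def best_curve(p1, p2, c1_candidates, c2_candidates, score):
    """Enumerate every (C1, C2) pair and keep the highest-scoring curve.
    `score` stands in for the normal-intersection evaluation of FIG. 6."""
    best, best_s = None, -np.inf
    for c1 in c1_candidates:        # e.g. 100 samples along P1O
        for c2 in c2_candidates:    # e.g. 150 samples along P2V2
            s = score(p1, c1, c2, p2)
            if s > best_s:
                best, best_s = (c1, c2), s
    return best, best_s
```

The endpoints P1 and P2 themselves would be obtained with `point_at_ratio` applied to the segments V1O and V2O at the fixed proportion chosen in step 1.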
2. Evaluation algorithm of Bezier curve
The exchange camera proposed by the invention requires a Bezier curve that is as smooth as possible to serve as the projection curve of the transition region. The Bezier curve shown in FIG. 4 exhibits intersections between normals, so such a curve is unsuitable for constructing the proposed camera model; conversely, the Bezier curve shown in FIG. 5 is suitable. The only difference between the two is the position of the control points C1 and C2.
In the construction algorithm, suitable control points C1 and C2 are found using the evaluation flow shown in FIG. 6. The inputs to the evaluation algorithm include (1) the user's main view, formed by the user viewpoint position V1 and view volume F1; (2) the construction plane π; (3) the near plane π1 and far plane π2; and (4) a Bezier curve b to be evaluated. The algorithm returns a score for the input curve b. Specifically:
Step 1: the evaluation algorithm first computes the quadrilateral L1L2R2R1; this quadrilateral is shown in both FIG. 4 and FIG. 5;
Step 3: if the current normal nk produces no erroneous intersection, the last correct normal ng is updated to nk. The total number of erroneous intersections e is accumulated from the count ek produced by each normal. After all sample points on the curve have been traversed, the reciprocal of e + 1 is computed as the score of the evaluated curve.
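The defect of FIG. 4, normals that cross one another, can be detected with a standard 2D segment-orientation test. The sketch below is a simplified stand-in that counts crossings between consecutive normal segments and scores the curve as 1/(e + 1); the patent itself counts erroneous intersections against the user's view volume, which this sketch does not model.

```python
def segments_intersect(a1, a2, b1, b2):
    """True if the 2D segments a1a2 and b1b2 properly cross (orientation test)."""
    def cross(o, p, q):
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    d1, d2 = cross(b1, b2, a1), cross(b1, b2, a2)
    d3, d4 = cross(a1, a2, b1), cross(a1, a2, b2)
    return (d1*d2 < 0) and (d3*d4 < 0)

def curve_score(normal_segments):
    """Count crossings between consecutive normal segments as erroneous
    intersections e, and score the curve as 1 / (e + 1)."""
    e = 0
    for i in range(1, len(normal_segments)):
        a1, a2 = normal_segments[i-1]
        b1, b2 = normal_segments[i]
        if segments_intersect(a1, a2, b1, b2):
            e += 1
    return 1.0 / (e + 1)
```

A curve whose sampled normals never cross scores 1.0; each crossing lowers the score, matching the preference for the smooth curve of FIG. 5 over FIG. 4.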
3. Projection algorithm for virtual scene
The exchange camera obtains the transformed virtual scene by displacing each vertex in the scene according to a fixed rule. The transformed geometry is then projected onto the left-eye and right-eye screens of the user's helmet using the left and right camera projection parameters obtained from the VR headset. The vertex-transformation process of the virtual scene is shown in FIG. 7.
The inputs to the process are the Bezier-curve-based multi-view projection camera model obtained in the first step and the vertex Q to be transformed. The purpose of the algorithm is to compute the position Qp of the point Q in the user's view, and then to back-project Qp using the user's conventional camera projection parameters to obtain the back-projection point Q'. The process comprises the following steps:
Step 1: determine the region of the point Q. If Q is in the user area, i.e. the region below π1, it is projected onto the user view with the user's camera parameters (V1, F1). If Q is in the working area, i.e. the region beyond π2, its screen coordinates are computed directly by projecting with the collaborator's camera parameters (V2, F2);
Step 3: project the vertices of the transition region. As shown in FIG. 8, the vertical plane containing the point Q is first found on the Bezier curve by a binary-search algorithm over the curve parameter t. The search interval of t is initialized to [0, 1]; each iteration of the binary search halves the interval while keeping the point Q between the far plane π2 and the near plane π1, until a value of t satisfying the condition is found, giving the vertical plane πt together with the point Pt on the Bezier curve and the normal nt. Then the coordinates (xt, yt) of Q are computed in the plane πt, in the coordinate system with Pt as origin, and (xt, yt) is scaled down to obtain the coordinates (x1, y1) of Q1 on π1, in the coordinate system with P1 as origin. The scale factor is f = |V1P1| / (|V2P2| + (|V1P1| - |V2P2|)·t). In the plane π1, with P1 as the origin of the coordinate system, the position of Q1 is computed from its coordinates (x1, y1); finally, the projection point Qp is computed in the traditional projection manner with the projection parameters of the user's camera;
Step 4: for all three types of vertex Q, the depth of the projection point Qp is taken as the depth z of Q under the user's view, which becomes the depth of the back-projection point Q'. Finally, Qp is back-projected with the projection parameters of the user's camera to obtain the vertex-transformation result Q', and the three types of projection results are rasterized with depth buffering to obtain the final image.
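The binary search of step 3 can be sketched as follows (illustrative Python; `curve_point` and `tangent` are assumed callables for B(t) and B'(t), and the sign test assumes the signed distance of Q to the moving vertical plane is monotone in t, i.e. Q stays between π2 and π1):

```python
import numpy as np

def find_t(q, curve_point, tangent, iters=40):
    """Binary search over t in [0, 1] for the vertical plane through B(t)
    that contains the point q. The plane at parameter t passes through
    curve_point(t) with normal tangent(t)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        d = np.dot(q - curve_point(mid), tangent(mid))
        if d > 0:      # q lies ahead of the plane at mid: move t upward
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check on a degenerate "curve": a straight segment from (0,0,0) to
# (0,0,10); the vertical plane containing q = (1, 2, 3) is at t = 0.3.
line  = lambda t: np.array([0.0, 0.0, 10.0 * t])
tang  = lambda t: np.array([0.0, 0.0, 1.0])
t_hat = find_t(np.array([1.0, 2.0, 3.0]), line, tang)
```

After 40 halvings the interval is far below floating-point noise, so the returned t is effectively exact for the monotone case assumed here.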
The method designs a dedicated projection algorithm for each of the working area, the transition area and the user area, and provides the collaborator's view of the working area to the user with very small error. In an experimental study, the method was compared with traditional methods on the same task scenarios; the results show that the method is more efficient and reduces the error rate.
User experiments were carried out according to the embodiment of the invention; the experimental effect is shown in fig. 9. The experiments were divided into three groups, with a researcher acting as the collaborator and the participants as users. The virtual scene of the first experiment is a factory: the collaborator points at a pipe with a virtual ray, and the user must select the corresponding color of the pipe. The virtual scene of the second experiment is the same factory: the collaborator points at one pipe, then makes a gesture with an arm, then points at another pipe, and the user is asked to select the corresponding "color-gesture-color" option on a table. The virtual scene of the third experiment is a game scene, in which the user must select the button connected to the ball the collaborator points at. The experiments had three types of conditions. Under the first control condition (CC1), the user can observe the collaborator's pointing only by moving. Under the second control condition (CC2), the user can toggle back and forth between his own view and the collaborator's view. The third is the experimental condition (EC), under which the user observes using the proposed Bezier-curve-based multi-view projection method. Experiment one compares the results of CC1 and EC; experiment two compares CC2 and EC; experiment three compares CC1, CC2 and EC. The results show that with the Bezier-curve-based multi-view projection method the user completes the cooperative task with less burden and achieves higher accuracy.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited in scope to these embodiments. Various changes will be apparent to those skilled in the art; all inventive ideas making use of the concepts set forth herein are protected, provided they do not depart from the spirit and scope of the invention as defined by the appended claims.
Claims (3)
1. A multi-view projection method based on a Bezier curve is disclosed, wherein a multi-view projection camera model based on the Bezier curve is called as an exchange camera, and the multi-view projection method based on the Bezier curve mainly comprises the following three steps:
step (1), calculating the parameters for constructing the exchange camera: the view provider selects the center O of the workspace; the viewpoint position V2 of the view provider and the viewpoint position V1 of the view receiver, at the same height, form a projection horizontal plane π; P2 is selected between V2 and O at a fixed proportion, and P1 is selected between V1 and O at a fixed proportion; P2 and P1 are the two endpoints of the Bezier curve;
step (2), optimizing the Bezier curve: control point C2 lies on the ray from P2 toward V2, and control point C1 lies on the ray from P1 toward O; 150 candidate positions for C2 are taken equidistantly along the direction P2V2, and 100 candidate positions for C1 along P1O; combining them pairwise yields 150 × 100 different Bezier curves. Each curve is then evaluated, and the curve with the best score is selected as the projection curve of the exchange camera. To evaluate a curve, the view volume with V1 as the viewpoint is computed; starting from P2 and ending at P1, a certain number of sample points are taken in order along the Bezier curve, and the normal nk through each sample point (i.e. the straight line perpendicular to the tangent) is computed in turn. For each normal, its intersections with the user's view volume are computed and the number of erroneous intersections ek is counted; the score of the evaluated curve is computed from the sum of all erroneous intersections. Finally, the curve with the highest score is selected as the optimal solution;
step (3), projecting the input vertices with the exchange camera: for the vertex Q of each input geometric patch to be rendered, determine the region to which the vertex belongs, and adopt a different projection strategy according to the region.
2. The Bezier curve-based multiview projection method according to claim 1,
the working area is an area where the user and the collaborators work together;
the user zone is the closest part to the user, the zone including avatars of collaborators;
the transition area is an area connecting the work area and the user area.
3. The Bezier curve-based multi-view projection method according to claim 1, wherein in step (3) the input vertices are projected with the exchange camera; for each input vertex Q, the region to which the vertex belongs is determined, and different projection strategies are adopted according to the region, specifically comprising:
for a vertex in the user area, no change is needed: using the user's camera parameters, with V1 as the viewpoint, it is projected directly by perspective projection to obtain the projection point Qp;
for a vertex in the working area, likewise no change is needed: using the collaborator's camera parameters, with V2 as the projection viewpoint, it is projected in the traditional way to obtain the projection point Qp;
for a vertex in the transition region, projection must rely on the Bezier curve in order to produce a continuous change between the working area and the user area. Denote the input vertex by Q. First, the vertical plane containing the point Q is found on the Bezier curve by binary search over the curve parameter t; a value of t satisfying the condition is computed, giving the vertical plane πt together with the corresponding point Pt on the Bezier curve and the normal nt. Then the coordinates (xt, yt) of Q are computed in the plane πt, in the coordinate system with Pt as origin, and (xt, yt) is scaled down to obtain the coordinates (x1, y1) of Q1 on π1, in the coordinate system with P1 as origin. The scale factor f is computed as:
f = |V1P1| / (|V2P2| + (|V1P1| - |V2P2|)·t), where |V1P1| and |V2P2| denote the lengths of the segments V1P1 and V2P2;
in the plane π1, with P1 as the origin of the coordinate system, the position of Q1 is computed from its coordinates (x1, y1); the final projection point Qp is then computed in the traditional projection manner using the projection parameters of the user's camera;
for all three types of vertex Q, the projection point Qp takes the depth z of the vertex Q under the user's view as the depth of the back-projection point Q'; finally, Qp is back-projected with the projection parameters of the user's camera to obtain the vertex-transformation result Q'.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110413937.1A CN113205584B (en) | 2021-04-16 | 2021-04-16 | Multi-view projection method based on Bezier curve |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113205584A true CN113205584A (en) | 2021-08-03 |
CN113205584B CN113205584B (en) | 2022-04-26 |
Family
ID=77027265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110413937.1A Active CN113205584B (en) | 2021-04-16 | 2021-04-16 | Multi-view projection method based on Bezier curve |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113205584B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106817512A (en) * | 2016-12-22 | 2017-06-09 | 济南中维世纪科技有限公司 | A kind of camera lens shade antidote based on Bezier |
CN110889874A (en) * | 2019-12-04 | 2020-03-17 | 南京美基森信息技术有限公司 | Error evaluation method for calibration result of binocular camera |
CN111586384A (en) * | 2020-05-29 | 2020-08-25 | 燕山大学 | Projection image geometric correction method based on Bessel curved surface |
Non-Patent Citations (2)
Title |
---|
AHOLA, SIMO: "DEVELOPING A VIRTUAL REALITY APPLICATION IN UNITY", 《LAHTI UNIVERSITY OF APPLIED SCIENCES》 * |
ZENG HONG 等: "Large screen stereo projection splice based on parametric curve", 《2012 IEEE FIFTH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI)》 * |
Also Published As
Publication number | Publication date |
---|---|
CN113205584B (en) | 2022-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Vanacken et al. | Exploring the effects of environment density and target visibility on object selection in 3D virtual environments | |
CN110163942B (en) | Image data processing method and device | |
Shin et al. | Fat graphs: constructing an interactive character with continuous controls | |
CN104183016B (en) | A kind of construction method of quick 2.5 dimension building model | |
US9881417B2 (en) | Multi-view drawing apparatus of three-dimensional objects, and method | |
CN102222363A (en) | Method for fast constructing high-accuracy personalized face model on basis of facial images | |
US11176666B2 (en) | Cut-surface display of tubular structures | |
CN106709975A (en) | Interactive three-dimensional human face expression animation editing method and system and extension method | |
CN115861547B (en) | Model surface spline generating method based on projection | |
Becher et al. | Feature-based volumetric terrain generation | |
CN115546409A (en) | Automatic generation method of three-dimensional face model | |
CN102663802B (en) | A kind of game landform road generates method and apparatus | |
Boukhayma et al. | Surface motion capture animation synthesis | |
CN106548447A (en) | Obtain the method and device of medical science two dimensional image | |
CN113205584B (en) | Multi-view projection method based on Bezier curve | |
Petkov et al. | Interactive visibility retargeting in vr using conformal visualization | |
CN105279788A (en) | Method for generating object space swept volume | |
de Toledo et al. | Iterative methods for visualization of implicit surfaces on GPU | |
CN109064547A (en) | A kind of single image hair method for reconstructing based on data-driven | |
US20020013683A1 (en) | Method and device for fitting surface to point group, modeling device, and computer program | |
JP2003228725A (en) | 3d image processing system | |
Li et al. | Animating cartoon faces by multi‐view drawings | |
Tavares et al. | Efficient approximate visibility of point sets on the GPU | |
CN109920058B (en) | Tooth segmentation method based on anisotropy measurement | |
Zhang et al. | Haptic interaction with a polygon mesh reconstructed from images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||