WO2019033923A1 - Image rendering method and system - Google Patents

Image rendering method and system

Info

Publication number
WO2019033923A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendered
data
rendering
object data
reference object
Prior art date
Application number
PCT/CN2018/097918
Other languages
English (en)
French (fr)
Inventor
伏英娜
金宇林
Original Assignee
迈吉客科技(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 迈吉客科技(北京)有限公司
Publication of WO2019033923A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 — 3D [Three-Dimensional] image rendering
    • G06T 15/005 — General purpose rendering architectures

Definitions

The present invention relates to the field of computer technology, and in particular to an image rendering method and system.

Image rendering technology superimposes a designer-created rendering object (such as a magic sticker) onto an object to be rendered (such as a person's head) in an image or data stream captured by an electronic device such as a camera, so as to form an image or data stream with a rendering effect. Image rendering is one of the most common augmented reality techniques.
FIG. 1a is a front-view effect diagram, and FIG. 1b a side-view effect diagram, of an object to be rendered in a prior-art image rendering method. In the prior art, when the object to be rendered (such as a person's head) is in a frontal pose, the rendering object (such as a magic sticker) can match it by adjusting its own size, so that the two fit accurately. When the object to be rendered swings left-right or up-down by some angle, the rendering object adjusts its displayed angle according to the specific value of the swing angle so as to fit the object to be rendered as closely as possible. However, because the object to be rendered is generally a three-dimensional entity, a patch-like (two-dimensional) rendering object cannot synchronously adjust its two-dimensional projection from the swing angle value alone, so the degree of fit between the rendering object and the object to be rendered is low and the rendering effect is poor.
In view of this, an embodiment of the present invention provides an image rendering method and system to solve the problems, in existing image rendering, of a low degree of fit between the rendering object and the object to be rendered and of a poor rendering effect.

The image rendering method provided by an embodiment of the present invention includes: generating reference object data to be rendered from the reference object to be rendered; generating, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered; generating, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and matching the rendering object data and the object data to be rendered in real time.
In an embodiment, the reference object data to be rendered includes key point coordinate data; in further embodiments it additionally includes mesh model data generated from the key point coordinate data, or three-dimensional rotation angle data.
In an embodiment, generating the reference object data to be rendered includes: selecting several key points of the reference object to be rendered; and establishing a reference coordinate system, with the coordinate data of the key points in that coordinate system serving as the reference object data to be rendered.

In another embodiment, generating the reference object data to be rendered includes: selecting several key points of the reference object to be rendered; collecting three-dimensional rotation angle data of the reference object to be rendered; and establishing a reference coordinate system, with the key points' coordinate data in that coordinate system together with the rotation angle data serving as the reference object data to be rendered.

In an embodiment, the key points are contour key points of the elements contained in the reference object to be rendered.
In an embodiment, generating the rendering object data includes: selecting feature points of the rendering object according to the reference object data to be rendered; and combining those feature points with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered. In another embodiment it includes: selecting feature points of the rendering object according to the key points in the reference object data to be rendered; matching the rendering object's three-dimensional rotation angle data to that of the reference object to be rendered; and combining the feature points and rotation angle data with the reference coordinate system to generate the matching rendering object data.
In an embodiment, generating the object data to be rendered includes: selecting feature points of the object to be rendered according to the reference object data to be rendered; and combining those feature points with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered. In another embodiment, the object's three-dimensional rotation angle data is additionally matched to that of the reference object to be rendered and combined, together with the feature points, with the reference coordinate system.

In an embodiment, the method further comprises performing a caching operation on the generated object data to be rendered.
An embodiment of the present invention further provides an image rendering system, comprising: a reference data generating device, configured to generate reference object data to be rendered from the reference object to be rendered; a rendering data generating device, configured to generate, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered; a to-be-rendered data generating device, configured to generate, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and a tracking matching device, configured to match the rendering object data and the object data to be rendered in real time.
In an embodiment, the reference data generating device includes: a key point selecting unit for selecting several key points of the reference object to be rendered; and a reference data generating unit for establishing a reference coordinate system and taking the key points' coordinate data in that coordinate system as the reference object data to be rendered. In another embodiment, the device includes a rotation angle data collecting unit for collecting three-dimensional rotation angle data of the reference object to be rendered, and a reference data synthesizing unit that takes the coordinate data together with the rotation angle data as the reference object data to be rendered.
In an embodiment, the rendering data generating device includes: a rendering object feature point selecting unit for selecting feature points of the rendering object according to the reference object data to be rendered; and a rendering object data generating unit for combining those feature points with the reference coordinate system to generate the matching rendering object data. In another embodiment, the device includes a rendering object rotation angle data collecting unit for matching the rendering object's three-dimensional rotation angle data to that of the reference object to be rendered, and a rendering object data synthesizing unit that combines the feature points and rotation angle data with the reference coordinate system to generate the matching rendering object data.
In an embodiment, the to-be-rendered data generating device includes: an object-to-be-rendered feature point selecting unit for selecting feature points of the object to be rendered according to the reference object data to be rendered; and an object-to-be-rendered data generating unit for combining those feature points with the reference coordinate system to generate the matching object data to be rendered. In another embodiment, the device includes an object-to-be-rendered rotation angle data collecting unit for matching the object's three-dimensional rotation angle data to that of the reference object to be rendered, and an object-to-be-rendered data synthesizing unit that combines the feature points and rotation angle data with the reference coordinate system to generate the matching object data to be rendered.
In an embodiment, the system further includes a caching device for caching the object data to be rendered generated by the to-be-rendered data generating device.
By establishing the reference object data to be rendered, generating from it rendering object data and object data to be rendered that match it, and finally matching the rendering object data and the object data to be rendered in real time, the image rendering method of the embodiments achieves an exact match between the two data sets, and therefore an accurate fit between the rendering object and the object to be rendered, improving the degree of fit and enhancing the rendering effect.
FIG. 1a is a front-view effect diagram of an object to be rendered in a prior-art image rendering method.
FIG. 1b is a side-view effect diagram of an object to be rendered in a prior-art image rendering method.
FIG. 2 is a flowchart of an image rendering method according to a first embodiment of the present invention.
FIG. 3 is a flowchart of the step of generating reference object data to be rendered from the reference object to be rendered in the method of the first embodiment.
FIG. 4 is a flowchart of the step of generating rendering object data matching the reference object data to be rendered in the method of the first embodiment.
FIG. 5 is a flowchart of the step of generating object data to be rendered matching the reference object data to be rendered in the method of the first embodiment.
FIG. 6 is a schematic diagram of a mesh model in an image rendering method according to a second embodiment of the present invention.
FIG. 7 is a flowchart of an image rendering method according to a third embodiment of the present invention.
FIG. 8a is a front-view effect diagram of a two-dimensional object to be rendered in an image rendering method according to a fourth embodiment of the present invention.
FIG. 8b is a side-view effect diagram of a two-dimensional object to be rendered in the method of the fourth embodiment.
FIG. 9a is a schematic diagram of a three-dimensional head model in an image rendering method according to a fifth embodiment of the present invention.
FIG. 9b is a schematic diagram of the combination of the three-dimensional head model and a three-dimensional rendering object in the method of the fifth embodiment.
FIG. 9c is a schematic diagram of the combination of the object to be rendered and the three-dimensional rendering object in the method of the fifth embodiment.
FIG. 10 is a schematic structural diagram of an image rendering system according to a sixth embodiment of the present invention.
FIG. 11 is a structural diagram of the reference data generating device of the system of the sixth embodiment.
FIG. 12 is a structural diagram of the rendering data generating device of the system of the sixth embodiment.
FIG. 13 is a structural diagram of the to-be-rendered data generating device of the system of the sixth embodiment.
FIG. 14 is a schematic structural diagram of an image rendering system according to a seventh embodiment of the present invention.
FIG. 2 is a flowchart of the image rendering method according to the first embodiment of the present invention. As shown in FIG. 2, the method includes:

Step 10: Generate reference object data to be rendered from the reference object to be rendered.

Step 20: Generate, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered.

Step 30: Generate, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered.

Step 40: Match the rendering object data and the object data to be rendered in real time.
In an embodiment, the ways of accurately fitting the rendering object to the object to be rendered include, but are not limited to, overlaying and fusing, so as to fully meet the practical requirements of different situations.
In an embodiment, the reference object data to be rendered includes key point coordinate data. Specifically, FIG. 3 is a flowchart of the step of generating the reference object data to be rendered. As shown in FIG. 3, step 10 of the first embodiment includes:

Step 11: Select several key points of the reference object to be rendered.

Preferably, the key points are contour key points of the elements contained in the reference object to be rendered. In practice, the reference object to be rendered contains several key elements, and taking the contour key points of these elements as the key points of the reference object facilitates the subsequent accurate matching and positioning of the rendering object data and the object data to be rendered. For example, in an embodiment in which the reference object to be rendered is a human face image, the eyes, nose and mouth may be designated as the key elements, and their contour key points are extracted and marked as the key points of the reference object to be rendered, which improves the accuracy with which the rendering object data and the object data to be rendered are tracked and matched.
Step 12: Establish a reference coordinate system, and take the coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.

In practice, several key points of the reference object to be rendered are collected, a reference coordinate system is established from the reference object, and the coordinate data of the selected key points in that coordinate system is taken as the reference object data to be rendered. This guarantees that the subsequent generation of the rendering object data and the object data to be rendered can be carried out relative to the key points of the reference object, which further improves the accuracy of the real-time matching and tracking between the rendering object data and the object data to be rendered.
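A minimal sketch of steps 11 and 12 follows. The landmark detector and the choice of normalization (centroid origin, mean-distance scale) are assumptions for illustration; the patent only requires that key point coordinates be expressed in some reference coordinate system.

```python
import numpy as np

def build_reference_data(keypoints_px: np.ndarray) -> dict:
    """Express detected contour key points in a reference coordinate system.

    keypoints_px: (N, 2) pixel coordinates of the key points (e.g. eye,
    nose and mouth contours). The reference frame used here has its origin
    at the key-point centroid and uses the mean centroid distance as the
    unit, so position and scale are factored out.
    """
    origin = keypoints_px.mean(axis=0)                      # reference origin
    scale = np.linalg.norm(keypoints_px - origin, axis=1).mean()
    reference_coords = (keypoints_px - origin) / scale      # step-12 data
    return {"origin": origin, "scale": scale, "keypoints": reference_coords}
```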
FIG. 4 is a flowchart of the step of generating rendering object data that matches the reference object data to be rendered. As shown in FIG. 4, step 20 of the first embodiment includes:

Step 21: Select feature points of the rendering object according to the reference object data to be rendered.

In step 21, the feature points of the rendering object should be selected relative to the key points in the reference object data to be rendered; the selection rule can be set freely according to the actual situation. For example, the selected feature points of the rendering object may be set to correspond one-to-one with the key points in the reference object data to be rendered.
Step 22: Combine the feature points of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In step 22, the feature points selected in step 21 are combined with the reference coordinate system in which the key points of the reference object data to be rendered live, producing coordinate data of the rendering object's feature points in the same coordinate system as those key points; this coordinate data is taken as the rendering object data.
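A sketch of steps 21 and 22, continuing the helper above. The one-to-one ordering of sticker anchor points against the reference key points is an assumption the text explicitly permits ("the selection rule can be set freely").

```python
import numpy as np

def build_render_object_data(sticker_anchors_px, reference: dict) -> np.ndarray:
    """Express the sticker's feature points in the reference coordinate system.

    sticker_anchors_px: (N, 2) points picked on the sticker artwork, ordered
    to correspond one-to-one with the reference key points. Mapping them with
    the same origin/scale as the reference data places both point sets in
    one coordinate system, as step 22 requires.
    """
    anchors = np.asarray(sticker_anchors_px, dtype=float)
    return (anchors - reference["origin"]) / reference["scale"]
```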
FIG. 5 is a flowchart of the step of generating object data to be rendered that matches the reference object data to be rendered. As shown in FIG. 5, step 30 of the first embodiment includes:

Step 31: Select feature points of the object to be rendered according to the reference object data to be rendered.

Likewise, in step 31 the feature points of the object to be rendered should be selected relative to the key points in the reference object data to be rendered, with a freely configurable selection rule; for example, the selected feature points may be set to correspond one-to-one with those key points.
Step 32: Combine the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In step 32, the feature points selected in step 31 are combined with the reference coordinate system of the key points of the reference object data to be rendered, producing coordinate data of the object's feature points in the same coordinate system as those key points; this coordinate data is taken as the object data to be rendered.
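One way to realize the real-time matching of step 40 over the data built in steps 10-30 is to fit, per frame, the similarity transform that carries the reference key points onto the detected feature points and apply it to the sticker points. The closed-form fit below is the standard Umeyama/Procrustes solution; it is a sketch of one workable matching rule, not the patent's prescribed one.

```python
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray):
    """Least-squares fit of dst_i ≈ s * R @ src_i + t (Umeyama, 1991)."""
    n = len(src)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / n                         # 2x2 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = U @ np.diag([1.0, sign]) @ Vt
    s = (D * [1.0, sign]).sum() / (xs ** 2).sum() * n
    t = mu_d - s * R @ mu_s
    return s, R, t

def track_frame(frame_points, reference, sticker_points):
    """Step 40 for one frame: warp the sticker's reference-frame feature
    points onto the feature points detected in this frame."""
    s, R, t = fit_similarity(reference["keypoints"], np.asarray(frame_points))
    return s * sticker_points @ R.T + t         # where to draw the sticker
```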
By establishing the reference object data to be rendered, generating from it matching rendering object data and object data to be rendered, and finally matching the two in real time, the first embodiment achieves an exact match between the rendering object data and the object data to be rendered, hence an accurate fit between the rendering object and the object to be rendered, a higher degree of fit, and an enhanced rendering effect.
FIG. 6 is a schematic diagram of a mesh model in the image rendering method according to the second embodiment of the present invention. The second embodiment is substantially the same as the first; only the differences are described below.

In the second embodiment, the reference object data to be rendered includes both the key point coordinate data and mesh model data generated from it. That is, several key points of the reference object to be rendered are collected, the collected key points are connected to one another to generate a mesh model, and the key point coordinate data together with the resulting mesh model data serves as the reference object data to be rendered.

By generating a mesh model from the key points and mesh model data from their coordinates, the rendering object data and the object data to be rendered that are subsequently generated from the reference object data likewise contain mesh model data corresponding to their own feature point data, which further improves the precision with which the two are matched and positioned.
In an embodiment, the mesh model over the key points or feature points is formed by connecting any two key points or feature points with a line. Note that the way feature points are connected in the rendering object data and in the object data to be rendered must be strictly consistent with the way the key points are connected in the reference object data to be rendered. For example, when the first and third key points of the reference object data to be rendered are connected by a straight line, the first feature point of the rendering object data (corresponding to the first key point) must also be connected to its third feature point (corresponding to the third key point) by a straight line, and the same holds for the first and third feature points of the object data to be rendered.

The connecting lines between key points or feature points are not limited to straight lines; they may also be curves. In an embodiment, the mesh model data includes the line data connecting the key points or feature points, such as the slope of each connecting segment or the corresponding curve function, so as to improve the accuracy of each matching step and thereby the accuracy of the real-time matching and tracking between the rendering object data and the object data to be rendered.
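A sketch of this mesh data follows. Connecting every pair of points is one choice the text allows; the shared edge list enforces the consistency requirement above, since the same (i, j) connections are reused verbatim for the sticker's feature points and for each frame's feature points.

```python
from itertools import combinations
import numpy as np

def build_mesh_data(points: np.ndarray) -> list[dict]:
    """Connect every pair of points with a straight segment and record, per
    edge, the endpoint indices and the segment's slope (the per-line data
    the text mentions). Vertical segments get an infinite slope."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        slope = (y2 - y1) / (x2 - x1) if x1 != x2 else float("inf")
        edges.append({"ends": (i, j), "slope": slope})
    return edges
```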
In an embodiment, the mesh model is formed by the finite element method: a finite element meshing operation is applied to the reference object to be rendered, the object to be rendered, and the rendering object to form their respective mesh models; the transformation data between the mesh models is then computed, and the corresponding transformations are applied to finally achieve matched rendering. The layout of a mesh model formed by finite element meshing is more reasonable and systematic, making accurate matched rendering easier to achieve.
FIG. 7 is a flowchart of the image rendering method according to the third embodiment of the present invention. The third embodiment extends the first embodiment and is substantially the same as it; only the differences are described below. In the third embodiment, between step 30 and step 40 the method further includes:

Step 35: Perform a caching operation on the generated object data to be rendered.
It should be understood that the object data to be rendered generally comes from a photo or a video stream, so in practice the object to be rendered changes in real time. Caching the generated object data to be rendered therefore effectively relieves the pressure of the subsequent matching and tracking and speeds up its response. By caching the generated object data to be rendered, the third embodiment effectively keeps the matching of the rendering object data to the object data to be rendered real-time, increases the rendering speed of the method, and prevents the rendering object data from lagging while it matches and tracks the object data to be rendered.
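A minimal sketch of step 35 as a small thread-safe ring buffer between detection and matching, so the tracker always reads the newest completed key-point set instead of blocking on the detector. The buffer depth and threading model are assumptions; the patent only requires a caching operation.

```python
from collections import deque
from threading import Lock

class RenderDataBuffer:
    """Cache for generated object data to be rendered (step 35)."""

    def __init__(self, depth: int = 3):
        self._frames = deque(maxlen=depth)   # oldest entries drop off
        self._lock = Lock()

    def push(self, object_data) -> None:
        """Called by the detection side as each frame's data is generated."""
        with self._lock:
            self._frames.append(object_data)

    def latest(self):
        """Called by the matching/tracking side; never blocks on detection."""
        with self._lock:
            return self._frames[-1] if self._frames else None
```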
FIG. 8a is a front-view effect diagram, and FIG. 8b a side-view effect diagram, of a two-dimensional object to be rendered in the image rendering method according to the fourth embodiment of the present invention. The fourth embodiment applies the image rendering method to two-dimensional image rendering. In the fourth embodiment, the object to be rendered is a human face image and the rendering object is a cartoon magic sticker containing elements such as ears and antlers.

In practice, a facial recognition algorithm first collects several key points of the human face in the reference image, namely contour key points of the eyes, nose, ears and mouth. A reference coordinate system is established and the coordinate data of the collected key points in it is computed; then any two of the collected key points are connected by a straight line, forming a mesh model over the key points, and the mesh model's data in the reference coordinate system is collected as well.
Feature points of the rendering elements such as the ears and antlers are then collected according to the collected face key points, and each collected feature point is matched to the corresponding face key point. For example, the contour points of the lower edge of the rendering element's left ear (the lower edge shown in FIG. 8a) are collected as feature points and matched to the key points of the upper-left edge of the face (the upper-left edge shown in FIG. 8a); that is, their coordinates are made to coincide, which ultimately realizes the overlaid or fused display of the rendering object and the object to be rendered. In the same way, every rendering element contained in the rendering object finds its corresponding key points on the face to match.
The image to be rendered is likewise a human face image. The facial recognition algorithm collects several feature points of the face contained in any frame of the photo or video stream to be rendered, and the collected feature points are matched one by one to the reference image's key points in the reference coordinate system. Finally, the feature points of the rendering elements are matched to the feature points of the image to be rendered, realizing the display of the rendering effect.
Of course, the match between a rendering element's feature point and its key point, or between a rendering element's feature point and the matching feature point of the image to be rendered, is not limited to coordinate coincidence; it may also, for example, be a fixed distance in a fixed direction, where the fixed direction can be marked by the straight-line data in the mesh model (both modes are sketched below). This adds to the extensibility and adaptability of the method. By applying the method to two-dimensional face rendering, the fourth embodiment makes the object to be rendered, i.e. the human face, and the rendering object fit together more closely and improves the fitting accuracy.
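A sketch of the two matching modes just described: coordinate coincidence, and a fixed offset in a fixed direction, with the direction taken from a mesh edge as the text suggests. Function names are illustrative, not from the patent.

```python
import numpy as np

def place_coincident(face_keypoint: np.ndarray) -> np.ndarray:
    """Coordinate-coincidence mode: the sticker point sits on the key point."""
    return face_keypoint

def place_offset(face_keypoint: np.ndarray, edge_p0: np.ndarray,
                 edge_p1: np.ndarray, distance: float) -> np.ndarray:
    """Fixed-distance/fixed-direction mode: offset along the direction of a
    mesh line running from edge_p0 to edge_p1."""
    direction = edge_p1 - edge_p0
    direction = direction / np.linalg.norm(direction)
    return face_keypoint + distance * direction
```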
FIG. 9a is a schematic diagram of a three-dimensional head model, FIG. 9b of the combination of the three-dimensional head model and a three-dimensional rendering object, and FIG. 9c of the combination of the object to be rendered and the three-dimensional rendering object, in the image rendering method according to the fifth embodiment of the present invention. The fifth embodiment applies the image rendering method to three-dimensional image rendering: the object to be rendered is a human head and the rendering object is a three-dimensional cartoon-mask magic sticker. The fifth embodiment is substantially the same as the fourth; only the differences are described below.
In practice, the method of the fifth embodiment first uses a feature recognition algorithm to collect several key points of the human head in the reference image (three-dimensional head key points, such as joint points), and then, according to these key points, calibrates the feature points at the corresponding positions in the three-dimensional cartoon mask. When an image to be rendered is input, the feature recognition algorithm identifies the feature points of the object to be rendered that correspond to the key point positions of the head in the reference image, and the rendering object (the three-dimensional cartoon mask), in the same reference coordinate system, then matches and tracks the object to be rendered in the image to be rendered. At the display stage, the parts of the image to be rendered covered by the rendering object show the rendering object, and the uncovered parts show the image to be rendered, finally presenting the three-dimensional rendering effect. In addition, the parts of the rendering object that should be occluded by the image to be rendered can be culled algorithmically, to better suit practical applications. It should be understood that the reference coordinate system used in three-dimensional rendering is a three-dimensional coordinate system, so as to fully satisfy the requirements of three-dimensional data matching. By applying the method to three-dimensional image rendering, the fifth embodiment realizes the rendering of three-dimensional stereoscopic images and presents a rendering effect with a high degree of fit.
In an image rendering method provided by one embodiment, a feature recognition algorithm first collects several key points of the human head in the reference image (three-dimensional head key points, such as a person's two eyes) together with the head's three-dimensional rotation angle data, comprising rotation angles in the pitch, yaw and roll directions, where pitch is the rotation angle about the x axis, yaw the rotation angle about the y axis, and roll the rotation angle about the z axis. The feature points corresponding to the key point positions are then calibrated in the three-dimensional cartoon mask (the rendering object), and the collected rotation angle data of the head in the reference image is put into one-to-one correspondence with the rotation angle data of the mask.
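The pitch/yaw/roll convention just stated (rotations about the x, y and z axes) written out as a rotation matrix. The composition order R = Rz @ Ry @ Rx is an assumption; the text does not fix one.

```python
import numpy as np

def rotation_from_pyr(pitch: float, yaw: float, roll: float) -> np.ndarray:
    """Rotation matrix from pitch (about x), yaw (about y), roll (about z),
    angles in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```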
When an image to be rendered is input, the feature recognition algorithm identifies the feature points of the object to be rendered that correspond to the key point positions of the head in the reference image, and also identifies the object's three-dimensional rotation angle data, which is put into one-to-one correspondence with the rotation angle data of the head in the reference image.
In practice, the two eyes serving as the key points of the head in the reference image are first aligned or overlapped with the corresponding eye feature points in the three-dimensional cartoon mask; then, using the collected rotation angle data of the head and of the mask, the mask and the head are rotated, translated and scaled about the key points and feature points, matching the head of the reference image to the mask. In the same way, using the feature points of the object to be rendered that correspond to the head's key point positions and the object's rotation angle data that corresponds to the head's rotation angle data, the object to be rendered in the image to be rendered is matched to the head in the reference image. Finally, via the head of the reference image, the three-dimensional cartoon mask is matched to the object to be rendered in the image to be rendered, presenting the three-dimensional rendering effect.
In an embodiment, the concrete steps for matching the head in the reference image, or the object to be rendered in the image to be rendered, to the three-dimensional cartoon mask by means of the key points and the rotation angle data are as follows. First, using the rotation angle data of the two parties to be matched, the mask is rotated so that its orientation coincides with that of the head in the reference image or of the object to be rendered in the image to be rendered, completing the rotation matching. Second, the key points and feature points of the two parties are aligned or overlapped, and the matched key points and feature points are used as fixed points to complete the translation and scaling matching.
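A sketch of these two matching steps, reusing the `rotation_from_pyr` helper above. The single-fixed-point scaling rule and the argument names are assumptions consistent with the text, not the patent's prescribed implementation.

```python
import numpy as np

def match_mask(mask_vertices, mask_pyr, mask_anchor,
               target_pyr, target_anchor, scale):
    """Rotate the mask to the target's orientation, then translate and scale
    about the matched key point / feature point pair.

    mask_vertices: (N, 3) mask geometry; mask_anchor / target_anchor: the
    matched feature point and key point (e.g. an eye); *_pyr: (pitch, yaw,
    roll) rotation angle data of the mask and of the target."""
    # step 1 (rotation matching): undo the mask's pose, apply the target's
    R = rotation_from_pyr(*target_pyr) @ rotation_from_pyr(*mask_pyr).T
    rotated = mask_vertices @ R.T
    anchor = np.asarray(mask_anchor) @ R.T
    # step 2 (translation + scaling about the aligned fixed point)
    return (rotated - anchor) * scale + np.asarray(target_anchor)
```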
Further, since the three-dimensional cartoon mask is to be overlap-matched with the object to be rendered in the image to be rendered, the mask has a visible portion and an invisible portion (the portion occluded by the object to be rendered). In an embodiment, therefore, a standard transparent three-dimensional human-head model is created to stand in for the head in the reference image; the depth buffer of OpenGL (Open Graphics Library) or Direct3D (Microsoft's COM-based three-dimensional graphics application programming interface) is used to draw this transparent head model, and its depth data is retained in the depth buffer; the depth test is then set to the "less than" or "less than or equal" state to draw the three-dimensional cartoon mask; finally, depth testing and the occlusion relationship cull the invisible parts of the mask (such as the back of the head).
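A minimal sketch of this depth-buffer trick using PyOpenGL, whose calls map one-to-one onto the OpenGL C API named in the text. `draw_transparent_head` and `draw_mask` are placeholders for the application's own draw calls; the choice of GL_LESS for the head pass and GL_LEQUAL for the mask pass is one concrete reading of the "less than / less than or equal" states.

```python
from OpenGL.GL import (glEnable, glClear, glColorMask, glDepthFunc,
                       GL_DEPTH_TEST, GL_DEPTH_BUFFER_BIT, GL_COLOR_BUFFER_BIT,
                       GL_FALSE, GL_TRUE, GL_LESS, GL_LEQUAL)

def render_with_head_occlusion(draw_transparent_head, draw_mask):
    glEnable(GL_DEPTH_TEST)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    # Pass 1: write the stand-in head model to the depth buffer only. Color
    # writes are off, so the head stays invisible ("transparent") while its
    # depth data is retained in the depth buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glDepthFunc(GL_LESS)
    draw_transparent_head()
    # Pass 2: draw the cartoon mask with the depth test set to "less or
    # equal". Mask fragments behind the head surface (e.g. its back side)
    # fail the test and are culled; only the visible portion is drawn.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    glDepthFunc(GL_LEQUAL)
    draw_mask()
```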
It should be understood that the reference image need not be a human head image; it may be an image of another body part. The selection and generation of the key points and three-dimensional rotation angle data of other parts can be set freely according to the actual situation and are not enumerated here.
FIG. 10 is a schematic structural diagram of the image rendering system according to the sixth embodiment of the present invention. As shown in FIG. 10, the system includes:

the reference data generating device 100, configured to generate reference object data to be rendered from the reference object to be rendered;

the rendering data generating device 200, configured to generate, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered;

the to-be-rendered data generating device 300, configured to generate, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and

the tracking matching device 400, configured to match the rendering object data and the object data to be rendered in real time.
FIG. 11 is a structural diagram of the reference data generating device of the system of the sixth embodiment. As shown in FIG. 11, the reference data generating device 100 includes:

the key point selecting unit 110, configured to select several key points of the reference object to be rendered; and

the reference data generating unit 120, configured to establish a reference coordinate system and take the coordinate data of the key points in it as the reference object data to be rendered.

In an embodiment, the reference data generating device further includes a rotation angle data collecting unit, configured to collect three-dimensional rotation angle data of the reference object to be rendered, and a reference data synthesizing unit, configured to take the key points' coordinate data together with the rotation angle data as the reference object data to be rendered.
FIG. 12 is a structural diagram of the rendering data generating device of the system of the sixth embodiment. As shown in FIG. 12, the rendering data generating device 200 includes:

the rendering object feature point selecting unit 210, configured to select feature points of the rendering object according to the reference object data to be rendered; and

the rendering object data generating unit 220, configured to combine those feature points with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In an embodiment, the device further includes a rendering object rotation angle data collecting unit, configured to match the rendering object's three-dimensional rotation angle data to that of the reference object to be rendered, and a rendering object data synthesizing unit, configured to combine the feature points and the rotation angle data with the reference coordinate system to generate the matching rendering object data.
FIG. 13 is a structural diagram of the to-be-rendered data generating device of the system of the sixth embodiment. As shown in FIG. 13, the to-be-rendered data generating device 300 includes:

the object-to-be-rendered feature point selecting unit 310, configured to select feature points of the object to be rendered according to the reference object data to be rendered; and

the object-to-be-rendered data generating unit 320, configured to combine those feature points with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In an embodiment, the device further includes an object-to-be-rendered rotation angle data collecting unit, configured to match the object's three-dimensional rotation angle data to that of the reference object to be rendered, and an object-to-be-rendered data synthesizing unit, configured to combine the feature points and the rotation angle data with the reference coordinate system to generate the matching object data to be rendered.
FIG. 14 is a schematic structural diagram of the image rendering system according to the seventh embodiment of the present invention. As shown in FIG. 14, the seventh embodiment extends the sixth and is substantially the same as it; only the differences are described below. In the seventh embodiment, between the to-be-rendered data generating device 300 and the tracking matching device 400 the system further includes:

the caching device 350, configured to perform a caching operation on the object data to be rendered generated by the to-be-rendered data generating device 300.
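A minimal sketch of how the devices of the sixth and seventh embodiments compose, with the caching device of the seventh embodiment sitting between device 300 and device 400. Class and method names are illustrative, not from the patent.

```python
class ImageRenderingSystem:
    def __init__(self, ref_gen, render_gen, target_gen, tracker, cache):
        self.ref_gen = ref_gen          # reference data generating device 100
        self.render_gen = render_gen    # rendering data generating device 200
        self.target_gen = target_gen    # to-be-rendered data generating device 300
        self.tracker = tracker          # tracking matching device 400
        self.cache = cache              # caching device 350

    def run(self, reference_object, render_object, frames):
        ref_data = self.ref_gen.generate(reference_object)
        render_data = self.render_gen.generate(ref_data, render_object)
        for frame in frames:            # per-frame object data is cached,
            self.cache.push(self.target_gen.generate(ref_data, frame))
            # ...and the tracker matches against the newest cached data
            yield self.tracker.match(render_data, self.cache.latest())
```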

Abstract

An image rendering method provided by an embodiment of the present invention includes: generating reference object data to be rendered from the reference object to be rendered; generating, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered; generating, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and matching the rendering object data and the object data to be rendered in real time. By establishing the reference object data to be rendered, generating from it rendering object data and object data to be rendered that match it, and finally matching the rendering object data and the object data to be rendered in real time, the image rendering method of the embodiments achieves an accurate fit between the rendering object and the object to be rendered, improves the degree of fit between them, and enhances the rendering effect.

Description

Image rendering method and system

This application claims priority to Chinese Application No. 201710695496.2, filed on August 14, 2017, the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of computer technology, and in particular to an image rendering method and system.

Background

Image rendering technology superimposes a designer-created rendering object (such as a magic sticker) onto an object to be rendered (such as a person's head) in an image or data stream captured by an electronic device such as a camera, so as to form an image or data stream with a rendering effect. Image rendering is one of the most common augmented reality techniques.

FIG. 1a is a front-view effect diagram of an object to be rendered in a prior-art image rendering method. FIG. 1b is the corresponding side-view effect diagram. As shown in FIG. 1a and FIG. 1b, in the prior art, when the object to be rendered (such as a person's head) is in a frontal pose, the rendering object (such as a magic sticker) can match it by adjusting its own size, so that the two fit accurately. When the object to be rendered swings left-right or up-down by some angle, the rendering object adjusts its displayed angle according to the specific value of the swing angle to fit the object to be rendered as closely as possible; however, since the object to be rendered is generally a three-dimensional entity, a patch-like rendering object cannot synchronously adjust its two-dimensional projection from the swing angle value alone, so the degree of fit between the rendering object and the object to be rendered is low and the rendering effect is poor.
Summary

In view of this, embodiments of the present invention provide an image rendering method and system to solve the problems, in existing image rendering, of a low degree of fit between the rendering object and the object to be rendered and of a poor rendering effect.

The image rendering method provided by an embodiment of the present invention includes: generating reference object data to be rendered from the reference object to be rendered; generating, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered; generating, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and matching the rendering object data and the object data to be rendered in real time.

In an embodiment of the present invention, the reference object data to be rendered includes key point coordinate data.

In an embodiment of the present invention, the reference object data to be rendered further includes mesh model data generated from the key point coordinate data.

In an embodiment of the present invention, the reference object data to be rendered further includes three-dimensional rotation angle data.

In an embodiment of the present invention, generating the reference object data to be rendered from the reference object to be rendered includes: selecting several key points of the reference object to be rendered; and establishing a reference coordinate system and taking the coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.

In an embodiment of the present invention, generating the reference object data to be rendered from the reference object to be rendered includes: selecting several key points of the reference object to be rendered; collecting three-dimensional rotation angle data of the reference object to be rendered; and establishing a reference coordinate system and taking the coordinate data of the key points in the reference coordinate system together with the three-dimensional rotation angle data of the reference object to be rendered as the reference object data to be rendered.

In an embodiment of the present invention, the key points are contour key points of the elements contained in the reference object to be rendered.

In an embodiment of the present invention, generating the rendering object data that matches the reference object data to be rendered includes: selecting feature points of the rendering object according to the reference object data to be rendered; and combining the feature points of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In an embodiment of the present invention, generating the rendering object data that matches the reference object data to be rendered includes: selecting feature points of the rendering object according to the key points in the reference object data to be rendered; matching the three-dimensional rotation angle data of the rendering object according to the three-dimensional rotation angle data of the reference object to be rendered; and combining the feature points and the three-dimensional rotation angle data of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In an embodiment of the present invention, generating the object data to be rendered that matches the reference object data to be rendered includes: selecting feature points of the object to be rendered according to the reference object data to be rendered; and combining the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In an embodiment of the present invention, generating the object data to be rendered that matches the reference object data to be rendered includes: selecting feature points of the object to be rendered according to the key points in the reference object data to be rendered; matching the three-dimensional rotation angle data of the object to be rendered according to the three-dimensional rotation angle data of the reference object to be rendered; and combining the feature points and the three-dimensional rotation angle data of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In an embodiment of the present invention, after the step of generating the object data to be rendered, the method further includes: performing a caching operation on the generated object data to be rendered.

An embodiment of the present invention further provides an image rendering system, including: a reference data generating device, configured to generate reference object data to be rendered from the reference object to be rendered; a rendering data generating device, configured to generate, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered; a to-be-rendered data generating device, configured to generate, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered; and a tracking matching device, configured to match the rendering object data and the object data to be rendered in real time.

In an embodiment of the present invention, the reference data generating device includes: a key point selecting unit for selecting several key points of the reference object to be rendered; and a reference data generating unit for establishing a reference coordinate system and taking the coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.

In an embodiment of the present invention, the reference data generating device includes: a key point selecting unit for selecting several key points of the reference object to be rendered; a rotation angle data collecting unit for collecting three-dimensional rotation angle data of the reference object to be rendered; and a reference data synthesizing unit for establishing a reference coordinate system and taking the coordinate data of the key points together with the three-dimensional rotation angle data as the reference object data to be rendered.

In an embodiment of the present invention, the rendering data generating device includes: a rendering object feature point selecting unit for selecting feature points of the rendering object according to the reference object data to be rendered; and a rendering object data generating unit for combining the feature points of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In an embodiment of the present invention, the rendering data generating device includes: a rendering object feature point selecting unit for selecting feature points of the rendering object according to the key points in the reference object data to be rendered; a rendering object rotation angle data collecting unit for matching the three-dimensional rotation angle data of the rendering object according to that of the reference object to be rendered; and a rendering object data synthesizing unit for combining the feature points and the three-dimensional rotation angle data of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In an embodiment of the present invention, the to-be-rendered data generating device includes: an object-to-be-rendered feature point selecting unit for selecting feature points of the object to be rendered according to the reference object data to be rendered; and an object-to-be-rendered data generating unit for combining the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In an embodiment of the present invention, the to-be-rendered data generating device includes: an object-to-be-rendered feature point selecting unit for selecting feature points of the object to be rendered according to the key points in the reference object data to be rendered; an object-to-be-rendered rotation angle data collecting unit for matching the three-dimensional rotation angle data of the object to be rendered according to that of the reference object to be rendered; and an object-to-be-rendered data synthesizing unit for combining the feature points and the three-dimensional rotation angle data of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In an embodiment of the present invention, the system further includes: a caching device for performing a caching operation on the object data to be rendered generated by the to-be-rendered data generating device.

By establishing the reference object data to be rendered, generating from it rendering object data and object data to be rendered that match it, and finally matching the rendering object data and the object data to be rendered in real time, the image rendering method provided by the embodiments of the present invention achieves an exact match between the rendering object data and the object data to be rendered, and therefore an accurate fit between the rendering object and the object to be rendered, improving the degree of fit and enhancing the rendering effect.
Brief Description of the Drawings

FIG. 1a is a front-view effect diagram of an object to be rendered in a prior-art image rendering method.

FIG. 1b is a side-view effect diagram of an object to be rendered in a prior-art image rendering method.

FIG. 2 is a flowchart of an image rendering method according to a first embodiment of the present invention.

FIG. 3 is a flowchart of the step of generating reference object data to be rendered from the reference object to be rendered in the method of the first embodiment.

FIG. 4 is a flowchart of the step of generating rendering object data matching the reference object data to be rendered in the method of the first embodiment.

FIG. 5 is a flowchart of the step of generating object data to be rendered matching the reference object data to be rendered in the method of the first embodiment.

FIG. 6 is a schematic diagram of a mesh model in an image rendering method according to a second embodiment of the present invention.

FIG. 7 is a flowchart of an image rendering method according to a third embodiment of the present invention.

FIG. 8a is a front-view effect diagram of a two-dimensional object to be rendered in an image rendering method according to a fourth embodiment of the present invention.

FIG. 8b is a side-view effect diagram of a two-dimensional object to be rendered in the method of the fourth embodiment.

FIG. 9a is a schematic diagram of a three-dimensional head model in an image rendering method according to a fifth embodiment of the present invention.

FIG. 9b is a schematic diagram of the combination of the three-dimensional head model and a three-dimensional rendering object in the method of the fifth embodiment.

FIG. 9c is a schematic diagram of the combination of the object to be rendered and the three-dimensional rendering object in the method of the fifth embodiment.

FIG. 10 is a schematic structural diagram of an image rendering system according to a sixth embodiment of the present invention.

FIG. 11 is a structural diagram of the reference data generating device of the system of the sixth embodiment.

FIG. 12 is a structural diagram of the rendering data generating device of the system of the sixth embodiment.

FIG. 13 is a structural diagram of the to-be-rendered data generating device of the system of the sixth embodiment.

FIG. 14 is a schematic structural diagram of an image rendering system according to a seventh embodiment of the present invention.
Mode for Carrying Out the Invention

To make the objectives, technical means and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.

FIG. 2 is a flowchart of the image rendering method according to the first embodiment of the present invention. As shown in FIG. 2, the method includes:

Step 10: Generate reference object data to be rendered from the reference object to be rendered.

Step 20: Generate, from the reference object data to be rendered and the rendering object, rendering object data that matches the reference object data to be rendered.

Step 30: Generate, from the reference object data to be rendered and the object to be rendered, object data to be rendered that matches the reference object data to be rendered.

Step 40: Match the rendering object data and the object data to be rendered in real time.

In an embodiment of the present invention, the ways of accurately fitting the rendering object to the object to be rendered include, but are not limited to, overlaying and fusing, so as to fully meet the practical requirements of different situations.

In an embodiment of the present invention, the reference object data to be rendered includes key point coordinate data. Specifically, FIG. 3 is a flowchart of the step of generating the reference object data to be rendered from the reference object to be rendered. As shown in FIG. 3, step 10 of the first embodiment includes:

Step 11: Select several key points of the reference object to be rendered.

Preferably, the key points are contour key points of the elements contained in the reference object to be rendered. In practice, the reference object to be rendered contains several key elements, and taking the contour key points of these elements as the key points of the reference object to be rendered facilitates the subsequent accurate matching and positioning of the rendering object data and the object data to be rendered.

For example, in an embodiment in which the reference object to be rendered is a human face image, the eyes, nose and mouth may be designated as the key elements, and their contour key points are extracted and marked as the key points of the reference object to be rendered, so as to improve the accuracy with which the rendering object data and the object data to be rendered are tracked and matched.

Step 12: Establish a reference coordinate system, and take the coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.

In practice, several key points of the reference object to be rendered are collected, a reference coordinate system is established from the reference object to be rendered, and the coordinate data of the selected key points in that coordinate system is taken as the reference object data to be rendered. This guarantees that the subsequent generation of the rendering object data and the object data to be rendered is carried out relative to the key points of the reference object to be rendered, further improving the accuracy of the real-time matching and tracking between the rendering object data and the object data to be rendered.

FIG. 4 is a flowchart of the step of generating the rendering object data that matches the reference object data to be rendered. As shown in FIG. 4, step 20 of the first embodiment includes:

Step 21: Select feature points of the rendering object according to the reference object data to be rendered.

In step 21, the feature points of the rendering object should be selected relative to the key points in the reference object data to be rendered, and the selection rule can be set freely according to the actual situation; for example, the selected feature points of the rendering object may be set to correspond one-to-one with the key points in the reference object data to be rendered.

Step 22: Combine the feature points of the rendering object with the reference coordinate system to generate rendering object data that matches the reference object data to be rendered.

In step 22, the feature points of the rendering object selected in step 21 are combined with the reference coordinate system of the key points of the reference object data to be rendered, producing coordinate data of the rendering object's feature points in the same coordinate system as those key points; this coordinate data is taken as the rendering object data.

FIG. 5 is a flowchart of the step of generating the object data to be rendered that matches the reference object data to be rendered. As shown in FIG. 5, step 30 of the first embodiment includes:

Step 31: Select feature points of the object to be rendered according to the reference object data to be rendered.

Likewise, in step 31 the feature points of the object to be rendered should be selected relative to the key points in the reference object data to be rendered, with a freely configurable selection rule; for example, the selected feature points of the object to be rendered may be set to correspond one-to-one with the key points in the reference object data to be rendered.

Step 32: Combine the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered that matches the reference object data to be rendered.

In step 32, the feature points of the object to be rendered selected in step 31 are combined with the reference coordinate system of the key points of the reference object data to be rendered, producing coordinate data of the object's feature points in the same coordinate system as those key points; this coordinate data is taken as the object data to be rendered.

By establishing the reference object data to be rendered, generating from it matching rendering object data and object data to be rendered, and finally matching the two in real time, the first embodiment achieves an exact match between the rendering object data and the object data to be rendered, hence an accurate fit between the rendering object and the object to be rendered, a higher degree of fit, and an enhanced rendering effect.
FIG. 6 is a schematic diagram of a mesh model in the image rendering method according to the second embodiment of the present invention. As shown in FIG. 6, the second embodiment is substantially the same as the first; only the differences are described below.

In the second embodiment, the reference object data to be rendered includes both the key point coordinate data and mesh model data generated from it. That is, several key points of the reference object to be rendered are collected, the collected key points are connected to one another to generate a mesh model, and the key point coordinate data together with the resulting mesh model data serves as the reference object data to be rendered.

By generating a mesh model from the key points and mesh model data from their coordinates, the rendering object data and the object data to be rendered that are subsequently generated from the reference object data to be rendered likewise contain mesh model data corresponding to their own feature point data, which further improves the precision with which the two are matched and positioned.

In an embodiment of the present invention, the mesh model over the key points or feature points is formed by connecting any two key points or feature points with a line. Note that the way feature points are connected in the rendering object data and in the object data to be rendered must be strictly consistent with the way the key points are connected in the reference object data to be rendered. For example, when the first and third key points of the reference object data to be rendered are connected by a straight line, the first feature point of the rendering object data (corresponding to the first key point) must also be connected to its third feature point (corresponding to the third key point) by a straight line, and the same holds for the first and third feature points of the object data to be rendered.

It should be understood that the connecting lines between key points or feature points are not limited to straight lines; they may also be curves.

In an embodiment of the present invention, the mesh model data includes the line data connecting the key points or feature points, such as the slope of each connecting segment or the corresponding curve function, so as to improve the accuracy of each matching step and thereby the accuracy of the real-time matching and tracking between the rendering object data and the object data to be rendered.

In an embodiment of the present invention, the mesh model is formed by the finite element method: a finite element meshing operation is applied to the reference object to be rendered, the object to be rendered, and the rendering object to form their respective mesh models; the transformation data between the mesh models is then computed, and the corresponding transformations are applied to finally achieve matched rendering. The layout of a mesh model formed by finite element meshing is more reasonable and systematic, making accurate matched rendering easier to achieve.

FIG. 7 is a flowchart of the image rendering method according to the third embodiment of the present invention. As shown in FIG. 7, the third embodiment extends the first embodiment and is substantially the same as it; only the differences are described below. In the third embodiment, between step 30 and step 40 the method further includes:

Step 35: Perform a caching operation on the generated object data to be rendered.

It should be understood that the object data to be rendered generally comes from a photo or a video stream, so in practice the object to be rendered changes in real time. Caching the generated object data to be rendered therefore effectively relieves the pressure of the subsequent matching and tracking and speeds up its response.

By caching the generated object data to be rendered, the third embodiment effectively keeps the matching of the rendering object data to the object data to be rendered real-time, increases the rendering speed of the method, and prevents the rendering object data from lagging while it matches and tracks the object data to be rendered.
FIG. 8a is a front-view effect diagram, and FIG. 8b a side-view effect diagram, of a two-dimensional object to be rendered in the image rendering method according to the fourth embodiment of the present invention. As shown in FIG. 8a and FIG. 8b, the fourth embodiment applies the image rendering method to two-dimensional image rendering. In the fourth embodiment, the object to be rendered is a human face image and the rendering object is a cartoon magic sticker containing elements such as ears and antlers.

In practice, a facial recognition algorithm first collects several key points of the human face in the reference image, namely contour key points of the eyes, nose, ears and mouth. A reference coordinate system is established and the coordinate data of the collected key points in it is computed; then any two of the collected key points are connected by a straight line, forming a mesh model over the key points, and the mesh model's data in the reference coordinate system is collected as well.

Feature points of the rendering elements such as the ears and antlers are then collected according to the collected face key points, and each collected feature point is matched to the corresponding face key point. For example, the contour points of the lower edge of the rendering element's left ear (the lower edge shown in FIG. 8a) are collected as feature points and matched to the key points of the upper-left edge of the face (the upper-left edge shown in FIG. 8a); that is, their coordinates are made to coincide, which ultimately realizes the overlaid or fused display of the rendering object and the object to be rendered. In the same way, every rendering element contained in the rendering object finds its corresponding key points on the face to match.

The image to be rendered is likewise a human face image. The facial recognition algorithm collects several feature points of the face contained in any frame of the photo or video stream to be rendered, and the collected feature points are matched one by one to the reference image's key points in the reference coordinate system. Finally, the feature points of the rendering elements are matched to the feature points of the image to be rendered, realizing the display of the rendering effect.

Of course, the match between a rendering element's feature point and its key point, or between a rendering element's feature point and the matching feature point of the image to be rendered, is not limited to coordinate coincidence; it may also, for example, be a fixed distance in a fixed direction (the fixed direction can be marked by the straight-line data in the mesh model), which adds to the extensibility and adaptability of the image rendering method provided by the embodiments of the present invention.

By applying the image rendering method to two-dimensional face rendering, the fourth embodiment makes the object to be rendered, i.e. the human face, and the rendering object fit together more closely and improves the fitting accuracy.
FIG. 9a is a schematic diagram of a three-dimensional head model in the image rendering method according to the fifth embodiment of the present invention. FIG. 9b is a schematic diagram of the combination of the three-dimensional head model and a three-dimensional rendering object. FIG. 9c is a schematic diagram of the combination of the object to be rendered and the three-dimensional rendering object. As shown in FIGS. 9a, 9b and 9c, the fifth embodiment applies the image rendering method to three-dimensional image rendering. In the fifth embodiment, the object to be rendered is a human head and the rendering object is a three-dimensional cartoon-mask magic sticker. The fifth embodiment is substantially the same as the fourth embodiment; only the differences are described below.

In practice, the method of the fifth embodiment first uses a feature recognition algorithm to collect several key points of the human head in the reference image (three-dimensional head key points, such as joint points), and then, according to these key points, calibrates the feature points at the corresponding positions in the three-dimensional cartoon mask. When an image to be rendered is input, the feature recognition algorithm identifies the feature points of the object to be rendered in the image to be rendered that correspond to the key point positions of the head in the reference image, after which the rendering object (the three-dimensional cartoon mask), in the same reference coordinate system, matches and tracks the object to be rendered. At the display stage, the parts of the image to be rendered covered by the rendering object display the rendering object, and the uncovered parts display the image to be rendered, finally presenting the three-dimensional rendering effect.

In addition, during three-dimensional rendering the parts of the rendering object that should be occluded by the image to be rendered can be culled algorithmically, to better suit practical applications.

It should be understood that the reference coordinate system used in three-dimensional rendering is a three-dimensional coordinate system, so as to fully satisfy the requirements of three-dimensional data matching.

By applying the image rendering method to three-dimensional image rendering, the fifth embodiment realizes the rendering of three-dimensional stereoscopic images and presents a rendering effect with a high degree of fit.

In an image rendering method provided by one embodiment of the present invention, a feature recognition algorithm first collects several key points of the human head in the reference image (three-dimensional head key points, such as a person's two eyes) together with the head's three-dimensional rotation angle data (comprising rotation angles in the pitch, yaw and roll directions, where pitch is the rotation angle about the x axis, yaw the rotation angle about the y axis, and roll the rotation angle about the z axis); then, according to the key points of the head, the feature points corresponding to the key point positions are calibrated in the three-dimensional cartoon mask (the rendering object), and the collected three-dimensional rotation angle data of the head in the reference image is put into one-to-one correspondence with that of the mask. When an image to be rendered is input, the feature recognition algorithm identifies the feature points of the object to be rendered in the image to be rendered that correspond to the key point positions of the head in the reference image, and also identifies the object's three-dimensional rotation angle data, which is put into one-to-one correspondence with the rotation angle data of the head in the reference image.

In practice, the two eyes serving as key points of the head in the reference image are first aligned or overlapped with the corresponding eye feature points in the three-dimensional cartoon mask; then, using the collected three-dimensional rotation angle data of the head and of the mask, the mask and the head in the reference image are rotated, translated and scaled about the key points and feature points, matching the head of the reference image to the mask. Similarly, using the feature points of the object to be rendered that correspond to the head's key point positions and the object's three-dimensional rotation angle data corresponding to that of the head, the object to be rendered in the image to be rendered is matched to the head in the reference image. Finally, via the head of the reference image, the three-dimensional cartoon mask is matched to the object to be rendered in the image to be rendered, presenting the three-dimensional rendering effect.

In an embodiment of the present invention, the concrete steps for matching the head in the reference image, or the object to be rendered in the image to be rendered, to the three-dimensional cartoon mask by means of the key points and the three-dimensional rotation angle data are as follows: first, using the rotation angle data of the two parties to be matched, the mask is rotated so that its orientation coincides with that of the head in the reference image or of the object to be rendered in the image to be rendered, completing the rotation matching; second, the key points and feature points of the two parties are aligned or overlapped, and the matched key points and feature points are used as fixed points to complete the translation and scaling matching.

It should be understood that the concrete process and steps of this matching are not limited to those defined in the above embodiment; any readily conceivable method of matching by means of key points and rotation angle data falls within the technical solution of the present invention and is not enumerated here.

Further, since the three-dimensional cartoon mask is to be overlap-matched with the object to be rendered in the image to be rendered, the mask has a visible portion and an invisible portion (the portion occluded by the object to be rendered in the image to be rendered). Therefore, in an embodiment of the present invention, a standard transparent three-dimensional human-head model is created in place of the head in the reference image; the depth buffer of OpenGL (Open Graphics Library) or Direct3D (Microsoft's COM-based three-dimensional graphics application programming interface) is used to draw this transparent head model, and its depth data is retained in the depth buffer; the depth test is then set to the "less than" or "less than or equal" state to draw the three-dimensional cartoon mask; and finally depth testing and the occlusion relationship are used to cull the invisible parts of the mask (such as the back of the head).

It should be understood that the reference image is not necessarily a human head image; it may be an image of another body part. The selection and generation of the key points and three-dimensional rotation angle data of other parts can be set freely according to the actual situation and are not enumerated here.
图10所示为本发明第六实施例提供的图像渲染系统的结构示意图。如图10 所示,本发明第六实施例提供的图像渲染系统包括:
基准数据生成装置100,用于根据待渲染基准对象生成待渲染基准对象数据。
渲染数据生成装置200,用于根据待渲染基准对象数据和渲染对象生成匹配于待渲染基准对象数据的渲染对象数据。
待渲染数据生成装置300,用于根据待渲染基准对象数据和待渲染对象生成匹配于待渲染基准对象数据的待渲染对象数据。
跟踪匹配装置400,用于实时匹配渲染对象数据和待渲染对象数据。
图11所示为本发明第六实施例提供的图像渲染系统的生成基准数据装置的具体结构图。如图11所示,本发明第六实施例提供的图像渲染系统中的基准数据生成装置100包括:
关键点选取单元110,用于选取待渲染基准对象的若干关键点。
基准数据生成单元120,用于建立基准坐标系,将关键点在基准坐标系中的坐标数据作为待渲染基准对象数据。
在本发明一实施例中,基准数据生成装置包括:关键点选取单元,用于选取待渲染基准对象的若干关键点;转角数据采集单元,用于采集待渲染基准对象的三维转角数据;基准数据合成单元,用于建立基准坐标系,将关键点在基准坐标系中的坐标数据和待渲染基准对象的三维转角数据作为待渲染基准对象数据。
图12所示为本发明第六实施例提供的图像渲染系统的生成渲染数据装置的具体结构图。如图12所示,本发明第六实施例提供的图像渲染系统中的渲染数据生成装置200包括:
渲染对象特征点选取单元210,用于根据待渲染基准对象数据选取渲染对象的特征点。
渲染对象数据生成单元220,用于将渲染对象的特征点与基准坐标系相结合,生成匹配于待渲染基准对象数据的渲染对象数据。
在本发明一实施例中,渲染数据生成装置包括:渲染对象特征点选取单元,用于根据待渲染基准对象数据中的关键点选取渲染对象的特征点;渲染对象转角数据采集单元,用于根据待渲染基准对象数据中的待渲染基准对象的三维转角数据匹配渲染对象的三维转角数据;渲染对象数据合成单元,用于将渲染对象的特征点和渲染对象的三维转角数据与基准坐标系相结合,生成匹配于待渲染基准对象数据的渲染对象数据。
图13所示为本发明第六实施例提供的图像渲染系统的生成待渲染数据数据装置的具体结构图。如图13所示,本发明第六实施例提供的图像渲染系统中的待渲染数据生成装置300包括:
a to-be-rendered object feature point selection unit 310, configured to select the feature points of the object to be rendered according to the reference object data to be rendered; and
a to-be-rendered object data generation unit 320, configured to combine the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered matching the reference object data to be rendered.
In an embodiment of the present invention, the to-be-rendered data generation device includes: a to-be-rendered object feature point selection unit, configured to select the feature points of the object to be rendered according to the key points in the reference object data to be rendered; a to-be-rendered object rotation angle data collection unit, configured to match the three-dimensional rotation angle data of the object to be rendered according to the three-dimensional rotation angle data of the reference object to be rendered in the reference object data to be rendered; and a to-be-rendered object data synthesis unit, configured to combine the feature points of the object to be rendered and the three-dimensional rotation angle data of the object to be rendered with the reference coordinate system to generate object data to be rendered matching the reference object data to be rendered.
FIG. 14 is a schematic structural diagram of the image rendering system provided by the seventh embodiment of the present invention. As shown in FIG. 14, the seventh embodiment of the present invention extends from the sixth embodiment. The seventh embodiment is substantially the same as the sixth embodiment; only the differences are described below, and the common points are not repeated.
In the seventh embodiment of the present invention, the system further includes, between the to-be-rendered data generation device 300 and the tracking and matching device 400:
a caching device 350, configured to cache the object data to be rendered generated by the to-be-rendered data generation device 300.
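Read as a whole, FIGS. 10 to 14 describe a four-stage pipeline with an optional cache between the third and fourth stages. A minimal sketch of that wiring (Python; all class, method and parameter names are hypothetical, not from the original disclosure):

    from collections import deque

    class ImageRenderingSystem:
        """Wiring of devices 100-400, with the optional caching device 350."""

        def __init__(self, reference_gen, render_gen, to_render_gen, matcher,
                     cache_size=8):
            self.reference_gen = reference_gen     # reference data generation device 100
            self.render_gen = render_gen           # rendering data generation device 200
            self.to_render_gen = to_render_gen     # to-be-rendered data generation device 300
            self.matcher = matcher                 # tracking and matching device 400
            self.cache = deque(maxlen=cache_size)  # caching device 350

        def process(self, reference_object, rendering_object, object_to_render):
            reference_data = self.reference_gen(reference_object)
            rendering_data = self.render_gen(reference_data, rendering_object)
            to_render_data = self.to_render_gen(reference_data, object_to_render)
            self.cache.append(to_render_data)      # buffer before real-time matching
            return self.matcher(rendering_data, self.cache[-1])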
The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (20)

  1. An image rendering method, comprising:
    generating reference object data to be rendered according to a reference object to be rendered;
    generating, according to the reference object data to be rendered and a rendering object, rendering object data matching the reference object data to be rendered;
    generating, according to the reference object data to be rendered and an object to be rendered, object data to be rendered matching the reference object data to be rendered; and
    matching the rendering object data and the object data to be rendered in real time.
  2. The image rendering method according to claim 1, wherein the reference object data to be rendered comprises key point coordinate data.
  3. The image rendering method according to claim 2, wherein the reference object data to be rendered further comprises mesh model data generated according to the key point coordinate data.
  4. The image rendering method according to claim 2, wherein the reference object data to be rendered further comprises three-dimensional rotation angle data.
  5. The image rendering method according to claim 1, wherein generating the reference object data to be rendered according to the reference object to be rendered comprises:
    selecting a number of key points of the reference object to be rendered; and
    establishing a reference coordinate system, and using coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.
  6. The image rendering method according to claim 1, wherein generating the reference object data to be rendered according to the reference object to be rendered comprises:
    selecting a number of key points of the reference object to be rendered;
    collecting three-dimensional rotation angle data of the reference object to be rendered; and
    establishing a reference coordinate system, and using the coordinate data of the key points in the reference coordinate system and the three-dimensional rotation angle data of the reference object to be rendered as the reference object data to be rendered.
  7. The image rendering method according to claim 5 or 6, wherein the key points are contour key points of elements contained in the reference object to be rendered.
  8. The image rendering method according to claim 1, wherein generating, according to the reference object data to be rendered and the rendering object, the rendering object data matching the reference object data to be rendered comprises:
    selecting feature points of the rendering object according to the reference object data to be rendered; and
    combining the feature points of the rendering object with the reference coordinate system to generate the rendering object data matching the reference object data to be rendered.
  9. The image rendering method according to claim 1, wherein generating, according to the reference object data to be rendered and the rendering object, the rendering object data matching the reference object data to be rendered comprises:
    selecting feature points of the rendering object according to the key points in the reference object data to be rendered;
    matching three-dimensional rotation angle data of the rendering object according to the three-dimensional rotation angle data of the reference object to be rendered in the reference object data to be rendered; and
    combining the feature points of the rendering object and the three-dimensional rotation angle data of the rendering object with the reference coordinate system to generate the rendering object data matching the reference object data to be rendered.
  10. The image rendering method according to claim 1, wherein generating, according to the reference object data to be rendered and the object to be rendered, the object data to be rendered matching the reference object data to be rendered comprises:
    selecting feature points of the object to be rendered according to the reference object data to be rendered; and
    combining the feature points of the object to be rendered with the reference coordinate system to generate the object data to be rendered matching the reference object data to be rendered.
  11. The image rendering method according to claim 1, wherein generating, according to the reference object data to be rendered and the object to be rendered, the object data to be rendered matching the reference object data to be rendered comprises:
    selecting feature points of the object to be rendered according to the key points in the reference object data to be rendered;
    matching three-dimensional rotation angle data of the object to be rendered according to the three-dimensional rotation angle data of the reference object to be rendered in the reference object data to be rendered; and
    combining the feature points of the object to be rendered and the three-dimensional rotation angle data of the object to be rendered with the reference coordinate system to generate the object data to be rendered matching the reference object data to be rendered.
  12. The image rendering method according to claim 1, further comprising, after the step of generating, according to the reference object data to be rendered and the object to be rendered, the object data to be rendered matching the reference object data to be rendered:
    caching the generated object data to be rendered.
  13. An image rendering system, comprising:
    a reference data generation device, configured to generate reference object data to be rendered according to a reference object to be rendered;
    a rendering data generation device, configured to generate, according to the reference object data to be rendered and a rendering object, rendering object data matching the reference object data to be rendered;
    a to-be-rendered data generation device, configured to generate, according to the reference object data to be rendered and an object to be rendered, object data to be rendered matching the reference object data to be rendered; and
    a tracking and matching device, configured to match the rendering object data and the object data to be rendered in real time.
  14. The image rendering system according to claim 13, wherein the reference data generation device comprises:
    a key point selection unit, configured to select a number of key points of the reference object to be rendered; and
    a reference data generation unit, configured to establish a reference coordinate system and use coordinate data of the key points in the reference coordinate system as the reference object data to be rendered.
  15. The image rendering system according to claim 13, wherein the reference data generation device comprises:
    a key point selection unit, configured to select a number of key points of the reference object to be rendered;
    a rotation angle data collection unit, configured to collect three-dimensional rotation angle data of the reference object to be rendered; and
    a reference data synthesis unit, configured to establish a reference coordinate system and use the coordinate data of the key points in the reference coordinate system and the three-dimensional rotation angle data of the reference object to be rendered as the reference object data to be rendered.
  16. The image rendering system according to claim 13, wherein the rendering data generation device comprises:
    a rendering object feature point selection unit, configured to select feature points of the rendering object according to the reference object data to be rendered; and
    a rendering object data generation unit, configured to combine the feature points of the rendering object with the reference coordinate system to generate rendering object data matching the reference object data to be rendered.
  17. The image rendering system according to claim 13, wherein the rendering data generation device comprises:
    a rendering object feature point selection unit, configured to select feature points of the rendering object according to the key points in the reference object data to be rendered;
    a rendering object rotation angle data collection unit, configured to match three-dimensional rotation angle data of the rendering object according to the three-dimensional rotation angle data of the reference object to be rendered in the reference object data to be rendered; and
    a rendering object data synthesis unit, configured to combine the feature points of the rendering object and the three-dimensional rotation angle data of the rendering object with the reference coordinate system to generate rendering object data matching the reference object data to be rendered.
  18. The image rendering system according to claim 13, wherein the to-be-rendered data generation device comprises:
    a to-be-rendered object feature point selection unit, configured to select feature points of the object to be rendered according to the reference object data to be rendered; and
    a to-be-rendered object data generation unit, configured to combine the feature points of the object to be rendered with the reference coordinate system to generate object data to be rendered matching the reference object data to be rendered.
  19. The image rendering system according to claim 13, wherein the to-be-rendered data generation device comprises:
    a to-be-rendered object feature point selection unit, configured to select feature points of the object to be rendered according to the key points in the reference object data to be rendered;
    a to-be-rendered object rotation angle data collection unit, configured to match three-dimensional rotation angle data of the object to be rendered according to the three-dimensional rotation angle data of the reference object to be rendered in the reference object data to be rendered; and
    a to-be-rendered object data synthesis unit, configured to combine the feature points of the object to be rendered and the three-dimensional rotation angle data of the object to be rendered with the reference coordinate system to generate object data to be rendered matching the reference object data to be rendered.
  20. The image rendering system according to claim 13, further comprising:
    a caching device, configured to cache the object data to be rendered generated by the to-be-rendered data generation device.
PCT/CN2018/097918 2017-08-14 2018-08-01 Image rendering method and system WO2019033923A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710695496.2A CN107481310B (zh) 2017-08-14 Image rendering method and system
CN201710695496.2 2017-08-14

Publications (1)

Publication Number Publication Date
WO2019033923A1 true WO2019033923A1 (zh) 2019-02-21

Family

ID=60600495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097918 WO2019033923A1 (zh) 2017-08-14 2018-08-01 Image rendering method and system

Country Status (2)

Country Link
CN (1) CN107481310B (zh)
WO (1) WO2019033923A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481310B (zh) * 2017-08-14 2020-05-08 迈吉客科技(北京)有限公司 Image rendering method and system
CN108537867B (zh) * 2018-04-12 2020-01-10 北京微播视界科技有限公司 Video rendering method and device based on a user's limb movements
CN108615261B (zh) * 2018-04-20 2022-09-09 深圳市天轨年华文化科技有限公司 Image processing method, processing device and storage medium for images in augmented reality

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090195539A1 (en) * 2005-01-07 2009-08-06 Tae Seong Kim Method of processing three-dimensional image in mobile device
CN102262788A (zh) * 2010-05-24 2011-11-30 上海一格信息科技有限公司 Method and device for processing interactive makeup try-on information data for a personal three-dimensional figure
CN104881114A (zh) * 2015-05-13 2015-09-02 深圳彼爱其视觉科技有限公司 Real-time angle-rotation matching method based on 3D glasses try-on
CN105681684A (zh) * 2016-03-09 2016-06-15 北京奇虎科技有限公司 Mobile-terminal-based real-time image processing method and device
CN107481310A (zh) * 2017-08-14 2017-12-15 迈吉客科技(北京)有限公司 Image rendering method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090092473A (ko) * 2008-02-27 2009-09-01 오리엔탈종합전자(주) 3D face modeling method based on a 3D deformable shape model
CN103116902A (zh) * 2011-11-16 2013-05-22 华为软件技术有限公司 Method for generating a three-dimensional virtual human head image, and method and device for tracking human head image motion
CN104463938A (zh) * 2014-11-25 2015-03-25 福建天晴数码有限公司 Three-dimensional virtual makeup try-on method and device
CN104778712B (zh) * 2015-04-27 2018-05-01 厦门美图之家科技有限公司 Face mapping method and system based on affine transformation
CN106845400B (zh) * 2017-01-19 2020-04-10 南京开为网络科技有限公司 Brand display method producing special effects based on face key point tracking
CN106919906B (zh) * 2017-01-25 2021-04-20 迈吉客科技(北京)有限公司 Image interaction method and interaction device


Also Published As

Publication number Publication date
CN107481310A (zh) 2017-12-15
CN107481310B (zh) 2020-05-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18846222

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (=EPO FORM 1205A DATED 04.08.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18846222

Country of ref document: EP

Kind code of ref document: A1