WO2024067320A1 - Virtual object rendering method and apparatus, and device and storage medium - Google Patents

Info
Publication number
WO2024067320A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2023/120222
Other languages
French (fr)
Chinese (zh)
Inventor
蔡鑫
刘佳成
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2024067320A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering

Definitions

  • the embodiments of the present disclosure relate to the field of augmented reality technology, for example, to a method, device, equipment and storage medium for rendering a virtual object.
  • AR: augmented reality
  • the embodiments of the present disclosure provide a method, apparatus, device and storage medium for rendering a virtual object, so that a virtual object being tried on can fit well with the movement of a target object, and can make the virtual object have the dynamic feeling and texture of a real object, thereby improving the rendering effect.
  • an embodiment of the present disclosure provides a method for rendering a virtual object, comprising: acquiring the skeleton point position information of a target object in a current image and the initial position information of multiple vertices of a virtual object; transforming the initial position information of the multiple vertices based on the skeleton point position information to obtain intermediate position information; updating the intermediate position information of at least some of the multiple vertices based on set material information to obtain target position information; and rendering the virtual object based on the target position information to obtain a target image, wherein the target object in the target image wears the virtual object.
  • the present disclosure also provides a virtual object rendering device, including:
  • An acquisition module configured to acquire the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object; wherein the multiple vertices of the virtual object are multiple vertices of the 3D model corresponding to the virtual object;
  • a position information transformation module configured to transform the initial position information of the plurality of vertices based on the position information of the skeleton points to obtain the intermediate position information of the plurality of vertices
  • a target position information acquisition module configured to update the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information to obtain the target position information
  • a rendering module is configured to render the virtual object based on the target position information to obtain a target image; wherein the target object in the target image wears the virtual object.
  • an embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
  • one or more processors;
  • a storage device configured to store one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the virtual object rendering method as described in the embodiments of the present disclosure.
  • the embodiments of the present disclosure further provide a storage medium comprising computer executable instructions, which, when executed by a computer processor, are used to execute the method for rendering a virtual object as described in the embodiments of the present disclosure.
  • FIG. 1 is a flow chart of a method for rendering a virtual object provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of a rendered virtual object provided by an embodiment of the present disclosure;
  • FIG. 3 is an example diagram of a color noise map provided by an embodiment of the present disclosure;
  • FIG. 4a is an example diagram of a target object sampling map provided by an embodiment of the present disclosure;
  • FIG. 4b is an example diagram of a first mask image provided by an embodiment of the present disclosure;
  • FIG. 4c is an example diagram of a virtual object provided by an embodiment of the present disclosure;
  • FIG. 4d is an example diagram of a target image provided by an embodiment of the present disclosure;
  • FIG. 5a is an example diagram of displaying a virtual object provided by an embodiment of the present disclosure;
  • FIG. 5b is another example diagram of displaying a virtual object provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of the structure of a virtual object rendering device provided by an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure.
  • the types, scope of use, usage scenarios, etc. of the personal information involved in this disclosure should be informed to the user and the user's authorization should be obtained in an appropriate manner in accordance with relevant laws and regulations.
  • a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information.
  • the user can autonomously choose, according to the prompt message, whether to provide personal information to software or hardware such as electronic devices, applications, servers or storage media that execute the operations of the technical solution of the present disclosure.
  • in response to receiving an active request from the user, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented in text form.
  • the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
  • the data involved in this technical solution shall comply with the requirements of relevant laws, regulations and relevant provisions.
  • Figure 1 is a flow chart of a method for rendering a virtual object provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is applicable to the situation of rendering a virtual object.
  • the method can be executed by a virtual object rendering device, which can be implemented in the form of software and/or hardware, for example by an electronic device such as a mobile terminal, a personal computer (PC) or a server.
  • the method comprises:
  • the current image can be a static image, a real-time captured image, or a frame of an image in a video.
  • the target object can be a person or an animal. In this application scenario, the target object is a person.
  • the virtual object can be a pre-built 3D virtual object, such as virtual clothing or virtual accessories.
  • the multiple vertices of the virtual object can be the multiple vertices of the 3D model corresponding to the virtual object.
  • the initial position information of the multiple vertices of the virtual object can be understood as the three-dimensional coordinates of the multiple vertices of the virtual object in the world coordinate system.
  • the skeleton point position information may be the three-dimensional coordinates of the skeleton key points in the world coordinate system.
  • the method for obtaining the skeleton point position information of the target object in the current image may be: performing skeleton key point detection or posture detection on the target object in the current image, thereby obtaining the position information of multiple skeleton key points.
  • the skeleton point position information may be obtained by using a posture detection algorithm or a skeleton key point detection algorithm, which is not limited here.
  • the virtual object moves along with the movement of the target object, that is, the position information of the multiple vertices of the virtual object changes along with the change of the position information of the skeleton points.
  • the method of transforming the initial position information based on the skeleton point position information can be: for each of the multiple vertices of the virtual object, obtain the multiple skeleton points associated with the vertex and the position influence weights of those skeleton points on the vertex; determine the vertex transformation information of the vertex based on the position information and the position influence weights of the multiple skeleton points; and transform the initial position information of the vertex based on the vertex transformation information to obtain the intermediate position information of the vertex.
  • the position influence weight can be understood as the degree of influence of the position of the bone point on the position of the vertex of the virtual object.
  • the position influence weight is related to the distance between the skeleton point and the vertex of the virtual object: the closer the skeleton point is to the vertex, the greater its position influence weight on that vertex; the farther away it is, the smaller the weight. This presents the effect that the virtual object moves with the movement of the human body. For example, a vertex of the virtual object close to the knee skeleton point is greatly affected by that skeleton point; when the human body bends the knee, the vertex undergoes a large position change together with the knee skeleton point.
  • the process of obtaining multiple skeleton points associated with the vertices of the virtual object and the position influence weights of the multiple skeleton points on the vertices of the virtual object can be: first, the virtual object is bound to the skeleton points of the target object, and the multiple skeleton points associated with each vertex of the virtual object and the position influence weights of the multiple skeleton points on the vertices of the virtual object are determined according to the binding result.
  • each vertex of the virtual object is associated with four skeleton points.
  • the method of determining the vertex transformation information of the vertex based on the position influence weights and position information of the multiple skeleton points can be: weighted summation of the position information of the multiple skeleton points based on the position influence weights to obtain the vertex transformation information of the vertex.
  • the vertex transformation information may be represented by a matrix, and the initial position information of the vertex may be transformed based on the vertex transformation information of the vertex by multiplying the vector corresponding to the initial position information of the vertex by the matrix corresponding to the vertex transformation information of the vertex, thereby obtaining the vector corresponding to the intermediate position information of the vertex.
  • the initial position information is transformed based on the vertex transformation information, so that the virtual object moves with the movement of the target object, thereby improving the realistic effect of the virtual object being worn on the target object.
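The weighted-sum transform in the preceding bullets corresponds to what is commonly called linear blend skinning. A minimal sketch, assuming 4x4 homogeneous matrices for the skeleton-point transforms; the helper name `skin_vertex` is ours, not the disclosure's:

```python
import numpy as np

def skin_vertex(initial_pos, bone_matrices, weights):
    # Weighted sum of the associated skeleton points' transforms gives the
    # vertex transformation matrix (typically 4 skeleton points per vertex).
    blended = np.tensordot(weights, bone_matrices, axes=1)  # (4, 4)
    # Multiply the homogeneous initial position vector by that matrix to
    # obtain the intermediate position of the vertex.
    p = np.append(np.asarray(initial_pos, dtype=float), 1.0)
    return (blended @ p)[:3]
```

With identity bone transforms the vertex stays put; a bone translation moves it by the weighted amount.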
  • the set material information may be represented by material parameters, and different material parameters correspond to different materials.
  • the set material may be a cloth material.
  • the method of updating the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information may be: obtaining the set material information and the motion state of at least some of the multiple vertices of the virtual object; and performing a material solve on the at least some vertices according to their intermediate position information, the set material information and their motion state, to obtain the target position information.
  • the target position information includes the target position information of the plurality of vertices of the virtual object.
  • the target position information of at least some of the plurality of vertices can be obtained by performing a material solve on those vertices; for the remaining vertices among the plurality of vertices, the intermediate position information can be directly used as the target position information.
  • the material setting information can be a parameter characterizing the material properties, which can include air resistance, stretch coefficient, compression coefficient, bending coefficient, etc. The material setting information can be set according to actual needs.
  • the motion state of the vertices of the virtual object can be information such as the motion speed and motion direction of the vertices of the virtual object in the current image relative to the previous image.
  • a pre-developed material solution plug-in can be called to perform material solution.
  • the material solution plug-in performs a material setting calculation on the material parameters, the intermediate position information of at least some of the vertices, and the motion state, and outputs the target position information.
  • the material setting calculation is performed on at least some of the vertices of the virtual object according to the intermediate position information, the material setting information, and the motion state, so that the virtual object can have the dynamic and texture of the set material.
  • the method of performing the material solve on at least part of the vertices of the virtual object according to their intermediate position information, the set material information and their motion state may be: if a virtual object support is set in the current scene, obtain support information according to the virtual object support, and perform the material solve on the at least part of the vertices according to their intermediate position information, the material parameters, their motion state and the support information.
  • the virtual object support can be set according to the collision principle, that is, it can be a collision body, and the support information can be understood as collision information.
  • in the area where the virtual object support is set, the virtual object will collide with the virtual object support, that is, the virtual object cannot pass through the virtual object support, which can make the virtual object present a certain shape.
  • the virtual object support can be set by a material solution plug-in, and the material solution plug-in obtains support information according to the set virtual object support.
  • the material solution plug-in can perform the material solve on the material parameters, the support information, and the intermediate position information and motion state of at least part of the vertices, so as to obtain the target position information after the solve.
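The disclosure leaves the material solve to a plug-in. As one illustrative stand-in, a single Verlet-style update with air resistance and a spherical support (collision) body might look like the sketch below; `cloth_step`, its parameters, and the spherical support shape are all our assumptions, not the plug-in's API:

```python
import numpy as np

def cloth_step(pos, prev_pos, dt, air_resistance=0.02,
               support_center=None, support_radius=0.0):
    # pos, prev_pos: (n, 3) current and previous vertex positions, from
    # which the motion state (velocity) of the vertices is derived.
    gravity = np.array([0.0, -9.8, 0.0])
    velocity = (pos - prev_pos) / dt           # motion state of the vertices
    velocity *= (1.0 - air_resistance)         # damp by air resistance
    new_pos = pos + velocity * dt + gravity * dt * dt
    if support_center is not None:
        # Collision with a spherical support body: vertices that fell inside
        # the support are pushed back onto its surface, so the cloth cannot
        # pass through the support and keeps a certain shape.
        d = new_pos - support_center
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        inside = (dist < support_radius).ravel()
        new_pos[inside] = support_center + d[inside] / dist[inside] * support_radius
    return new_pos
```

A real solver would also apply stretch, compression and bending constraints from the material parameters; this sketch only shows the integration and support-collision steps.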
  • FIG2 is a schematic diagram of the virtual object rendered in this embodiment.
  • the virtual object is a skirt. Since a material solve is applied to the vertices below the waist of the skirt, the part of the skirt below the waist has the dynamics and texture of gauze; and since a virtual object support is set under the skirt, the hem presents a bulging effect.
  • the target object in the target image wears the virtual object.
  • multiple vertices of the virtual object are rendered to corresponding positions in the picture based on the target position information, thereby obtaining a target image.
  • the method of rendering the virtual object based on the target position information to obtain the target image can be: sampling preset color information from the color noise map based on the mapping coordinates of each vertex of the virtual object; offsetting the preset color information according to the time information of the current image to obtain the offset-processed preset color information corresponding to the vertex; adjusting the initial color of the vertex of the virtual object according to the offset-processed preset color information corresponding to the vertex and the target position information to obtain the target color of the vertex; and rendering the virtual object based on the target position information and the target color of each vertex to obtain the target image.
  • the mapping coordinates of the multiple vertices may be the coordinates of the multiple vertices in the surface map of the virtual object.
  • the color noise map may be a randomly generated noise map.
  • FIG. 3 is an example of a color noise map in this embodiment; FIG. 3 shows the noise map after grayscale processing, while the original image is a color map.
  • the color noise map and the surface map of the virtual object have the same size, that is, the pixels between the two images correspond one to one, so that the preset color information can be sampled from the color noise map according to the mapping coordinates.
  • the time information of the current image can be understood as the timestamp information of the current image in the video.
  • the color information can be represented by three color channel (RGB) values.
  • the process of offsetting the preset color information according to the time information of the current image can be: firstly, linearly transform the time information to obtain the offset; then convert the RGB information of the preset color information into the HSV (Hue, Saturation, Value) color space, and then offset the three components in the HSV information based on the offset to obtain the HSV information after the offset, and then convert the HSV information back to RGB information.
  • the linear transformation of the time information can be: multiply the time information by the set value to obtain the offset.
  • H0 and H1 are the H components before and after the offset; S1 = S0 + t/360, where S0 and S1 are the S components before and after the offset and t is the offset; V0 and V1 are the V components before and after the offset.
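A sketch of the time-based offset using the standard library's `colorsys`. Only the S-component formula (S1 = S0 + t/360) is fully recoverable from the text, so H and V are left unchanged here, and the `scale` factor of the linear time transform is an assumed set value:

```python
import colorsys

def offset_color(rgb, time_s, scale=1.0):
    # Linear transform of the time information gives the offset t.
    t = time_s * scale
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Shift the S component by t/360, clamped to the valid [0, 1] range;
    # the H and V components could be shifted in a similar manner.
    s = min(1.0, max(0.0, s + t / 360.0))
    return colorsys.hsv_to_rgb(h, s, v)
```

For example, at t = 0 the color is returned unchanged, and as the timestamp grows the saturation drifts, producing a slow color variation over the video.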
  • the method of adjusting the initial color of the vertex of the virtual object according to the offset-processed preset color information corresponding to the vertex and the target position information to obtain the target color of the vertex can be: determining the normal direction and viewing direction of each vertex of the virtual object according to the target position information; determining the color transformation information corresponding to the vertex according to the offset-processed preset color information corresponding to the vertex; determining the color adjustment amount of the vertex according to the color transformation information, the normal direction and the viewing direction of the vertex; and accumulating the color adjustment amount with the initial color of the vertex to obtain the target color of the vertex.
  • the normal direction of the vertex of the virtual object may be the normal direction of the section where the vertex of the virtual object is located, and the viewing direction of the vertex of the virtual object may be the direction in which the optical center of the camera points to the vertex of the virtual object.
  • the color transformation information may be represented by a color transformation matrix.
  • the process of determining the color transformation information corresponding to the vertex based on the color information can be: converting the three channel color values of the preset color information corresponding to the vertex after the offset processing into angle information, then calculating the sine and cosine of the three angle information, and finally constructing the color transformation matrix corresponding to the vertex based on the sine and cosine of the three angle information.
  • angle information = (RGB - a) * b * π
  • a and b are set values, for example: a can be 0.5 and b can be 2.
  • the color transformation matrix can be expressed as: [cosY*cosZ, cosY*sinZ*sinX - sinY*cosX, cosY*sinZ*cosX + sinY*sinX, sinY*cosZ, sinY*sinZ*sinX + cosY*cosX, sinY*sinZ*cosX - cosY*sinX, -sinZ, cosZ*sinX, cosZ*cosX].
  • the color adjustment amount may be a brightness value.
  • the process of determining the color adjustment amount of the vertex according to the color transformation information corresponding to the vertex, the normal direction of the vertex, and the viewing direction of the vertex may be: performing dot multiplication of the color transformation matrix corresponding to the color transformation information with the vector corresponding to the normal direction and the vector corresponding to the viewing direction in turn to obtain the color adjustment amount.
  • the color adjustment amount of each vertex is accumulated with the initial color of the vertex of the virtual object to obtain the target color of the vertex.
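Putting the angle conversion, the color transformation matrix and the double dot product together gives the per-vertex flicker adjustment. A hedged sketch: the composition of axis rotations Rz·Ry·Rx and the function name `flicker_adjust` are our assumptions for an angle-derived 3x3 matrix, not a construction the disclosure spells out:

```python
import numpy as np

def flicker_adjust(init_color, noise_rgb, normal, view, a=0.5, b=2.0):
    # Convert the three channel values of the offset-processed preset color
    # into angles: (RGB - a) * b * pi.
    x, y, z = (np.asarray(noise_rgb) - a) * b * np.pi
    rx = np.array([[1, 0, 0], [0, np.cos(x), -np.sin(x)], [0, np.sin(x), np.cos(x)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rz = np.array([[np.cos(z), -np.sin(z), 0], [np.sin(z), np.cos(z), 0], [0, 0, 1]])
    m = rz @ ry @ rx                     # color transformation matrix
    # Dot the matrix with the normal direction, then with the viewing
    # direction, giving a scalar brightness adjustment amount.
    adjustment = np.dot(m @ np.asarray(normal), np.asarray(view))
    # Accumulate the adjustment with the vertex's initial color.
    return np.asarray(init_color) + adjustment
```

Because the noise color varies per vertex and the angles drift with the time offset, the adjustment changes as the target object moves, producing the flickering effect described above.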
  • the color of the vertex of the virtual object is adjusted according to the preset color information, normal direction, and viewing direction after the time information offset processing, so as to generate a flickering effect in which the virtual object changes dynamically with the movement of the target object.
  • a flickering effect is presented on the "skirt".
  • the method of rendering the virtual object based on the target position information to obtain the target image can be: sampling the current image according to the virtual 3D target object model and the virtual object model to obtain a target object sampling map; converting the target object sampling map into a first mask map; fusing the current image and the virtual object map based on the first mask map to obtain a target image; and rendering the target image to the current screen.
  • the virtual 3D target object model can be a virtual 3D model obtained by binding the standard target object model to the posture of the target object in the current image.
  • the virtual object model can be understood as a 3D model corresponding to the virtual object.
  • the process of sampling the current image according to the virtual 3D target object model and the virtual object model can be: first, the occlusion relationship between the target object and the virtual object is determined according to the virtual 3D target object model and the virtual object model, and the pixel points of the target object that are not occluded are sampled from the current image based on the occlusion relationship, thereby obtaining a target object sampling map.
  • Figure 4a is an example diagram of the target object sampling map in this embodiment. As shown in Figure 4a, the non-black area in Figure 4a is the pixel points of the target object that are not occluded.
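The disclosure determines the occlusion relationship from the two 3D models without fixing a mechanism; a per-pixel depth comparison between the rendered depth maps of the two models is one assumed realization (the function name and depth-map inputs are ours):

```python
import numpy as np

def sample_unoccluded(current, person_depth, garment_depth):
    # A target-object pixel is not occluded where the target object model is
    # closer to the camera than the virtual object model; every other pixel
    # stays black, as in the non-black area of FIG. 4a.
    visible = person_depth < garment_depth
    sampled = np.zeros_like(current)
    sampled[visible] = current[visible]
    return sampled
```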
  • the method of converting the target object sampling map into the first mask map may be: identifying the target object in the target object sampling map to obtain the first mask map.
  • FIG4b is an example of the first mask map, as shown in FIG4b, the white area in FIG4b is the target object area.
  • the virtual object map can be understood as a two-dimensional (2D) map obtained by projecting the virtual object onto the screen according to the target position information.
  • FIG4c is an example of a virtual object in this embodiment.
  • the denim jacket in FIG4c is a virtual object.
  • the method of fusing the current image and the virtual object image based on the first mask image may be: determining a weighted weight according to pixel values of pixels in the first mask image, and fusing the current image and the virtual object image based on the weighted weight.
  • the fusion formula can be expressed as: a*current image + (1-a)*virtual object image, where a is the weighted weight determined from the pixel value in the first mask image.
  • FIG4d is an example of a target image in this embodiment.
  • the current image and the virtual object image are fused based on the first mask image corresponding to the target object sampling image, so that the virtual object can be accurately worn on the target object.
  • the current image and the virtual object image are fused based on the first mask image to obtain the target image by: blurring the first mask image; and fusing the current image and the virtual object image based on the blurred first mask image to obtain the target image.
  • the blurring process may adopt a blurring process algorithm, which is not limited here.
  • the method of fusing the current image and the virtual object image based on the blurred first mask image may be: determining a weighted weight according to the pixel value of the pixel point in the blurred first mask image, and fusing the current image and the virtual object image based on the weighted weight.
  • assuming the pixel value of a pixel point in the blurred first mask image is b, then b is used as the weighted weight of the current image and 1-b as the weighted weight of the virtual object image; the fusion formula can be expressed as: b*current image + (1-b)*virtual object image.
  • the current image and the virtual object image are fused based on the first mask image after blur processing, so that a smooth transition can be achieved between the target object and the virtual object.
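A minimal sketch of the blur-then-fuse step. A simple box blur stands in for whatever blurring algorithm is chosen, since the disclosure does not limit it, and the image shapes and helper names are assumptions:

```python
import numpy as np

def box_blur(mask, k=3):
    # k x k box blur with edge padding; any blur algorithm would do here.
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    out = np.zeros(mask.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def fuse(current, virtual, mask):
    # b is the blurred mask value: it weights the current image, and 1 - b
    # weights the virtual object image, giving a smooth transition at edges.
    b = box_blur(mask)[..., None]
    return b * current + (1.0 - b) * virtual
```

Where the blurred mask is 1 the current image shows through unchanged; where it is 0 only the virtual object image appears; in-between values blend the two, smoothing the boundary between the target object and the virtual object.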
  • the method of rendering the virtual object based on the target position information to obtain the target image may also be: obtaining a virtual 3D target object model corresponding to the target object in the current image; determining a second mask image according to the normal direction and viewing direction of the virtual 3D target object model; and fusing the current image and the virtual object image based on the second mask image to obtain the target image.
  • the standard target object model is bound to the posture of the target object in the current image to obtain a virtual 3D target object model.
  • the normal direction of the virtual 3D target object model can be understood as the normal direction of the section where the 3D points in the virtual 3D target object model are located, and the viewing angle direction of the virtual 3D target object model can be the direction in which the optical center of the camera points to the 3D points in the virtual 3D target object model.
  • the method for determining the second mask image based on the normal direction and the viewing angle direction of the virtual 3D target object model can be: dot multiplying the normal direction and the viewing angle direction, and using the dot multiplication result as the pixel value of the pixel point in the second mask image.
  • the method for fusing the current image and the virtual object image based on the second mask image can be: determining the weighted weight according to the pixel value of the pixel point in the second mask image, and fusing the current image and the virtual object image based on the weighted weight.
  • assuming the pixel value of a pixel point in the second mask image is c, then c is used as the weighted weight of the current image and 1-c as the weighted weight of the virtual object image; the fusion formula can be expressed as: c*current image + (1-c)*virtual object image.
  • the fusion efficiency can be improved by determining the second mask image based on the normal direction and the viewing direction of the virtual 3D target object model to fuse the current image and the virtual object image.
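The second-mask fusion can be sketched as follows; clamping the dot product to [0, 1] is our addition to keep the weight valid, and the per-pixel normal/view arrays are assumed inputs:

```python
import numpy as np

def second_mask(normals, views):
    # Per-pixel dot product of the model's normal direction and the viewing
    # direction; the result is used directly as the mask pixel value.
    # normals, views: (h, w, 3) unit vectors.
    return np.clip(np.sum(normals * views, axis=-1), 0.0, 1.0)

def fuse_with_second_mask(current, virtual, normals, views):
    # c weights the current image, 1 - c weights the virtual object image.
    c = second_mask(normals, views)[..., None]
    return c * current + (1.0 - c) * virtual
```

Surfaces facing the camera (normal aligned with the view direction) keep the current image, while grazing-angle regions fade toward the virtual object image.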
  • the following step is also included: gradually rendering and displaying the virtual object in a set order within a set time period according to the initial position information and/or mapping coordinates of multiple vertices of the virtual object.
  • the setting order can be from top to bottom, from bottom to top, from left to right, or from right to left.
  • the setting duration can be any value between 1 and 2 seconds.
  • the gradual rendering display can be an iterative rendering display or a gradual rendering display in different areas. In this application scenario, when the user enters the clothing fitting tool and a person is detected, the clothing to be tried on begins to be displayed in a gradual display manner.
  • the method of iteratively rendering and displaying the virtual object in a set order within a set time length according to the initial position information and/or mapping coordinates of multiple vertices of the virtual object can be: determining the vertices of the virtual object to be displayed at each moment within the set time length according to the initial position information and/or mapping coordinates, and displaying the vertices of the virtual object to be displayed in the screen corresponding to the moment, thereby obtaining a video of the virtual object being gradually displayed.
  • Figures 5a-5b are example diagrams for displaying virtual objects. As shown in Figure 5a, a portion of clothing is displayed at this moment, and as shown in Figure 5b, the clothing is fully displayed. In this embodiment, iteratively displaying virtual objects can improve the display effect.
  • the method for iteratively rendering and displaying the virtual object in a set order within a set time length according to the initial position information and/or mapping coordinates of multiple vertices of the virtual object can be: for the current moment within the set time length, determining the control parameters according to the current moment; determining the target reference value of each vertex of the virtual object according to the initial position information and/or mapping coordinates of the vertex; determining the vertex of the virtual object to be displayed at the current moment according to the target reference value and control parameters of each vertex; and rendering the vertices of the virtual object to be displayed at the current moment.
  • the method of determining the control parameter according to the current moment may be: obtaining the progress of the current moment within the set time length, and determining the control parameter according to the progress. For example, assuming that the set time length is T, the time length between the current moment and the start moment is t, and the progress is t/T, then the progress can be used as the control parameter.
  • the role of the target reference is to determine whether the vertex of the corresponding virtual object is rendered.
  • the method of determining the target reference amount of each vertex according to the initial position information and/or mapping coordinates of each vertex of the virtual object can be: for each of the multiple vertices of the virtual object, determining a first reference amount according to the initial position information of the vertex; sampling a gray value from the set noise map according to the mapping coordinates of the vertex and using the sampled gray value as a second reference amount; and determining the target reference amount of the vertex according to the first reference amount and/or the second reference amount.
  • the initial position information of the vertices of the virtual object can be understood as the initial three-dimensional coordinates of the vertices constituting the virtual object in the world coordinate system, that is, the initial position information includes three coordinate components x, y, and z.
  • the process of determining the first reference value according to the initial position information of the vertices of the virtual object can be: first, calculating the distance between the vertex of the virtual object, after being projected onto the xz plane, and the origin; then applying a linear transformation to the distance; then performing an exponential calculation on the y component; and finally summing the linear transformation result and the exponential calculation result to obtain the first reference value.
  • the distance between the vertices of the virtual object and the origin after being projected to the xz plane can be expressed as: length(xz)
  • the exponential calculation of the y component can be expressed as pow(y, a), which means raising y to the power of a.
  • the value range of the first reference value may be restricted so that the first reference value falls within a set range, and the exponential calculation is performed on the range-restricted first reference value.
  • the set range may be 0-1.5.
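As a non-limiting sketch of the first reference value computation described above, the following assumes illustrative values for the linear-transform coefficients (scale, bias) and the exponent a, none of which are fixed by the text:

```python
import math

def first_reference(x, y, z, scale=0.5, bias=0.2, a=2.0):
    # scale, bias, and a are illustrative assumptions.
    d = math.hypot(x, z)             # distance to the origin after projection to the xz plane: length(xz)
    linear = scale * d + bias        # linear transformation of the distance
    expo = pow(y, a)                 # exponential calculation of the y component: pow(y, a)
    ref = linear + expo              # accumulate both results
    return min(max(ref, 0.0), 1.5)   # restrict to the set range 0-1.5
```

The clamp at the end corresponds to restricting the first reference value to the set range 0-1.5.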
  • the mapping coordinates can be understood as the texture coordinates of the surface map corresponding to the virtual object.
  • the surface map has the same size as the set noise map, that is, the pixels in the two images correspond one to one, so the grayscale value corresponding to each pixel of the surface map can be sampled from the set noise map according to the mapping coordinates, and the sampled grayscale value is used as the second reference value.
  • the second reference value can also be multiplied by a set influence factor to obtain the final second reference value.
  • Determining the target reference value based on the first reference value and/or the second reference value can be understood as: using the first reference value as the target reference value, using the second reference value as the target reference value, or using the weighted sum of the first reference value and the second reference value as the target reference value.
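A minimal sketch of the second reference value and the combined target reference value may look as follows; the nearest-pixel sampling, the influence factor, and the equal weights are all assumptions:

```python
def second_reference(noise_map, u, v, influence=1.0):
    # noise_map: 2D list of grayscale values in [0, 1]; (u, v) are the
    # vertex's mapping (UV) coordinates in [0, 1]. 'influence' is the
    # set influence factor (an assumed knob).
    h, w = len(noise_map), len(noise_map[0])
    px = min(int(u * w), w - 1)   # nearest-pixel sampling
    py = min(int(v * h), h - 1)
    return noise_map[py][px] * influence

def target_reference(first_ref, second_ref, w1=0.5, w2=0.5):
    # Weighted-sum variant; the weights are illustrative.
    return w1 * first_ref + w2 * second_ref
```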
  • determining the target reference amount based on the initial position information and/or the mapping coordinates can improve the calculation accuracy.
  • the method for determining the vertices of the virtual object to be displayed at the current moment based on the target reference value and the control parameter can be: if the target reference value of a vertex is less than the control parameter, the vertex is not a vertex of the virtual object to be displayed; if the target reference value is greater than or equal to the control parameter, the vertex is a vertex of the virtual object to be displayed.
  • the method of rendering the vertices of the virtual object to be displayed at the current moment may be: determining the special effect color of the vertices of the virtual object according to the target reference amount and the control parameter; and rendering the vertices of the virtual object to be displayed at the current moment according to the special effect color.
  • If the target reference value is less than the control parameter, it indicates that the vertex of the virtual object corresponding to the target reference value does not need to be rendered at the current moment, so the vertex is ignored.
  • If the target reference value is greater than or equal to the control parameter, it indicates that the vertex of the virtual object corresponding to the target reference value needs to be rendered at the current moment.
  • the vertices of the virtual object to be rendered at the current moment are determined by comparing the target reference value with the control parameter, so that the virtual object presents a gradually displayed effect.
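The comparison between each vertex's target reference value and the progress-based control parameter (u = t/T, per the description above) can be sketched as:

```python
def visible_vertices(target_refs, t, T):
    # u = t/T is the progress within the set time length, used directly
    # as the control parameter; a vertex is displayed once its target
    # reference value is greater than or equal to it.
    u = t / T
    return [i for i, a in enumerate(target_refs) if a >= u]
```

As u grows from 0 to 1 over the set time length, more and more vertices pass the test, which is what makes the virtual object appear gradually.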
  • the method of determining the special effect color of the vertex of the virtual object based on the target reference amount and the control parameters can be: obtaining the special effect control range; performing smooth transition processing on the target reference amount based on the special effect control range and the control parameters to obtain the special effect adjustment amount; adjusting the set color based on the special effect adjustment amount to obtain the special effect color.
  • the upper and lower limits of the special effect control range are set by the user and are not limited here.
  • the smooth transition processing method may be to set a smooth transition processing function, for example, a smoothstep function.
  • the process of performing smooth transition processing on the target reference amount based on the special effect control range and the control parameter can be: firstly, performing a first smooth transition processing on the target reference amount based on the control parameter and the special effect control upper limit value to obtain the first sub-special effect adjustment amount; then performing a second smooth transition processing on the target reference amount based on the control parameter, the special effect control upper limit value and the special effect control lower limit value to obtain the second sub-special effect adjustment amount; finally, the first sub-special effect adjustment amount and the second sub-special effect adjustment amount are accumulated to obtain the special effect adjustment amount.
  • the first smooth transition processing can be expressed as: smoothstep(U, U+a*D1, A)
  • the second smooth transition processing can be expressed as: smoothstep(U-a*D2, U+a*D1, A), where U represents the control parameter, D1 represents the special effect control upper limit value, D2 represents the special effect control lower limit value, A represents the target reference amount, and a is a set value, for example, a is 0.1.
  • the method of adjusting the set color based on the special effect adjustment amount can be: multiplying the special effect adjustment amount by the set color to obtain the special effect color.
  • the target reference amount is smoothly transitioned based on the special effect control range and control parameters to generate an edge glow effect.
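The two smooth transition steps and the color scaling described above can be sketched as follows; smoothstep is the standard cubic Hermite step used in shading languages, and the parameter values used in any call are illustrative:

```python
def smoothstep(e0, e1, x):
    # Standard GLSL-style smoothstep: clamp, then cubic Hermite interpolation.
    if e0 == e1:
        return 0.0 if x < e0 else 1.0
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def effect_color(A, U, D1, D2, base_color, a=0.1):
    # A: target reference value, U: control parameter,
    # D1/D2: special effect control upper/lower limit values, a: set value (0.1).
    s1 = smoothstep(U, U + a * D1, A)              # first smooth transition
    s2 = smoothstep(U - a * D2, U + a * D1, A)     # second smooth transition
    adj = s1 + s2                                  # accumulate the two sub-adjustments
    return tuple(adj * c for c in base_color)      # multiply by the set color
```

The accumulated adjustment is largest for vertices whose target reference value sits just at or above the moving threshold U, which is what produces the edge glow.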
  • the technical solution of the disclosed embodiment obtains the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object; transforms the initial position information of multiple vertices of the virtual object based on the skeleton point position information to obtain the intermediate position information of multiple vertices of the virtual object; updates the intermediate position information of at least part of the vertices of the virtual object based on the set material information to obtain the target position information; renders the virtual object based on the target position information to obtain the target image; wherein the target object in the target image wears the virtual object.
  • the virtual object rendering method provided by the disclosed embodiment transforms the initial position information of multiple vertices of the virtual object based on the skeleton point position information of the target object, and updates the intermediate position information of at least part of the vertices of the virtual object based on the set material information, so that the virtual object being tried on can fit well with the movement of the target object, and the virtual object can have the dynamics and texture of a real object, thereby improving the rendering effect.
  • FIG6 is a schematic diagram of the structure of a virtual object rendering device provided by an embodiment of the present disclosure.
  • the device includes: an acquisition module 610 configured to acquire the skeleton point position information of a target object in the current image and the initial position information of multiple vertices of the virtual object, wherein the multiple vertices of the virtual object are the multiple vertices of the 3D model corresponding to the virtual object; a position information transformation module 620 configured to transform the initial position information of the multiple vertices based on the skeleton point position information to obtain the intermediate position information of the multiple vertices; a target position information acquisition module 630 configured to update the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information to obtain the target position information; and a rendering module 640 configured to render the virtual object based on the target position information to obtain a target image, wherein the target object in the target image wears the virtual object.
  • the position information transformation module 620 is configured to: obtain multiple bone points associated with each vertex of the virtual object and the position influence weights of the multiple bone points on each vertex of the virtual object; determine the vertex transformation information of each vertex based on the position information of the multiple bone points and the position influence weights; transform the initial position information of each vertex based on the vertex transformation information of each vertex to obtain the intermediate position information of each vertex.
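The per-vertex transform described for module 620 resembles standard linear blend skinning; a minimal sketch, assuming each associated bone's transform is given as a 3x4 affine matrix, is:

```python
def blend_vertex(initial_pos, bone_transforms, weights):
    # Linear blend skinning sketch: each associated bone point contributes
    # its affine transform of the vertex, scaled by that bone's position
    # influence weight on the vertex. The 3x4 row-major matrix layout is
    # an assumption.
    out = [0.0, 0.0, 0.0]
    x, y, z = initial_pos
    for M, w in zip(bone_transforms, weights):
        for r in range(3):
            out[r] += w * (M[r][0] * x + M[r][1] * y + M[r][2] * z + M[r][3])
    return out  # intermediate position of the vertex
```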
  • the target position information acquisition module 630 is configured to: obtain set material information and the motion state of at least some of the vertices of the virtual object; solve the set material of at least some of the vertices of the virtual object according to the intermediate position information of at least some of the vertices, the set material information and the motion state of at least some of the vertices to obtain the target position information.
  • the target position information acquisition module 630 is configured to solve the set material of at least some of the vertices of the virtual object according to the intermediate position information of at least some of the vertices, the set material information and the motion state of at least some of the vertices in the following manner: if a virtual object support body is set in the current scene, obtaining support information from the virtual object support body, and solving the set material of at least some of the vertices of the virtual object according to the intermediate position information of at least some of the vertices, the material parameters, the motion state of at least some of the vertices, and the support information.
  • the rendering module 640 is configured to: sample preset color information from a color noise map based on the texture coordinates of each vertex of the virtual object; perform offset processing on the preset color information according to the time information of the current image to obtain the offset processed preset color information corresponding to each vertex; adjust the initial color of each vertex of the virtual object according to the offset processed preset color information corresponding to each vertex and the target position information to obtain the target color of each vertex;
  • the virtual object is rendered based on the target position information and the target color of each vertex to obtain a target image.
  • the rendering module 640 is configured to adjust the initial color of each vertex of the virtual object according to the offset-processed preset color information corresponding to each vertex and the target position information to obtain the target color of each vertex in the following manner: determining the normal direction and viewing direction of each vertex of the virtual object; determining the color transformation information corresponding to each vertex according to the offset-processed preset color information corresponding to that vertex; determining the color adjustment amount of each vertex according to the color transformation information, the normal direction, and the viewing direction of that vertex; and adding the color adjustment amount of each vertex to the initial color of the vertex to obtain the target color of the vertex.
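A hedged sketch of the view-dependent color adjustment described above; the exact way the normal-view dot product enters the adjustment is not specified, so the rim-style term below is an assumption:

```python
def color_adjustment(color_transform, normal, view, base_color):
    # normal and view are unit 3-vectors; color_transform comes from the
    # offset-processed preset color information. The grazing-angle (rim)
    # weighting is an illustrative choice, not the patent's fixed formula.
    ndotv = sum(n * v for n, v in zip(normal, view))
    rim = max(0.0, 1.0 - abs(ndotv))   # strongest at grazing angles
    # add the weighted color adjustment amount to the initial color
    return tuple(min(1.0, b + rim * c) for b, c in zip(base_color, color_transform))
```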
  • the rendering module 640 is configured to: sample the current image according to the virtual 3D target object model and the virtual object model to obtain a target object sampling map; convert the target object sampling map into a first mask map; fuse the current image and the virtual object map based on the first mask map to obtain a target image; and render the target image to the current screen.
  • the rendering module 640 is configured to fuse the current image and the virtual object image based on the first mask image to obtain a target image in the following manner: blurring the first mask image; and fusing the current image and the virtual object image based on the blurred first mask image to obtain a target image.
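The blur-then-fuse step of module 640 can be sketched as a per-pixel linear blend driven by the mask; the box-blur kernel and single-channel images are assumptions:

```python
def box_blur(mask, radius=1):
    # Simple box blur of a 2D mask; the blur kernel is not specified in
    # the text, so this choice is illustrative.
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def fuse(current, virtual, mask):
    # Per-pixel lerp: mask = 1 keeps the current image (the target object
    # occludes the garment), mask = 0 shows the virtual object map.
    return [[m * c + (1.0 - m) * v
             for c, v, m in zip(crow, vrow, mrow)]
            for crow, vrow, mrow in zip(current, virtual, mask)]
```

Blurring the mask before fusing softens the boundary between the target object and the virtual object, avoiding a hard cut-out edge.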
  • the rendering module 640 is configured to: obtain a virtual 3D target object model corresponding to the target object in the current image; determine a second mask image according to the normal direction and viewing direction of the virtual 3D target object model; and fuse the current image and the virtual object image based on the second mask image to obtain a target image.
  • a virtual object display module configured to: gradually render and display the virtual object in a set order within a set time period according to the initial position information and/or mapping coordinates of multiple vertices of the virtual object.
  • the virtual object display module is configured to: for the current moment within the set time length, determine the control parameters according to the current moment; determine the target reference value of each vertex of the virtual object according to the initial position information and/or mapping coordinates of each vertex; determine the vertices of the virtual object to be displayed at the current moment according to the target reference value and control parameters of each vertex; and render the vertices of the virtual object to be displayed at the current moment.
  • the virtual object rendering device provided in the embodiments of the present disclosure can execute the virtual object rendering method provided in any embodiment of the present disclosure, and has the functional modules and effects corresponding to the execution method.
  • the multiple units and modules included in the above-mentioned device are only divided according to functional logic, but are not limited to the above-mentioned division, as long as the corresponding functions can be realized; in addition, the names of the multiple units and modules are only for the convenience of distinguishing each other, and are not used to limit the protection scope of the embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 7, it shows a schematic structural diagram of an electronic device 500 (e.g., a terminal device or a server) suitable for implementing an embodiment of the present disclosure.
  • the terminal device in the embodiment of the present disclosure may include, but is not limited to, a mobile terminal.
  • the electronic device shown in FIG7 is only an example and should not bring any limitation to the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 501, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 to a random access memory (RAM) 503.
  • in the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored.
  • the processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 508 including, for example, a magnetic tape, a hard disk, etc.; and communication devices 509.
  • the communication device 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data.
  • Although FIG. 7 shows an electronic device 500 having various devices, it should be understood that it is not required to implement or provide all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication device 509, or installed from a storage device 508, or installed from a ROM 502.
  • When the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the electronic device provided by the embodiment of the present disclosure and the method for rendering a virtual object provided by the above embodiment belong to the same inventive concept.
  • the embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • When the program is executed by a processor, the method for rendering a virtual object provided in the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, RAM, ROM, an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in conjunction with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • Computer readable signal media may also be any computer readable medium other than computer readable storage media, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • the program code contained on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (RF), etc., or any suitable combination of the above.
  • the client and the server may communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internet (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
  • the computer-readable medium may be included in the electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • When the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: obtain the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object; transform the initial position information of the multiple vertices based on the skeleton point position information to obtain the intermediate position information of the multiple vertices; update the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information to obtain the target position information; and render the virtual object based on the target position information to obtain the target image, wherein the target object in the target image wears the virtual object.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including, but not limited to, object-oriented programming languages, such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer via any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • each box in the flow chart or block diagram can represent a module, a program segment or a part of a code, and the module, the program segment or a part of the code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the box can also occur in a different order from the order marked in the accompanying drawings. For example, two boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
  • each box in the block diagram and/or flow chart, and the combination of the boxes in the block diagram and/or flow chart can be implemented with a dedicated hardware-based system that performs the specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the units and modules involved in the embodiments described in the present disclosure may be implemented by software or hardware.
  • the names of the units and modules do not limit the units themselves.
  • the acquisition module may also be described as "a module for acquiring the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object".
  • Exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Parts (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of a machine-readable storage medium would include electrical connections based on one or more wires, portable computer disks, hard disks, RAM, ROM, EPROM or flash memory, optical fibers, portable CD-ROMs, optical storage devices, magnetic storage devices, or any suitable combination of the above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual object rendering method and apparatus, and a device and a storage medium. The method comprises: acquiring skeleton point position information of a target object in the current image and initial position information of a plurality of vertexes of a virtual object; on the basis of the skeleton point position information, transforming the initial position information of the plurality of vertexes to obtain intermediate position information of the plurality of vertexes; on the basis of set material information, updating the intermediate position information of at least some of the plurality of vertexes of the virtual object to obtain target position information; and on the basis of the target position information, rendering the virtual object to obtain a target image.

Description

虚拟物体的渲染方法、装置、设备及存储介质Virtual object rendering method, device, equipment and storage medium
本申请要求在2022年09月29日提交中国专利局、申请号为202211204095.X的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。This application claims priority to the Chinese patent application filed with the China Patent Office on September 29, 2022, with application number 202211204095.X. The entire contents of this application are incorporated by reference into this application.
技术领域Technical Field
本公开实施例涉及增强现实技术领域,例如涉及一种虚拟物体的渲染方法、装置、设备及存储介质。The embodiments of the present disclosure relate to the field of augmented reality technology, for example, to a method, device, equipment and storage medium for rendering a virtual object.
背景技术Background technique
随着增强现实(Augmented Reality,AR)技术的不断发展,通过AR技术在现实场景中生成虚拟物体已经被广泛应用。为用户试穿服装就是AR技术中的一种应用场景。相关技术中,试穿的服装无法与人物的运动很好的贴合,且缺少真实服装的动感及质感,试穿效果较差。With the continuous development of augmented reality (AR) technology, the generation of virtual objects in real scenes through AR technology has been widely used. Trying on clothes for users is an application scenario of AR technology. In related technologies, the clothes tried on cannot fit the movement of the characters well, and lack the dynamics and texture of real clothes, so the try-on effect is poor.
发明内容Summary of the invention
本公开实施例提供一种虚拟物体的渲染方法、装置、设备及存储介质,使得试穿的虚拟物体可以与目标对象的运动很好的贴合,且可以使得虚拟物体具有真实物体的动感及质感,提高渲染效果。The embodiments of the present disclosure provide a method, apparatus, device and storage medium for rendering a virtual object, so that a virtual object being tried on can fit well with the movement of a target object, and can make the virtual object have the dynamic feeling and texture of a real object, thereby improving the rendering effect.
第一方面,本公开实施例提供了一种虚拟物体的渲染方法,包括:In a first aspect, an embodiment of the present disclosure provides a method for rendering a virtual object, comprising:
获取当前图像中目标对象的骨骼点位置信息以及虚拟物体的多个顶点的初始位置信息;其中,所述虚拟物体的多个顶点为所述虚拟物体对应的三维(3 Dimensional,3D)模型的多个顶点;Obtaining the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object; wherein the multiple vertices of the virtual object are multiple vertices of a three-dimensional (3D) model corresponding to the virtual object;
基于所述骨骼点位置信息对所述多个顶点的初始位置信息进行变换,获得所述多个顶点的中间位置信息;Transforming the initial position information of the plurality of vertices based on the skeleton point position information to obtain intermediate position information of the plurality of vertices;
基于设定材质信息对所述虚拟物体的所述多个顶点中的至少部分顶点的中间位置信息进行更新,获得目标位置信息;updating intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information to obtain target position information;
基于所述目标位置信息对所述虚拟物体进行渲染,获得目标图像;其中,所述目标图像中的目标对象穿戴所述虚拟物体。The virtual object is rendered based on the target position information to obtain a target image; wherein the target object in the target image wears the virtual object.
第二方面,本公开实施例还提供了一种虚拟物体的渲染装置,包括: In a second aspect, the present disclosure also provides a virtual object rendering device, including:
获取模块,设置为获取当前图像中目标对象的骨骼点位置信息以及虚拟物体的多个顶点的初始位置信息;其中,所述虚拟物体的多个顶点为所述虚拟物体对应的3D模型的多个顶点;An acquisition module, configured to acquire the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object; wherein the multiple vertices of the virtual object are multiple vertices of the 3D model corresponding to the virtual object;
位置信息变换模块,设置为基于所述骨骼点位置信息对所述多个顶点的初始位置信息进行变换,获得所述多个顶点的中间位置信息;A position information transformation module, configured to transform the initial position information of the plurality of vertices based on the position information of the skeleton points to obtain the intermediate position information of the plurality of vertices;
目标位置信息获取模块,设置为基于设定材质信息对所述虚拟物体的所述多个顶点中的至少部分顶点的中间位置信息进行更新,获得目标位置信息;a target position information acquisition module, configured to update the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information to obtain the target position information;
渲染模块,设置为基于所述目标位置信息对所述虚拟物体进行渲染,获得目标图像;其中,所述目标图像中的目标对象穿戴所述虚拟物体。A rendering module is configured to render the virtual object based on the target position information to obtain a target image; wherein the target object in the target image wears the virtual object.
第三方面,本公开实施例还提供了一种电子设备,所述电子设备包括:In a third aspect, an embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
一个或多个处理器;one or more processors;
存储装置,设置为存储一个或多个程序,a storage device configured to store one or more programs,
当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如本公开实施例所述的虚拟物体的渲染方法。When the one or more programs are executed by the one or more processors, the one or more processors implement the virtual object rendering method as described in the embodiment of the present disclosure.
第四方面,本公开实施例还提供了一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行如本公开实施例所述的虚拟物体的渲染方法。In a fourth aspect, the embodiments of the present disclosure further provide a storage medium comprising computer executable instructions, which, when executed by a computer processor, are used to execute the method for rendering a virtual object as described in the embodiments of the present disclosure.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
贯穿附图中,相同或相似的附图标记表示相同或相似的元素。应当理解附图是示意性的,原件和元素不一定按照比例绘制。Throughout the drawings, the same or similar reference numerals represent the same or similar elements. It should be understood that the drawings are schematic and that the originals and elements are not necessarily drawn to scale.
图1是本公开实施例所提供的一种虚拟物体的渲染方法的流程示意图;FIG1 is a schematic diagram of a flow chart of a method for rendering a virtual object provided by an embodiment of the present disclosure;
图2是本公开实施例所提供的一种渲染出的虚拟物体的示意图;FIG2 is a schematic diagram of a rendered virtual object provided by an embodiment of the present disclosure;
图3是本公开实施例所提供的一种彩色噪声图的示例图;FIG3 is an example diagram of a color noise diagram provided by an embodiment of the present disclosure;
图4a是本公开实施例所提供的一种目标对象采样图的示例图;FIG4a is an example diagram of a target object sampling diagram provided by an embodiment of the present disclosure;
图4b是本公开实施例所提供的一种第一掩膜图的示例图;FIG4b is an example diagram of a first mask image provided by an embodiment of the present disclosure;
图4c是本公开实施例所提供的一种虚拟物体的示例图;FIG4c is an example diagram of a virtual object provided by an embodiment of the present disclosure;
图4d是本公开实施例所提供的一种目标图像的示例图; FIG4d is an example diagram of a target image provided by an embodiment of the present disclosure;
图5a是本公开实施例所提供的一种显示虚拟物体的示例图;FIG5a is an example diagram of displaying a virtual object provided by an embodiment of the present disclosure;
图5b是本公开实施例所提供的另一种显示虚拟物体的示例图;FIG5b is another example diagram of displaying a virtual object provided by an embodiment of the present disclosure;
图6是本公开实施例所提供的一种虚拟物体的渲染装置的结构示意图;FIG6 is a schematic diagram of the structure of a virtual object rendering device provided by an embodiment of the present disclosure;
图7是本公开实施例所提供的一种电子设备的结构示意图。FIG. 7 is a schematic diagram of the structure of an electronic device provided by an embodiment of the present disclosure.
具体实施方式DETAILED DESCRIPTION
下面将参照附图描述本公开的实施例。虽然附图中显示了本公开的一些实施例,然而应当理解的是,本公开可以通过多种形式来实现,而且不应该被解释为限于这里阐述的实施例,相反提供这些实施例是为了更加透彻和完整地理解本公开。应当理解的是,本公开的附图及实施例仅用于示例性作用,并非用于限制本公开的保护范围。Embodiments of the present disclosure will be described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments described herein, but rather these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are only for exemplary purposes and are not intended to limit the scope of protection of the present disclosure.
应当理解,本公开的方法实施方式中记载的多个步骤可以按照不同的顺序执行,和/或并行执行。此外,方法实施方式可以包括附加的步骤和/或省略执行示出的步骤。本公开的范围在此方面不受限制。It should be understood that the multiple steps described in the method embodiments of the present disclosure can be performed in different orders and/or performed in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
本文使用的术语“包括”及其变形是开放性包括,即“包括但不限于”。术语“基于”是“至少部分地基于”。术语“一个实施例”表示“至少一个实施例”;术语“另一实施例”表示“至少一个另外的实施例”;术语“一些实施例”表示“至少一些实施例”。其他术语的相关定义将在下文描述中给出。The term "including" and its variations used herein are open inclusions, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". The relevant definitions of other terms will be given in the following description.
本公开中提及的“第一”、“第二”等概念仅用于对不同的装置、模块或单元进行区分,并非用于限定这些装置、模块或单元所执行的功能的顺序或者相互依存关系。The concepts of “first”, “second”, etc. mentioned in the present disclosure are only used to distinguish different devices, modules or units, and are not used to limit the order or interdependence of the functions performed by these devices, modules or units.
本公开中提及的“一个”、“多个”的修饰是示意性而非限制性的,本领域技术人员应当理解,除非在上下文另有明确指出,否则应该理解为“一个或多个”。The modifications of "one" and "plurality" mentioned in the present disclosure are illustrative rather than restrictive, and those skilled in the art should understand that unless otherwise clearly indicated in the context, they should be understood as "one or more".
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。The names of the messages or information exchanged between multiple devices in the embodiments of the present disclosure are only used for illustrative purposes and are not used to limit the scope of these messages or information.
在使用本公开的多个实施例公开的技术方案之前,均应当依据相关法律法规通过恰当的方式对本公开所涉及个人信息的类型、使用范围、使用场景等告知用户并获得用户的授权。Before using the technical solutions disclosed in the multiple embodiments of this disclosure, the types, scope of use, usage scenarios, etc. of the personal information involved in this disclosure should be informed to the user and the user's authorization should be obtained in an appropriate manner in accordance with relevant laws and regulations.
例如,在响应于接收到用户的主动请求时,向用户发送提示信息,以明确地提示用户,其请求执行的操作将需要获取和使用到用户的个人信息。从而,使得用户可以根据提示信息来自主地选择是否向执行本公开技术方案的操作的电子设备、应用程序、服务器或存储介质等软件或硬件提供个人信息。For example, in response to receiving a user's active request, a prompt message is sent to the user to clearly prompt the user that the operation requested to be performed will require obtaining and using the user's personal information. Thus, according to the prompt information, the user can autonomously choose whether to provide personal information to software or hardware, such as an electronic device, application, server or storage medium, that performs the operations of the technical solution of the present disclosure.
作为一种可选的但非限定性的实现方式,响应于接收到用户的主动请求,向用户发送提示信息的方式例如可以是弹窗的方式,弹窗中可以以文字的方式呈现提示信息。此外,弹窗中还可以承载供用户选择“同意”或者“不同意”向电子设备提供个人信息的选择控件。As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented in text form. In addition, the pop-up window may also carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
上述通知和获取用户授权过程仅是示意性的,不对本公开的实现方式构成限定,其它满足相关法律法规的方式也可应用于本公开的实现方式中。The above notification and the process of obtaining user authorization are merely illustrative and do not limit the implementation of the present disclosure. Other methods that meet relevant laws and regulations may also be applied to the implementation of the present disclosure.
本技术方案所涉及的数据(包括但不限于数据本身、数据的获取或使用)应当遵循相应法律法规及相关规定的要求。The data involved in this technical solution (including but not limited to the data itself, the acquisition or use of the data) shall comply with the requirements of relevant laws, regulations and relevant provisions.
图1为本公开实施例所提供的一种虚拟物体的渲染方法的流程示意图,本公开实施例适用于对虚拟物体进行渲染的情形,该方法可以由虚拟物体的渲染装置来执行,该装置可以通过软件和/或硬件的形式实现,可选的,通过电子设备来实现,该电子设备可以是移动终端、个人计算机(Personal Computer,PC)或服务器等。Figure 1 is a flow chart of a method for rendering a virtual object provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the situation of rendering a virtual object. The method can be executed by a rendering device for a virtual object, which can be implemented in the form of software and/or hardware. Optionally, it can be implemented by an electronic device, which can be a mobile terminal, a personal computer (PC) or a server, etc.
如图1所示,所述方法包括:As shown in FIG1 , the method comprises:
S110,获取当前图像中目标对象的骨骼点位置信息以及虚拟物体的多个顶点的初始位置信息。S110, obtaining the skeleton point position information of the target object in the current image and the initial position information of multiple vertices of the virtual object.
当前图像可以是一张静态图像、实时采集的图像或者视频中的一帧图像。目标对象可以是人物或者动物,本应用场景下,目标对象为人物。虚拟物体可以是预先构建的3D虚拟物体,例如:虚拟服装、或虚拟饰品等。虚拟物体的多个顶点可以是虚拟物体对应的3D模型的多个顶点。虚拟物体的多个顶点的初始位置信息可以理解为虚拟物体的多个顶点在世界坐标系下的三维坐标。The current image can be a static image, a real-time captured image, or a frame of an image in a video. The target object can be a person or an animal. In this application scenario, the target object is a person. The virtual object can be a pre-built 3D virtual object, such as virtual clothing or virtual accessories. The multiple vertices of the virtual object can be the multiple vertices of the 3D model corresponding to the virtual object. The initial position information of the multiple vertices of the virtual object can be understood as the three-dimensional coordinates of the multiple vertices of the virtual object in the world coordinate system.
骨骼点位置信息可以是骨骼关键点在世界坐标系下的三维坐标。本实施例中,获取当前图像中目标对象的骨骼点位置信息的方式可以是:对当前图像中的目标对象进行骨骼关键点检测或者姿态检测,从而获得多个骨骼关键点的位置信息。可以采用姿态检测算法或者骨骼关键点检测算法获取骨骼点位置信息,此处不做限定。The skeleton point position information may be the three-dimensional coordinates of the skeleton key points in the world coordinate system. In the present embodiment, the method for obtaining the skeleton point position information of the target object in the current image may be: performing skeleton key point detection or posture detection on the target object in the current image, thereby obtaining the position information of multiple skeleton key points. The skeleton point position information may be obtained by using a posture detection algorithm or a skeleton key point detection algorithm, which is not limited here.
S120,基于骨骼点位置信息对多个顶点的初始位置信息进行变换,获得多个顶点的中间位置信息。S120, transforming the initial position information of the multiple vertices based on the skeleton point position information to obtain the intermediate position information of the multiple vertices.
本实施例中,虚拟物体会随着目标对象运动而运动,即虚拟物体的多个顶点的位置信息随着骨骼点位置信息的变化而变化。In this embodiment, the virtual object moves along with the movement of the target object, that is, the position information of the multiple vertices of the virtual object changes along with the change of the position information of the skeleton points.
基于骨骼点位置信息对初始位置信息进行变换的方式可以是:针对虚拟物体的多个顶点中的每个顶点:获取虚拟物体的所述顶点关联的多个骨骼点以及多个骨骼点对虚拟物体的所述顶点的位置影响权重;基于多个骨骼点的位置信息和位置影响权重确定所述顶点的顶点变换信息;基于所述顶点的顶点变换信息对所述顶点的初始位置信息进行变换,获得所述顶点的中间位置信息。The method of transforming the initial position information based on the skeleton point position information can be: for each vertex of the multiple vertices of the virtual object: obtain the multiple skeleton points associated with the vertex of the virtual object and the position influence weights of the multiple skeleton points on the vertex of the virtual object; determine the vertex transformation information of the vertex based on the position information and the position influence weights of the multiple skeleton points; transform the initial position information of the vertex based on the vertex transformation information of the vertex to obtain the intermediate position information of the vertex.
位置影响权重可以理解为骨骼点的位置对虚拟物体的顶点的位置的影响程度。位置影响权重与骨骼点与虚拟物体的顶点间的距离相关,骨骼点与虚拟物体的顶点的距离越近,则骨骼点对该虚拟物体的顶点的位置影响权重越大,骨骼点与虚拟物体的顶点的距离越远,则骨骼点对该虚拟物体的顶点的位置影响权重越小,从而呈现虚拟物体随人体运动而运动的效果。示例性的,对于靠近膝盖骨骼点的虚拟物体的顶点,其受膝盖骨骼点的影响较大,当人体屈膝时,靠近膝盖骨骼点的虚拟物体的顶点会随着膝盖骨骼点有较大的位置变换。The position influence weight can be understood as the degree of influence of the position of the bone point on the position of the vertex of the virtual object. The position influence weight is related to the distance between the bone point and the vertex of the virtual object. The closer the distance between the bone point and the vertex of the virtual object, the greater the position influence weight of the bone point on the vertex of the virtual object. The farther the distance between the bone point and the vertex of the virtual object, the smaller the position influence weight of the bone point on the vertex of the virtual object, thereby presenting the effect that the virtual object moves with the movement of the human body. For example, a vertex of the virtual object close to the knee bone point is strongly affected by that bone point: when the human body bends the knee, the vertices of the virtual object near the knee bone point undergo a large position change along with the knee bone point.
本实施例中,获取虚拟物体的所述顶点关联的多个骨骼点以及多个骨骼点对虚拟物体的所述顶点的位置影响权重的过程可以是:首先将虚拟物体与目标对象的骨骼点进行绑定,根据绑定结果确定虚拟物体的每个顶点关联的多个骨骼点及多个骨骼点对虚拟物体的所述顶点的位置影响权重。本应用场景下,虚拟物体的每个顶点关联四个骨骼点。基于多个骨骼点的位置影响权重和位置信息确定所述顶点的顶点变换信息的方式可以是:基于位置影响权重对多个骨骼点的位置信息进行加权求和,获得所述顶点的顶点变换信息。In this embodiment, the process of obtaining multiple skeleton points associated with the vertices of the virtual object and the position influence weights of the multiple skeleton points on the vertices of the virtual object can be: first, the virtual object is bound to the skeleton points of the target object, and the multiple skeleton points associated with each vertex of the virtual object and the position influence weights of the multiple skeleton points on the vertices of the virtual object are determined according to the binding result. In this application scenario, each vertex of the virtual object is associated with four skeleton points. The method of determining the vertex transformation information of the vertex based on the position influence weights and position information of the multiple skeleton points can be: weighted summation of the position information of the multiple skeleton points based on the position influence weights to obtain the vertex transformation information of the vertex.
顶点变换信息可以由矩阵表征,基于所述顶点的顶点变换信息对所述顶点的初始位置信息进行变换的方式可以是:由所述顶点的顶点变换信息对应的矩阵点乘所述顶点的初始位置信息对应的向量,从而获得所述顶点的中间位置信息对应的向量。本实施例中,基于顶点变换信息对所述初始位置信息进行变换,使得虚拟物体随着目标对象的运动而运动,提高虚拟物体穿戴在目标对象上的逼真效果。The vertex transformation information may be represented by a matrix, and the initial position information of the vertex may be transformed based on the vertex transformation information of the vertex by multiplying the vector corresponding to the initial position information of the vertex by the matrix corresponding to the vertex transformation information of the vertex, thereby obtaining the vector corresponding to the intermediate position information of the vertex. In this embodiment, the initial position information is transformed based on the vertex transformation information, so that the virtual object moves with the movement of the target object, thereby improving the realistic effect of the virtual object being worn on the target object.
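The skinning step described above can be sketched as follows. This is an illustrative sketch, not the patent's actual implementation: the per-bone transforms are assumed to be 4x4 matrices, and each vertex is assumed to have four associated bones whose position influence weights sum to 1.

```python
import numpy as np

def skin_vertex(initial_pos, bone_matrices, weights):
    """Linear blend skinning: blend the per-bone 4x4 transforms by their
    position-influence weights, then apply the blended matrix to the
    vertex's initial position to get its intermediate position."""
    # Weighted sum of the bone transform matrices associated with the vertex.
    blended = sum(w * m for w, m in zip(weights, bone_matrices))
    # Homogeneous coordinates: append 1 to the 3D position before multiplying.
    p = np.append(np.asarray(initial_pos, dtype=float), 1.0)
    return (blended @ p)[:3]
```

With four identity transforms the vertex stays put; replacing one transform with a translation moves the vertex in proportion to that bone's weight, which is exactly the "vertices near the knee follow the knee" behaviour described above.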
S130,基于设定材质信息对虚拟物体的多个顶点中的至少部分顶点的中间位置信息进行更新,获得目标位置信息。S130, updating the intermediate position information of at least some of the vertices of the virtual object based on the set material information to obtain the target position information.
设定材质信息可以基于材质参数表征,材质参数不同对应的材质也不同。本实施例中,设定材质可以是布料材质。首先根据虚拟物体的多个顶点的贴图坐标将虚拟物体的多个顶点划分为至少两个部分,然后从至少两个部分中选取一个或者多个部分的顶点,以基于设定材质信息对选取出的至少部分顶点的中间位置信息进行更新。The material information may be set based on material parameter representation, and different material parameters correspond to different materials. In this embodiment, the set material may be a cloth material. First, the multiple vertices of the virtual object are divided into at least two parts according to the mapping coordinates of the multiple vertices of the virtual object, and then one or more vertices of the at least two parts are selected to update the intermediate position information of at least part of the selected vertices based on the set material information.
基于设定材质信息对所述虚拟物体的多个顶点中的至少部分顶点的中间位置信息进行更新的方式可以是:获取设定材质信息以及虚拟物体的多个顶点中的至少部分顶点的运动状态;根据所述至少部分顶点的中间位置信息、设定材质信息及所述至少部分顶点的运动状态对虚拟物体的所述至少部分顶点进行设定材质的解算,获得目标位置信息。The method of updating the intermediate position information of at least some of the multiple vertices of the virtual object based on the set material information may be: obtaining the set material information and the motion state of at least some of the multiple vertices of the virtual object; and performing the set-material solve on those vertices according to their intermediate position information, the set material information, and their motion state, to obtain the target position information.
可选的,目标位置信息包括虚拟物体的所述多个顶点的目标位置信息,对于多个顶点中的所述至少部分顶点的中间位置信息可通过对所述至少部分顶点进行设定材质的解算得到,对于多个顶点中的其余顶点,可直接将顶点的中间位置信息作为所述顶点的目标位置信息。设定材质信息可以是表征材质属性的参数,可以包括空气阻力、拉伸系数、压缩系数及弯曲系数等,设定材质信息可以根据实际需求进行设置。虚拟物体的顶点的运动状态可以是虚拟物体的顶点在当前图像相对于上一张图像的运动速度及运动方向等信息。本实施例中,可以调用预先开发的材质解算插件进行材质解算。例如,由材质解算插件对材质参数、至少部分顶点的中间位置信息及运动状态进行设定材质的解算,输出目标位置信息。本实施例中,根据中间位置信息、设定材质信息及运动状态对虚拟物体的至少部分顶点进行设定材质的解算,可以使得虚拟物体具有设定材质的动感及质感。Optionally, the target position information includes the target position information of the plurality of vertices of the virtual object. The intermediate position information of at least some of the vertices among the plurality of vertices can be obtained by performing a material setting calculation on the at least some of the vertices. For the remaining vertices among the plurality of vertices, the intermediate position information of the vertices can be directly used as the target position information of the vertices. The material setting information can be a parameter characterizing the material properties, which can include air resistance, stretch coefficient, compression coefficient, bending coefficient, etc. The material setting information can be set according to actual needs. The motion state of the vertices of the virtual object can be information such as the motion speed and motion direction of the vertices of the virtual object in the current image relative to the previous image. In this embodiment, a pre-developed material solution plug-in can be called to perform material solution. For example, the material solution plug-in performs a material setting calculation on the material parameters, the intermediate position information of at least some of the vertices, and the motion state, and outputs the target position information. 
In this embodiment, the set-material solve is performed on at least some of the vertices of the virtual object according to the intermediate position information, the set material information, and the motion state, so that the virtual object exhibits the dynamics and texture of the set material.
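As a rough illustration of what such a set-material (cloth) solve involves, the sketch below runs one Verlet-style step over a chain of vertices with an air-resistance term and a stretch coefficient. The parameter names are illustrative only and do not correspond to the actual material-solving plug-in's API:

```python
import numpy as np

def cloth_step(pos, prev_pos, rest_len, dt, air_drag=0.02, stretch=0.9, gravity=-9.8):
    """One illustrative cloth step: Verlet integration damped by air drag,
    followed by a stretch-constraint pass between neighbouring vertices.
    pos / prev_pos are (N, 3) arrays of current and previous positions."""
    velocity = (pos - prev_pos) * (1.0 - air_drag)   # motion state + air resistance
    new_pos = pos + velocity
    new_pos[:, 1] += gravity * dt * dt               # gravity along the y axis
    for i in range(len(new_pos) - 1):                # pull neighbours toward rest_len
        d = new_pos[i + 1] - new_pos[i]
        dist = np.linalg.norm(d)
        if dist > 1e-9:
            corr = (dist - rest_len) * stretch * 0.5 * d / dist
            new_pos[i] += corr
            new_pos[i + 1] -= corr
    return new_pos, pos  # new positions, and the state to reuse as prev_pos next frame
```

The motion state mentioned in the text (velocity relative to the previous image) is exactly the `pos - prev_pos` term, and the stretch/compression/bend coefficients of a real solver play the role of `stretch` here.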
可选的,根据所述至少部分顶点的中间位置信息、设定材质信息及所述至少部分顶点的运动状态对虚拟物体的所述至少部分顶点进行设定材质的解算的方式可以是:若当前场景中设置有虚拟物体支撑体;根据虚拟物体支撑体获取支撑信息;根据所述至少部分顶点的中间位置信息、材质参数、所述至少部分顶点的运动状态及支撑信息对虚拟物体的所述至少部分顶点进行设定材质的解算。Optionally, the method of solving the material setting for at least part of the vertices of the virtual object according to the intermediate position information of at least part of the vertices, the set material information and the motion state of at least part of the vertices may be: if a virtual object support body is set in the current scene; obtain support information according to the virtual object support body; solve the material setting for at least part of the vertices of the virtual object according to the intermediate position information of at least part of the vertices, material parameters, the motion state of at least part of the vertices and the support information.
虚拟物体支撑体可以依据碰撞原理设置,即可以是一个碰撞体,支撑信息可以理解为碰撞信息。本实施例中,设置有虚拟物体支撑体的区域,虚拟物体会与虚拟物体支撑体发生碰撞事件,即虚拟物体无法穿过虚拟物体支撑体,可以使得虚拟物体呈现一定的形态。本实施例中,可以通过材质解算插件设置虚拟物体支撑体,材质解算插件根据设置的虚拟物体支撑体获取支撑信息。可以由材质解算插件对材质参数、支撑信息、至少部分顶点的中间位置信息及运动状态进行设定材质的解算,从而获得解算后的目标位置信息。本实施例中,在当前场景中设置虚拟物体支撑体,可以使虚拟物体呈现一定的形态,从而满足用户的个性化需求。示例性的,图2是本实施例中渲染出的虚拟物体的示意图,如图2所示,虚拟物体为裙子,由于裙子腰部以下的顶点进行了设定材质解算,因此,裙子腰部以下具有纱布的动感及质感,且在裙摆下面设置有虚拟物体支撑体,因此裙子下摆呈现膨起的效果。The virtual object support can be set according to the collision principle, that is, it can be a collision body, and the support information can be understood as collision information. In this embodiment, in the area where the virtual object support is set, the virtual object will collide with the virtual object support, that is, the virtual object cannot pass through the virtual object support, which can make the virtual object present a certain shape. In this embodiment, the virtual object support can be set by a material solution plug-in, and the material solution plug-in obtains support information according to the set virtual object support. The material solution plug-in can perform the set-material solve on the material parameters, the support information, and the intermediate position information and motion state of at least some of the vertices, so as to obtain the solved target position information. In this embodiment, setting the virtual object support in the current scene can make the virtual object present a certain shape, so as to meet the personalized needs of the user. Exemplarily, FIG2 is a schematic diagram of the virtual object rendered in this embodiment. As shown in FIG2, the virtual object is a skirt. Since the set-material solve was performed on the vertices below the waist of the skirt, the part of the skirt below the waist has the dynamics and texture of gauze, and a virtual object support is set under the hem, so the hem of the skirt presents a bulging effect.
S140,基于目标位置信息对虚拟物体进行渲染,获得目标图像。S140, rendering the virtual object based on the target position information to obtain a target image.
目标图像中的目标对象穿戴虚拟物体。 The target object in the target image wears the virtual object.
本实施例中,基于目标位置信息将虚拟物体的多个顶点渲染至画面中对应的位置,从而获得目标图像。In this embodiment, multiple vertices of the virtual object are rendered to corresponding positions in the picture based on the target position information, thereby obtaining a target image.
基于目标位置信息对虚拟物体进行渲染,获得目标图像的方式可以是:基于虚拟物体的每个顶点的贴图坐标从彩色噪声图中采样预设颜色信息;根据当前图像的时间信息对预设颜色信息进行偏移处理,得到所述顶点对应的偏移处理后的预设颜色信息;根据所述顶点对应的偏移处理后的预设颜色信息和目标位置信息对虚拟物体的所述顶点的初始颜色进行调整,获得所述顶点的目标颜色;基于目标位置信息和每个顶点的目标颜色对虚拟物体进行渲染,获得目标图像。The virtual object is rendered based on the target position information, and the method for obtaining the target image can be: sampling preset color information from the color noise map based on the mapping coordinates of each vertex of the virtual object; offsetting the preset color information according to the time information of the current image to obtain the offset-processed preset color information corresponding to the vertex; adjusting the initial color of the vertex of the virtual object according to the offset-processed preset color information corresponding to the vertex and the target position information to obtain the target color of the vertex; rendering the virtual object based on the target position information and the target color of each vertex to obtain the target image.
多个顶点的贴图坐标可以是虚拟物体的表面贴图中多个顶点的坐标。彩色噪声图可以是随机生成的噪声图。示例性的,图3是本实施例中的一种彩色噪声图的示例图,图3为灰度处理后的噪声图,原图为彩色图。本实施例中彩色噪声图和虚拟物体的表面贴图尺寸相同,即两图之间的像素点一一对应,从而可以根据贴图坐标从彩色噪声图中采样预设颜色信息。The mapping coordinates of the multiple vertices may be the coordinates of the multiple vertices in the surface map of the virtual object. The color noise map may be a randomly generated noise map. For example, FIG. 3 is an example of a color noise map in this embodiment, FIG. 3 is a noise map after grayscale processing, and the original image is a color map. In this embodiment, the color noise map and the surface map of the virtual object have the same size, that is, the pixels between the two images correspond one to one, so that the preset color information can be sampled from the color noise map according to the mapping coordinates.
当前图像的时间信息可以理解为当前图像在视频中的时间戳信息。颜色信息可以由三个颜色通道(RGB)值表征。根据当前图像的时间信息对预设颜色信息进行偏移处理的过程可以是:首先对时间信息进行线性变换,获得偏移量;然后将预设颜色信息的RGB信息转换至HSV(Hue,Saturation,Value)色彩空间中,然后基于偏移量对HSV信息中的三个分量分别进行偏移处理,获得偏移处理后的HSV信息,再将HSV信息转换回RGB信息。对时间信息进行线性变换的方式可以是:将时间信息与设定值相乘,获得偏移量。基于偏移量对HSV信息中的三个分量分别进行偏移处理的公式可以表示为:H1=H0,S1=S0+t/360,V1=V0+t/360*0.1,其中,t为偏移量,H0为偏移前的H分量,H1为偏移后的H分量,S0为偏移前的S分量,S1为偏移后的S分量,V0为偏移前的V分量,V1为偏移后的V分量。The time information of the current image can be understood as the timestamp information of the current image in the video. The color information can be represented by three color channel (RGB) values. The process of offsetting the preset color information according to the time information of the current image can be: firstly, linearly transform the time information to obtain the offset; then convert the RGB information of the preset color information into the HSV (Hue, Saturation, Value) color space, and then offset the three components in the HSV information based on the offset to obtain the HSV information after the offset, and then convert the HSV information back to RGB information. The linear transformation of the time information can be: multiply the time information by the set value to obtain the offset. The formula for offsetting the three components in the HSV information based on the offset can be expressed as: H1=H0, S1=S0+t/360, V1=V0+t/360*0.1, where t is the offset obtained from the time information, H0 and H1 are the H components before and after the offset, S0 is the S component before the offset, S1 is the S component after the offset, V0 is the V component before the offset, and V1 is the V component after the offset.
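The time-driven colour offset above can be sketched with Python's standard colorsys module. Clamping the shifted S and V components to [0, 1] is our assumption, since the text does not say how out-of-range components are handled:

```python
import colorsys

def offset_color(rgb, time_s, scale=1.0):
    """Offset a sampled preset colour by the frame timestamp, following
    H1 = H0, S1 = S0 + t/360, V1 = V0 + t/360 * 0.1, with t = time_s * scale."""
    t = time_s * scale                    # linear transform of the time information
    h, s, v = colorsys.rgb_to_hsv(*rgb)   # RGB and HSV components in [0, 1]
    s = min(s + t / 360.0, 1.0)           # hue is left unchanged (H1 = H0)
    v = min(v + t / 360.0 * 0.1, 1.0)
    return colorsys.hsv_to_rgb(h, s, v)
```

At t = 0 the colour is returned unchanged; as the video timestamp grows the saturation and value drift, producing the time-varying sample used in the shimmer effect.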
本实施例中,根据所述顶点对应的偏移处理后的预设颜色信息和目标位置信息对虚拟物体的所述顶点的初始颜色进行调整,获得所述顶点的目标颜色的方式可以是:根据目标位置信息确定虚拟物体的每个顶点的法线方向及视角方向;根据所述顶点对应的偏移处理后的预设颜色信息确定所述顶点对应的颜色变换信息;根据所述顶点对应的颜色变换信息、所述顶点的法线方向及所述顶点的视角方向确定所述顶点的颜色调整量;将所述顶点的颜色调整量与虚拟物体的所述顶点的初始颜色进行累加,获得所述顶点的目标颜色。In this embodiment, the initial color of the vertex of the virtual object is adjusted according to the preset color information and target position information after the offset processing corresponding to the vertex, and the target color of the vertex can be obtained by: determining the normal direction and viewing direction of each vertex of the virtual object according to the target position information; determining the color transformation information corresponding to the vertex according to the preset color information after the offset processing corresponding to the vertex; determining the color adjustment amount of the vertex according to the color transformation information corresponding to the vertex, the normal direction of the vertex and the viewing direction of the vertex; and accumulating the color adjustment amount of the vertex with the initial color of the vertex of the virtual object to obtain the target color of the vertex.
虚拟物体的顶点的法线方向可以是虚拟物体的顶点所在切面的法线方向,虚拟物体的顶点的视角方向可以是相机光心指向虚拟物体的顶点的方向。颜色变换信息可以由颜色变换矩阵表征。根据所述顶点对应的偏移处理后的预设颜色信息确定所述顶点对应的颜色变换信息的过程可以是:将所述顶点对应的偏移处理后的预设颜色信息的三个通道颜色值转换为角度信息,然后计算三个角度信息的正弦和余弦,最后基于三个角度信息的正弦和余弦构建所述顶点对应的颜色变换矩阵。示例性的,将三个通道颜色值转换为角度信息的公式可以表示为:角度信息=(RGB-a)*b*π,其中,a和b为设定值,例如:a可以取0.5,b可以取2。假设X=(R-a)*b*π,Y=(G-a)*b*π,Z=(B-a)*b*π,则颜色变换矩阵可以表示为:[sinY*cosZ,cosY*sinZ*sinX-sinY*cosX,cosY*sinZ*sinX+sinY*cosX,sinY*cosZ,sinY*cosX+sinY*sinZ*sinX,cosY*sinZ*sinX-sinX*cosZ,-sinZ,cosZ*sinX,sinZ*cosX]。The normal direction of the vertex of the virtual object may be the normal direction of the tangent plane where the vertex of the virtual object is located, and the viewing direction of the vertex of the virtual object may be the direction in which the optical center of the camera points to the vertex of the virtual object. The color transformation information may be represented by a color transformation matrix. The process of determining the color transformation information corresponding to the vertex based on the offset-processed preset color information corresponding to the vertex can be: converting the three channel color values of the offset-processed preset color information corresponding to the vertex into angle information, then calculating the sine and cosine of the three angle values, and finally constructing the color transformation matrix corresponding to the vertex based on the sine and cosine of the three angle values. Exemplarily, the formula for converting the three channel color values into angle information can be expressed as: angle information = (RGB-a)*b*π, where a and b are set values, for example: a can be 0.5 and b can be 2. Assuming X=(R-a)*b*π, Y=(G-a)*b*π, Z=(B-a)*b*π, the color transformation matrix can be expressed as: [sinY*cosZ, cosY*sinZ*sinX-sinY*cosX, cosY*sinZ*sinX+sinY*cosX, sinY*cosZ, sinY*cosX+sinY*sinZ*sinX, cosY*sinZ*sinX-sinX*cosZ, -sinZ, cosZ*sinX, sinZ*cosX].
颜色调整量可以是亮度值。本实施例中,根据所述顶点对应的颜色变换信息、所述顶点的法线方向及所述顶点的视角方向确定所述顶点的颜色调整量的过程可以是:将颜色变换信息对应的颜色变换矩阵依次与法线方向对应的向量及视角方向对应的向量进行点乘,获得颜色调整量。最后将每个顶点的颜色调整量与虚拟物体的所述顶点的初始颜色进行累加,获得所述顶点的目标颜色。本实施例中,根据基于时间信息偏移处理后的预设颜色信息、法线方向及视角方向对虚拟物体的顶点的颜色进行调整,可以生成虚拟物体随目标对象运动而动态变化的闪烁效果。示例性的,如图2所示,“裙子”上呈现闪烁效果。The color adjustment amount may be a brightness value. In this embodiment, the process of determining the color adjustment amount of the vertex according to the color transformation information corresponding to the vertex, the normal direction of the vertex, and the viewing direction of the vertex may be: performing dot multiplication of the color transformation matrix corresponding to the color transformation information with the vector corresponding to the normal direction and the vector corresponding to the viewing direction in turn to obtain the color adjustment amount. Finally, the color adjustment amount of each vertex is accumulated with the initial color of the vertex of the virtual object to obtain the target color of the vertex. In this embodiment, the color of the vertex of the virtual object is adjusted according to the preset color information, normal direction, and viewing direction after the time information offset processing, so as to generate a flickering effect in which the virtual object changes dynamically with the movement of the target object. Exemplarily, as shown in FIG2 , a flickering effect is presented on the "skirt".
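The colour-adjustment step can be sketched as below. The rotation matrix built here from the three channel angles is one plausible reading of the matrix listed in the text (which appears to contain transcription errors), so treat it as illustrative rather than the patent's exact matrix; a = 0.5 and b = 2 follow the example values given above:

```python
import numpy as np

def color_adjustment(noise_rgb, normal, view, a=0.5, b=2.0):
    """Per-vertex brightness adjustment: build a colour-transform (rotation)
    matrix from the offset noise colour, then dot it with the vertex's
    normal direction and view direction in turn."""
    # angle = (channel - a) * b * pi for each of the R, G, B channels
    x, y, z = ((c - a) * b * np.pi for c in noise_rgb)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(x), -np.sin(x)],
                   [0, np.sin(x), np.cos(x)]])
    ry = np.array([[np.cos(y), 0, np.sin(y)],
                   [0, 1, 0],
                   [-np.sin(y), 0, np.cos(y)]])
    rz = np.array([[np.cos(z), -np.sin(z), 0],
                   [np.sin(z), np.cos(z), 0],
                   [0, 0, 1]])
    m = rz @ ry @ rx                       # combined colour-transform matrix
    # Matrix . normal, then dotted with the view direction -> scalar brightness.
    return float((m @ np.asarray(normal)) @ np.asarray(view))
```

The returned scalar is the brightness offset that gets accumulated onto the vertex's initial colour; because the noise sample drifts with the timestamp, the offset changes frame to frame, giving the shimmering effect.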
本实施例中,基于目标位置信息对虚拟物体进行渲染,获得目标图像的方式可以是:根据虚拟3D目标对象模型和虚拟物体模型对当前图像进行采样,获得目标对象采样图;将目标对象采样图转换为第一掩膜图;基于第一掩膜图对当前图像和虚拟物体图进行融合,获得目标图像;将目标图像渲染至当前画面。In this embodiment, the virtual object is rendered based on the target position information, and the target image is obtained by: sampling the current image according to the virtual 3D target object model and the virtual object model to obtain a target object sampling map; converting the target object sampling map into a first mask map; fusing the current image and the virtual object map based on the first mask map to obtain a target image; and rendering the target image to the current screen.
虚拟3D目标对象模型可以是将标准目标对象模型与当前图像中的目标对象的姿态绑定后的虚拟3D模型。虚拟物体模型可以理解为虚拟物体对应的3D模型。根据虚拟3D目标对象模型和虚拟物体模型对当前图像进行采样的过程可以是:首先根据虚拟3D目标对象模型和虚拟物体模型确定目标对象和虚拟物体间的遮挡关系,基于该遮挡关系从当前图像中采样未被遮挡的目标对象的像素点,从而获得目标对象采样图。示例性的,图4a是本实施例中的目标对象采样图的示例图,如图4a所示,图4a中的非黑色区域为未被遮挡的目标对象的像素点。The virtual 3D target object model can be a virtual 3D model obtained by binding the standard target object model to the posture of the target object in the current image. The virtual object model can be understood as a 3D model corresponding to the virtual object. The process of sampling the current image according to the virtual 3D target object model and the virtual object model can be: first, the occlusion relationship between the target object and the virtual object is determined according to the virtual 3D target object model and the virtual object model, and the pixel points of the target object that are not occluded are sampled from the current image based on the occlusion relationship, thereby obtaining a target object sampling map. Exemplarily, Figure 4a is an example diagram of the target object sampling map in this embodiment. As shown in Figure 4a, the non-black area in Figure 4a is the pixel points of the target object that are not occluded.
将目标对象采样图转换为第一掩膜图的方式可以是:对目标对象采样图中的目标对象进行识别,获得第一掩膜图。示例性的,图4b为第一掩膜图的示例图,如图4b所示,图4b中的白色区域为目标对象区域。The method of converting the target object sampling map into the first mask map may be: identifying the target object in the target object sampling map to obtain the first mask map. Exemplarily, FIG4b is an example of the first mask map, as shown in FIG4b, the white area in FIG4b is the target object area.
虚拟物体图可以理解为根据目标位置信息将虚拟物体投影至屏幕中获得的二维(2 Dimensional,2D)图。示例性的,图4c为本实施例中虚拟物体的示例图,如图4c所示,图4c中的牛仔衣为虚拟物体。The virtual object map can be understood as a two-dimensional (2 Dimensional, 2D) map obtained by projecting the virtual object onto the screen according to the target position information. For example, FIG4c is an example of a virtual object in this embodiment. As shown in FIG4c, the denim jacket in FIG4c is a virtual object.
基于第一掩膜图对当前图像和虚拟物体图进行融合的方式可以是:根据第一掩膜图中像素点的像素值确定加权权重,基于该加权权重对当前图像和虚拟物体图进行融合。The method of fusing the current image and the virtual object image based on the first mask image may be: determining a weighted weight according to pixel values of pixels in the first mask image, and fusing the current image and the virtual object image based on the weighted weight.
示例性的,假设第一掩膜图中像素点的像素值为a,则将a作为当前图像的加权权重,将1-a作为虚拟物体图的加权权重,则融合的公式可以表示为:a*当前图像+(1-a)*虚拟物体图。示例性的,图4d为本实施例中目标图像的示例图。Exemplarily, assuming that the pixel value of the pixel point in the first mask image is a, a is used as the weighted weight of the current image, and 1-a is used as the weighted weight of the virtual object image; the fusion formula can then be expressed as: a*current image+(1-a)*virtual object image. Exemplarily, FIG4d is an example of a target image in this embodiment.
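The fusion formula a*current + (1-a)*virtual transcribes directly to array code; here images and mask are assumed to be float arrays normalised to [0, 1]:

```python
import numpy as np

def fuse(current, virtual, mask):
    """Per-pixel fusion: the mask value a weights the current image and
    (1 - a) weights the virtual-object image."""
    # Broadcast an HxW mask over the colour channels of HxWx3 images.
    a = mask[..., None] if mask.ndim == current.ndim - 1 else mask
    return a * current + (1.0 - a) * virtual
```

Where the mask is 1 (target-object pixels) the camera image wins, where it is 0 the rendered virtual-object image wins, and fractional mask values mix the two.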
In this embodiment, fusing the current image and the virtual object map based on the first mask map derived from the target object sampling map allows the virtual object to be accurately worn on the target object.
Optionally, fusing the current image and the virtual object map based on the first mask map to obtain the target image may proceed as follows: the first mask map is blurred, and the current image and the virtual object map are fused based on the blurred first mask map to obtain the target image.
Any blurring algorithm may be used; it is not limited here. The fusion based on the blurred first mask map may be: a blending weight is determined from the pixel values of the blurred first mask map, and the current image and the virtual object map are fused using that weight.
Exemplarily, if a pixel of the blurred first mask map has value b, then b is used as the weight of the current image and 1-b as the weight of the virtual object map, so the fusion formula can be expressed as: b*current image + (1-b)*virtual object map.
In this embodiment, fusing the current image and the virtual object map based on the blurred first mask map yields a smooth transition between the target object and the virtual object.
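Since the text leaves the blurring algorithm open, a naive box blur serves to illustrate how a hard 0/1 mask edge becomes a gradient before fusion. The kernel size `k` and edge padding are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def box_blur(mask, k=3):
    """Naive k-by-k box blur of a 2D mask; stands in for whatever blur
    algorithm is chosen. Edge pixels are handled by replicating the border."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask, dtype=np.float32)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

mask = np.array([[0.0, 0.0, 1.0, 1.0]], dtype=np.float32)
soft = box_blur(mask)  # the hard 0->1 edge becomes a 0, 1/3, 2/3, 1 ramp
```

Fusing with `soft` instead of `mask` makes the boundary pixels mix both images, which is exactly the smooth transition described above.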
Rendering the virtual object based on the target position information to obtain the target image may also proceed as follows: the virtual 3D target object model corresponding to the target object in the current image is obtained; a second mask map is determined from the normal direction and the viewing direction of the virtual 3D target object model; and the current image and the virtual object map are fused based on the second mask map to obtain the target image.
In this embodiment, the standard target object model is bound to the pose of the target object in the current image to obtain the virtual 3D target object model. The normal direction of the virtual 3D target object model can be understood as the normal of the tangent plane at each 3D point of the model, and the viewing direction can be the direction from the camera's optical center toward each 3D point of the model. The second mask map may be determined by taking the dot product of the normal direction and the viewing direction and using the result as the pixel value of the second mask map. The fusion based on the second mask map may be: a blending weight is determined from the pixel values of the second mask map, and the current image and the virtual object map are fused using that weight. Exemplarily, if a pixel of the second mask map has value c, then c is used as the weight of the current image and 1-c as the weight of the virtual object map, so the fusion formula can be expressed as: c*current image + (1-c)*virtual object map.
In this embodiment, determining the second mask map from the normal direction and the viewing direction of the virtual 3D target object model, and fusing the current image and the virtual object map on that basis, improves fusion efficiency.
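The per-pixel value of the second mask map can be sketched as below. The text only specifies a dot product; the clamping to [0, 1] and the assumption that both vectors are normalized are additions of this sketch:

```python
import numpy as np

def second_mask_value(normal, view_dir):
    """Pixel value of the second mask map: dot product of the surface normal
    and the camera-centre-to-3D-point viewing direction. Both vectors are
    assumed unit-length; the clamp keeps the value usable as a blend weight."""
    n = np.asarray(normal, dtype=np.float32)
    v = np.asarray(view_dir, dtype=np.float32)
    return float(np.clip(np.dot(n, v), 0.0, 1.0))

c_front = second_mask_value([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])  # surface facing camera
c_graze = second_mask_value([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])  # grazing angle
```

Surfaces facing the camera get c near 1 (current image dominates), while grazing surfaces get c near 0 (virtual object map dominates), matching the c*current + (1-c)*virtual formula above.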
Optionally, before obtaining the skeleton point position information of the target object in the current image, the method further includes: gradually rendering and displaying the virtual object in a set order within a set duration according to the initial position information and/or texture coordinates of the multiple vertices of the virtual object.
The set order may be top to bottom, bottom to top, left to right, or right to left. The set duration may be any value between 1 and 2 seconds. The gradual rendering may be iterative rendering, region-by-region rendering, or the like. In this application scenario, when the user enters the clothing try-on tool and a person is detected, the clothing to be tried on begins to be shown in this gradual manner.
In this embodiment, iteratively rendering and displaying the virtual object in the set order within the set duration according to the initial position information and/or texture coordinates of its multiple vertices may proceed as follows: the vertices of the virtual object to be displayed at each moment within the set duration are determined from the initial position information and/or texture coordinates, and those vertices are displayed in the frame corresponding to that moment, thereby obtaining a video in which the virtual object appears gradually. Exemplarily, FIG. 5a and FIG. 5b show the display of a virtual object: in FIG. 5a only part of the clothing is shown at that moment, while in FIG. 5b the clothing is fully displayed. In this embodiment, displaying the virtual object iteratively improves the display effect.
The iterative rendering may also proceed as follows: for the current moment within the set duration, a control parameter is determined from the current moment; a target reference quantity is determined for each vertex of the virtual object from the vertex's initial position information and/or texture coordinates; the vertices of the virtual object to be displayed at the current moment are determined from each vertex's target reference quantity and the control parameter; and those vertices are rendered.
The control parameter may be determined from the current moment by obtaining the progress of the current moment within the set duration and determining the control parameter from that progress. Exemplarily, if the set duration is T and the time elapsed between the start moment and the current moment is t, then the progress is t/T, which can be used as the control parameter.
The target reference quantity determines whether its corresponding vertex of the virtual object is rendered. It may be determined from each vertex's initial position information and/or texture coordinates as follows: for each of the multiple vertices of the virtual object, a first reference quantity is determined from the vertex's initial position information; a grayscale value is sampled from a set noise map according to the vertex's texture coordinates and taken as a second reference quantity; and the target reference quantity of the vertex is determined from the first reference quantity and/or the second reference quantity.
The initial position information of a vertex of the virtual object can be understood as the vertex's initial three-dimensional coordinates in the world coordinate system, i.e., the initial position information contains the three coordinate components x, y, and z. The first reference quantity may be determined from the initial position information as follows: first, the distance from the origin of the vertex's projection onto the xz plane is computed; this distance is then transformed linearly; an exponential computation is applied to the y component; and finally the linear result and the exponential result are summed to obtain the first reference quantity. The projected distance can be written as length(xz), and the exponential computation on the y component as pow(y, a), meaning y raised to the power a. The first reference quantity can then be computed as: first reference quantity = (1 - b*length(xz))*c + pow(y*d, a)*e, where a, b, c, d, and e are set values. Optionally, after the first reference quantity is obtained, its value may be clamped to a set range, and the exponential computation is then performed on the clamped first reference quantity. The set range may be 0 to 1.5.
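The first-reference computation can be sketched directly from the formula above. The constant defaults used here are arbitrary placeholders; the patent only states that a, b, c, d, and e are "set values":

```python
import math

def first_reference(x, y, z, a=2.0, b=0.5, c=1.0, d=1.0, e=1.0):
    """First reference quantity for a vertex at world position (x, y, z):
    (1 - b*length(xz))*c + pow(y*d, a)*e, where length(xz) is the distance
    of the vertex's xz-plane projection from the origin. Constants a..e are
    placeholder 'set values'; note pow with a negative base and non-integer
    exponent would need separate handling."""
    length_xz = math.hypot(x, z)                  # length(xz)
    linear = (1.0 - b * length_xz) * c            # linear transform of distance
    exponential = math.pow(y * d, a) * e          # pow(y*d, a) * e
    return linear + exponential

r = first_reference(0.0, 1.0, 0.0)  # length(xz)=0 -> 1.0 + 1.0 = 2.0
```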
The texture coordinates can be understood as the coordinates of the surface texture corresponding to the virtual object. The surface texture has the same size as the set noise map, i.e., the pixels of the two images correspond one to one, so the grayscale value corresponding to each pixel of the surface texture can be sampled from the set noise map according to the texture coordinates and taken as the second reference quantity. Optionally, the second reference quantity may further be multiplied by a set influence factor to obtain the final second reference quantity.
Determining the target reference quantity from the first reference quantity and/or the second reference quantity can be understood as: using the first reference quantity as the target reference quantity, using the second reference quantity as the target reference quantity, or using a weighted sum of the first and second reference quantities as the target reference quantity. In this embodiment, determining the target reference quantity from the initial position information and/or texture coordinates improves calculation accuracy.
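The three options above can be expressed in one small helper. The weights `w1`/`w2` are illustrative; the text does not specify how the weighted sum is parameterized:

```python
def target_reference(first_ref, second_ref=None, w1=0.5, w2=0.5):
    """Target reference quantity: the first reference alone, the second
    alone, or a weighted sum of both. Weights are placeholder values."""
    if second_ref is None:
        return first_ref            # option 1: first reference only
    return w1 * first_ref + w2 * second_ref  # option 3: weighted sum

t = target_reference(0.8, 0.4)  # 0.5*0.8 + 0.5*0.4 = 0.6
```

Option 2 (second reference only) is simply `target_reference(second_ref)`.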
The vertices of the virtual object to be displayed at the current moment may be determined from the target reference quantity and the control parameter as follows: if a vertex's target reference quantity is less than the control parameter, the vertex is not a vertex to be displayed; if its target reference quantity is greater than or equal to the control parameter, the vertex is a vertex to be displayed.
The vertices of the virtual object to be displayed at the current moment may be rendered as follows: the effect color of each vertex is determined from the target reference quantity and the control parameter, and the vertices to be displayed at the current moment are rendered with that effect color.
In this embodiment, if the target reference quantity is less than the control parameter, the corresponding vertex of the virtual object does not need to be rendered at the current moment and is therefore ignored. If the target reference quantity is greater than or equal to the control parameter, the corresponding vertex is to be rendered at the current moment; the effect color of the vertex is then determined, and the virtual object at the current moment is rendered with it. By comparing the target reference quantity with the control parameter to determine the vertices to render at each moment, the virtual object is made to appear gradually.
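The per-frame visibility test can be sketched as follows; the function name and list-based representation of vertices are inventions of this sketch:

```python
def visible_vertices(target_refs, t, T):
    """Indices of vertices whose target reference quantity is >= the
    control parameter (progress t/T) and are thus rendered this frame;
    the rest are ignored, producing the gradual-reveal effect."""
    u = t / T  # control parameter = progress within the set duration
    return [i for i, ref in enumerate(target_refs) if ref >= u]

refs = [0.2, 0.5, 0.9]
ids = visible_vertices(refs, t=0.5, T=1.0)  # halfway: vertices 1 and 2 show
```

As t grows toward T, the control parameter rises and fewer vertices satisfy the threshold, so which vertices appear at which moment is entirely encoded in their target reference quantities.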
The effect color of a vertex of the virtual object may be determined from the target reference quantity and the control parameter as follows: an effect control range is obtained; a smooth transition is applied to the target reference quantity based on the effect control range and the control parameter to obtain an effect adjustment amount; and a set color is adjusted by the effect adjustment amount to obtain the effect color.
The effect control range consists of an effect control upper limit and an effect control lower limit. The effect control range is set by the user and is not limited here.
The smooth transition may be performed with a set smooth transition function, for example, the smoothstep function.
In this embodiment, the smooth transition applied to the target reference quantity based on the effect control range and the control parameter may proceed as follows: first, a first smooth transition is applied to the target reference quantity based on the control parameter and the effect control upper limit to obtain a first sub-adjustment amount; then, a second smooth transition is applied to the target reference quantity based on the control parameter, the effect control upper limit, and the effect control lower limit to obtain a second sub-adjustment amount; finally, the first and second sub-adjustment amounts are summed to obtain the effect adjustment amount. Exemplarily, the first smooth transition can be expressed as smoothstep(U, U+a*D1, A), and the second as smoothstep(U-a*D2, U+a*D1, A), where U is the control parameter, D1 the effect control upper limit, D2 the effect control lower limit, A the target reference quantity, and a a set value, for example 0.1.
In this embodiment, the set color may be adjusted by multiplying the effect adjustment amount by the set color to obtain the effect color. Applying the smooth transition to the target reference quantity based on the effect control range and the control parameter produces an edge-glow effect.
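The two smoothstep passes and the color multiply can be sketched as below. The specific parameter values in the call are arbitrary; `smoothstep` follows the standard GLSL definition the text invokes:

```python
def smoothstep(edge0, edge1, x):
    """Standard GLSL-style smoothstep: clamp then Hermite interpolation."""
    if edge0 == edge1:
        return 0.0 if x < edge0 else 1.0
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def effect_color(U, D1, D2, A, base_color, a=0.1):
    """Edge-glow adjustment: the two smoothstep passes over the target
    reference quantity A are summed, then multiplied into the set color.
    U = control parameter, D1/D2 = effect control upper/lower limits."""
    adj = smoothstep(U, U + a * D1, A) + smoothstep(U - a * D2, U + a * D1, A)
    return [adj * ch for ch in base_color]

color = effect_color(U=0.5, D1=1.0, D2=1.0, A=0.8, base_color=[1.0, 0.5, 0.0])
```

Note that the summed adjustment can exceed 1 for vertices well past the reveal front, brightening the set color there; only vertices near the control-parameter threshold fall in the smoothstep ramps, which is what localizes the glow to the reveal edge.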
In the technical solution of the embodiments of the present disclosure, the skeleton point position information of the target object in the current image and the initial position information of the multiple vertices of the virtual object are obtained; the initial position information of the multiple vertices of the virtual object is transformed based on the skeleton point position information to obtain intermediate position information of the multiple vertices; the intermediate position information of at least some vertices of the virtual object is updated based on set material information to obtain target position information; and the virtual object is rendered based on the target position information to obtain the target image, in which the target object wears the virtual object. By transforming the vertices' initial position information based on the target object's skeleton point position information and updating the intermediate position information of at least some vertices based on the set material information, the virtual object rendering method provided by the embodiments of the present disclosure allows the virtual object being tried on to closely follow the movement of the target object and gives the virtual object the dynamics and texture of a real object, thereby improving the rendering effect.
FIG. 6 is a schematic structural diagram of a virtual object rendering apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the apparatus includes: an acquisition module 610, configured to acquire the skeleton point position information of the target object in the current image and the initial position information of the multiple vertices of the virtual object, where the multiple vertices of the virtual object are the multiple vertices of the 3D model corresponding to the virtual object; a position information transformation module 620, configured to transform the initial position information of the multiple vertices based on the skeleton point position information to obtain intermediate position information of the multiple vertices; a target position information acquisition module 630, configured to update the intermediate position information of at least some of the multiple vertices of the virtual object based on set material information to obtain target position information; and a rendering module 640, configured to render the virtual object based on the target position information to obtain a target image, where the target object in the target image wears the virtual object.
Optionally, the position information transformation module 620 is configured to: obtain the multiple skeleton points associated with each vertex of the virtual object and the position influence weights of those skeleton points on the vertex; determine the vertex transformation information of each vertex based on the position information of the multiple skeleton points and the position influence weights; and transform the initial position information of each vertex based on its vertex transformation information to obtain the intermediate position information of the vertex.
Optionally, the target position information acquisition module 630 is configured to: obtain the set material information and the motion states of at least some vertices of the virtual object; and perform a set-material solve on those vertices of the virtual object according to their intermediate position information, the set material information, and their motion states to obtain the target position information.
Optionally, the target position information acquisition module 630 is configured to perform the set-material solve on the at least some vertices of the virtual object as follows: if a virtual object support body is set in the current scene, support information is obtained from the virtual object support body; the set-material solve is then performed on the at least some vertices of the virtual object according to their intermediate position information, the material parameters, their motion states, and the support information.
Optionally, the rendering module 640 is configured to: sample preset color information from a color noise map based on the texture coordinates of each vertex of the virtual object; offset the preset color information according to the time information of the current image to obtain the offset preset color information corresponding to each vertex; adjust the initial color of each vertex of the virtual object according to its offset preset color information and the target position information to obtain the target color of the vertex; and render the virtual object based on the target position information and the target color of each vertex to obtain the target image.
Optionally, the rendering module 640 is configured to adjust the initial color of each vertex of the virtual object according to its offset preset color information and the target position information as follows: the normal direction and the viewing direction of each vertex of the virtual object are determined from the target position information; the color transformation information corresponding to each vertex is determined from its offset preset color information; the color adjustment amount of each vertex is determined from its color transformation information, normal direction, and viewing direction; and the color adjustment amount of each vertex is added to the vertex's initial color to obtain the vertex's target color.
Optionally, the rendering module 640 is configured to: sample the current image according to the virtual 3D target object model and the virtual object model to obtain a target object sampling map; convert the target object sampling map into a first mask map; fuse the current image and the virtual object map based on the first mask map to obtain a target image; and render the target image to the current frame.
Optionally, the rendering module 640 is configured to fuse the current image and the virtual object map based on the first mask map to obtain the target image as follows: the first mask map is blurred, and the current image and the virtual object map are fused based on the blurred first mask map to obtain the target image.
Optionally, the rendering module 640 is configured to: obtain the virtual 3D target object model corresponding to the target object in the current image; determine a second mask map from the normal direction and the viewing direction of the virtual 3D target object model; and fuse the current image and the virtual object map based on the second mask map to obtain the target image.
Optionally, the apparatus further includes a virtual object display module configured to gradually render and display the virtual object in a set order within a set duration according to the initial position information and/or texture coordinates of the multiple vertices of the virtual object.
Optionally, the virtual object display module is configured to: for the current moment within the set duration, determine the control parameter from the current moment; determine the target reference quantity of each vertex of the virtual object from the vertex's initial position information and/or texture coordinates; determine the vertices of the virtual object to be displayed at the current moment from each vertex's target reference quantity and the control parameter; and render those vertices.
The virtual object rendering apparatus provided by the embodiments of the present disclosure can execute the virtual object rendering method provided by any embodiment of the present disclosure, and has the functional modules and effects corresponding to the method.
The multiple units and modules included in the above apparatus are divided only according to functional logic; other divisions are possible as long as the corresponding functions can be realized. In addition, the names of the multiple units and modules are only for ease of distinguishing them from one another and are not intended to limit the protection scope of the embodiments of the present disclosure.
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 7, it shows a schematic structural diagram of an electronic device 500 (for example, the terminal device or server in FIG. 7) suitable for implementing an embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in FIG. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 500 may include a processing apparatus 501 (for example, a central processing unit or a graphics processing unit), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing apparatus 501, the ROM 502, and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Typically, the following apparatuses may be connected to the I/O interface 505: input apparatuses 506 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage apparatuses 508 including, for example, a magnetic tape and a hard disk; and a communication apparatus 509. The communication apparatus 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows an electronic device 500 having various apparatuses, it should be understood that not all of the apparatuses shown are required to be implemented or possessed; more or fewer apparatuses may alternatively be implemented or possessed.
According to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 509, or installed from the storage apparatus 508, or installed from the ROM 502. When the computer program is executed by the processing apparatus 501, the above-described functions defined in the method of the embodiments of the present disclosure are performed.
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are used for illustrative purposes only and are not intended to limit the scope of these messages or information.
The electronic device provided by this embodiment of the present disclosure and the method for rendering a virtual object provided by the above embodiments belong to the same inventive concept. For technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same effects as the above embodiments.
An embodiment of the present disclosure provides a computer storage medium on which a computer program is stored, where the program, when executed by a processor, implements the method for rendering a virtual object provided by the above embodiments.
The computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to a wire, an optical cable, radio frequency (RF), or any suitable combination of the above.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain skeleton point position information of a target object in a current image and initial position information of a plurality of vertices of a virtual object; transform the initial position information of the plurality of vertices based on the skeleton point position information to obtain intermediate position information of the plurality of vertices; update intermediate position information of at least some of the plurality of vertices of the virtual object based on set material information to obtain target position information; and render the virtual object based on the target position information to obtain a target image, wherein the target object in the target image wears the virtual object.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partially on the user's computer, as a stand-alone software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer via any kind of network, including a LAN or a WAN, or may be connected to an external computer (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units and modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The names of the units and modules do not constitute a limitation on the units themselves; for example, the acquisition module may also be described as "a module for obtaining skeleton point position information of a target object in a current image and initial position information of a plurality of vertices of a virtual object".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application-specific standard product (ASSP), a system on chip (SOC), and a complex programmable logic device (CPLD).
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM or flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
Although a number of operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Some features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. The specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. A method for rendering a virtual object, comprising:
    obtaining skeleton point position information of a target object in a current image and initial position information of a plurality of vertices of a virtual object, wherein the plurality of vertices of the virtual object are a plurality of vertices of a three-dimensional (3D) model corresponding to the virtual object;
    transforming the initial position information of the plurality of vertices based on the skeleton point position information to obtain intermediate position information of the plurality of vertices;
    updating intermediate position information of at least some of the plurality of vertices of the virtual object based on set material information to obtain target position information; and
    rendering the virtual object based on the target position information to obtain a target image, wherein the target object in the target image wears the virtual object.
  2. The method according to claim 1, wherein transforming the initial position information of the plurality of vertices based on the skeleton point position information to obtain the intermediate position information of the plurality of vertices comprises:
    acquiring a plurality of skeleton points associated with each vertex of the virtual object and position influence weights of the plurality of skeleton points on each vertex of the virtual object;
    determining vertex transformation information of each vertex based on position information of the plurality of skeleton points and the position influence weights; and
    transforming the initial position information of each vertex based on the vertex transformation information of the vertex to obtain the intermediate position information of the vertex.
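The weighted blend of skeleton-point transforms described in claim 2 corresponds to standard linear blend skinning. The sketch below is a minimal, hypothetical illustration of that step; the 3x3-matrix-plus-translation representation of a skeleton point's transform and the function name are assumptions, not details taken from the patent:

```python
def blend_vertex(initial_pos, bone_transforms, weights):
    """Blend per-bone transformed positions by each bone's influence weight.

    initial_pos: (x, y, z) initial vertex position.
    bone_transforms: list of (matrix, offset), one per associated skeleton point,
        where matrix is a 3x3 rotation/scale and offset a translation (assumed form).
    weights: position influence weight of each skeleton point on this vertex.
    """
    x = y = z = 0.0
    for (matrix, offset), w in zip(bone_transforms, weights):
        # apply the bone's rotation/scale matrix, then its translation
        tx = sum(matrix[0][i] * initial_pos[i] for i in range(3)) + offset[0]
        ty = sum(matrix[1][i] * initial_pos[i] for i in range(3)) + offset[1]
        tz = sum(matrix[2][i] * initial_pos[i] for i in range(3)) + offset[2]
        # accumulate the weighted contribution of this skeleton point
        x += w * tx
        y += w * ty
        z += w * tz
    return (x, y, z)
```

With two skeleton points translated to (1, 0, 0) and (0, 1, 0) and equal weights, a vertex at the origin lands halfway between them, which is the intended blending behavior.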
  3. The method according to claim 1, wherein updating the intermediate position information of the at least some of the plurality of vertices of the virtual object based on the set material information to obtain the target position information comprises:
    acquiring the set material information and motion states of the at least some vertices of the virtual object; and
    performing a set-material solve on the at least some vertices of the virtual object according to the intermediate position information of the at least some vertices, the set material information, and the motion states of the at least some vertices, to obtain the target position information.
  4. The method according to claim 3, wherein performing the set-material solve on the at least some vertices of the virtual object according to the intermediate position information of the at least some vertices, the set material information, and the motion states of the at least some vertices comprises:
    in a case where a virtual object support is provided in a current scene, acquiring support information according to the virtual object support; and
    performing the set-material solve on the at least some vertices of the virtual object according to the intermediate position information of the at least some vertices, the material parameters, the motion states of the at least some vertices, and the support information.
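The "set-material solve" of claims 3-4 can be realized with a Verlet-style relaxation that pulls each simulated vertex toward its skinned intermediate position while preserving the vertex's motion state (inertia). The following per-vertex sketch is an assumed concretization; the stiffness/damping parameterization is illustrative and not specified by the patent:

```python
def solve_step(pos, prev_pos, target, stiffness=0.5, damping=0.9):
    """One relaxation step for a single vertex given as an (x, y, z) triple."""
    new_pos, new_prev = [], []
    for p, pp, t in zip(pos, prev_pos, target):
        velocity = (p - pp) * damping                 # damped inertia (motion state)
        p_next = p + velocity + (t - p) * stiffness   # spring toward the skinned target
        new_pos.append(p_next)
        new_prev.append(p)
    return tuple(new_pos), tuple(new_prev)

def solve(pos, target, steps=50, **kw):
    """Iterate until the vertex settles near its intermediate (skinned) position."""
    prev = pos
    for _ in range(steps):
        pos, prev = solve_step(pos, prev, target, **kw)
    return pos
```

A real cloth solver would additionally enforce distance constraints between neighboring vertices and collision against the support surface from claim 4; this sketch shows only the per-vertex spring-damper core.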
  5. The method according to claim 1, wherein rendering the virtual object based on the target position information to obtain the target image comprises:
    sampling preset color information from a color noise map based on texture coordinates of each vertex of the virtual object;
    performing offset processing on the preset color information according to time information of the current image to obtain offset-processed preset color information corresponding to each vertex;
    adjusting an initial color of each vertex of the virtual object according to the offset-processed preset color information corresponding to the vertex and the target position information to obtain a target color of the vertex; and
    rendering the virtual object based on the target position information and the target color of each vertex to obtain the target image.
  6. The method according to claim 5, wherein adjusting the initial color of each vertex of the virtual object according to the offset-processed preset color information corresponding to the vertex and the target position information to obtain the target color of the vertex comprises:
    determining a normal direction and a viewing direction of each vertex of the virtual object according to the target position information;
    determining color transformation information corresponding to each vertex according to the offset-processed preset color information corresponding to the vertex;
    determining a color adjustment amount of each vertex according to the color transformation information corresponding to the vertex, the normal direction of the vertex, and the viewing direction of the vertex; and
    accumulating the color adjustment amount of each vertex with the initial color of the vertex to obtain the target color of the vertex.
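One common way to derive claim 6's color adjustment amount from a normal direction and a viewing direction is a Fresnel/rim-style term that strengthens the sampled noise color near grazing angles. This is an assumed concretization for illustration; the patent does not fix the exact function:

```python
def adjust_vertex_color(initial_color, shift_color, normal, view_dir):
    """Accumulate a view-dependent amount of shift_color onto initial_color.

    All colors are RGB triples in [0, 1]; normal and view_dir are unit vectors.
    """
    n_dot_v = sum(n * v for n, v in zip(normal, view_dir))
    rim = max(0.0, 1.0 - abs(n_dot_v))  # 0 facing the camera, 1 at grazing angles
    # accumulate the adjustment onto the initial color, clamped to the valid range
    return tuple(min(1.0, c + s * rim) for c, s in zip(initial_color, shift_color))
```

A surface element facing the camera keeps its initial color, while silhouette regions pick up the time-offset noise color, which is one plausible way to obtain the "dynamic feeling and texture" the disclosure describes.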
  7. The method according to claim 1, wherein rendering the virtual object based on the target position information to obtain the target image comprises:
    sampling the current image according to a virtual 3D target object model and a virtual object model to obtain a target object sampling map, wherein the virtual object model is the 3D model corresponding to the virtual object;
    converting the target object sampling map into a first mask map;
    fusing the current image and a virtual object map based on the first mask map to obtain the target image, wherein the virtual object map is a two-dimensional (2D) map obtained by projecting the virtual object onto a screen according to the target position information; and
    rendering the target image to a current picture.
  8. The method according to claim 7, wherein fusing the current image and the virtual object map based on the first mask map to obtain the target image comprises:
    performing blur processing on the first mask map; and
    fusing the current image and the virtual object map based on the blurred first mask map to obtain the target image.
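Claims 7-8 composite the camera frame and the projected virtual-object image through a mask, and blurring the mask softens the occlusion boundary so the virtual object blends into the frame. A minimal per-pixel sketch on grayscale rows; the 3-tap box blur, the row representation, and the mask polarity (mask = 1 keeps the camera pixel, i.e. the target object occludes the virtual object) are assumptions:

```python
def blur_mask_row(mask):
    """3-tap box blur of a 1D mask row, clamping indices at the borders."""
    n = len(mask)
    return [
        (mask[max(i - 1, 0)] + mask[i] + mask[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]

def fuse_row(current, virtual, mask):
    """Per-pixel linear blend of the camera frame and the virtual object map."""
    return [c * m + v * (1.0 - m) for c, v, m in zip(current, virtual, mask)]
```

Applying `fuse_row` with the blurred mask gives a soft transition of one to two pixels at the occlusion edge instead of a hard cut.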
  9. The method according to claim 1, wherein rendering the virtual object based on the target position information to obtain the target image comprises:
    acquiring a virtual 3D target object model corresponding to the target object in the current image;
    determining a second mask map according to a normal direction and a viewing direction of the virtual 3D target object model; and
    fusing the current image and a virtual object map based on the second mask map to obtain the target image, wherein the virtual object map is a 2D map obtained by projecting the virtual object onto a screen according to the target position information.
  10. The method according to claim 1, before obtaining the skeleton point position information of the target object in the current image, further comprising:
    gradually rendering and displaying the virtual object in a set order within a set time period according to the initial position information and/or texture coordinates of the plurality of vertices of the virtual object.
  11. The method according to claim 10, wherein gradually rendering and displaying the virtual object in the set order within the set time period according to the initial position information and/or texture coordinates of the plurality of vertices of the virtual object comprises:
    for a current moment within the set time period, determining a control parameter according to the current moment;
    determining a target reference amount of each vertex according to the initial position information and/or texture coordinates of the vertex;
    determining vertices of the virtual object to be displayed at the current moment according to the target reference amount of each vertex and the control parameter; and
    rendering the vertices of the virtual object to be displayed at the current moment.
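Claim 11's gradual appearance can be sketched as a time-driven threshold: a control parameter grows from 0 to 1 over the set time period, and a vertex is displayed once its target reference amount falls below the control parameter. Using the normalized vertex height as the reference amount (a bottom-to-top reveal) is an assumption made purely for illustration:

```python
def vertices_to_display(vertices, t, duration):
    """Return the (x, y, z) vertices visible at time t, revealing bottom to top."""
    control = min(1.0, max(0.0, t / duration))  # control parameter: 0 -> 1 over duration
    ys = [v[1] for v in vertices]
    lo, hi = min(ys), max(ys)
    span = (hi - lo) or 1.0                      # avoid division by zero for flat meshes
    # a vertex is displayed once its normalized height is within the control parameter
    return [v for v in vertices if (v[1] - lo) / span <= control]
```

Driving the reference amount from texture coordinates instead of height would reveal the object along its UV layout, which is the alternative the claim's "and/or" allows.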
  12. An apparatus for rendering a virtual object, comprising:
    an acquisition module configured to obtain skeleton point position information of a target object in a current image and initial position information of a plurality of vertices of a virtual object, wherein the plurality of vertices of the virtual object are a plurality of vertices of a three-dimensional (3D) model corresponding to the virtual object;
    a position information transformation module configured to transform the initial position information of the plurality of vertices based on the skeleton point position information to obtain intermediate position information of the plurality of vertices;
    a target position information acquisition module configured to update intermediate position information of at least some of the plurality of vertices of the virtual object based on set material information to obtain target position information; and
    a rendering module configured to render the virtual object based on the target position information to obtain a target image, wherein the target object in the target image wears the virtual object.
  13. An electronic device, comprising:
    at least one processor; and
    a storage apparatus configured to store at least one program,
    wherein when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the method for rendering a virtual object according to any one of claims 1-11.
  14. A storage medium comprising computer-executable instructions, wherein the computer-executable instructions, when executed by a computer processor, are used to perform the method for rendering a virtual object according to any one of claims 1-11.
PCT/CN2023/120222 2022-09-29 2023-09-21 Virtual object rendering method and apparatus, and device and storage medium WO2024067320A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211204095.XA CN115861503A (en) 2022-09-29 2022-09-29 Rendering method, device and equipment of virtual object and storage medium
CN202211204095.X 2022-09-29

Publications (1)

Publication Number Publication Date
WO2024067320A1 true WO2024067320A1 (en) 2024-04-04

Family

ID=85661261

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/120222 WO2024067320A1 (en) 2022-09-29 2023-09-21 Virtual object rendering method and apparatus, and device and storage medium

Country Status (2)

Country Link
CN (1) CN115861503A (en)
WO (1) WO2024067320A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861503A (en) * 2022-09-29 2023-03-28 北京字跳网络技术有限公司 Rendering method, device and equipment of virtual object and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968892A (en) * 2009-07-28 2011-02-09 上海冰动信息技术有限公司 Method for automatically adjusting three-dimensional face model according to one face picture
CN104881557A (en) * 2015-06-19 2015-09-02 南京大学 Method for dynamically simulating human body and clothing in computer
US20160189431A1 (en) * 2014-12-25 2016-06-30 Kabushiki Kaisha Toshiba Virtual try-on system, virtual try-on terminal, virtual try-on method, and computer program product
CN110766777A (en) * 2019-10-31 2020-02-07 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN112933597A (en) * 2021-03-16 2021-06-11 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN114565705A (en) * 2022-02-28 2022-05-31 百果园技术(新加坡)有限公司 Virtual character simulation and live broadcast method, device, equipment and storage medium
CN115861503A (en) * 2022-09-29 2023-03-28 北京字跳网络技术有限公司 Rendering method, device and equipment of virtual object and storage medium

Also Published As

Publication number Publication date
CN115861503A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110766777B (en) Method and device for generating virtual image, electronic equipment and storage medium
US11538229B2 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN109743626B (en) Image display method, image processing method and related equipment
KR102637901B1 (en) A method of providing a dolly zoom effect by an electronic device and the electronic device utilized in the method
WO2024067320A1 (en) Virtual object rendering method and apparatus, and device and storage medium
CN110568923A (en) unity 3D-based virtual reality interaction method, device, equipment and storage medium
CN111142967B (en) Augmented reality display method and device, electronic equipment and storage medium
KR20120012858A (en) Apparatus and method for rendering object in 3d graphic terminal
WO2023240999A1 (en) Virtual reality scene determination method and apparatus, and system
WO2024104248A1 (en) Rendering method and apparatus for virtual panorama, and device and storage medium
WO2023193642A1 (en) Video processing method and apparatus, device and storage medium
CN116310036A (en) Scene rendering method, device, equipment, computer readable storage medium and product
CN114842120B (en) Image rendering processing method, device, equipment and medium
CN114900625A (en) Subtitle rendering method, device, equipment and medium for virtual reality space
WO2024041623A1 (en) Special effect map generation method and apparatus, device, and storage medium
CN115965672A (en) Three-dimensional object display method, device, equipment and medium
WO2023231926A1 (en) Image processing method and apparatus, device, and storage medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
JP2020014075A (en) Image projection system, image projection method, and program
WO2023197860A1 (en) Highlight rendering method and apparatus, medium, and electronic device
US11651529B2 (en) Image processing method, apparatus, electronic device and computer readable storage medium
CN116228956A (en) Shadow rendering method, device, equipment and medium
CN111524240A (en) Scene switching method and device and augmented reality equipment
US20240290026A1 (en) Method and apparatus for controlling motion of moving object, device, and storage medium
WO2024082901A1 (en) Data processing method and apparatus for cloud game, and electronic device, computer-readable storage medium and computer program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870552

Country of ref document: EP

Kind code of ref document: A1