WO2022183723A1 - Special effect control method and device - Google Patents

Special effect control method and device

Info

Publication number
WO2022183723A1
Authority
WO
WIPO (PCT)
Prior art keywords
texture
model
information
offset
touch point
Prior art date
Application number
PCT/CN2021/121093
Other languages
English (en)
French (fr)
Inventor
王东烁
Original Assignee
北京达佳互联信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2022183723A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a special effect control method and device.
  • the present disclosure provides a special effect control method and device.
  • the technical solutions of the present disclosure are as follows:
  • a special effect control method includes: in response to a user's touch operation on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, where the touch point is a point on the model corresponding to the touch operation; updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane; and performing offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
  • acquiring the sliding direction information corresponding to the touch operation includes: acquiring touch point texture coordinate information of a previous frame image of the current frame image; and determining the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
  • the determining the texture shape parameter in the texture parameters according to the texture coordinate information of the touch point includes: acquiring, according to the texture coordinate information of the touch point, black and white gradient information that radiates outward from the touch point as a center; and converting the black and white gradient information into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture hardness adjustment parameter.
  • the determining the texture color parameter in the texture parameters according to the sliding direction information includes: determining the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and determining the v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
  • the performing offset control on the vertices of the model according to the updated texture information of the texture map includes: determining, according to the updated texture information of the texture map, a first offset vector corresponding to the vertices of the model in world space; determining the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector; and performing offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
  • the determining, according to the updated texture information of the texture map, the first offset vector corresponding to the vertices of the model in world space includes: converting the texture shape-color parameters of the vertices of the model on the R channel in the updated texture information of the texture map into an x-axis offset vector in the tangent space of the model; converting the texture shape-color parameters of the vertices of the model on the G channel in the updated texture information of the texture map into a y-axis offset vector in the tangent space of the model; converting the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and normalizing the second offset vector to obtain the first offset vector.
  • the determining the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector includes: determining the offset positions of the vertices of the model in world space using the following formula:
  • vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
  • where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
  • the method further includes: superimposing the texture information in the transparent channel of the texture map corresponding to the current frame image with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain target texture information in the transparent channel of the texture map corresponding to the current frame image.
  • before updating the texture information of the texture map, the method further includes: determining the screen space coordinates of the vertices of the model according to the texture coordinates of the vertices of the model; and generating the texture map according to the screen space coordinates of the vertices of the model.
  • a special effect control apparatus includes: an obtaining unit configured to, in response to a user's touch operation on a model in a current frame image on a screen, obtain touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, where the touch point is a point on the model corresponding to the touch operation; an updating unit configured to update texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane; and a control unit configured to perform offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
  • the acquisition unit includes: a first acquisition subunit configured to acquire touch point texture coordinate information of a previous frame image of the current frame image; and a first determination subunit configured to determine the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
  • the update unit includes: a second determination subunit configured to determine the texture shape parameter in the texture parameters according to the touch point texture coordinate information; a third determination subunit configured to determine the texture color parameter in the texture parameters according to the sliding direction information; and an updating subunit configured to update the texture information of the texture map according to the texture shape parameter and the texture color parameter.
  • the second determination subunit is further configured to: acquire, according to the texture coordinate information of the touch point, black and white gradient information that radiates outward from the touch point as a center; and convert the black and white gradient information into the texture shape parameter according to the preset texture thickness adjustment parameter and texture hardness adjustment parameter.
  • the third determination subunit is further configured to: determine the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and determine the v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
  • the control unit includes: a fourth determination subunit configured to determine, according to the updated texture information of the texture map, the first offset vector corresponding to the vertices of the model in world space; a fifth determination subunit configured to determine the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector; and a control subunit configured to perform offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
  • the fourth determination subunit is further configured to: convert the texture shape-color parameters of the vertices of the model on the R channel in the updated texture information of the texture map into the x-axis offset vector in the tangent space of the model; convert the texture shape-color parameters of the vertices of the model on the G channel in the updated texture information of the texture map into the y-axis offset vector in the tangent space of the model; convert the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and normalize the second offset vector to obtain the first offset vector.
  • the fifth determination subunit is further configured to determine the offset positions of the vertices of the model in world space using the following formula:
  • vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
  • where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
  • the apparatus further includes: a determining unit configured to superimpose the texture information in the transparent channel of the texture map corresponding to the current frame image with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain target texture information in the transparent channel of the texture map corresponding to the current frame image.
  • the updating unit further includes: a sixth determining subunit configured to determine the screen space coordinates of the vertices of the model according to the texture coordinates of the vertices of the model; and a generating subunit configured to generate the texture map according to the screen space coordinates of the vertices of the model.
  • an electronic device comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the special effect control method provided by the first aspect of the present disclosure.
  • a computer-readable storage medium which, when instructions in the computer-readable storage medium are executed by a processor of an electronic device, enables the electronic device to perform the special effect control method provided by the first aspect of the present disclosure.
  • a computer program product including a computer program, wherein the computer program, when executed by a processor, implements the special effect control method provided by the first aspect of the present disclosure.
  • in response to the user's touch operation on the model in the current frame image on the screen, the touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation can be obtained, where the touch point is the point on the model corresponding to the touch operation;
  • the texture information corresponding to the model is updated according to the texture coordinate information of the touch point and the sliding direction information;
  • the vertices of the model are offset according to the updated texture information to obtain the updated model. Therefore, the present disclosure can create a relatively realistic special effect while ensuring good running performance. Further, a new interaction method is provided, so that special effect control can be realized through touch screen interaction.
  • Fig. 1 is a flowchart of a special effect control method according to an exemplary embodiment.
  • Fig. 2 is a flowchart of another special effect control method according to an exemplary embodiment.
  • Fig. 3 is a flowchart of another special effect control method according to an exemplary embodiment.
  • Fig. 4 is a flowchart of another special effect control method according to an exemplary embodiment.
  • Fig. 5 is a flowchart of another special effect control method according to an exemplary embodiment.
  • Fig. 6 is a flowchart of another special effect control method according to an exemplary embodiment.
  • Fig. 7 is a schematic diagram of a special effect control method applied to a touch special effect application scenario of hair shape according to an exemplary embodiment.
  • Fig. 8 is a block diagram of a special effect control apparatus according to an exemplary embodiment.
  • Fig. 9 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
  • Fig. 10 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
  • Fig. 11 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
  • Fig. 12 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
  • Fig. 13 is a block diagram of an electronic device according to an exemplary embodiment.
  • Fig. 1 is a flow chart of a special effect control method according to an exemplary embodiment.
  • the execution subject of the special effect control method of the present disclosure is a special effect control device.
  • the special effect control method of the embodiment of the present disclosure may be executed by the special effect control apparatus of the embodiment of the present disclosure, and the special effect control apparatus may specifically be a hardware device, or software in the hardware device, or the like.
  • the hardware devices are, for example, terminal devices, servers, and the like.
  • the special effect control method proposed in the present application will be explained below by taking the model as a hair model as an example.
  • the special effect control method proposed by the embodiment of the present disclosure includes the following steps:
  • in step 101, in response to the user's touch operation on the model in the current frame image on the screen, the touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation are acquired, where the touch point is the point on the model corresponding to the touch operation.
  • the touch operation may be an operation of sliding from the touch start point to the touch end point.
  • the user can perform touch operations on the hair model on the screen in various ways. For example, the touch operation can be triggered by clicking or sliding on the screen with a finger, a smart stylus pen, or the like.
  • texture coordinate information, also known as UV coordinate information, is used to map and allocate the areas where texture pixels are drawn on the model, and includes U abscissa information in the horizontal direction and V ordinate information in the vertical direction.
  • the sliding direction information may represent the direction information of sliding from the touch start point to the touch end point for a model such as a hair model on the screen.
  • multiple touch points can be continuously triggered by applying pressure to the screen, thereby performing the touch operation on the hair model on the screen.
  • the corresponding touch point texture coordinate information and sliding direction information can be acquired according to the trajectory generated by the touch operation performed by the user.
  • in step 102, the texture information of the texture map corresponding to the model is updated according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane.
  • the texture map can also be called a flow map.
  • the texture usually stores vector information (Vector), which is used to perturb the texture coordinates of the touch point of other textures, and can achieve special dynamic effects such as "water flow” and "quicksand” on the surface of the model.
  • for example, using the vector information stored in the texture map to perturb the UV coordinates of the acquired touch point texture, flowing dynamic effects such as "stroking" and "combing" can be realized for the hair model.
  • after the touch point texture coordinate information and sliding direction information corresponding to the touch operation are acquired, the texture information of the texture map can be updated synchronously according to the touch point texture coordinate information and the sliding direction information, so that the user's touch operation on the hair model on the screen is treated as a paint operation on the texture of the texture map.
  • in step 103, offset control is performed on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
  • when the hair model is initially constructed, the UV coordinate information corresponding to each vertex of the model may be preset. In this way, after updating the texture information of the texture map, the hair shape can be changed by modifying the spatial offsets of the vertices of the hair model relative to their original positions.
  • the present disclosure obtains the touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation by responding to the user's touch operation on the model in the current frame image on the screen, and the touch point is the point on the model corresponding to the touch operation;
  • the texture information corresponding to the model is updated with the texture coordinate information of the touch point and the sliding direction information;
  • the vertices of the model are offset and controlled according to the updated texture information to obtain the updated model. Therefore, the present disclosure can realize the control of the bending orientation of hair in different regions, create a more realistic hair effect under the condition of ensuring good running performance, and improve the authenticity of the displayed dynamic effect of the hair shape. Further, a new interaction method is provided, so that the manipulation of the hair can be realized by interacting with the touch screen.
  • the specific manner of acquiring the texture coordinate information of the touch point corresponding to the touch operation is not limited, and may be selected according to the actual situation.
  • a ray detection technology may be used to obtain the texture coordinate information of the touch point corresponding to the touch operation.
  • ray detection refers to a technique in which a line with no end point is emitted from a point in a given direction in three-dimensional (3D) space, and collision and contact relationships with other objects are determined along the trajectory of the ray, thereby realizing non-destructive detection.
  • the specific manner of acquiring the texture coordinate information of the touch point by using ray detection is known in the prior art and will not be repeated here.
  • the specific manner of acquiring the sliding direction information corresponding to the touch operation is not limited, and may be selected according to the actual situation.
  • the sliding direction information may be determined according to the acquired texture coordinate information of the touch point of the current frame and the texture coordinate information of the touch point of the previous frame.
  • the process of acquiring the sliding direction information corresponding to the touch operation in the foregoing step S101 may include the following steps:
  • in step 201, the touch point texture coordinate information of the previous frame image of the current frame image is acquired.
  • in step 202, the sliding direction information is determined according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
  • the numerical difference between the UV coordinates of the previous and current frames reflects the user's touch-slide direction on the surface of the hair model at each instant.
  • thus, in the present disclosure, the touch point texture coordinate information of the previous frame and the numerical difference between it and the touch point texture coordinate information of the current frame can be acquired, and the sliding direction information can then be determined from that difference.
  • for example, if the UV value of the acquired touch point texture coordinate information is touch_uv and the UV value of the touch point texture coordinate information of the previous frame is last_uv, the sliding direction information delta_uv can be determined by the formula delta_uv = last_uv - touch_uv.
  • the present disclosure obtains the texture coordinate information of the touch point corresponding to the touch operation by using ray detection and, at the same time, obtains the texture coordinate information of the touch point of the previous frame and determines the sliding direction information accordingly, thereby establishing the correlation and interaction between the touch operation performed by the user and the dynamic effect of the hair shape and ensuring that the change of the hair shape can truly and accurately reflect the user's touch operation on the hair model.
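  • As an illustrative sketch only (not part of the original disclosure), the per-frame sliding direction computation can be expressed in GLSL-style shader code as follows, where touch_uv and last_uv are assumed uniform names for the current and previous touch point UV values:

    uniform vec2 touch_uv; // touch point UV of the current frame
    uniform vec2 last_uv;  // touch point UV of the previous frame

    // delta_uv = last_uv - touch_uv, as described above; the sign of each
    // component encodes the sliding direction along the u and v axes.
    vec2 sliding_direction()
    {
        return last_uv - touch_uv;
    }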
  • the texture information of the texture map can be updated by determining the texture shape parameter and the texture color parameter.
  • the process of updating the texture information of the texture map according to the touch point texture coordinate information and the sliding direction information in the above step S102 may include the following steps:
  • in step 301, a texture shape parameter in the texture parameters is determined according to the texture coordinate information of the touch point.
  • the texture shape parameter refers to the texture position and texture thickness.
  • the process of determining the texture shape parameter in the texture parameter according to the texture coordinate information of the touch point in the foregoing step S301 may include the following steps:
  • in step 401, according to the texture coordinate information of the touch point, the black and white gradient information radiating outward from the touch point as a center is acquired.
  • the method for obtaining the black and white gradient information is not limited in the present disclosure, and may be selected according to the actual situation.
  • the black and white gradient information radiating from the touch point as the center to the surrounding may be obtained through a distance function (Distance function).
  • the distance function can obtain the number of elements contained in the specified range.
  • in step 402, the black and white gradient information is converted into the texture shape parameter according to the preset texture thickness adjustment parameter and texture hardness adjustment parameter.
  • the method of converting the black and white gradient information into a texture shape is not limited in the present disclosure, and can be selected according to the actual situation.
  • in some embodiments, the black and white gradient information can be converted into the texture shape through a smooth step function (Smooth Step) in combination with a preset texture thickness adjustment parameter (Brush Size) and a texture hardness adjustment parameter (Brush Hardness).
  • the smooth step function can be used to generate a smooth transition value from 0 to 1.
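  • For illustration, a minimal GLSL-style sketch of this step is given below; brush_size and brush_hardness are assumed names for the texture thickness and hardness adjustment parameters, and the exact falloff used in the disclosure may differ:

    uniform vec2 touch_uv;        // touch point UV (brush center)
    uniform float brush_size;     // texture thickness adjustment parameter
    uniform float brush_hardness; // texture hardness adjustment parameter (0..1)

    // Convert the radial black-and-white gradient into a texture shape value:
    // 1.0 at the touch point, smoothly falling off to 0.0 at the brush edge.
    float texture_shape(vec2 uv)
    {
        float d = distance(uv, touch_uv); // black-and-white radial gradient
        return 1.0 - smoothstep(brush_size * brush_hardness, brush_size, d);
    }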
  • in step 302, a texture color parameter in the texture parameters is determined according to the sliding direction information.
  • the method for determining the texture color is not limited in the present disclosure, and can be selected according to the actual situation.
  • in some embodiments, the u-axis sliding direction information in the sliding direction information may be determined as the texture color parameter on the R (red) channel, and the v-axis sliding direction information in the sliding direction information may be determined as the texture color parameter on the G (green) channel.
  • in step 303, the texture information of the texture map is updated according to the texture shape parameter and the texture color parameter.
  • the product of the texture shape parameter and the texture color parameter may be calculated to obtain the texture shape-color parameter. Further, texture drawing can be performed at the position of the texture map corresponding to the texture coordinate information of the touch point according to the texture shape-color parameter, to obtain the updated texture map.
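  • Purely as a hedged sketch (the disclosure does not give the exact paint shader), the paint step could be written as follows in GLSL-style code, assuming the signed sliding direction is remapped from [-1, 1] to [0, 1] for storage, consistent with the inverse remapping described below:

    uniform vec2 delta_uv; // sliding direction information (signed)

    // Fragment-shader sketch: draw the stroke into the texture map. R/G store
    // the u/v sliding direction as color, and the alpha channel stores the
    // stroke mask used later for the time-delay effect.
    vec4 paint(vec2 uv)
    {
        float shape = texture_shape(uv);   // texture shape parameter (above)
        vec2 color = delta_uv * 0.5 + 0.5; // remap [-1, 1] -> [0, 1]
        // texture shape-color parameter: product of shape and color parameters
        return vec4(color * shape, 0.0, shape);
    }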
  • in addition, the texture information in the transparent channel of the texture map corresponding to the current frame image can be superimposed with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain the target texture information in the transparent channel of the texture map corresponding to the current frame image.
  • in the present disclosure, the texture shape parameter is determined according to the texture coordinate information of the touch point, the texture color parameter is determined according to the sliding direction information, and the texture information of the texture map is then updated according to the texture shape parameter and the texture color parameter, so as to realize the effect of bending the hair in real time through real-time rendering. Further, the black and white gradient information radiating outward from the touch point is obtained according to the texture coordinate information of the touch point and converted into the texture shape parameter according to the preset texture thickness adjustment parameter and texture hardness adjustment parameter.
  • in this way, the user's touch screen information can be converted into a pixel shape centered on the touch texture coordinate point and extending outward with a certain stroke radius, and stored in the render target, simulating the paint behavior of brush painting. Further, by determining the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel and determining the v-axis sliding direction information as the texture color parameter on the G channel, the sliding direction of the touch screen can be stored as the pixel color information of the render target by means of the special information storage method of the render target, establishing the correlation and interaction between the touch operation performed by the user and the dynamic effect of the hair shape.
  • it should be noted that, when performing offset control on the vertices of the hair model according to the updated texture information of the texture map, the offset positions of the vertices of the hair model in world space can be determined first, and offset control can then be performed on the vertices of the hair model according to their offset positions in world space.
  • as a possible implementation, the process of performing offset control on the vertices of the hair model according to the texture information of the updated texture map in the above step S103 may include the following steps:
  • in step 501, a first offset vector corresponding to the vertices of the model in world space is determined according to the texture information of the updated texture map.
  • in some embodiments, the texture shape-color parameters of the vertices of the model on the R channel in the texture information of the updated texture map can be converted into the x-axis offset vector in the tangent space of the model, and the texture shape-color parameters of the vertices of the model on the G channel in the texture information of the updated texture map can be converted into the y-axis offset vector in the tangent space of the model. Further, the x-axis offset vector and the y-axis offset vector may be converted into a second offset vector in world space, and the second offset vector is then normalized to obtain the first offset vector.
  • the texture information of the updated texture map can be sampled according to the texture coordinates of the touch point, and the collected data can be mapped from the range of 0 to 1 to the range of -1 to 1, so that the vector calculation can be performed.
  • in some embodiments, the vector information of the R and G channels in the texture map can be used as the offset directions of the x and y axes in the tangent space of the hair model, the vector information can be converted from tangent space to world space through a Tangent-To-World operation, and the vector can then be normalized.
  • the specific manner of the normalization operation is not limited in the present disclosure, and can be selected according to the actual situation.
  • the vector can be processed by batch normalization (BN for short), layer normalization (LN for short), instance normalization (IN for short), switchable normalization (SN for short), etc. Perform a normalization operation.
  • in some embodiments, the normalized vector can then be multiplied by its original modulus, so that it regains its length information.
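  • A GLSL-style sketch of this conversion is shown below for illustration; flow_map, a_normal, a_tangent and a_texcoord are assumed names, and the tangent-frame convention is one common choice rather than necessarily the one used in the disclosure:

    uniform sampler2D flow_map; // updated texture map (flow map)

    attribute vec3 a_normal;    // model-space normal
    attribute vec4 a_tangent;   // model-space tangent (w stores handedness)
    attribute vec2 a_texcoord;  // vertex UV

    vec3 first_offset_vector()
    {
        // Sample the R/G shape-color parameters and remap [0, 1] -> [-1, 1].
        vec2 rg = texture2D(flow_map, a_texcoord).rg * 2.0 - 1.0;

        // Treat R/G as offsets along the x/y axes of tangent space and convert
        // to world space (Tangent-To-World); the model-to-world transform is
        // omitted here for brevity, and real code should guard against a
        // zero-length vector before normalizing.
        vec3 bitangent = cross(a_normal, a_tangent.xyz) * a_tangent.w;
        vec3 offset_ws = rg.x * a_tangent.xyz + rg.y * bitangent;

        return normalize(offset_ws); // normalized second offset vector
    }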
  • in step 502, the offset positions of the vertices of the model in world space are determined according to the initial positions of the vertices of the model in world space and the first offset vector.
  • the following formula can be used to determine the position of the vertices of the model after offset in world space:
  • vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
  • where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is the normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
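  • As a worked numeric illustration (values chosen arbitrarily): if worldpos = (0, 0, 0), N = (0, 1, 0), offset_vector = (0.6, 0, 0.8), level = 0.5 and fur_length = 0.2, then N + offset_vector = (0.6, 1, 0.8), whose length is sqrt(2) ≈ 1.414, so normalize(N + offset_vector) ≈ (0.424, 0.707, 0.566) and offset_worldpos ≈ (0.042, 0.071, 0.057); the vertex is pushed outward along a blend of the surface normal and the stroke direction, scaled by the layer's expansion ratio.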
  • in the present disclosure, after the first offset vector is determined, it can be combined with the initial positions of the vertices of the hair model in world space to determine the offset positions of the vertices of the hair model in world space, where the expansion ratio level takes a value in the interval [0, 1].
  • the expansion ratio of each layer of hair model can be set according to the actual situation.
  • in some embodiments, the expansion ratio of the bottommost hair model may be set to 0, the expansion ratio of the outermost hair model may be set to 1, and the expansion ratios of the middle layers increase uniformly. For example, if there are 5 layers of hair models, the expansion ratio of the bottommost hair model can be set to 0, the expansion ratios of the second to fourth layers to 0.25, 0.5 and 0.75 respectively, and the expansion ratio of the outermost hair model to 1.
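  • Under these assumptions, a GLSL-style vertex-shader sketch of the per-layer offset is as follows (level and fur_length follow the formula above; the first offset vector comes from the tangent-to-world conversion sketched earlier):

    uniform float level;      // expansion ratio of the current shell layer
    uniform float fur_length; // preset adjustment parameter

    vec3 offset_worldpos(vec3 worldpos, vec3 N, vec3 offset_vector)
    {
        // vec3 offset_worldpos = worldpos
        //     + normalize(N + offset_vector) * level * fur_length;
        return worldpos + normalize(N + offset_vector) * level * fur_length;
    }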
  • in step 503, offset control is performed on the vertices of the model according to the offset positions of the vertices of the model in world space.
  • the change of the hair shape is essentially a process of controlling the spatial offsets of the vertices of each layer of the hair model relative to their original positions. Therefore, in the present disclosure, after the offset positions of the vertices of the hair model in world space are determined, the vertices of the hair model can be offset according to those positions, so as to realize the change of the hair shape.
  • in the present disclosure, the first offset vector of the vertices of the hair model in world space is determined according to the texture information of the updated texture map, the offset positions of the vertices of the hair model in world space are determined according to the initial positions of the vertices in world space and the first offset vector, and the vertices of the hair model are then offset according to those positions. In this way, through this special usage of the texture map, the pixel information in the texture map can be regarded as direction and offset information that participates in the calculation of modifying the spatial positions of the model vertices, realizing the effect that the bending direction and strength of the hair are affected and controlled by the vector information in the texture map.
  • further, by converting the texture shape-color parameters of the vertices of the model on the R channel in the texture information of the updated texture map into the x-axis offset vector in the tangent space of the model, converting the texture shape-color parameters of the vertices on the G channel into the y-axis offset vector in the tangent space of the model, converting the x-axis offset vector and the y-axis offset vector into the second offset vector in world space, and normalizing the second offset vector to obtain the first offset vector, the R/G channel brightness information of the texture can be made to correspond to the offsets of the model vertices along the x/y axes of tangent space, so as to implement the modification of the vertex spatial positions. Further, by using the above formula to determine the offset positions of the vertices of the model in world space, the final positions of the model vertices in world space after being offset by the vector can be determined.
  • the texture map may be generated according to the screen space positions of the vertices of the hair model.
  • in step 601, the screen space coordinates of the vertices of the model are determined according to the texture coordinates of the vertices of the model.
  • in some embodiments, the screen space coordinates of the vertices of the model can be determined from the texture coordinates of the vertices using a formula in which ScreenPos.x, the x-axis screen space coordinate of a vertex of the model, is computed from texCoord.x, the u-axis texture coordinate of the vertex, and ScreenPos.y, the y-axis screen space coordinate of the vertex, is computed from texCoord.y, the v-axis texture coordinate of the vertex; vec2() is the vector construction function.
  • in step 602, a texture map is generated according to the screen space coordinates of the vertices of the model.
  • the texture information can be obtained by rendering on a blank canvas according to the screen space coordinates of the vertices of the model, and the texture information can be cached to obtain a texture map.
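  • The exact formula is not reproduced in this text; purely as an assumed sketch, a common way to render a model "unwrapped" at its texture coordinates is to place each vertex at its UV location in clip space (a y-flip may be needed depending on the graphics API):

    attribute vec2 a_texcoord; // vertex UV (assumed attribute name)

    void main()
    {
        // Map UV in [0, 1] to clip-space XY in [-1, 1] so that screen
        // coordinates correspond to texture coordinates.
        vec2 ScreenPos = vec2(a_texcoord.x, a_texcoord.y) * 2.0 - 1.0;
        gl_Position = vec4(ScreenPos, 0.0, 1.0);
    }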
  • a render target (Render Target, RT for short) may be used to cache a canvas texture.
  • in some embodiments, a scene is pre-rendered before the main scene is rendered; the hair model is rendered separately in this scene, and the vertices of the hair model are rendered in such a way that screen coordinates correspond to texture coordinates (UV coordinates), so as to prepare the texture map.
  • the render target refers to the memory buffer used to render pixels.
  • a common use of render targets is off-screen rendering.
  • multiple render targets need to be superimposed to achieve a bloom effect.
  • in the present disclosure, the screen space positions of the vertices of the hair model are determined according to the texture coordinates of the vertices of the hair model, and the texture map is generated according to those screen space positions, so that the touch point texture coordinate information and sliding direction information can be stored in the texture map in real time, and offset control can then be performed on the vertices of the hair model according to the updated texture information of the texture map.
  • the time-delayed dynamic effect can be realized by controlling the speed of returning to the original shape after the hair shape is changed.
  • the transparent channel of the texture information of the texture map can be adjusted according to the delay parameter, so that the texture information of the texture map of the previous frame is superimposed on the basis of the current frame.
  • the transparency channel also known as the Alpha channel, refers to a special layer that can record transparency information, and can identify the transparency and translucency of an image.
  • the delay parameter is in the interval (0, 1).
  • the touched hair displayed on the screen will gradually deform along the trajectory of the user's finger movement.
  • the hair deformation trajectory and the movement trajectory of the user's finger have a certain delay. That is, the hair touched by the user's finger does not immediately return to the original shape, but gradually returns to the original shape.
  • the delay parameter can be set according to the actual situation. Among them, the smaller the value of the delay parameter, the slower the speed of returning to the original shape after the hair shape is changed.
  • for example, if the preset delay parameter is 0.5, the time required for the hair to return to its original shape after its shape changes is t1, and if the preset delay parameter is 0.8, the corresponding time is t2; since a smaller delay parameter means a slower return to the original shape, t1 is greater than t2.
  • in this way, the present disclosure adjusts the accumulation in the transparent channel of the texture information of the texture map, so as to superimpose the texture information of the texture maps of previous frames on the basis of the current frame. The speed at which the hair returns to its original shape after being deformed can thus be controlled, achieving a time-delayed dynamic effect and ensuring that the bending deformation of the hair does not disappear immediately after the touch moves away from the touched part but instead recovers slowly, which further improves the visualization effect in the special effect control process.
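  • The disclosure does not give the exact blending formula; one possible GLSL-style sketch of the accumulation, offered as an assumption, fades the alpha channel of previously rendered strokes each frame and keeps whichever stroke is stronger, so that a smaller delay_param fades more slowly and the hair recovers more slowly:

    uniform sampler2D prev_flow_map; // texture map accumulated so far
    uniform float delay_param;       // delay parameter in (0, 1)

    vec4 accumulate(vec2 uv, vec4 current_stroke)
    {
        vec4 prev = texture2D(prev_flow_map, uv);
        float faded = max(prev.a - delay_param, 0.0); // fade old strokes
        // keep the stronger stroke at this pixel
        return (current_stroke.a > faded) ? current_stroke
                                          : vec4(prev.rgb, faded);
    }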
  • in a touch special effect application scenario of hair shape, in response to the user's touch operation on the hair model on the screen, the touch point texture coordinate information and sliding direction information corresponding to the touch operation can be obtained, the texture information of the texture map can be updated according to the touch point texture coordinate information and the sliding direction information, and offset control can further be performed on the vertices of the hair model according to the texture information of the updated texture map, so that the hair model changes its shape along the sliding trajectory of the user's touch on the mobile phone screen.
  • for example, as shown in Fig. 7(a), a tuft of spherical hair model is displayed on the screen, and the hair is initially in a stationary state.
  • when the user performs a touch operation with an S-shaped sliding trajectory on the hair model, the effect shown in Fig. 7(b) can be presented.
  • the change of the hair shape reflects the S-shaped trajectory generated by the user's touch operation, and the bending deformation of the hair does not disappear immediately but recovers slowly, realizing the time-delayed dynamic effect; the touch effect is realistic and greatly improves the user experience.
  • the present disclosure obtains the touch point texture coordinate information and sliding direction information corresponding to the touch operation in response to the user's touch operation on the hair model on the screen, updates the texture information of the texture map according to the touch point texture coordinate information and the sliding direction information, and then performs offset control on the vertices of the hair model according to the updated texture information of the texture map to realize special effect control. Therefore, the present disclosure can perform offset control on the vertices of the hair model according to the updated texture information of the texture map, so as to realize control of the bending orientation of the hair in different regions and, while ensuring good running performance, create a relatively realistic hair effect, which improves the visualization effect and playability in the special effect control process and the quality of the user experience.
  • in addition, the rendering information of each frame is cached and stored in the render target. With the help of the special information storage method of the render target, and based on the fact that the render target does not need to be cleared every frame, the transparency information of the render target can be reasonably controlled in the rendering of each frame, so that previously rendered texture is gradually stacked on and covered by later rendered texture, achieving the effect that the hair gradually returns to its original state after being touched.
  • the special effect control device 1000 includes an acquisition unit 121 , an update unit 122 and a control unit 123 .
  • the obtaining unit 121 is configured to, in response to a user's touch operation on the model in the current frame image on the screen, obtain touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation, where the touch point is the a point on the model corresponding to the touch operation;
  • the updating unit 122 is configured to update the texture information of the texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane;
  • the control unit 123 is configured to perform offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
  • the acquiring unit 121 in FIG. 8 includes:
  • the first acquisition subunit 1211 is configured to acquire the texture coordinate information of the touch point of the previous frame image of the current frame image
  • the first determining subunit 1212 is configured to determine the sliding direction information according to the touch point texture coordinate information and the touch point texture coordinate information of the previous frame of image.
  • the updating unit 122 in FIG. 8 includes:
  • the second determination subunit 1221 is configured to determine the texture shape parameter in the texture parameters according to the texture coordinate information of the touch point;
  • the third determination subunit 1222 is configured to determine the texture color parameter in the texture parameter according to the sliding direction information
  • the updating subunit 1223 is configured to update the texture information of the texture map according to the texture shape parameter and the texture color parameter.
  • the second determination subunit 1221 includes:
  • the acquisition module 12211 is configured to acquire, according to the texture coordinate information of the touch point, black and white gradient information radiating outward from the touch point as a center;
  • the conversion module 12212 is configured to convert the black and white gradient information into the texture shape parameter according to the preset texture thickness adjustment parameter and texture hardness adjustment parameter.
  • the third determination subunit 1222 is further configured to: determine the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and determine the v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
  • control unit 123 in FIG. 8 includes:
  • the fourth determination subunit 1231 is configured to determine the first offset vector corresponding to the vertex of the model in the world space according to the updated texture information of the texture map;
  • a fifth determining subunit 1232 configured to determine the offset position of the vertex of the model in the world space according to the initial position of the vertex of the model in the world space and the first offset vector;
  • the control subunit 1233 is configured to perform offset control on the vertices of the model according to the offset positions of the vertices of the model in the world space.
  • the fourth determination subunit 1231 is further configured to: convert the texture shape-color parameters of the vertices of the model on the R channel in the updated texture information of the texture map into the x-axis offset vector in the tangent space of the model; convert the texture shape-color parameters of the vertices of the model on the G channel into the y-axis offset vector in the tangent space of the model; convert the x-axis offset vector and the y-axis offset vector into the second offset vector in world space; and normalize the second offset vector to obtain the first offset vector.
  • the fifth determination subunit 1232 is further configured to determine the offset positions of the vertices of the model in world space using the following formula:
  • vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
  • where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
  • the special effect control device 1000 in FIG. 8 further includes:
  • the determining unit 124 is configured to superimpose the texture information in the transparent channel of the texture map corresponding to the current frame image with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain the target texture information in the transparent channel of the texture map corresponding to the current frame image.
  • the updating unit 122 further includes:
  • a sixth determination subunit 1225 configured to determine the screen space coordinates of the vertices of the model according to the texture coordinates of the vertices of the model;
  • the generating subunit 1226 is configured to generate the texture map according to the screen space coordinates of the vertices of the model.
  • the present disclosure obtains the touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation by responding to the user's touch operation on the model in the current frame image on the screen, and the touch point is the point on the model corresponding to the touch operation;
  • the texture information corresponding to the model is updated with the texture coordinate information of the touch point and the sliding direction information;
  • the vertices of the model are offset and controlled according to the updated texture information to obtain the updated model. Therefore, the present disclosure can create a relatively realistic special effect while ensuring good running performance. Further, a new interaction method is provided, so that special effect control can be realized through touch screen interaction.
  • the present disclosure further provides an electronic device.
  • the electronic device 8000 includes: a processor 801; and one or more memories 802 for storing instructions executable by the processor 801; wherein the processor 801 is configured to execute the special effect control method described in the foregoing embodiments.
  • the processor 801 and the memory 802 are connected by a communication bus.
  • the present disclosure also provides a computer-readable storage medium; when the instructions in the computer-readable storage medium are executed by the processor 801 of the electronic device 8000, the electronic device 8000 is enabled to perform the special effect control method described in the above embodiments.
  • the storage medium may be a non-transitory computer-readable storage medium; for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • the present disclosure further provides a computer program product, including a computer program, wherein the computer program implements the special effect control method described in the above embodiments when the computer program is executed by a processor.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A special effect control method and device, relating to the technical field of image processing. The method includes: in response to a user's touch operation on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation (101); updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane (102); and performing offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model (103). The method can create a relatively realistic special effect while ensuring good running performance. Further, a new interaction method is provided, so that special effect control can be realized through touch screen interaction.

Description

Special effect control method and device
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese Patent Application No. 202110241035.4, filed on March 4, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of image processing, and in particular, to a special effect control method and device.
Background
With the rapid development of mobile terminal technology and image processing technology, various graphics techniques with the function of simulating hair shapes have emerged, and users' requirements for special effect control functions, such as changing the hair shape through touch and experiencing hair special effects, are also increasing.
In the prior art, in order to create a relatively realistic hair effect while ensuring good running performance, a rendering method based on multi-layer extrusion is usually used to simulate the hair effect.
Summary
The present disclosure provides a special effect control method and device. The technical solutions of the present disclosure are as follows:
According to some embodiments of the present disclosure, a special effect control method is provided, including: in response to a user's touch operation on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation; updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane; and performing offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
According to an embodiment of the present disclosure, acquiring the sliding direction information corresponding to the touch operation includes: acquiring touch point texture coordinate information of a previous frame image of the current frame image; and determining the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
According to an embodiment of the present disclosure, determining the texture shape parameter in the texture parameters according to the touch point texture coordinate information includes: acquiring, according to the touch point texture coordinate information, black-and-white gradient information radiating outward from the touch point as a center; and converting the black-and-white gradient information into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture hardness adjustment parameter.
According to an embodiment of the present disclosure, determining the texture color parameter in the texture parameters according to the sliding direction information includes: determining the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and determining the v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
According to an embodiment of the present disclosure, performing offset control on the vertices of the model according to the updated texture information of the texture map includes: determining, according to the updated texture information of the texture map, a first offset vector corresponding to the vertices of the model in world space; determining the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector; and performing offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
According to an embodiment of the present disclosure, determining, according to the updated texture information of the texture map, the first offset vector corresponding to the vertices of the model in world space includes: converting the texture shape-color parameters of the vertices of the model on the R channel in the updated texture information of the texture map into an x-axis offset vector in the tangent space of the model; converting the texture shape-color parameters of the vertices of the model on the G channel in the updated texture information of the texture map into a y-axis offset vector in the tangent space of the model; converting the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and normalizing the second offset vector to obtain the first offset vector.
According to an embodiment of the present disclosure, determining the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector includes:
determining the offset positions of the vertices of the model in world space using the following formula:
vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
According to an embodiment of the present disclosure, the method further includes: superimposing the texture information in the transparent channel of the texture map corresponding to the current frame image with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain target texture information in the transparent channel of the texture map corresponding to the current frame image.
According to an embodiment of the present disclosure, before updating the texture information of the texture map, the method further includes: determining screen space coordinates of the vertices of the model according to the texture coordinates of the vertices of the model; and generating the texture map according to the screen space coordinates of the vertices of the model.
According to some embodiments of the present disclosure, a special effect control apparatus is provided, including: an acquiring unit configured to, in response to a user's touch operation on a model in a current frame image on a screen, acquire touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation; an updating unit configured to update texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent the texture coordinate information of the vertices of the model when the model is expanded into a plane; and a control unit configured to perform offset control on the vertices of the model according to the updated texture information of the texture map to obtain the updated model.
According to an embodiment of the present disclosure, the acquiring unit includes: a first acquiring subunit configured to acquire touch point texture coordinate information of a previous frame image of the current frame image; and a first determining subunit configured to determine the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
According to an embodiment of the present disclosure, the updating unit includes: a second determining subunit configured to determine the texture shape parameter in the texture parameters according to the touch point texture coordinate information; a third determining subunit configured to determine the texture color parameter in the texture parameters according to the sliding direction information; and an updating subunit configured to update the texture information of the texture map according to the texture shape parameter and the texture color parameter.
According to an embodiment of the present disclosure, the second determining subunit is further configured to: acquire, according to the touch point texture coordinate information, black-and-white gradient information radiating outward from the touch point as a center; and convert the black-and-white gradient information into the texture shape parameter according to the preset texture thickness adjustment parameter and texture hardness adjustment parameter.
According to an embodiment of the present disclosure, the third determining subunit is further configured to: determine the u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and determine the v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
According to an embodiment of the present disclosure, the control unit includes: a fourth determining subunit configured to determine, according to the updated texture information of the texture map, the first offset vector corresponding to the vertices of the model in world space; a fifth determining subunit configured to determine the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector; and a control subunit configured to perform offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
According to an embodiment of the present disclosure, the fourth determining subunit is further configured to: convert the texture shape-color parameters of the vertices of the model on the R channel in the updated texture information of the texture map into the x-axis offset vector in the tangent space of the model; convert the texture shape-color parameters of the vertices of the model on the G channel in the updated texture information of the texture map into the y-axis offset vector in the tangent space of the model; convert the x-axis offset vector and the y-axis offset vector into the second offset vector in world space; and normalize the second offset vector to obtain the first offset vector.
According to an embodiment of the present disclosure, the fifth determining subunit is further configured to:
determine the offset positions of the vertices of the model in world space using the following formula:
vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
where vec3 offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex of the model in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the expansion ratio of the model; and fur_length is a preset adjustment parameter.
According to an embodiment of the present disclosure, the apparatus further includes: a determining unit configured to superimpose the texture information in the transparent channel of the texture map corresponding to the current frame image with the texture information in the transparent channel of the texture maps corresponding to some previous frame images, to obtain target texture information in the transparent channel of the texture map corresponding to the current frame image.
According to an embodiment of the present disclosure, the updating unit further includes: a sixth determining subunit configured to determine the screen space coordinates of the vertices of the model according to the texture coordinates of the vertices of the model; and a generating subunit configured to generate the texture map according to the screen space coordinates of the vertices of the model.
According to some embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the instructions to implement the special effect control method provided by the embodiments of the first aspect of the present disclosure.
According to some embodiments of the present disclosure, a computer-readable storage medium is provided; when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the special effect control method provided by the embodiments of the first aspect of the present disclosure.
According to some embodiments of the present disclosure, a computer program product is provided, including a computer program, wherein the computer program, when executed by a processor, implements the special effect control method provided by the first aspect of the present disclosure.
In the present disclosure, in response to a user's touch operation on a model in a current frame image on a screen, touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation can be acquired, the touch point being a point on the model corresponding to the touch operation; the texture information corresponding to the model is updated according to the touch point texture coordinate information and the sliding direction information; and the vertices of the model are offset-controlled according to the updated texture information to obtain the updated model. Therefore, the present disclosure can create a relatively realistic special effect while ensuring good running performance. Further, a new interaction method is provided, so that special effect control can be realized through touch screen interaction.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, serve together with the specification to explain the principles of the present disclosure, and do not constitute an improper limitation of the present disclosure.
Fig. 1 is a flowchart of a special effect control method according to an exemplary embodiment.
Fig. 2 is a flowchart of another special effect control method according to an exemplary embodiment.
Fig. 3 is a flowchart of another special effect control method according to an exemplary embodiment.
Fig. 4 is a flowchart of another special effect control method according to an exemplary embodiment.
Fig. 5 is a flowchart of another special effect control method according to an exemplary embodiment.
Fig. 6 is a flowchart of another special effect control method according to an exemplary embodiment.
Fig. 7 is a schematic diagram of a special effect control method applied to a touch special effect application scenario of hair shape according to an exemplary embodiment.
Fig. 8 is a block diagram of a special effect control apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram of another special effect control apparatus according to an exemplary embodiment.
Fig. 13 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
To help those of ordinary skill in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings.
It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present disclosure described here can be implemented in orders other than those illustrated or described. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims. All embodiments of the present disclosure may be performed individually or in combination with other embodiments, which is not limited by the present disclosure.
FIG. 1 is a flowchart of a special effect control method according to an exemplary embodiment. It should be noted that the special effect control method of the present disclosure is performed by a special effect control apparatus. The special effect control method in the embodiments of the present disclosure may be performed by the special effect control apparatus in the embodiments of the present disclosure, and the special effect control apparatus may specifically be a hardware device, or software in a hardware device, where the hardware device is, for example, a terminal device or a server. The special effect control method proposed in the present application is explained below by taking a fur model as an example of the model.
As shown in FIG. 1, the special effect control method proposed in the embodiments of the present disclosure includes the following steps.
In step 101, in response to a touch operation by a user on a model in a current frame image on a screen, touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation are acquired, the touch point being a point on the model corresponding to the touch operation.
The touch operation may be an operation of sliding from a touch start point to a touch end point. The user may perform the touch operation on the fur model on the screen in various ways, for example, by tapping or sliding on the screen with a finger or a stylus to trigger the touch operation.
Texture coordinate information (Texture Coordinate), also called UV coordinate information, is used to map and assign the regions where texture pixels are drawn on the model, and includes horizontal U coordinate information and vertical V coordinate information.
The sliding direction information may represent, for a model such as the fur model on the screen, the direction information of sliding from the touch start point to the touch end point.
In the embodiments of the present disclosure, when the user attempts to perform the touch operation on the fur model on the screen, the user may apply pressure to the screen to continuously trigger multiple touch points, thereby performing the touch operation on the fur model on the screen. Accordingly, in response to the user's touch operation on the fur model on the screen, the corresponding touch point texture coordinate information and sliding direction information may be acquired according to the trajectory produced by the touch operation.
In step 102, texture information of a texture map corresponding to the model is updated according to the touch point texture coordinate information and the sliding direction information, where the texture map is used to represent texture coordinate information of the vertices of the model when the model is unfolded into a plane.
The texture map here (Flowmap) may also be called a flow map. Its texels usually store vector information (Vector); using this vector information to perturb the touch point texture coordinates of other textures can produce special dynamic effects on the model surface, such as "flowing water" or "quicksand". For example, using the vector information stored in the texture map to perturb the UV coordinates of the acquired touch point texture can produce flowing dynamic effects such as "stroking" or "combing" the fur model.
In the embodiments of the present disclosure, after the touch point texture coordinate information and the sliding direction information corresponding to the touch operation are acquired, the texture information of the texture map may be updated synchronously according to them, so that the user's touch operation on the fur model on the screen is treated as a drawing operation on the texture of the texture map.
In step 103, offset control is performed on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
In the embodiments of the present disclosure, when the fur model is initially constructed, the UV coordinate information corresponding to each vertex of the model may be preset. In this way, after the texture information of the texture map is updated, the fur shape can be changed by modifying the spatial offset of the vertices of the fur model relative to their original positions.
In the present disclosure, in response to a touch operation by a user on a model in a current frame image on a screen, touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation are acquired, the touch point being a point on the model corresponding to the touch operation; texture information corresponding to the model is updated according to the touch point texture coordinate information and the sliding direction information; and offset control is performed on the vertices of the model according to the updated texture information, to obtain an updated model. The present disclosure can thus control the bending orientation of the fur in different regions, create a fairly realistic fur effect while maintaining good runtime performance, and improve the realism of the displayed dynamic fur shapes. Furthermore, a new interaction method is provided, so that the fur can be manipulated simply through touch-screen interaction.
It should be noted that the present disclosure does not limit the specific manner of acquiring the touch point texture coordinate information corresponding to the touch operation, which may be selected according to the actual situation.
In some embodiments, raycasting may be used to acquire the touch point texture coordinate information corresponding to the touch operation. Raycasting refers to a technique in which, in three-dimensional (3D) space, an endless line is emitted from a point in a given direction, and collisions and contacts with other objects along the ray's trajectory are determined, thereby achieving non-destructive detection. The specific manner of acquiring the touch point texture coordinate information using raycasting belongs to the prior art and is not described in detail here.
It should be noted that the present disclosure does not limit the specific manner of acquiring the sliding direction information corresponding to the touch operation, which may be selected according to the actual situation. In some embodiments, the sliding direction information may be determined according to the acquired touch point texture coordinate information of the current frame and that of the previous frame.
As a possible implementation, as shown in FIG. 2, on the basis of the above embodiments, the process of acquiring the sliding direction information corresponding to the touch operation in step S101 may include the following steps.
In step 201, touch point texture coordinate information of a previous frame image of the current frame image is acquired.
In step 202, the sliding direction information is determined according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
It should be noted that the numerical difference between the UV coordinates of the previous and current frames reflects the user's instantaneous sliding direction over the surface of the fur model. Therefore, in the present disclosure, the touch point texture coordinate information of the previous frame may be acquired, together with the numerical difference between it and the current touch point texture coordinate information, and the sliding direction information may then be determined from the difference.
For example, if the acquired UV value of the touch point texture coordinate information is touch_uv, and the UV value of the previous frame's touch point texture coordinate information is last_uv, the sliding direction information delta_uv can be determined by the formula delta_uv = last_uv - touch_uv.
In the present disclosure, by using raycasting to acquire the touch point texture coordinate information corresponding to the touch operation, and by acquiring the previous frame's touch point texture coordinate information and determining the sliding direction information from it together with the current touch point texture coordinate information, the correlation and interactivity between the user's touch operation and the dynamic fur effect are established, ensuring that changes in fur shape truly and accurately reflect the user's touch operation on the fur model.
It should be noted that, in the present disclosure, when attempting to update the texture information of the texture map according to the touch point texture coordinate information and the sliding direction information, the texture information may be updated by determining a texture shape parameter and a texture color parameter.
As a possible implementation, as shown in FIG. 3, on the basis of the above embodiments, the process of updating the texture information of the texture map according to the touch point texture coordinate information and the sliding direction information in step S102 may include the following steps.
In step 301, a texture shape parameter among texture parameters is determined according to the touch point texture coordinate information.
The texture shape parameter refers to the texture position and texture thickness.
As a possible implementation, as shown in FIG. 4, on the basis of the above embodiments, the process of determining the texture shape parameter among the texture parameters according to the touch point texture coordinate information in step S301 may include the following steps.
In step 401, black-and-white gradient information diverging outward from the touch point as a center is acquired according to the touch point texture coordinate information.
It should be noted that the present disclosure does not limit the manner of acquiring the black-and-white gradient information, which may be selected according to the actual situation.
In some embodiments, a distance function may be used to acquire the black-and-white gradient information diverging outward from the touch point as a center.
The distance function returns the distance between two points — here, between a texel's texture coordinates and the touch point's texture coordinates — which yields a radial gradient centered on the touch point.
In step 402, the black-and-white gradient information is converted into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture softness adjustment parameter.
It should be noted that the present disclosure does not limit the manner of converting the black-and-white gradient information into the texture shape, which may be selected according to the actual situation.
In some embodiments, a smooth step function may be used, together with a preset texture thickness adjustment parameter (Brush Size) and a preset softness adjustment parameter (Brush Hardness), to convert the black-and-white gradient information into the texture shape.
The smooth step function can be used to generate smooth transition values from 0 to 1.
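By way of a non-limiting sketch, this step can be expressed in GLSL; the uniform names (touch_uv, brush_size, brush_hardness) are assumptions of this illustration rather than the disclosure's verbatim code:
// GLSL fragment-shader sketch: black-and-white gradient around the touch
// point, shaped by smoothstep() into a soft-edged brush spot.
uniform vec2 touch_uv;        // touch point texture coordinates
uniform float brush_size;     // preset texture thickness adjustment parameter
uniform float brush_hardness; // preset texture softness adjustment parameter, in (0, 1]

float brushShape(vec2 uv) {
    float d = distance(uv, touch_uv);  // gradient diverging from the touch point
    // 1.0 inside the hard core, smoothly falling to 0.0 at the brush radius.
    return 1.0 - smoothstep(brush_size * brush_hardness, brush_size, d);
}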
In step 302, a texture color parameter among the texture parameters is determined according to the sliding direction information.
It should be noted that the present disclosure does not limit the manner of determining the texture color, which may be selected according to the actual situation.
In some embodiments, the u-axis sliding direction information in the sliding direction information may be determined as the texture color parameter on the R (Red) channel, and the v-axis sliding direction information as the texture color parameter on the G (Green) channel.
In step 303, the texture information of the texture map is updated according to the texture shape parameter and the texture color parameter.
In the embodiments of the present disclosure, the product of the texture shape parameter and the texture color parameter may be computed to obtain a texture shape-color parameter. Further, texture drawing may be performed at the position corresponding to the touch point texture coordinate information in the texture map according to the texture shape-color parameter, to obtain the updated texture map.
It should be noted that, in the present disclosure, the texture information of the texture map corresponding to the current frame image on the alpha channel may be superimposed with the texture information of the texture maps corresponding to some preceding frame images on the alpha channel, to obtain target texture information of the texture map corresponding to the current frame image on the alpha channel.
In the present disclosure, the texture shape parameter is determined according to the touch point texture coordinate information, the texture color parameter is determined according to the sliding direction information, and the texture information of the texture map is then updated according to the two, so that the bending direction of the fur is affected in real time through real-time rendering. Further, by acquiring black-and-white gradient information diverging outward from the touch point according to the touch point texture coordinate information, and converting it into the texture shape parameter according to the preset texture thickness and softness adjustment parameters, the special information storage of a render target can be leveraged to convert the user's touch-screen input into a pixel shape centered on the touched texture coordinate point and extending outward by a certain brush radius, stored in the render target, thereby simulating the coloring behavior of brush painting. Further, by determining the u-axis sliding direction information as the texture color parameter on the R channel and the v-axis sliding direction information as the texture color parameter on the G channel, the touch-screen sliding direction can be stored as the pixel color information of the render target, establishing the correlation and interactivity between the user's touch operation and the dynamic fur effect.
It should be noted that, in the present disclosure, when attempting to perform offset control on the vertices of the fur model according to the updated texture information of the texture map, the positions of the vertices of the fur model in world space may be determined, and offset control may then be performed according to the offset positions of the vertices of the fur model in world space.
As a possible implementation, as shown in FIG. 5, on the basis of the above embodiments, the process of performing offset control on the vertices of the fur model according to the updated texture information of the texture map in step S103 may include the following steps.
In step 501, a first offset vector corresponding to the vertices of the model in world space is determined according to the updated texture information of the texture map.
As a possible implementation, the texture shape-color parameter of the model's vertices on the R channel in the updated texture information of the texture map may be converted into an x-axis offset vector in the tangent space of the model, and the texture shape-color parameter of the model's vertices on the G channel into a y-axis offset vector in the tangent space of the model. Further, the x-axis offset vector and the y-axis offset vector may be converted into a second offset vector in world space, and the second offset vector normalized to obtain the first offset vector.
For example, the updated texture information of the texture map may be sampled according to the touch point texture coordinates, and the sampled data remapped from the range 0 to 1 to the range -1 to 1 so that vector computation can be performed on it. Further, the vector information in the R and G channels of the texture map may be taken as the offset directions along the x and y axes in the tangent space of the fur model, transferred from tangent space to world space through a tangent-to-world matrix operation, and the resulting vector normalized.
It should be noted that the present disclosure does not limit the specific manner of normalization, which may be selected according to the actual situation. For example, the vector may be normalized by batch normalization (BN), layer normalization (LN), instance normalization (IN), switchable normalization (SN), or the like.
Further, the normalized vector may be multiplied by its original modulus so that it regains length information.
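A minimal GLSL sketch of this sampling and conversion step follows; the names flow_map, T, and B (the tangent and bitangent vectors) are assumptions of this illustration:
// GLSL vertex-shader sketch: turn the flowmap's R/G color back into a
// world-space offset vector for the current vertex.
uniform sampler2D flow_map;

vec3 flowOffsetWorld(vec2 uv, vec3 T, vec3 B) {
    vec4 flow = texture(flow_map, uv);
    vec2 dir_ts = flow.rg * 2.0 - 1.0;  // remap [0, 1] back to [-1, 1]
    float len = length(dir_ts);         // remember the original modulus
    // Tangent-to-world: R drives the tangent (x) axis, G the bitangent (y).
    vec3 dir_ws = dir_ts.x * T + dir_ts.y * B;
    // Normalize, then multiply by the original modulus to regain length.
    return (len > 0.0) ? normalize(dir_ws) * len : vec3(0.0);
}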
In step 502, the offset positions of the vertices of the model in world space are determined according to the initial positions of the vertices of the model in world space and the first offset vector.
As a possible implementation, the offset positions of the vertices of the model in world space may be determined using the following formula:
vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
where offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the inflation ratio of the model; and fur_length is a preset adjustment parameter.
For example, after the offset vector acting on the vertices of the fur model in world space is determined, the offset positions of the vertices in world space may be determined by combining the initial positions of the vertices in world space, the inflation ratio of each fur layer, and the fur length parameter, where the inflation ratio of each fur layer lies in the interval [0, 1].
It should be noted that the inflation ratio of each fur layer may be set according to the actual situation. In some embodiments, the inflation ratio of the bottommost fur layer may be set to 0, that of the outermost fur layer to 1, and those of the intermediate layers to increase uniformly. For example, with five fur layers in total, the inflation ratio of the bottommost layer may be set to 0, those of the second to fourth layers to 0.25, 0.5, and 0.75 respectively, and that of the outermost layer to 1.
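As a hedged sketch, the formula above maps directly to a per-layer vertex offset in GLSL; the uniforms are assumptions, with level fed per shell pass (for example layerIndex / (layerCount - 1), so the bottommost layer gets 0 and the outermost gets 1):
// GLSL vertex-shader sketch: shell-layer vertex offset per the formula above.
uniform float level;      // inflation ratio of the current fur layer, in [0, 1]
uniform float fur_length; // preset length adjustment parameter

vec3 offsetVertex(vec3 worldpos, vec3 N, vec3 offset_vector) {
    // Blend the surface normal with the flow offset, then push the vertex
    // outward in proportion to this layer's inflation ratio.
    return worldpos + normalize(N + offset_vector) * level * fur_length;
}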
In step 503, offset control is performed on the vertices of the model according to the offset positions of the vertices of the model in world space.
It should be noted that changing the fur shape is essentially a process of controlling the spatial offset of the vertices of each fur layer relative to their original positions. Therefore, in the present disclosure, after the offset positions of the vertices of the fur model in world space are determined, offset control may be performed on the vertices according to these positions to change the fur shape.
In the present disclosure, the offset vector acting on the vertices of the fur model in world space is determined according to the updated texture information of the texture map, the offset positions of the vertices in world space are determined according to their initial positions and the first offset vector, and offset control is then performed on the vertices according to these offset positions. Through this special use of the texture map, the pixel information in the texture map is treated as directional offset information participating in the computation that modifies the spatial positions of the model's vertices, so that the bending direction and strength of the fur are affected and controlled by the vector information in the texture map. Further, by converting the texture shape-color parameters of the model's vertices on the R and G channels of the updated texture map into x-axis and y-axis offset vectors in the model's tangent space, converting these into the second offset vector in world space, and normalizing it to obtain the first offset vector, the brightness information of the texture's R/G channels can be mapped to the offsets of the model's vertices along the x/y axes of tangent space, modifying the vertices' spatial positions. Further, using the formula to determine the offset positions of the vertices in world space makes it possible to determine the final positions of the vertices in world space after the vector offset.
It should be noted that, in the present disclosure, before attempting to update the texture information of the texture map, the texture map may be generated according to the screen-space positions of the vertices of the fur model.
As a possible implementation, as shown in FIG. 6, on the basis of the above embodiments, the following steps may be included.
In step 601, the screen-space coordinates of the vertices of the model are determined according to the texture coordinates of the vertices of the model.
In some embodiments, the screen-space coordinates of the vertices of the model may be determined using the following formulas:
ScreenPos.x = texCoord.x * 2.0 - 1.0;
ScreenPos.y = texCoord.y * 2.0 - 1.0;
where ScreenPos.x is the x-axis screen-space coordinate of a vertex of the model; texCoord.x is the u-axis texture coordinate of the vertex; ScreenPos.y is the y-axis screen-space coordinate of the vertex; and texCoord.y is the v-axis texture coordinate of the vertex. Each formula remaps a texture coordinate from the range [0, 1] to the normalized device coordinate range [-1, 1].
In step 602, the texture map is generated according to the screen-space coordinates of the vertices of the model.
As a possible implementation, rendering may be performed on a blank canvas according to the screen-space coordinates of the vertices of the model to obtain texture information, and the texture information may be cached to obtain the texture map.
It should be noted that, in the present disclosure, before attempting to update the texture information of the texture map, a render target (RT) may be used to cache a canvas texture. In this case, a scene needs to be pre-rendered before the main scene is rendered; in this scene, the fur model is rendered separately, with its vertices rendered in such a way that screen coordinates correspond to texture coordinates (UV coordinates), thereby preparing the texture map.
A render target refers to a video memory buffer used for rendering pixels. A common use of render targets is off-screen rendering; some image post-processing, such as high dynamic range (HDR) imaging, requires superimposing multiple render targets to achieve the bloom effect.
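A minimal GLSL vertex-shader sketch of this UV-unwrap pre-pass follows; the attribute name a_texcoord is an assumption, and depending on the graphics API the v axis may additionally need flipping:
// GLSL vertex-shader sketch: place each vertex at the screen position given
// by its own UV, so the render target becomes a texture-space canvas.
in vec2 a_texcoord;

void main() {
    vec2 screen_pos = a_texcoord * 2.0 - vec2(1.0);
    gl_Position = vec4(screen_pos, 0.0, 1.0);
}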
In the present disclosure, the screen-space positions of the vertices of the fur model are determined according to their texture coordinates, and the texture map is generated according to these screen-space positions, so that the touch point texture coordinate information and the sliding direction information can be stored into the texture map in real time, and offset control can then be performed on the vertices of the fur model according to the updated texture information of the texture map.
It should be noted that, in the present disclosure, to present the touch effect on the fur shape more realistically, a delayed dynamic effect may be achieved by controlling the speed at which the fur shape recovers to its original form after being changed.
In some embodiments, the accumulation on the alpha channel of the texture information of the texture map may be adjusted according to a delay parameter, so that part of the texture information of the texture maps of preceding frames is superimposed on the current frame.
The alpha channel refers to a special layer that can record transparency information and can identify the transparency and semi-transparency of an image.
The delay parameter lies in the interval (0, 1).
For example, when the user attempts to trigger the "fur stroking" special effect, the user may slide a finger on the screen. At this point, the accumulation on the alpha channel of the texture information of the texture map may be adjusted according to the delay parameter, so that part of the texture information of the preceding frames' texture maps is superimposed on the current frame. In this way, the touched fur displayed on the screen gradually deforms along the trajectory of the user's finger, with the fur deformation trajectory lagging the finger's movement by a certain delay. That is, fur that the user's finger has touched does not immediately recover to its original shape, but recovers gradually.
It should be noted that the delay parameter may be set according to the actual situation. The smaller the delay parameter, the slower the fur shape recovers to its original form after being changed.
For example, if the time for the fur shape to recover to its original form after being changed is t1 when the delay parameter is preset to 0.5, and t2 when the delay parameter is preset to 0.8, then t1 > t2.
In the present disclosure, by adjusting the accumulation on the alpha channel of the texture information of the texture map so that part of the preceding frames' texture information is superimposed on the current frame, the speed at which the fur shape recovers to its original form after being changed can be controlled to achieve a delayed dynamic effect, ensuring that the bending deformation of the fur does not disappear immediately after the touch ray moves away from the touched area but recovers slowly, further improving the visual effect of the special effect control process.
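One hedged way to realize this accumulation in GLSL is sketched below; prev_flow_map, delay, and the particular blend rule are assumptions of this illustration, not the disclosure's verbatim scheme:
// GLSL fragment-shader sketch: fade the previous frame's strokes through the
// alpha channel, then composite the current frame's stroke on top.
uniform sampler2D prev_flow_map; // last frame's render target, not cleared
uniform float delay;             // delay parameter in (0, 1); smaller values
                                 // make the fur recover more slowly

vec4 accumulateFlow(vec4 current_stroke, vec2 uv) {
    vec4 prev = texture(prev_flow_map, uv);
    // Fade the history toward the neutral value (zero direction, zero alpha),
    // so touched fur gradually recovers frame by frame.
    vec4 faded = mix(vec4(0.5, 0.5, 0.0, 0.0), prev, 1.0 - delay);
    // Composite this frame's stroke over the faded history by its alpha.
    return mix(faded, current_stroke, current_stroke.a);
}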
It should be noted that the special effect control method proposed in the present application can be applied to a variety of scenarios.
In the touch special effect scenario for fur shapes, when the user attempts to touch the fur model on the screen, the user may slide a finger over the displayed fur model to trigger a touch operation on it. Accordingly, in response to the user's touch operation on the fur model on the screen, the touch point texture coordinate information and sliding direction information corresponding to the touch operation may be acquired, the texture information of the texture map updated according to them, and offset control then performed on the vertices of the fur model according to the updated texture information of the texture map, so that the fur model changes shape along the sliding trajectory of the user's finger.
For example, as shown in FIG. 7(a), a tuft of ball-shaped fur is displayed on the screen, initially at rest. When the user slides an S-shaped trajectory on the screen, in response to the user's touch operation on the fur model, the effect shown in FIG. 7(b) can be presented. The change in fur shape reflects the S-shaped trajectory produced by the user's touch operation, and the bending deformation of the fur does not disappear immediately but recovers slowly, achieving a delayed dynamic effect with a realistic touch feel, greatly improving the user experience.
In the present disclosure, in response to the user's touch operation on the fur model on the screen, the touch point texture coordinate information and sliding direction information corresponding to the touch operation are acquired, the texture information of the texture map is updated according to them, and offset control is then performed on the vertices of the fur model according to the updated texture information of the texture map, thereby achieving special effect control. The present disclosure can thus perform offset control on the vertices of the fur model according to the updated texture information of the texture map, controlling the bending orientation of the fur in different regions, creating a fairly realistic fur effect while maintaining good runtime performance, and improving the visual effect, playability, and user experience of the special effect control process. Further, instead of relying on caching every frame's rendering information into the render target, the special information storage of the render target is leveraged: since a render target need not be cleared every frame, the transparency information of each frame's rendering is controlled appropriately, so that previously rendered textures are gradually stacked over and covered by later ones, achieving the effect of the fur gradually recovering to its original state after being touched.
As shown in FIG. 8, a special effect control apparatus 1000 includes an acquiring unit 121, an updating unit 122, and a control unit 123.
The acquiring unit 121 is configured to, in response to a touch operation by a user on a model in a current frame image on a screen, acquire touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation.
The updating unit 122 is configured to update texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, where the texture map is used to represent texture coordinate information of the vertices of the model when the model is unfolded into a plane.
The control unit 123 is configured to perform offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
In an embodiment of the present disclosure, as shown in FIG. 9, the acquiring unit 121 in FIG. 8 includes:
a first acquiring subunit 1211, configured to acquire touch point texture coordinate information of a previous frame image of the current frame image; and
a first determining subunit 1212, configured to determine the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
In an embodiment of the present disclosure, as shown in FIG. 10, the updating unit 122 in FIG. 8 includes:
a second determining subunit 1221, configured to determine a texture shape parameter among texture parameters according to the touch point texture coordinate information;
a third determining subunit 1222, configured to determine a texture color parameter among the texture parameters according to the sliding direction information; and
an updating subunit 1223, configured to update the texture information of the texture map according to the texture shape parameter and the texture color parameter.
In an embodiment of the present disclosure, as shown in FIG. 10, the second determining subunit 1221 includes:
an acquiring module 12211, configured to acquire, according to the touch point texture coordinate information, black-and-white gradient information diverging outward from the touch point as a center; and
a converting module 12212, configured to convert the black-and-white gradient information into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture softness adjustment parameter.
In an embodiment of the present disclosure, the third determining subunit 1222 is further configured to:
determine u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and
determine v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
In an embodiment of the present disclosure, as shown in FIG. 11, the control unit 123 in FIG. 8 includes:
a fourth determining subunit 1231, configured to determine, according to the updated texture information of the texture map, a first offset vector corresponding to the vertices of the model in world space;
a fifth determining subunit 1232, configured to determine offset positions of the vertices of the model in world space according to initial positions of the vertices of the model in world space and the first offset vector; and
a control subunit 1233, configured to perform offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
In an embodiment of the present disclosure, the fourth determining subunit 1231 is further configured to:
convert a texture shape-color parameter of the vertices of the model on the R channel in the updated texture information of the texture map into an x-axis offset vector in a tangent space of the model;
convert a texture shape-color parameter of the vertices of the model on the G channel in the updated texture information of the texture map into a y-axis offset vector in the tangent space of the model;
convert the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and
normalize the second offset vector to obtain the first offset vector.
In an embodiment of the present disclosure, the fifth determining subunit 1232 is further configured to:
determine the offset positions of the vertices of the model in world space using the following formula:
vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
where offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the inflation ratio of the model; and fur_length is a preset adjustment parameter.
In an embodiment of the present disclosure, as shown in FIG. 12, the special effect control apparatus 1000 in FIG. 8 further includes:
a determining unit 124, configured to superimpose texture information of the texture map corresponding to the current frame image on the alpha channel with texture information of the texture maps corresponding to some preceding frame images on the alpha channel, to obtain target texture information of the texture map corresponding to the current frame image on the alpha channel.
In an embodiment of the present disclosure, as shown in FIG. 10, the updating unit 122 further includes:
a sixth determining subunit 1225, configured to determine screen-space coordinates of the vertices of the model according to texture coordinates of the vertices of the model; and
a generating subunit 1226, configured to generate the texture map according to the screen-space coordinates of the vertices of the model.
Regarding the apparatuses in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the related method and will not be elaborated here.
In the present disclosure, in response to a touch operation by a user on a model in a current frame image on a screen, touch point texture coordinate information and sliding direction information of the touch point corresponding to the touch operation are acquired, the touch point being a point on the model corresponding to the touch operation; texture information corresponding to the model is updated according to the touch point texture coordinate information and the sliding direction information; and offset control is performed on the vertices of the model according to the updated texture information, to obtain an updated model. The present disclosure can therefore create a fairly realistic special effect while maintaining good runtime performance. Furthermore, a new interaction method is provided, so that special effect control can be achieved simply through touch-screen interaction.
To implement the above embodiments, the present disclosure further provides an electronic device. As shown in FIG. 13, the electronic device 8000 includes: a processor 801; and one or more memories 802 for storing instructions executable by the processor 801; where the processor 801 is configured to perform the special effect control method described in the above embodiments. The processor 801 and the memory 802 are connected by a communication bus.
To implement the above embodiments, the present disclosure further provides a computer-readable storage medium. When instructions in the computer-readable storage medium are executed by the processor 801 of the electronic device 8000, the electronic device 8000 is enabled to perform the special effect control method described in the above embodiments. In some embodiments, the storage medium may be a non-transitory computer-readable storage medium, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
To implement the above embodiments, the present disclosure further provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements the special effect control method described in the above embodiments.
This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (23)

  1. A special effect control method, comprising:
    in response to a touch operation by a user on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation;
    updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent texture coordinate information of vertices of the model when the model is unfolded into a plane; and
    performing offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
  2. The special effect control method according to claim 1, wherein acquiring the sliding direction information corresponding to the touch operation comprises:
    acquiring touch point texture coordinate information of a previous frame image of the current frame image; and
    determining the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
  3. The special effect control method according to claim 1, wherein updating the texture information of the texture map according to the touch point texture coordinate information and the sliding direction information comprises:
    determining a texture shape parameter among texture parameters according to the touch point texture coordinate information;
    determining a texture color parameter among the texture parameters according to the sliding direction information; and
    updating the texture information of the texture map according to the texture shape parameter and the texture color parameter.
  4. The special effect control method according to claim 3, wherein determining the texture shape parameter among the texture parameters according to the touch point texture coordinate information comprises:
    acquiring, according to the touch point texture coordinate information, black-and-white gradient information diverging outward from the touch point as a center; and
    converting the black-and-white gradient information into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture softness adjustment parameter.
  5. The special effect control method according to claim 3, wherein determining the texture color parameter among the texture parameters according to the sliding direction information comprises:
    determining u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and
    determining v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
  6. The special effect control method according to claim 1, wherein performing offset control on the vertices of the model according to the updated texture information of the texture map comprises:
    determining, according to the updated texture information of the texture map, a first offset vector corresponding to the vertices of the model in world space;
    determining offset positions of the vertices of the model in world space according to initial positions of the vertices of the model in world space and the first offset vector; and
    performing offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
  7. The special effect control method according to claim 6, wherein determining, according to the updated texture information of the texture map, the first offset vector corresponding to the vertices of the model in world space comprises:
    converting a texture shape-color parameter of the vertices of the model on the R channel in the updated texture information of the texture map into an x-axis offset vector in a tangent space of the model;
    converting a texture shape-color parameter of the vertices of the model on the G channel in the updated texture information of the texture map into a y-axis offset vector in the tangent space of the model;
    converting the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and
    normalizing the second offset vector to obtain the first offset vector.
  8. The special effect control method according to claim 6, wherein determining the offset positions of the vertices of the model in world space according to the initial positions of the vertices of the model in world space and the first offset vector comprises:
    determining the offset positions of the vertices of the model in world space using the following formula:
    vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
    wherein offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the inflation ratio of the model; and fur_length is a preset adjustment parameter.
  9. The special effect control method according to claim 3, further comprising:
    superimposing texture information of the texture map corresponding to the current frame image on an alpha channel with texture information of texture maps corresponding to some preceding frame images on the alpha channel, to obtain target texture information of the texture map corresponding to the current frame image on the alpha channel.
  10. The special effect control method according to claim 1, further comprising:
    determining screen-space coordinates of the vertices of the model according to texture coordinates of the vertices of the model; and
    generating the texture map according to the screen-space coordinates of the vertices of the model.
  11. A special effect control apparatus, comprising:
    an acquiring unit configured to, in response to a touch operation by a user on a model in a current frame image on a screen, acquire touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation;
    an updating unit configured to update texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent texture coordinate information of vertices of the model when the model is unfolded into a plane; and
    a control unit configured to perform offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
  12. The special effect control apparatus according to claim 11, wherein the acquiring unit comprises:
    a first acquiring subunit configured to acquire touch point texture coordinate information of a previous frame image of the current frame image; and
    a first determining subunit configured to determine the sliding direction information according to the touch point texture coordinate information of the previous frame image and the touch point texture coordinate information.
  13. The special effect control apparatus according to claim 11, wherein the updating unit comprises:
    a second determining subunit configured to determine a texture shape parameter among texture parameters according to the touch point texture coordinate information;
    a third determining subunit configured to determine a texture color parameter among the texture parameters according to the sliding direction information; and
    an updating subunit configured to update the texture information of the texture map according to the texture shape parameter and the texture color parameter.
  14. The special effect control apparatus according to claim 13, wherein the second determining subunit comprises:
    an acquiring module configured to acquire, according to the touch point texture coordinate information, black-and-white gradient information diverging outward from the touch point as a center; and
    a converting module configured to convert the black-and-white gradient information into the texture shape parameter according to a preset texture thickness adjustment parameter and a preset texture softness adjustment parameter.
  15. The special effect control apparatus according to claim 13, wherein the third determining subunit is further configured to:
    determine u-axis sliding direction information in the sliding direction information as the texture color parameter on the R channel; and
    determine v-axis sliding direction information in the sliding direction information as the texture color parameter on the G channel.
  16. The special effect control apparatus according to claim 11, wherein the control unit comprises:
    a fourth determining subunit configured to determine, according to the updated texture information of the texture map, a first offset vector corresponding to the vertices of the model in world space;
    a fifth determining subunit configured to determine offset positions of the vertices of the model in world space according to initial positions of the vertices of the model in world space and the first offset vector; and
    a control subunit configured to perform offset control on the vertices of the model according to the offset positions of the vertices of the model in world space.
  17. The special effect control apparatus according to claim 16, wherein the fourth determining subunit is further configured to:
    convert a texture shape-color parameter of the vertices of the model on the R channel in the updated texture information of the texture map into an x-axis offset vector in a tangent space of the model;
    convert a texture shape-color parameter of the vertices of the model on the G channel in the updated texture information of the texture map into a y-axis offset vector in the tangent space of the model;
    convert the x-axis offset vector and the y-axis offset vector into a second offset vector in world space; and
    normalize the second offset vector to obtain the first offset vector.
  18. The special effect control apparatus according to claim 16, wherein the fifth determining subunit is further configured to:
    determine the offset positions of the vertices of the model in world space using the following formula:
    vec3 offset_worldpos = worldpos + normalize(N + offset_vector) * level * fur_length;
    wherein offset_worldpos is the offset position of a vertex of the model in world space; worldpos is the initial position of the vertex in world space; normalize() is a normalization function; N is the normal vector of the model; offset_vector is the first offset vector; level is the inflation ratio of the model; and fur_length is a preset adjustment parameter.
  19. The special effect control apparatus according to claim 13, further comprising:
    a determining unit configured to superimpose texture information of the texture map corresponding to the current frame image on the alpha channel with texture information of the texture maps corresponding to some preceding frame images on the alpha channel, to obtain target texture information of the texture map corresponding to the current frame image on the alpha channel.
  20. The special effect control apparatus according to claim 11, wherein the updating unit further comprises:
    a sixth determining subunit configured to determine screen-space coordinates of the vertices of the model according to texture coordinates of the vertices of the model; and
    a generating subunit configured to generate the texture map according to the screen-space coordinates of the vertices of the model.
  21. An electronic device, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to execute the instructions to implement the following steps:
    in response to a touch operation by a user on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation;
    updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent texture coordinate information of vertices of the model when the model is unfolded into a plane; and
    performing offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
  22. A computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to implement the following steps:
    in response to a touch operation by a user on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation;
    updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent texture coordinate information of vertices of the model when the model is unfolded into a plane; and
    performing offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.
  23. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the following steps:
    in response to a touch operation by a user on a model in a current frame image on a screen, acquiring touch point texture coordinate information and sliding direction information of a touch point corresponding to the touch operation, the touch point being a point on the model corresponding to the touch operation;
    updating texture information of a texture map corresponding to the model according to the touch point texture coordinate information and the sliding direction information, wherein the texture map is used to represent texture coordinate information of vertices of the model when the model is unfolded into a plane; and
    performing offset control on the vertices of the model according to the updated texture information of the texture map, to obtain an updated model.