WO2022088928A1 - Rendering method, apparatus, device and storage medium for elastic objects - Google Patents

Rendering method, apparatus, device and storage medium for elastic objects

Info

Publication number
WO2022088928A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
touch
deformation
elastic
elastic object
Prior art date
Application number
PCT/CN2021/115591
Other languages
English (en)
French (fr)
Inventor
王惊雷
鄂彦志
冯宇飞
刘飞鹏
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Priority to US18/034,325 (published as US20230386137A1)
Priority to EP21884660.8A (published as EP4207083A4)
Publication of WO2022088928A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0484 Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on GUIs for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F3/0487 Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on GUIs using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; context of image processing
    • G06T2207/30241 Trajectory
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification

Definitions

  • Embodiments of the present disclosure relate to the technical field of image rendering, and in particular, to a method, apparatus, device, and storage medium for rendering elastic objects.
  • Methods for realizing dynamic special effects in the related art mainly include three approaches: skeletal animation, blend deformation (also called BlendShape), and vertex animation.
  • However, these three approaches cannot provide realistic elastic effects, and users cannot participate in the interaction, resulting in a poor user experience.
  • To solve this, the embodiments of the present disclosure provide a method, apparatus, device, and storage medium for rendering elastic objects, which can provide realistic elastic effects, thereby improving interactivity with users and in turn improving the user experience.
  • A first aspect of the embodiments of the present disclosure provides a method for rendering elastic objects, including: constructing a mesh model of an elastic object; in response to a deformation triggering operation on the elastic object, determining the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on; determining the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model; and driving the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points.
  • A second aspect of the embodiments of the present disclosure provides a rendering apparatus, including:
  • a model building module configured to construct a mesh model of an elastic object;
  • a first determination module configured to, in response to a deformation triggering operation on the elastic object, determine the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on;
  • a second determination module configured to determine the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model of the elastic object;
  • an execution module configured to drive the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points in the mesh model of the elastic object.
  • A third aspect of the embodiments of the present disclosure provides a terminal device, the terminal device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of the first aspect can be implemented.
  • A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method of the first aspect can be implemented.
  • In the technical solutions provided by the embodiments of the present disclosure, a mesh model of the elastic object is constructed; in response to a deformation triggering operation on the elastic object, the deformation position and velocity of the acting point in the mesh model under the deformation triggering operation are first determined; then, based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model, the motion trajectory of each grid point in the mesh model is determined; finally, based on the motion trajectories of the grid points, the mesh model of the elastic object is driven to perform elastic motion.
  • The technical solutions provided by the embodiments of the present disclosure project only some representative points (such as points on the surface of the counterpart, or points depicting the outline of the counterpart) into the virtual world as grid points on the mesh model of the elastic object, which can greatly reduce the number of grid points in the mesh model, reduce the complexity of subsequent calculations, and reduce the dependence of the calculations on hardware performance; this is especially suitable for miniaturized devices with low computing power, such as mobile phones and smart watches.
  • FIG. 1 is a flowchart of a method for rendering an elastic object provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic diagram of the shape of an elastic object according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram after grid points are marked in FIG. 2;
  • FIG. 4 is a schematic diagram of a user performing a touch sliding operation according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of another user performing a touch sliding operation according to an embodiment of the present disclosure;
  • FIG. 6 is a schematic diagram of yet another user performing a touch sliding operation according to an embodiment of the present disclosure;
  • FIG. 7 is a flowchart of another method for rendering an elastic object according to an embodiment of the present disclosure;
  • FIG. 8 is a structural block diagram of a rendering apparatus according to an embodiment of the present disclosure;
  • FIG. 9 is a schematic structural diagram of a terminal device in an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a method for rendering an elastic object provided by an embodiment of the present disclosure; the method may be executed by a terminal device.
  • The terminal device can be exemplarily understood as a device having video or animation playback capability, such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a smart TV.
  • The terminal device referred to in this embodiment may be equipped with a display device and a human-computer interaction device; the terminal device may receive user instructions through the human-computer interaction device and, based on the user instructions, present an appropriate response result by means of the display device, for example displaying a specific dynamic 3D elastic effect through the display device.
  • The technical solution provided by the present disclosure can be applied to recording and editing video, and can also be applied to multiple people interacting through a terminal (e.g., making a video call).
  • The technical solution provided by the present disclosure can also be applied to other situations, which is not limited by the present disclosure.
  • the method provided by this embodiment includes the following steps:
  • the mesh model of the elastic object may be constructed based on the 3D model data of the elastic object, wherein the 3D model data of the elastic object may be pre-configured.
  • In this step, the elastic object is rendered on the interface of the terminal device, where the interface refers to the display interface of the terminal.
  • The interface may be the usage interface of an application program installed on the terminal, such as a video call interface, a video recording interface, a video editing interface, or a video playback interface.
  • Elastic objects refer to objects displayed in the interface that can respond to user operations and display elastic dynamic effects.
  • If the interface is a video editing interface, a video call interface, or a video playback interface, the elastic object may be a prop added as a special effect.
  • The elastic object can be set to correspond to a specific object in the real world (hereinafter referred to as the counterpart).
  • Exemplarily, the elastic object may be a hat-shaped prop, a scarf-shaped prop, a headgear-shaped prop, a ball-shaped prop, or the like.
  • From a microscopic point of view, specific objects in the real world are composed of a large number of microscopic particles such as particles, atoms, or molecules. Because of the interaction forces between these microscopic particles, an object can recover its original size and shape after deformation, that is, it has elasticity.
  • By setting the elastic object to correspond to a specific object in the real world, the specific real-world object is in essence projected into the virtual world; in other words, the elastic object is used to simulate the specific real-world object, which makes the experience feel more real to the user and resonates with the user.
  • In the real world, the counterpart can be regarded as a set of points. When the counterpart is pulled, it deforms; the essence of deformation is a change in the relative positions of the multiple points constituting the counterpart.
  • the elastic object can be set to include multiple grid points. Grid points are used to describe the topography of elastic objects.
  • the counterpart is three-dimensional, so the points constituting the counterpart include points depicting the surface of the counterpart and points depicting the interior of the counterpart.
  • Only the combined action of the points depicting the surface of the counterpart and the points depicting its interior constitutes the counterpart.
  • In actual use, however, the elastic object mainly serves to decorate the image, so its internal structure usually needs no attention. For example, to put a hat on a character image in the interface, it is only necessary to superimpose the picture of the hat on the picture of the character; there is no need to fully display the interior of the hat. Therefore, in one possible implementation, only representative points may be selected as grid points. That is, when determining the grid points, only some representative points among the points constituting the counterpart are projected into the virtual world as grid points of the elastic object.
  • A "representative point" may be a point describing the surface of the specific counterpart, or a point that can depict the outline of the counterpart.
  • In the subsequent calculations, the grid points mainly serve as the objects for which velocity and position are calculated, finally yielding the entire dynamic effect of the elastic object.
  • By projecting only some representative points among the points constituting the counterpart into the virtual world as the grid points of the elastic object, the number of grid points can be greatly reduced, reducing the complexity of subsequent calculations and the dependence of the calculations on hardware performance, which is especially suitable for miniaturized devices with low computing power such as mobile phones and smart watches.
  • FIG. 2 is a schematic diagram of the shape of an elastic object according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram after the grid points are marked in FIG. 2 .
  • This embodiment of the present application does not limit the implementation of S110.
  • For example, in one possible implementation, it may specifically include: acquiring an elastic-object adding instruction input by a user; and, in response to the elastic-object adding instruction, constructing the mesh model of the elastic object and displaying the added elastic object on the interface to be edited.
  • The deformation triggering operation in this step acts on the elastic object so that the elastic object deforms due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on.
  • The motion of the acting point can be regarded as motion caused directly by the user's deformation triggering operation (e.g., a pulling operation).
  • Among the grid points in the mesh model of the elastic object, the motion of the grid points other than the acting point is motion produced by the motion of the acting point and the restriction of the preset elastic constraints.
  • The user's deformation triggering operation on the elastic object can be obtained through a device such as a touch screen.
  • the user can directly perform the deformation triggering operation on the display device.
  • the embodiments of the present application do not limit the above-mentioned "deformation triggering operation of the elastic object by the user".
  • The user's deformation triggering operation on the elastic object may include a touch sliding operation, where the touch point corresponding to the touch sliding operation is used to determine the acting point, and the sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position and velocity of the acting point.
  • The method by which the user performs the deformation triggering operation on the elastic object may specifically include: the user touches a certain area on the display device with a finger (hereinafter referred to as the touch area) and moves the finger in a certain direction.
  • the user may apply pressure to the display device, or may not apply pressure to the display device. If the user exerts pressure on the display device, the pressure sensor can be invoked to identify the user's touch operation. If the user does not exert pressure on the display device, the light sensor can be called to identify the user's touch operation.
  • In practice, using touch sliding to trigger the acting point to move away from its original position mainly involves two cases:
  • In the first case, the user's touch area does not overlap the area covered by the elastic object. That is, the touch point corresponding to the touch sliding operation is not on the elastic object, but on an area of the touch interface other than the area covered by the elastic object.
  • In this case, the acting point can be set to a preset grid point on the elastic object, and the touch sliding operation that the user performs on the interface is mapped onto the preset grid point.
  • FIG. 4 is a schematic diagram of a user performing a touch sliding operation according to an embodiment of the present disclosure.
  • In the second case, the user's touch area at least partially overlaps the area covered by the elastic object. That is, the touch point corresponding to the touch sliding operation is on the elastic object.
  • In this case, the acting point can be set to the grid point in the mesh model of the elastic object corresponding to the touch point, and the touch sliding operation that the user performs on the interface is mapped onto that grid point.
  • FIG. 5 is a schematic diagram of another user performing a touch sliding operation according to an embodiment of the present disclosure.
  • The selection method may be to determine the acting point according to the pressure values detected at the physical positions of the grid points on the screen, for example taking the grid point at the physical position with the maximum pressure value as the acting point.
  • Alternatively, the geometric center of the user's touch area may first be determined, and the acting point determined according to the distance between each grid point and the geometric center, for example taking the grid point closest to the geometric center as the acting point.
  • Since elasticity means recovering the original size and shape after deformation, the entire deformation process of the elastic object is divided into two stages: in the first stage, the user touches the display device and deforms the elastic object through the touch sliding operation; in the second stage, the user's finger leaves the display device, no longer acts on the elastic object, and the elastic object recovers its initial shape. FIG. 6 is a schematic diagram of yet another user performing a touch sliding operation according to an embodiment of the present disclosure.
  • Exemplarily, the first stage A corresponds to the user moving the finger in the X direction after touching, stopping the slide after reaching point n, and the finger leaving the display device.
  • In this stage, the display effect of the display device is that grid point e of the hat-shaped elastic object moves with the finger to point n, and the overall shape of the hat-shaped elastic object changes.
  • In the second stage B, the user's finger leaves the display device, and the hat-shaped elastic object recovers its original state under the action of "elasticity".
  • For the first stage, when the user performs the touch sliding operation, the user's sliding speed and sliding distance can be mapped onto the acting point to obtain the velocity and deformation position of the acting point.
  • Microscopically, the sliding trajectory is the set of points at which the finger is located at different motion moments. Therefore, the sliding trajectory can be obtained by connecting the finger's positions in chronological order.
  • Point e is the finger's position at the initial moment of sliding. The moment the finger reaches point e is T1, the moment it reaches point f is T2, the moment it reaches point g is T3, ..., and the moment it reaches point n is Tm. As long as the time at which the finger reaches each point and the position coordinates of each point are known, the user's sliding speed at each moment and the sliding distance up to each moment can be obtained.
  • a pressure sensor or an optical sensor in the display device can be used to detect the time when the finger reaches each point and the position coordinates of each point. This is because a display device with a touch function or a light-sensing function is usually provided with a large number of pressure sensors or optical sensors scattered therein, and different pressure sensors or optical sensors have different position coordinates. When the user touches the display device, only the pressure sensor or optical sensor in the area covered by the finger will respond and output an electrical signal.
  • In the mapping process, a scale parameter can be set, and the sliding speed and sliding distance of the acting point can be calculated based on the scale parameter to complete the mapping operation.
  • For example, suppose the distance from point e to point n is 5 cm and the scale parameter is 1; when the user's finger slides from point e to point n, the moving distance of the grid point determined according to point e is 1 × 5 cm = 5 cm.
  • In practice, the value of the scale parameter can be arbitrary; for example, it can be a number greater than 1, a positive number less than 1, or a negative number.
  • The value of the scale parameter may also be set to a random number; for example, for the same elastic object, the scale parameter takes different values in two consecutive uses of the rendering method provided by the present disclosure.
  • Setting the scale parameter can make the special effect more interesting.
  • For the second stage, there are various methods for determining the velocity and deformation position of the acting point at each motion moment after the moment sliding stops, for example presetting the velocity and/or deformation position at each motion moment after that moment.
  • In one possible implementation, the velocity and deformation position of the acting point at each motion moment after the touch sliding operation stops are determined according to the velocity and deformation position of the acting point when the operation stops.
  • The essence of this setting is to infer the subsequent velocity and deformation position from the velocity and deformation position of the acting point at the moment the touch sliding stops, which can make the presented elastic effect more natural and real.
  • Method 1: determine the velocity and position of the acting point after the moment the touch sliding stops according to the deformation position and velocity of the acting point at that moment and the preset gravity of the acting point.
  • Each grid point can be given a preset gravity value to simulate the counterpart in the real world.
  • The preset gravity values of different grid points may be the same or different, which is not limited in this application.
  • In different situations, the preset gravity value of the same grid point may also be the same or different. Exemplarily, if the terminal is placed horizontally (that is, the display surface of the terminal is parallel to the ground), the preset gravity values of all grid points are the same; if the terminal is placed vertically (that is, the display surface of the terminal is perpendicular to the ground), the smaller a grid point's distance from the ground, the larger its preset gravity value.
  • Method 1 fully considers the influence of gravity on the process of the elastic object recovering its original shape, which can make the elastic effect presented by the elastic object in the second stage more natural and real.
  • Method 2: determine the force information of the acting point at its position according to the deformation position of the acting point at the moment the touch sliding stops and the elastic constraints between the acting point and the other grid points; then determine the velocity and deformation position of the acting point after that moment according to the deformation position, velocity, and force information of the acting point at that moment.
  • As mentioned above, the interaction forces between the microscopic particles that make up an object allow the object to recover its original size and shape after deformation, that is, to be elastic.
  • The essence of Method 2 is to fully consider the influence of the elastic constraints on the process of the elastic object recovering its original shape, which can make the elastic effect presented by the elastic object in the second stage more natural and real.
  • Method 1 and Method 2 can also be combined to determine the velocity and position of the acting point at each motion moment after the touch sliding stops, further making the elastic effect presented by the elastic object in the second stage more natural and real.
  • When executing "determining the velocity and position of the acting point after the moment the touch sliding stops according to the deformation position, velocity, and force information of the acting point at that moment", specifically, after that moment, the deformation position, velocity, and force information of the acting point at the following moment can be determined based on its deformation position, velocity, and force information at the preceding moment.
  • For example, based on the deformation position, velocity, and force information at time Tm+p, those at time Tm+p+1 are determined; based on those at time Tm+p+1, those at time Tm+p+2 are determined. This is repeated, finally calculating the velocity and deformation position of the acting point at each motion moment of the second stage.
  • The essence of this setting is to use the idea of iteration and looping to calculate the velocity and deformation position of the acting point at each motion moment.
  • In practice, simulating directly with a triangle mesh would make the model too soft and insufficiently elastic.
  • The above method of determining the deformation position, velocity, and force information of the acting point at the following moment from those at the preceding moment, after the touch sliding stops, makes it possible to split each solve into several sub-steps (simulation substeps) and to add rest pose attachment constraints (such as elastic constraints with other grid points) to the triangle mesh, which can significantly enhance the elastic effect.
  • For the first stage, when the user performs the deformation triggering operation so that the acting point on the mesh model of the elastic object deviates from its original position, the deformation positions and velocities of the grid points other than the acting point (hereinafter referred to as the other grid points) are determined according to the deformation position and velocity of the acting point at each motion moment and the preset elastic constraints between the grid points of the elastic object.
  • For the second stage, the deformation positions and velocities of the other grid points at each motion moment after the deformation triggering operation stops are determined according to the deformation position and velocity of the acting point at each motion moment after the operation stops and the preset elastic constraints between the grid points on the elastic object.
  • The essence of this step is to adjust the deformation position and velocity of each grid point in turn, according to the deformation positions and velocities of the grid points on the mesh model at different motion moments and in the order of those motion moments, and to present the resulting effect on the display device.
  • Exemplarily, first, the refresh frequency of the current terminal display is obtained, where the refresh frequency refers to the number of images the display device can display per second; secondly, the deformation position and velocity of each grid point in the mesh model of the elastic object for each displayed image are determined; thirdly, each time an image is displayed, the pixel unit corresponding to each grid point in the mesh model is determined; finally, the pixel voltages are adjusted, finally realizing the deformation process of the elastic object displayed by the display device.
  • In summary, in response to a deformation triggering operation on the elastic object, the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation are determined; then, based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model, the motion trajectory of each grid point in the mesh model is determined; finally, based on the motion trajectories of the grid points, the mesh model of the elastic object is driven to perform elastic motion. In this way, the entire process of the elastic object deforming elastically under force can be simulated and dynamically displayed, providing a realistic elastic effect, improving interactivity with users, and improving the user experience.
  • Moreover, the above technical solution projects only some representative points among the points constituting the counterpart (such as points describing the surface of the specific counterpart, or points depicting its outline) into the virtual world as grid points of the mesh model of the elastic object, which can greatly reduce the number of grid points, reduce the complexity of subsequent calculations, and reduce the dependence of the calculations on hardware performance, especially for miniaturized devices with low computing power such as mobile phones and smart watches.
  • FIG. 7 is a flowchart of another elastic object rendering method provided by an embodiment of the present disclosure.
  • FIG. 7 is a specific example of applying the rendering method of FIG. 1 to a video communication interface; the rendering method shown in FIG. 7 includes:
  • S201: Start a video communication application in the terminal, enter the video communication interface, and select a contact to establish an end-to-end network communication connection.
  • In this example, a 3D headgear sticker is the elastic object.
  • The 3D headgear sticker can decorate the character at the current end or the opposite end in the interface.
  • The face tracking function is activated, and the 3D headgear is displayed on the human face.
  • The physical parameters of the elastic object simulation may specifically be the scale parameter mentioned above for completing the mapping operation, the preset grid point used as the acting point, the preset gravity values of the grid points of the elastic object, the elastic constraints between grid points, or the like.
  • In response to the user performing a touch sliding operation on the 3D headgear sticker, the elastic body simulation solver is used to complete S120-S130 in FIG. 1 of the present disclosure.
  • The elastic body simulation solver determines the acting point in the mesh model of the headgear based on the sliding information, and then obtains the deformation position and velocity of each grid point in the mesh model at each motion moment. The deformation positions and velocities of the grid points at each motion moment are used as special effect data and sent to the opposite-end user.
  • S205: The local terminal and/or the opposite terminal re-locates the face position and renders the deformation process of the headgear on the screen based on the received special effect data, presenting a realistic dynamic elastic effect to the user.
  • the above technical solutions can increase the interactive interest of both parties in the call and improve the user experience.
  • S204-S205 may be repeatedly performed multiple times during the entire video call.
  • An embodiment of the present disclosure further provides a rendering apparatus. For ease of understanding, the following description is made with reference to FIG. 8, which is a structural block diagram of a rendering apparatus provided by an embodiment of the present disclosure.
  • As shown in FIG. 8, the rendering apparatus includes:
  • a model building module 310, configured to construct a mesh model of an elastic object;
  • a first determination module 320, configured to, in response to a deformation triggering operation on the elastic object, determine the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on;
  • a second determination module 330, configured to determine the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model of the elastic object;
  • an execution module 340, configured to drive the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points in the mesh model of the elastic object.
  • In one possible implementation, the deformation triggering operation is specifically a touch sliding operation, where the touch point corresponding to the touch sliding operation is used to determine the acting point, and the sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position and velocity of the acting point.
  • In one case, the acting point is a preset grid point on the mesh model of the elastic object.
  • In another case, the acting point is a grid point on the mesh model corresponding to the touch point.
  • The first determination module 320 includes:
  • a first determination submodule, configured to map the user's sliding speed and sliding distance onto the acting point when the user performs the touch sliding operation, obtaining the deformation position and velocity of the acting point.
  • The first determination module 320 further includes:
  • a second determination submodule, configured to determine the velocity and deformation position of the acting point after the touch operation stops according to the velocity and deformation position of the acting point when the touch operation stops.
  • In one implementation, the second determination submodule includes:
  • a first determination subunit, configured to determine the velocity and position of the acting point after the touch operation stops according to the deformation position and velocity of the acting point when the touch operation stops and the preset gravity of the acting point.
  • In another implementation, the second determination submodule includes:
  • a second determination subunit, configured to determine the force information of the acting point at its deformation position according to the deformation position of the acting point when the touch operation stops and the elastic constraints between the acting point and the other grid points;
  • a third determination subunit, configured to determine the velocity and deformation position of the acting point after the touch operation stops according to the deformation position, velocity, and force information of the acting point when the touch operation stops.
  • The third determination subunit is used for:
  • after the touch sliding stops, determining the deformation position, velocity, and force information of the acting point at the following moment based on its deformation position, velocity, and force information at the preceding moment.
  • Since the rendering apparatus provided by the embodiments of the present disclosure can be used to execute any elastic object rendering method provided by the embodiments of the present disclosure, it has the same or corresponding beneficial effects as the methods it can execute, which are not repeated here.
  • An embodiment of the present disclosure further provides a terminal device, the terminal device including a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of any one of FIG. 1 to FIG. 7 can be implemented.
  • FIG. 9 is a schematic structural diagram of a terminal device in an embodiment of the present disclosure.
  • The terminal device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), and a vehicle-mounted terminal (for example, a car navigation terminal), and stationary terminals such as a digital TV and a desktop computer.
  • the terminal device shown in FIG. 9 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • A terminal device 1000 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 1001, which may execute various appropriate operations and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data required for the operation of the terminal device 1000.
  • the processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other through a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004 .
  • The following devices can be connected to the I/O interface 1005: an input device 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 1007 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; and a storage device 1008 including, for example, a magnetic tape, a hard disk, etc.
  • the communication means 1009 may allow the terminal device 1000 to communicate wirelessly or wiredly with other devices to exchange data.
  • While FIG. 9 shows a terminal device 1000 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 1009, or from the storage device 1008, or from the ROM 1002.
  • When the computer program is executed by the processing device 1001, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections having one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • The client and server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned terminal device; or may exist alone without being assembled into the terminal device.
  • The computer-readable medium carries one or more programs, and when the one or more programs are executed by the terminal device, the terminal device: constructs a mesh model of an elastic object; in response to a deformation triggering operation on the elastic object, determines the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on; determines the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model; and drives the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points in the mesh model of the elastic object.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., connected through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or operations, or can be implemented in a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments of the present disclosure may be implemented in software or in hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
  • Exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, in which a computer program is stored. When the computer program is executed by a processor, the method of any one of the foregoing embodiments in FIG. 1 to FIG. 7 can be implemented; the implementation manner and beneficial effects are similar and are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to a rendering method, apparatus, device, and storage medium for elastic objects. The method includes: constructing a mesh model of an elastic object; in response to a deformation triggering operation on the elastic object, first determining the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation; determining the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model; and finally, based on the motion trajectories of the grid points in the mesh model, driving the mesh model of the elastic object to perform elastic motion. In this way, the entire process of the elastic object deforming elastically under force can be simulated and dynamically displayed, providing a realistic elastic effect, improving interactivity with the user, and improving the user experience.

Description

Rendering method, apparatus, device and storage medium for elastic objects
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on October 28, 2020, with application number 202011169108.5 and application title "Rendering method, apparatus, device and storage medium for elastic objects", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the technical field of image rendering, and in particular to a rendering method, apparatus, device, and storage medium for elastic objects.
Background
Methods for realizing dynamic special effects in the related art mainly include three approaches: skeletal animation, blend deformation (also called BlendShape), and vertex animation. However, these three approaches cannot provide realistic elastic effects, and the user cannot participate in the interaction, resulting in a poor user experience.
Summary
In order to solve the above technical problem, or at least partially solve it, embodiments of the present disclosure provide a rendering method, apparatus, device, and storage medium for elastic objects, which can provide realistic elastic effects, thereby improving interactivity with the user and in turn improving the user experience.
A first aspect of the embodiments of the present disclosure provides a rendering method for elastic objects, including:
constructing a mesh model of an elastic object;
in response to a deformation triggering operation on the elastic object, determining the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on;
determining the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model of the elastic object;
driving the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points in the mesh model of the elastic object. A second aspect of the embodiments of the present disclosure provides a rendering apparatus, including:
a model building module configured to construct a mesh model of an elastic object;
a first determination module configured to, in response to a deformation triggering operation on the elastic object, determine the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation, wherein the deformation triggering operation is used to trigger the elastic object to deform due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on;
a second determination module configured to determine the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model of the elastic object;
an execution module configured to drive the mesh model of the elastic object to perform elastic motion based on the motion trajectories of the grid points in the mesh model of the elastic object.
A third aspect of the embodiments of the present disclosure provides a terminal device, the terminal device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of the above first aspect can be implemented.
A fourth aspect of the embodiments of the present disclosure provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method of the above first aspect can be implemented.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages:
In the technical solutions provided by the embodiments of the present disclosure, a mesh model of an elastic object is constructed; in response to a deformation triggering operation on the elastic object, the deformation position and velocity of the acting point in the mesh model under the deformation triggering operation are first determined; then, based on the deformation position and velocity of the acting point and the elastic constraints of the grid points in the mesh model, the motion trajectory of each grid point in the mesh model is determined; finally, based on the motion trajectories of the grid points, the mesh model of the elastic object is driven to perform elastic motion. It can be seen that the solutions provided by the embodiments of the present disclosure can simulate and dynamically display the entire process of the elastic object deforming elastically under force, providing a realistic elastic effect, improving interactivity with the user, and improving the user experience.
In addition, the technical solutions provided by the embodiments of the present disclosure project only some representative points among the points constituting the counterpart (such as points on the surface of the counterpart, or points depicting the outline of the counterpart) into the virtual world as grid points on the mesh model of the elastic object, which can greatly reduce the number of grid points in the mesh model, reduce the complexity of subsequent calculations, and reduce the dependence of the calculations on hardware performance; this is especially suitable for miniaturized devices with low computing power such as mobile phones and smart watches.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
In order to describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of a rendering method for elastic objects provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the shape of an elastic object provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram after grid points are marked in FIG. 2;
FIG. 4 is a schematic diagram of a user performing a touch sliding operation provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another user performing a touch sliding operation provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of yet another user performing a touch sliding operation provided by an embodiment of the present disclosure;
FIG. 7 is a flowchart of another rendering method for elastic objects provided by an embodiment of the present disclosure;
FIG. 8 is a structural block diagram of a rendering apparatus provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a terminal device in an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features, and advantages of the present disclosure can be understood more clearly, the solutions of the present disclosure are further described below. It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure may also be implemented in ways other than those described herein; obviously, the embodiments in the specification are only a part, not all, of the embodiments of the present disclosure.
FIG. 1 is a flowchart of a rendering method for elastic objects provided by an embodiment of the present disclosure; the method may be executed by a terminal device. The terminal device may be exemplarily understood as a device with video or animation playback capability, such as a mobile phone, tablet computer, notebook computer, desktop computer, or smart TV. In some embodiments, the terminal device referred to in this embodiment may be equipped with a display device and a human-computer interaction device; the terminal device may receive user instructions through the human-computer interaction device and, based on the user instructions, present an appropriate response result by means of the display device, for example displaying a specific dynamic 3D elastic effect through the display device. The technical solution provided by the present disclosure can be applied to recording and editing video, and can also be applied to multiple people interacting through terminals (e.g., making a video call). Of course, the technical solution provided by the present disclosure can also be applied to other situations, which is not limited by the present disclosure.
As shown in FIG. 1, the method provided by this embodiment includes the following steps:
S110: Construct a mesh model of an elastic object.
In this embodiment, the mesh model of the elastic object may be constructed based on 3D model data of the elastic object, where the 3D model data of the elastic object may be pre-configured.
In this step, the elastic object is rendered on the interface of the terminal device, where the interface refers to the display interface of the terminal. Specifically, the interface may be the usage interface of an application program installed on the terminal, such as a video call interface, a video recording interface, a video editing interface, or a video playback interface.
An elastic object refers to an object displayed in the interface that can respond to user operations and exhibit an elastic dynamic effect. Exemplarily, if the interface is a video editing interface, video call interface, or video playback interface, the elastic object may be a prop added as a special effect. In some embodiments, the elastic object can be set to correspond to a specific object in the real world (hereinafter referred to as the counterpart). Exemplarily, the elastic object may be a hat-shaped prop, a scarf-shaped prop, a headgear-shaped prop, a ball-shaped prop, or the like.
From a microscopic point of view, specific objects in the real world are composed of a large number of microscopic particles such as particles, atoms, or molecules. Because of the interaction forces between these microscopic particles, an object can recover its original size and shape after deformation, that is, it has elasticity. Setting the elastic object to correspond to a specific object in the real world essentially projects the specific real-world object into the virtual world, or, in other words, uses the elastic object to simulate the specific real-world object, which makes the experience feel more real to the user and resonates with the user.
A person skilled in the art will understand that, in the real world, the counterpart can be regarded as a set of points. When the counterpart is pulled, it deforms; the essence of deformation is a change in the relative positions of the multiple points constituting the counterpart. On this basis, the elastic object can be set to include multiple grid points, which are used to describe the topography of the elastic object.
In addition, in the real world the counterpart is three-dimensional, so the points constituting it include points depicting its surface and points depicting its interior. Only the combined action of the surface points and the interior points constitutes the counterpart.
In the technical solution of the present disclosure, however, since in actual use the elastic object mainly serves to decorate the image, its internal structure usually needs no attention. For example, to put a hat on a character image in the interface, it is only necessary to superimpose the picture of the hat on the picture of the character; there is no need to fully display the interior of the hat. Therefore, in one possible implementation, only representative points may be selected as grid points. That is, when determining the grid points, only some representative points among the points constituting the counterpart are projected into the virtual world as the grid points of the elastic object. Here, a "representative point" may be a point describing the surface of the specific counterpart, or a point that can depict the outline of the counterpart.
Since the subsequent calculations mainly take the grid points as their objects, calculating the velocity and position of each grid point and finally obtaining the entire dynamic effect of the elastic object, projecting only some representative points among the points constituting the counterpart into the virtual world as the grid points of the elastic object can greatly reduce the number of grid points, reduce the complexity of subsequent calculations, and reduce the dependence of the calculations on hardware performance, which is especially suitable for miniaturized devices with low computing power such as mobile phones and smart watches.
In addition, in the real world, mutual forces exist between the microscopic particles constituting the counterpart. To simulate the interaction forces between microscopic particles in the real-world counterpart, elastic constraints are set between different grid points. The elastic constraints may specifically be expressed using constraint rules and/or limiting conditions, which may be determined according to the results of a mechanical analysis of the elastic object's real-world counterpart.
There are many specific methods for setting the grid points, which this application does not limit. Exemplarily, when the hat-shaped prop shown in FIG. 2 is used as the elastic object, the grid point setting method shown in FIG. 3 may be adopted, and the method may include: drawing a grid on the surface of the elastic object and taking the intersections of the edges constituting the grid as grid points. The lines used to draw the grid may be straight or curved, and the shapes of different grid cells may be the same or different. FIG. 2 is a schematic diagram of the shape of an elastic object provided by an embodiment of the present disclosure; FIG. 3 is a schematic diagram after grid points are marked in FIG. 2.
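As a rough illustration of how such a mesh model might be held in code, the following Python sketch (all names here are hypothetical, not from the patent) stores the grid points sampled from the prop together with the rest lengths of the edges connecting neighboring grid points; those rest lengths later serve as the elastic constraints between grid points:

```python
import math

class MeshModel:
    """A minimal mesh model: grid points plus the edges that connect them."""

    def __init__(self, points, edges):
        # points: (x, y, z) rest positions of the grid points
        # edges: (i, j) index pairs of neighboring grid points
        self.rest_positions = [tuple(p) for p in points]
        self.positions = [list(p) for p in points]           # current positions
        self.velocities = [[0.0, 0.0, 0.0] for _ in points]
        # Each edge keeps its rest length; holding grid points near this
        # length is the elastic constraint between them.
        self.edges = [(i, j, self._dist(points[i], points[j]))
                      for i, j in edges]

    @staticmethod
    def _dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

# A tiny example: four grid points on the brim of a hat-shaped prop.
model = MeshModel(
    points=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
)
```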
In addition, this embodiment of the application does not limit the implementation of S110. For example, in one possible implementation, it may specifically include: acquiring an elastic-object adding instruction input by the user; and, in response to the instruction, constructing the mesh model of the elastic object and displaying the added elastic object on the interface to be edited.
S120: In response to a deformation triggering operation on the elastic object, determine the deformation position and velocity of the acting point in the mesh model of the elastic object under the deformation triggering operation.
The deformation triggering operation in this step acts on the elastic object so that the elastic object deforms due to being acted on, and the acting point is a grid point in the elastic object that leaves its original position due to being acted on.
The motion of the acting point can be regarded as motion produced directly by the user's deformation triggering operation (for example, a pulling operation). Among the grid points in the mesh model of the elastic object, the motion of the grid points other than the acting point is motion produced by the motion of the acting point and the restriction of the preset elastic constraints.
The user's deformation triggering operation on the elastic object can be obtained through a device such as a touch screen. In one possible implementation, if the display device in the terminal doubles as a touch screen, the user can perform the deformation triggering operation directly on the display device.
In addition, this embodiment of the application does not limit the above "user's deformation triggering operation on the elastic object". For example, in one possible implementation, it may include a touch sliding operation, where the touch point corresponding to the touch sliding operation is used to determine the acting point, and the sliding trajectory corresponding to the touch sliding operation is used to determine the deformation position and velocity of the acting point. Exemplarily, the method by which the user performs the deformation triggering operation on the elastic object may specifically include: the user touches a certain area on the display device with a finger (hereinafter referred to as the touch area) and moves the finger in a certain direction. It should be noted that, in practice, the user may or may not apply pressure to the display device while touching and sliding. If the user applies pressure to the display device, a pressure sensor can be invoked to recognize the user's touch operation; if not, a light sensor can be invoked to recognize it.
In practice, using touch sliding to trigger the acting point to move away from its original position mainly involves two cases:
Case 1: the user's touch area does not overlap the area covered by the elastic object. That is, the touch point corresponding to the touch sliding operation is not on the elastic object, but on an area of the touch interface other than the area covered by the elastic object. In this case, the acting point can be set to a preset grid point on the elastic object, and the touch sliding operation that the user performs on the interface is mapped onto the preset grid point. For ease of understanding, this is described below with reference to FIG. 4, which is a schematic diagram of a user performing a touch sliding operation provided by an embodiment of the present disclosure.
As shown in FIG. 4, when grid point b of the elastic object is specified in advance as the preset grid point, if the user's actual operation is to touch point a first and move in the X direction, then after mapping, the actual operation can be regarded as the user touching point b and moving in the X direction.
情况二,用户触摸区域与弹性对象覆盖区域至少部分重合。即触摸滑动操作对应的触摸点在弹性对象上。此种情况下,可以设置受作用点为弹性对象的网格模型中与触摸点对应的网格点;用户作用在界面上的触摸滑动操作被映射到该网格点上。为了便于理解,下面结合图5进行说明。其中,图5为本公开实施例提供的另一种用户执行触摸滑动操作的示意图。
如图5所示,当用户输入对弹性对象的拉扯指令的实际操作为触摸点d,并沿X方向移动时,由于点d本身为弹性对象的一个网格点,因此可以将点d作为受作用点。
需要说明的是,针对情况二,由于在实际中,触摸时,用户手指与显示设备的接触面较大,可能存在用户触摸区域内包括多个网格点。此种情况下,可以选择该区域内的全部或部分网格点作为受作用点。本公开对此不作限制。进一步地,如果选择该区域内的部分网格点作为受作用点,选择的方法可以为,根据各网格点在屏幕中的物理位置处所检测到的压力值,确定受作用点。如将压力值最大的物理位置处的网格点作为受作用点。或者,还可以先确定用户触摸区域的几何中心,根据各网格点距几何中心的距离确定受作用点。如将距几何中心的距离最近的网格点作为受作用点。
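A minimal sketch of the two selection strategies just described, assuming per-point pressure readings are available when a pressure sensor is used (all function and parameter names are hypothetical):

```python
import numpy as np

def pick_affected_point(grid_pos, touched_ids, pressures=None):
    """grid_pos: (N, 2) screen positions of all grid points; touched_ids:
    indices of the grid points inside the touch area. With pressure readings,
    take the point under maximum pressure; otherwise take the point closest
    to the geometric center of the touch area."""
    if pressures is not None:
        return touched_ids[int(np.argmax(pressures))]
    pts = grid_pos[np.asarray(touched_ids)]
    center = pts.mean(axis=0)
    return touched_ids[int(np.argmin(np.linalg.norm(pts - center, axis=1)))]
```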
Since this application dynamically renders an elastic object, and elasticity is by definition the ability to recover the original size and shape after deformation, the entire deformation process of the elastic object is divided into two stages. In the first stage, the user touches the display device and causes the elastic object to deform through the touch-slide operation. In the second stage, the user's finger leaves the display device and no longer acts on the elastic object, and the elastic object recovers its initial shape. For ease of understanding, this is explained below with reference to FIG. 6, which is yet another schematic diagram of a user performing a touch-slide operation provided by an embodiment of the present disclosure.
As shown in FIG. 6, suppose the finger's touch area covers grid point e of the hat-shaped elastic object, and grid point e is taken as the affected point. In the first stage A, the user touches the screen, moves the finger in the X direction, stops sliding upon reaching point n, and lifts the finger from the display device. In this stage, the display effect is that grid point e of the hat-shaped elastic object moves with the finger to point n, and the overall shape of the hat-shaped elastic object changes. In the second stage B, the user's finger leaves the display device, and the hat-shaped elastic object recovers its original shape under the "elastic" action.
For the first stage, in a possible implementation, when the user performs the touch-slide operation, the user's slide velocity and slide distance may be mapped onto the affected point to obtain the velocity and deformation position of the affected point.
Microscopically, the slide trajectory is the set of points at which the finger is located at different motion moments. Therefore, connecting the finger's positions in chronological order yields the slide trajectory. Still referring to FIG. 6, suppose the finger slides in the X direction; points e, f, g, ... and n are all points on the slide trajectory, and point e is the finger's position at the initial moment of the slide. The moment at which the finger reaches point e is T1, the moment at which it reaches point f is T2, the moment at which it reaches point g is T3, ..., and the moment at which it reaches point n is Tm. Knowing the time at which the finger reaches each point (i.e., the specific values of T1, T2, T3, ... Tm) and the position coordinates of each point, the user's slide velocity at each moment and the slide distance up to each moment can be obtained. Exemplarily, the pressure sensors or optical sensors in the display device may be used to detect the time at which the finger reaches each point and the position coordinates of each point. This is because a display device with a touch or light-sensing function usually has a large number of pressure sensors or optical sensors distributed inside it, and different sensors have different position coordinates; when the user touches the display device, only the sensors in the area covered by the finger respond and output electrical signals.
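For instance, deriving the per-moment slide speed and the cumulative slide distance from the timestamped sensor samples (T1, e), (T2, f), ..., (Tm, n) might look as follows (a sketch with hypothetical names; 2D screen coordinates assumed):

```python
def slide_kinematics(samples):
    """samples: chronologically ordered (t, x, y) tuples reported by the
    pressure or optical sensors. Returns the speed over each interval and
    the cumulative slide distance up to each moment."""
    speeds, distances, total = [], [], 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        total += step
        speeds.append(step / (t1 - t0))
        distances.append(total)
    return speeds, distances
```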
It should also be noted that, in practice, since the user's slide velocity and slide distance need to be mapped onto the affected point, a scale parameter may be set during the mapping, and the slide velocity and slide distance of the affected point are calculated based on the scale parameter to complete the mapping. Exemplarily, still referring to FIG. 6, suppose the distance from point e to point n is 5 cm and the scale parameter is 1; when the user's finger slides from point e to point n, the movement distance of the grid point determined from point e is 1×5 cm = 5 cm. In practice the scale parameter may take any value, for example a number greater than 1, a positive number less than 1, or even a negative number. In a possible implementation, the scale parameter may also be set to a random number; for example, for the same elastic object, the scale parameter takes different values in two adjacent uses of the elastic object rendering method provided by the present disclosure. Setting the scale parameter can make the special effect more playful. Similarly, the maximum and minimum distances the affected point can move before and after pulling may also be set.
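The scale-parameter mapping described above, including the optional random ratio and the movement clamping, could be sketched as follows (assumed names and an assumed random range; not the disclosed implementation):

```python
import random

def map_slide_to_point(slide_distance, slide_speed, scale=None,
                       d_min=0.0, d_max=float("inf")):
    """Map the finger's slide onto the affected grid point. With scale=1, a
    5 cm slide moves the point 1 x 5 cm = 5 cm; scale=None draws a random
    ratio per interaction, and [d_min, d_max] clamps the movement."""
    if scale is None:
        scale = random.uniform(0.5, 1.5)  # assumed range for the random ratio
    moved = min(max(scale * slide_distance, d_min), d_max)
    return moved, scale * slide_speed
```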
For the second stage, there are many ways to determine the velocity and deformation position of the affected point at each motion moment after the slide stops, for example presetting the velocity and/or deformation position for each motion moment after the slide stops.
In a possible implementation, when the user lifts the finger and stops the touch-slide operation, the velocity and deformation position of the affected point at each motion moment after the touch-slide stops are determined from the velocity and deformation position of the affected point at the moment the touch operation stops. The essence of this setting is to infer the subsequent velocity and deformation position from the velocity and deformation position of the affected point at the moment the touch-slide stops, which makes the presented elastic effect more natural and realistic.
There are many methods for inferring the subsequent velocity and deformation position from the velocity and deformation position of the affected point at the moment the touch-slide stops. Exemplarily, two methods are given below.
Method 1: determine the velocity and position of the affected point after the touch-slide stops from the deformation position and velocity of the affected point at the moment the touch-slide stops, together with the preset gravity of the affected point.
In the real world, a specific object has mass and is subject to gravity in space. Since the present disclosure uses grid points to depict the shape of the elastic object, each grid point may be assigned a preset gravity value to imitate the real-world counterpart. In actual settings, the preset gravity values of different grid points may be the same or different; this application does not limit this. Moreover, the preset gravity value of the same grid point may be the same or different in different situations (for example, before and after adjusting the terminal's orientation). Exemplarily, if the terminal is placed horizontally (i.e., the terminal's display surface is parallel to the ground), all grid points have the same preset gravity value; if the terminal is placed vertically (i.e., the display surface is perpendicular to the ground), grid points closer to the ground have larger preset gravity values.
Method 1 fully considers the influence of gravity on the process of the elastic object recovering its initial shape, which makes the elastic effect presented in the second stage more natural and realistic.
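Method 1 amounts to extrapolating from the release state under gravity; a minimal sketch, assuming a simple explicit update (names hypothetical):

```python
import numpy as np

def step_under_gravity(pos, vel, gravity, dt):
    """Advance the affected point one time step after the finger lifts, using
    only its release-time deformation position/velocity and preset gravity."""
    vel = vel + gravity * dt
    pos = pos + vel * dt
    return pos, vel

# Example: release state at T_m, gravity pulling along -y in the screen plane.
pos, vel = np.array([0.3, 0.8]), np.array([0.0, 0.2])
pos, vel = step_under_gravity(pos, vel, gravity=np.array([0.0, -9.8]), dt=1 / 60)
```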
Method 2: determine the force information of the affected point at its deformation position from the deformation position of the affected point at the moment the touch-slide stops and the elastic constraints between the affected point and other grid points; then determine the velocity and deformation position of the affected point after the touch-slide stops from the deformation position, velocity and force information of the affected point at the moment the touch-slide stops.
As noted above, in the real world the interaction forces between the microscopic particles constituting an object enable the object to recover its original size and shape after deformation, i.e., to be elastic. The essence of Method 2 is to fully consider the influence of the elastic constraints on the process of the elastic object recovering its initial shape, which makes the elastic effect presented in the second stage more natural and realistic.
In a possible implementation, Method 1 and Method 2 may also be combined in practice to determine the velocity and position of the affected point at each motion moment after the touch-slide stops, so as to further make the elastic effect presented in the second stage more natural and realistic.
In a possible implementation, on the basis of Method 2 above, when performing "determining the velocity and position of the affected point after the touch-slide stops from the deformation position, velocity and force information of the affected point at the moment the touch-slide stops", specifically, after the touch-slide stops, the deformation position, velocity and force information of the affected point at the next moment may be determined based on its deformation position, velocity and force information at the previous moment. Exemplarily, still referring to FIG. 6, the deformation position, velocity and force information at moment T(m+p+1) is determined from those at moment T(m+p); the deformation position, velocity and force information at moment T(m+p+2) is determined from those at moment T(m+p+1); and so on, finally calculating the velocity and deformation position of the affected point at each motion moment of the second stage. The essence of this setting is to use iteration and looping to calculate the velocity and deformation position of the affected point at each motion moment.
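This previous-moment-to-next-moment iteration can be sketched as a semi-implicit Euler loop in which the force information comes from a linear spring pulling the point back toward its original position (a sketch under these assumptions, not the patented solver):

```python
import numpy as np

def relax_after_release(pos, vel, rest_pos, stiffness, mass, dt, steps):
    """From moment T(m+p) onward, derive each next moment's deformation
    position, velocity and force from the previous moment's values."""
    trajectory = [pos.copy()]
    for _ in range(steps):
        force = -stiffness * (pos - rest_pos)   # elastic-constraint force
        vel = vel + (force / mass) * dt
        pos = pos + vel * dt
        trajectory.append(pos.copy())
    return trajectory

# Two seconds of second-stage motion at 60 steps per second:
traj = relax_after_release(np.array([1.0, 0.0]), np.zeros(2),
                           rest_pos=np.zeros(2), stiffness=40.0,
                           mass=1.0, dt=1 / 60, steps=120)
```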
In addition, in practice, directly simulating with a triangle mesh would make the model too soft and insufficiently elastic. The above method of determining the affected point's deformation position, velocity and force information at the next moment from those at the previous moment after the touch-slide stops makes it possible to split each solve into several simulation substeps and to add rest pose attachment constraints to the triangle mesh (such as elastic constraints with other grid points), which can significantly enhance the elastic effect.
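The substep idea can be illustrated as follows: the same frame interval is integrated in several smaller steps, which keeps stiff constraint forces stable and makes the mesh feel less soft. This is a hypothetical sketch; `force_fn` stands for whatever constraint forces are in use, including the rest pose attachments:

```python
def step_with_substeps(pos, vel, force_fn, dt, substeps=4):
    """Split one frame's solve into `substeps` simulation substeps."""
    h = dt / substeps
    for _ in range(substeps):
        vel = vel + force_fn(pos) * h
        pos = pos + vel * h
    return pos, vel

# e.g. a rest pose attachment pulling the point back to its original position:
pos, vel = step_with_substeps(1.0, 0.0, lambda p: -50.0 * p, dt=1 / 60)
```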
S130: Based on the deformation position and velocity of the affected point and the elastic constraints of the grid points in the mesh model of the elastic object, determine the motion trajectory of each grid point in the mesh model of the elastic object.
Since the essence of the elastic constraints is a simulation of the forces between microscopic particles in a specific real-world object, this step may be implemented as follows:
For the first stage, i.e., when the user performs the deformation trigger operation so that the affected point on the mesh model of the elastic object departs from its original position, the deformation positions and velocities of the grid points other than the affected point (hereinafter referred to as the other grid points) are determined from the deformation position and velocity of the affected point at each motion moment, together with the preset elastic constraints between the grid points of the elastic object.
For the second stage, i.e., when the user lifts the finger and stops the deformation trigger operation, the deformation positions and velocities of the other grid points at each motion moment after the deformation trigger operation stops are determined from the deformation position and velocity of the affected point at each motion moment after the operation stops, together with the preset elastic constraints between the grid points of the elastic object. Alternatively, the deformation positions and velocities of the other grid points are determined from their velocities and deformation positions at the moment the user stops the deformation trigger operation, together with the preset elastic constraints between the grid points of the elastic object.
From the above content of S130, the motion trajectory of each grid point can be obtained based on the deformation positions and velocities of the grid points on the mesh model at the different motion moments.
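One way to propagate the affected point's motion to the other grid points (a sketch only; the disclosure does not prescribe a particular solver) is to pin the affected point at each moment of its trajectory and iteratively project the edge rest lengths, in the spirit of position-based dynamics:

```python
import numpy as np

def propagate_constraints(pos, rest_lengths, affected, target, iterations=10):
    """Pin the affected grid point to its mapped target, then let the elastic
    constraints drag the other grid points along (Gauss-Seidel projection).
    pos: (N, D) grid-point positions; rest_lengths: {(i, j): rest length}."""
    pos = pos.copy()
    pos[affected] = target
    for _ in range(iterations):
        for (i, j), l0 in rest_lengths.items():
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - l0) * d / dist  # split the violation evenly
            pos[i] += corr
            pos[j] -= corr
        pos[affected] = target  # re-pin the touched point to the finger
    return pos
```

Running this once per motion moment of the affected point yields a per-moment position for every grid point, i.e., the motion trajectories required by S130.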
S140: Based on the motion trajectory of each grid point in the mesh model of the elastic object, drive the mesh model of the elastic object to perform elastic motion.
The essence of this step is to adjust the deformation position and velocity of each grid point in chronological order of the motion moments, according to the deformation positions and velocities of the grid points on the mesh model at the different motion moments, and to present the effect on the display device.
There are many ways to implement this step, which are not limited by the present disclosure. Exemplarily: first, obtain the current display refresh rate of the terminal, where the refresh rate refers to the number of images the display device can display per second; second, based on the current refresh rate, determine the deformation position and velocity of each grid point in the mesh model of the elastic object for each displayed image; third, determine the pixel units corresponding to the grid points of the mesh model for each displayed image; finally, adjust the pixel voltages of the pixel units, so that the deformation process of the elastic object is displayed through the display device.
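As an illustration of pacing the playback by the refresh rate (hypothetical names; a real renderer would synchronize to vsync rather than sleep):

```python
import time

def play_deformation(trajectory, refresh_hz, draw_frame):
    """Show one trajectory sample per display refresh; draw_frame is assumed
    to map grid points to pixel units and update their pixel voltages."""
    frame_time = 1.0 / refresh_hz
    for grid_positions in trajectory:
        draw_frame(grid_positions)
        time.sleep(frame_time)  # stand-in for waiting on the next refresh
```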
In the above technical solution, by constructing a mesh model of the elastic object; first determining, in response to a deformation trigger operation on the elastic object, the deformation position and velocity of the affected point in the mesh model under the deformation trigger operation; then determining the motion trajectory of each grid point in the mesh model based on the deformation position and velocity of the affected point and the elastic constraints of the grid points; and finally driving the mesh model to perform elastic motion based on the motion trajectories of the grid points, the entire process in which the elastic object elastically deforms under an applied force can be simulated and dynamically displayed, providing a realistic elastic effect, improving interactivity with the user, and improving user experience.
In addition, the above technical solution projects only some representative points among the points constituting the counterpart (such as points describing the counterpart's surface, or points depicting the counterpart's contour lines) into the virtual world as the grid points of the elastic object, which can greatly reduce the number of grid points, reduce the complexity of subsequent calculations, and reduce the dependence of the calculations on hardware performance, making it especially suitable for miniaturized devices with weak computing power, such as mobile phones and smart watches.
For a better understanding of the technical solutions provided by this application, the elastic object rendering method shown in FIG. 7 is described below as an example. FIG. 7 is a flowchart of another elastic object rendering method provided by an embodiment of the present disclosure.
As an example, FIG. 7 is a specific example of applying the elastic object rendering method of FIG. 1 to a video communication interface; the elastic object rendering method shown in FIG. 7 includes:
S201: Start the video communication application in the terminal, enter the video communication interface, and select a contact to establish an end-to-end network communication connection.
S202: Select a 3D head-cover sticker, so as to decorate the person in the interface with the 3D head-cover sticker.
Here, the 3D head-cover sticker is the elastic object.
In a possible implementation, the 3D head-cover sticker may decorate the person at the local end or the peer end in the interface.
In a possible implementation, after the user selects a 3D head-cover sticker, the face tracking function is started and the 3D head cover is displayed on the face.
S203: Set the physical simulation parameters of the elastic object and start the elastic body simulation solver.
Here, the physical simulation parameters of the elastic object may specifically be the aforementioned scale parameter used to complete the mapping, the preset grid point used as the affected point, the preset gravity values of the grid points of the elastic object, the elastic constraints between the grid points, and so on.
The elastic body simulation solver is used to perform S120-S130 of FIG. 1 of the present disclosure in response to the user performing a touch-slide operation on the 3D head-cover sticker.
S204: In response to the user's touch-slide operation on the head cover, the elastic body simulation solver obtains the deformation position and velocity of each grid point in the mesh model of the head cover at each motion moment, and sends them to the peer user.
Exemplarily, the user's slide information on the screen is recognized; based on the slide information, the elastic body simulation solver determines the affected point in the mesh model of the head cover, and then obtains the deformation position and velocity of each grid point in the mesh model at each motion moment. The deformation positions and velocities of the grid points in the mesh model at each motion moment are sent to the peer user as special-effect data.
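The per-moment grid-point states sent to the peer might, for example, be serialized as a JSON payload (an assumed wire format chosen for illustration; the disclosure does not specify one):

```python
import json
import numpy as np

def pack_effect_frame(t, positions, velocities):
    """Serialize one moment's grid-point deformation states as effect data."""
    return json.dumps({"t": t,
                       "pos": np.asarray(positions).tolist(),
                       "vel": np.asarray(velocities).tolist()})

payload = pack_effect_frame(0.016, [[0.1, 0.2]], [[0.0, -0.3]])
```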
S205: The local terminal and/or the peer terminal re-locates the face position and renders the deformation process of the head cover on the screen based on the received special-effect data, so as to present a realistic dynamic elastic effect to the user.
The above technical solution can make the interaction between the two parties of the call more interesting and improve user experience.
In a possible implementation, S204-S205 may be repeated multiple times during the entire video call.
An embodiment of the present disclosure further provides a rendering apparatus. For ease of understanding, it is described below with reference to the rendering apparatus shown in FIG. 8, which is a structural block diagram of a rendering apparatus provided by an embodiment of the present disclosure.
Referring to FIG. 8, the rendering apparatus includes:
a model construction module 310, configured to construct a mesh model of an elastic object;
a first determination module 320, configured to determine, in response to a deformation trigger operation on the elastic object, the deformation position and velocity of the affected point in the mesh model of the elastic object under the deformation trigger operation, wherein the deformation trigger operation is used to trigger the elastic object to deform under an action, and the affected point is a grid point in the elastic object that leaves its original position because of the action;
a second determination module 330, configured to determine the motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the affected point and the elastic constraints of the grid points in the mesh model of the elastic object;
an execution module 340, configured to drive the mesh model of the elastic object to perform elastic motion based on the motion trajectory of each grid point in the mesh model of the elastic object.
In a possible implementation, the deformation trigger operation is specifically a touch-slide operation, the touch point corresponding to the touch-slide operation is used to determine the affected point, and the slide trajectory corresponding to the touch-slide operation is used to determine the deformation position and velocity of the affected point.
In a possible implementation, when the touch point corresponding to the touch-slide operation is not on the elastic object, the affected point is a preset grid point on the mesh model of the elastic object.
In a possible implementation, when the touch point corresponding to the touch-slide operation is on the elastic object, the affected point is the grid point on the mesh model corresponding to the touch point.
In a possible implementation, the first determination module 320 includes:
a first determination submodule, configured to map, when the user performs the touch-slide operation, the user's slide velocity and slide distance onto the affected point to obtain the deformation position and velocity of the affected point.
In a possible implementation, the first determination module 320 further includes:
a second determination submodule, configured to determine, from the velocity and deformation position of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
In a possible implementation, the second determination submodule includes:
a first determination subunit, configured to determine, from the deformation position and velocity of the affected point at the moment the touch operation stops and the preset gravity of the affected point, the velocity and position of the affected point after the touch operation stops.
In a possible implementation, the second determination submodule includes:
a second determination subunit, configured to determine, from the deformation position of the affected point at the moment the touch operation stops and the elastic constraints between the affected point and other grid points, the force information of the affected point at the deformation position;
a third determination subunit, configured to determine, from the deformation position, velocity and force information of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
In a possible implementation, the third determination subunit is configured to:
after the touch operation stops, determine the deformation position, velocity and force information of the affected point at the next moment based on its deformation position, velocity and force information at the previous moment.
Since the rendering apparatus provided by the embodiments of the present disclosure can be used to execute any of the elastic object rendering methods provided by the embodiments of the present disclosure, it has the same or corresponding beneficial effects as the rendering methods it can execute, which will not be repeated here.
An embodiment of the present disclosure further provides a terminal device, which includes a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of any of the embodiments of FIG. 1 to FIG. 7 can be implemented.
As an example, FIG. 9 is a schematic structural diagram of a terminal device in an embodiment of the present disclosure. Referring specifically to FIG. 9, it shows a schematic structural diagram of a terminal device 1000 suitable for implementing the embodiments of the present disclosure. The terminal device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptops, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital TVs and desktop computers. The terminal device shown in FIG. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 9, the terminal device 1000 may include a processing apparatus (such as a central processing unit or a graphics processor) 1001, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage apparatus 1008 into a random access memory (RAM) 1003. Various programs and data required for the operation of the terminal device 1000 are also stored in the RAM 1003. The processing apparatus 1001, the ROM 1002 and the RAM 1003 are connected to each other through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
Generally, the following apparatuses may be connected to the I/O interface 1005: input apparatuses 1006 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output apparatuses 1007 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage apparatuses 1008 including, for example, a magnetic tape and a hard disk; and a communication apparatus 1009. The communication apparatus 1009 may allow the terminal device 1000 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 9 shows a terminal device 1000 with various apparatuses, it should be understood that it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 1009, or installed from the storage apparatus 1008, or installed from the ROM 1002. When the computer program is executed by the processing apparatus 1001, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above terminal device, or may exist alone without being assembled into the terminal device.
The above computer-readable medium carries one or more programs which, when executed by the terminal device, cause the terminal device to: construct a mesh model of an elastic object; in response to a deformation trigger operation on the elastic object, determine the deformation position and velocity of the affected point in the mesh model of the elastic object under the deformation trigger operation, wherein the deformation trigger operation is used to trigger the elastic object to deform under an action, and the affected point is a grid point in the elastic object that leaves its original position because of the action; based on the deformation position and velocity of the affected point and the elastic constraints of the grid points in the mesh model of the elastic object, determine the motion trajectory of each grid point in the mesh model of the elastic object; based on the motion trajectory of each grid point in the mesh model of the elastic object, drive the mesh model of the elastic object to perform elastic motion.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented in software or hardware, where the name of a unit does not in some cases constitute a limitation on the unit itself.
The functions described above herein may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
An embodiment of the present disclosure further provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the method of any of the embodiments of FIG. 1 to FIG. 7 can be implemented, with similar execution manners and beneficial effects, which will not be repeated here.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes the element.
The above are only specific implementations of the present disclosure, enabling a person skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be apparent to a person skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments described herein, but shall conform to the widest scope consistent with the principles and novel features disclosed herein.

Claims (21)

  1. An elastic object rendering method, comprising:
    constructing a mesh model of an elastic object;
    in response to a deformation trigger operation on the elastic object, determining a deformation position and velocity of an affected point in the mesh model of the elastic object under the deformation trigger operation, wherein the deformation trigger operation is used to trigger the elastic object to deform under an action, and the affected point is a grid point in the elastic object that leaves its original position because of the action;
    based on the deformation position and velocity of the affected point and elastic constraints of grid points in the mesh model of the elastic object, determining a motion trajectory of each grid point in the mesh model of the elastic object;
    based on the motion trajectory of each grid point in the mesh model of the elastic object, driving the mesh model of the elastic object to perform elastic motion.
  2. The method according to claim 1, wherein the deformation trigger operation is specifically a touch-slide operation, a touch point corresponding to the touch-slide operation is used to determine the affected point, and a slide trajectory corresponding to the touch-slide operation is used to determine the deformation position and velocity of the affected point.
  3. The method according to claim 2, wherein, when the touch point corresponding to the touch-slide operation is not on the elastic object, the affected point is a preset grid point on the mesh model of the elastic object.
  4. The method according to claim 2, wherein, when the touch point corresponding to the touch-slide operation is on the elastic object, the affected point is a grid point on the mesh model corresponding to the touch point.
  5. The method according to claim 2, wherein the determining the deformation position and velocity of the affected point in the mesh model of the elastic object under the deformation trigger operation comprises:
    when the user performs the touch-slide operation, mapping the user's slide velocity and slide distance onto the affected point to obtain the deformation position and velocity of the affected point.
  6. The method according to claim 5, wherein the determining the deformation position and velocity of the affected point in the mesh model of the elastic object under the deformation trigger operation comprises:
    determining, from the velocity and deformation position of the affected point at a moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
  7. The method according to claim 6, wherein the determining, from the velocity and deformation position of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops comprises:
    determining, from the deformation position and velocity of the affected point at the moment the touch operation stops and a preset gravity of the affected point, the velocity and position of the affected point after the touch operation stops.
  8. The method according to claim 6, wherein the determining, from the velocity and deformation position of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops comprises:
    determining, from the deformation position of the affected point at the moment the touch operation stops and the elastic constraints between the affected point and other grid points, force information of the affected point at the deformation position;
    determining, from the deformation position, velocity and force information of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
  9. The method according to claim 8, wherein the determining, from the deformation position, velocity and force information of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops comprises:
    after the touch operation stops, determining, based on the deformation position, velocity and force information of the affected point at a previous moment, the deformation position, velocity and force information of the affected point at a next moment.
  10. A rendering apparatus, comprising:
    a model construction module, configured to construct a mesh model of an elastic object;
    a first determination module, configured to determine, in response to a deformation trigger operation on the elastic object, a deformation position and velocity of an affected point in the mesh model of the elastic object under the deformation trigger operation, wherein the deformation trigger operation is used to trigger the elastic object to deform under an action, and the affected point is a grid point in the elastic object that leaves its original position because of the action;
    a second determination module, configured to determine a motion trajectory of each grid point in the mesh model of the elastic object based on the deformation position and velocity of the affected point and elastic constraints of grid points in the mesh model of the elastic object;
    an execution module, configured to drive the mesh model of the elastic object to perform elastic motion based on the motion trajectory of each grid point in the mesh model of the elastic object.
  11. The apparatus according to claim 10, wherein the deformation trigger operation is specifically a touch-slide operation, a touch point corresponding to the touch-slide operation is used to determine the affected point, and a slide trajectory corresponding to the touch-slide operation is used to determine the deformation position and velocity of the affected point.
  12. The apparatus according to claim 11, wherein, when the touch point corresponding to the touch-slide operation is not on the elastic object, the affected point is a preset grid point on the mesh model of the elastic object.
  13. The apparatus according to claim 11, wherein, when the touch point corresponding to the touch-slide operation is on the elastic object, the affected point is a grid point on the mesh model corresponding to the touch point.
  14. The apparatus according to claim 11, wherein the first determination module comprises:
    a first determination submodule, configured to map, when the user performs the touch-slide operation, the user's slide velocity and slide distance onto the affected point to obtain the deformation position and velocity of the affected point.
  15. The apparatus according to claim 14, wherein the first determination module further comprises:
    a second determination submodule, configured to determine, from the velocity and deformation position of the affected point at a moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
  16. The apparatus according to claim 15, wherein the second determination submodule comprises:
    a first determination subunit, configured to determine, from the deformation position and velocity of the affected point at the moment the touch operation stops and a preset gravity of the affected point, the velocity and position of the affected point after the touch operation stops.
  17. The apparatus according to claim 15, wherein the second determination submodule comprises:
    a second determination subunit, configured to determine, from the deformation position of the affected point at the moment the touch operation stops and the elastic constraints between the affected point and other grid points, force information of the affected point at the deformation position;
    a third determination subunit, configured to determine, from the deformation position, velocity and force information of the affected point at the moment the touch operation stops, the velocity and deformation position of the affected point after the touch operation stops.
  18. The apparatus according to claim 17, wherein the third determination subunit is configured to:
    after the touch operation stops, determine, based on the deformation position, velocity and force information of the affected point at a previous moment, the deformation position, velocity and force information of the affected point at a next moment.
  19. A terminal device, comprising:
    a memory and a processor, wherein a computer program is stored on the memory, and when the computer program is executed by the processor, the processor is caused to execute the method according to any one of claims 1-9.
  20. A computer-readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the processor is caused to execute the method according to any one of claims 1-9.
  21. A computer program product, wherein, when the computer program product runs on a terminal device, the terminal device is caused to execute the method according to any one of claims 1-9.
PCT/CN2021/115591 2020-10-28 2021-08-31 Elastic object rendering method and apparatus, device, and storage medium WO2022088928A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/034,325 US20230386137A1 (en) 2020-10-28 2021-08-31 Elastic object rendering method and apparatus, device, and storage medium
EP21884660.8A EP4207083A4 (en) 2020-10-28 2021-08-31 METHOD AND DEVICE FOR REPRESENTING AN ELASTIC OBJECT, DEVICE AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011169108.5 2020-10-28
CN202011169108.5A CN112258653A (zh) 2020-10-28 2020-10-28 Elastic object rendering method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022088928A1 true WO2022088928A1 (zh) 2022-05-05

Family

ID=74262555

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115591 WO2022088928A1 (zh) 2020-10-28 2021-08-31 Elastic object rendering method and apparatus, device, and storage medium

Country Status (4)

Country Link
US (1) US20230386137A1 (zh)
EP (1) EP4207083A4 (zh)
CN (1) CN112258653A (zh)
WO (1) WO2022088928A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114722447A (zh) * 2022-06-09 2022-07-08 广东时谛智能科技有限公司 多指触控展示鞋子模型方法、装置、设备及存储介质
CN115034052A (zh) * 2022-05-30 2022-09-09 广东时谛智能科技有限公司 鞋底模型弹性展示方法、装置、设备及存储介质

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258653A (zh) * 2020-10-28 2021-01-22 北京字跳网络技术有限公司 弹性对象的渲染方法、装置、设备及存储介质
CN112991444B (zh) * 2021-02-10 2023-06-20 北京字跳网络技术有限公司 位置确定方法及设备
CN113096225B (zh) * 2021-03-19 2023-11-21 北京达佳互联信息技术有限公司 一种图像特效的生成方法、装置、电子设备及存储介质
CN115035277A (zh) * 2022-05-30 2022-09-09 广东时谛智能科技有限公司 鞋子模型动态展示方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130181972A1 (en) * 2012-01-16 2013-07-18 Autodesk, Inc. Three dimensional contriver tool for modeling with multi-touch devices
CN106504329A (zh) * 2016-09-27 2017-03-15 西安科技大学 一种基于牙体长轴的质点弹簧模型的牙龈变形仿真方法
CN110069195A (zh) * 2019-01-31 2019-07-30 北京字节跳动网络技术有限公司 图像拖拽变形方法和装置
CN110069191A (zh) * 2019-01-31 2019-07-30 北京字节跳动网络技术有限公司 基于终端的图像拖拽变形实现方法和装置
CN110555798A (zh) * 2019-08-26 2019-12-10 北京字节跳动网络技术有限公司 图像变形方法、装置、电子设备及计算机可读存储介质
CN112258653A (zh) * 2020-10-28 2021-01-22 北京字跳网络技术有限公司 弹性对象的渲染方法、装置、设备及存储介质


Also Published As

Publication number Publication date
EP4207083A1 (en) 2023-07-05
EP4207083A4 (en) 2024-06-05
US20230386137A1 (en) 2023-11-30
CN112258653A (zh) 2021-01-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21884660; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021884660; Country of ref document: EP; Effective date: 20230331)
NENP Non-entry into the national phase (Ref country code: DE)