WO2022033445A1 - Interactive dynamic fluid effect processing method and apparatus, and electronic device - Google Patents

Interactive dynamic fluid effect processing method and apparatus, and electronic device

Info

Publication number
WO2022033445A1
Authority
WO
WIPO (PCT)
Prior art keywords
fluid
pose
particle
model
change
Prior art date
Application number
PCT/CN2021/111608
Other languages
English (en)
French (fr)
Inventor
李奇
李小奇
王惊雷
Original Assignee
北京字节跳动网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司 filed Critical 北京字节跳动网络技术有限公司
Priority to US18/041,003 priority Critical patent/US20230368422A1/en
Publication of WO2022033445A1 publication Critical patent/WO2022033445A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/24 Fluid dynamics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes

Definitions

  • the present disclosure relates to the field of computer technology, and in particular, to an interactive dynamic fluid effect processing method, apparatus, and electronic device.
  • the present disclosure provides an interactive dynamic fluid effect processing method, device and electronic device, which are used to solve the problems existing in the prior art.
  • an interactive dynamic fluid effect processing method comprising:
  • the position of the fluid displayed in the user display interface is adjusted, and the motion change of the fluid is dynamically displayed on the user display interface.
  • a game interactive dynamic fluid effect processing device comprising:
  • the collection module is used to collect video and detect the pose change of the target object in the video;
  • an acquisition module configured to acquire the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface
  • the determining module is used to determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship;
  • the adjustment module is used to adjust the position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the movement change of the fluid on the user display interface.
  • the present disclosure provides an electronic device, the electronic device comprising:
  • one or more processors;
  • a memory storing one or more application programs, wherein, when the one or more application programs are executed by the one or more processors, the electronic device is caused to perform the operations corresponding to the interactive dynamic fluid effect processing method shown in the first aspect of the present disclosure.
  • the present disclosure provides a computer-readable medium for storing computer instructions which, when executed by a computer, enable the computer to execute the interactive dynamic fluid effect processing method shown in the first aspect of the present disclosure.
  • a video is collected, and a pose change of a target object in the video is detected;
  • acquire the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface; determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship; adjust the position of the fluid displayed in the user display interface according to the pose change of the object model; and dynamically display the movement change of the fluid on the user display interface.
  • FIG. 1 is a schematic flowchart of an interactive dynamic fluid effect processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of performing dynamic fluid effect processing through face detection according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a game interactive dynamic fluid effect processing device according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the term "including" and variations thereof are open-ended, i.e., "including but not limited to".
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • the technical solutions of the present disclosure can be applied to applications involving the production, application, and use of dynamic fluid effects.
  • the technical solutions of the present disclosure can be applied to terminal equipment, and the terminal equipment can include a mobile terminal or computer equipment, wherein the mobile terminal can include, for example, a smart phone, a palmtop computer, a tablet computer, a wearable device with a display screen, etc.; the computer equipment can include, for example, desktops, laptops, all-in-ones, smart TVs, and the like.
  • Through the technical solution of the present disclosure, the first object and the fluid are modeled in three-dimensional space, and the rendered images of the object model and the fluid model are displayed in a two-dimensional user display interface. For brevity, in the following, the rendered model image of the first object displayed in the user display interface is abbreviated as the "first object", and, similarly, the rendered model image of the fluid displayed in the user display interface is abbreviated as the "fluid". The first object can be in contact with the fluid in the interface.
  • For example, the first object contains fluid; when the first object is moved by an external force, the fluid contained in it moves accordingly, and when the first object collides with the fluid, the motion change of the fluid can be dynamically displayed on the user display interface.
  • As another example, the fluid is outside the first object; when the fluid moves under the action of an external force, the fluid collides with the first object, and the motion change of the fluid is dynamically displayed on the user display interface. Those skilled in the art should understand that the present disclosure does not limit the positions and motions of the first object and the fluid.
  • FIG. 1 is a schematic flowchart of an interactive dynamic fluid effect processing method provided by an embodiment of the present disclosure. As shown in FIG. 1 , the method may include:
  • Step S101: a video is collected, and a pose change of a target object in the video is detected.
  • the terminal device may start a video capture device (eg, a camera) of the terminal device to capture video.
  • the duration of video capture may be a preset time period, or the duration of video capture may be determined according to a video capture start instruction and a video capture end instruction, which is not limited in the present disclosure.
  • the terminal device detects the target object in the collected video, where the target object may be a specific object in the video, including but not limited to: a human face, a human head, a human hand, and the like.
  • When the target object is a human face, a face detection algorithm can be used to detect the human face in each frame of image in the video; when the target object is a human head, a head detection algorithm can be used to detect the human head in each frame of image in the video.
  • Detecting the pose change of the target object may specifically include detecting the pose change of key points in the target object, and determining the pose change of the target object according to the pose change of the key point.
  • the key points may include the center point of the human face.
  • the pose change of the center point of the face is used to determine the pose change of the face.
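  • For instance, a rough Python sketch of this idea follows (the landmark detector and the mean-of-landmarks definition of the center point are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

def center_point_change(prev_landmarks: np.ndarray,
                        curr_landmarks: np.ndarray) -> np.ndarray:
    """Approximate the face pose change by the displacement of the
    face center point between two frames.

    Both inputs are (N, 2) arrays of 2D facial key points; the center
    point is taken here as their mean. All names are illustrative,
    since the disclosure does not prescribe a specific detector.
    """
    prev_center = prev_landmarks.mean(axis=0)
    curr_center = curr_landmarks.mean(axis=0)
    return curr_center - prev_center  # 2D displacement of the center point
```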
  • Step S102: acquire the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface.
  • the user display interface may be a display interface in an application program, and the solution provided by the embodiments of the present disclosure may be implemented, for example, as an application program or a function plug-in of the application program.
  • When the application program is started, the user display interface is displayed; or, when the terminal device detects the user's trigger instruction (such as clicking a virtual button) for the function plug-in of the application program, the user display interface is displayed. The user display interface can also display an image with the first object and the fluid.
  • the terminal device can model the first object and the fluid in three-dimensional space, and project the rendered images of the object model and the fluid model onto the two-dimensional user display interface, so as to display the first object and the fluid in the user display interface.
  • the first object may be an object whose shape and volume are relatively stable during motion and after being acted upon by a force, for example, a rigid body, a soft body, and the like.
  • the first object can be in contact with the fluid in the interface.
  • the fluid can be contained in the first object.
  • When the first object moves, the fluid contained in the first object moves accordingly, and a dynamic effect is presented in the user display interface.
  • the terminal device detects the target object from the video, and acquires the pose mapping relationship between the pre-configured target object and the object model corresponding to the first object displayed in the user display interface. Since the display position of the first object in the user display interface is related to the position of the target object in the user display interface, when the pose of the target object changes, the pose of the first object also changes. Moreover, the terminal device determines the pose change of the object model according to the pose mapping relationship, so as to present the effect that the target object moves and the first object also moves along with the target object in the user display interface.
  • Step S103: determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship.
  • the terminal device can determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship.
  • the pose change may include the change amount of the pose, or may be the pose after the change.
  • Step S104: adjust the position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the movement change of the fluid on the user display interface.
  • the terminal device can adjust the position of the fluid in the user display interface according to the pose change of the first object, and display the dynamic effect of the fluid movement driven by the object model of the first object in the user display interface.
  • For example, when the pose of the human face changes, the first object correspondingly undergoes a pose change; since the first object carries fluid, the pose change of the first object subjects the fluid carried inside to an external force, which changes the position of the fluid.
  • the terminal device can determine the pose change of the first object according to the pose change of the face, and further adjust the position of the fluid according to the pose change of the first object, and display the dynamic effect of the fluid movement on the user display interface.
  • the specific display position can be determined through the following embodiments.
  • the first object is displayed in the user display interface.
  • When the terminal device initially performs video capture, the user display interface is displayed according to the user's display trigger operation for the user display interface, and the video capture device is turned on to capture video.
  • After determining the initial display position of the target object, the terminal device determines the initial display position of the first object in the user display interface, and displays the first object in the user display interface according to the initial display position.
  • the terminal device can also display the fluid while displaying the first object.
  • the terminal device can display on the user display interface that the first object contains the fluid.
  • the terminal device may further display the fluid after displaying the first object, for example, display a dynamic effect of fluid injection into the first object in the user display interface.
  • the present disclosure does not limit the display order and specific display manner of the first object and the fluid.
  • the position association relationship may include center point coincidence
  • the terminal device displays the center point position of the target object and the center point position of the first object overlapped on the user display interface.
  • the terminal device can associate the center point position of the target object with the center point position of the first object containing the fluid, and display the target object and the first object containing the fluid in the user display interface with their center points coincident.
  • the position association relationship may further include maintaining a specific distance d between the center point position of the target object and the center point position of the first object; in this case, the terminal device displays the center point position of the target object and the center point position of the first object in the user display interface at the specific distance d from each other.
  • Specifically, the terminal device can determine the center point position of the first object according to the center point position of the target object and the specific distance d, and display the first object in the user display interface according to that position.
  • it may be an object model of the first object obtained by modeling the first object in a three-dimensional space according to feature information (eg, size, shape, color, etc.) of the first object.
  • In some embodiments, detecting the pose change of the target object in the video in step S101 may include:
  • step S103 may include:
  • according to the change amount of the pose of the target object and the pose mapping relationship, the change amount of the pose of the object model corresponding to the first object is determined.
  • adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model may include:
  • the terminal device can detect the change of the pose of the target object, and determine the change of the pose of the object model according to the change of the pose of the target object and the pose mapping relationship between the target object and the object model.
  • According to the change amount of the pose of the object model, the pose of the object model after the change is determined, and the position of the fluid displayed in the user display interface is adjusted according to the pose change of the object model.
  • In this way, the change amount of the pose of the object model corresponding to the first object can be determined, and the position of the fluid can be adjusted according to that change amount, which makes the adjusted position of the fluid more accurate and presents a better dynamic effect.
  • the detecting the change amount of the pose of the target object in the video in step S101 includes: detecting the change amount of the position and the pose of the target object in the video.
  • the pose mapping relationship includes a first mapping relationship between the change in the position of the target object and the change in the position of the object model, and a second mapping relationship between the change in the attitude of the target object and the change in the attitude of the object model.
  • The step of determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship may include: determining the variation of the position of the object model according to the variation of the position of the target object and the first mapping relationship, and determining the variation of the attitude of the object model according to the variation of the attitude of the target object and the second mapping relationship.
  • a pose can include a position and an attitude.
  • the terminal device can determine the position and posture of the target object in the three-dimensional space according to the two-dimensional image of the target object detected in the video.
  • the posture of the target object may be the rotation angle of the target object in the three directions of the x-axis, the y-axis, and the z-axis, which may be called azimuth angle, pitch angle and roll angle respectively.
  • the terminal device can estimate the pose of the target object in three-dimensional space according to the two-dimensional image of the target object.
  • For example, when the target object is a human head, a head pose estimation algorithm can be used to estimate the pose of the human head according to the face image.
  • the position of the target object may be a position coordinate or a position vector of the target object in the three-dimensional space determined according to the position of the target object in the two-dimensional image.
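  • As one concrete (but assumed) way to obtain such a 3D pose from a 2D face image, a perspective-n-point solve over a few landmarks is common; the reference geometry and camera intrinsics below are illustrative guesses, and the disclosure does not mandate this particular algorithm:

```python
import cv2
import numpy as np

# Generic 3D face reference points (in mm); these values are assumptions.
MODEL_POINTS_3D = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def estimate_head_pose(image_points_2d, frame_w: int, frame_h: int):
    """Estimate the head's rotation and translation in 3D space from
    six detected 2D facial landmarks, using OpenCV's PnP solver."""
    focal = frame_w  # crude focal-length guess
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS_3D,
                                  np.asarray(image_points_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    # rvec encodes the head's attitude; tvec its position in camera space.
    return rvec, tvec
```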
  • the terminal device may establish a first mapping relationship between the variation of the position of the target object in three-dimensional space and the variation of the position of the object model corresponding to the first object, establish a second mapping relationship between the estimated variation of the attitude of the target object in three-dimensional space and the variation of the attitude of the object model, and then determine the variation of the position of the object model and the variation of the attitude of the object model according to the variation of the position and the attitude of the target object, respectively.
  • For example, the change of the position and the attitude of the object model of the first object in three-dimensional space can be determined by the following formulas (the original formula images are not reproduced in this text; they are reconstructed here from the symbol definitions, so the published form may differ):

    Δp_s = λ · Δp_f    (1)
    Δq_s = Δq_f    (2)

  • where Δp_f represents the amount of change in the position of the target object in three-dimensional space; Δp_s represents the amount of change in the position of the object model corresponding to the first object in three-dimensional space; λ represents the scale parameter, which can be a preset value used to adjust the speed of the movement of the object model in three-dimensional space; Δq_f represents the change of the attitude of the target object in three-dimensional space; and Δq_s represents the change of the attitude of the object model in three-dimensional space.
  • Formula (1) may be used as the first mapping relationship, and formula (2) may be used as the second mapping relationship.
  • According to formulas (1) and (2), the position change amount and the attitude change amount of the object model can be determined, so that the user display interface can present the dynamic effect of the object model moving along with the target object, in a simple and efficient manner.
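  • A minimal Python sketch applying this mapping, assuming the reconstructed formulas above (all names are ours):

```python
import numpy as np

def map_pose_change(delta_p_f: np.ndarray, delta_q_f: np.ndarray,
                    scale: float = 1.0):
    """Map the target object's pose change to the object model's.

    delta_p_f : (3,) change in the target object's 3D position
    delta_q_f : (3,) change in its attitude (azimuth, pitch, roll)
    scale     : preset scale parameter controlling how fast the object
                model moves relative to the target object
    """
    delta_p_s = scale * delta_p_f  # formula (1): scaled position change
    delta_q_s = delta_q_f          # formula (2): attitude follows directly
    return delta_p_s, delta_q_s
```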
  • adjusting the position of the fluid displayed in the user display interface according to the change in the pose of the object model in step S104 includes:
  • determining, according to the position of each model particle and the position of the fluid particle, the model particle that collides with the fluid particle;
  • When the first object moves with the pose transformation of the target object in the video, the fluid can collide with the first object. The terminal device can determine the changed position of the object model according to the change in the position of the object model, and determine the position of each model particle in the object model according to the changed position of the object model.
  • The object model corresponding to the first object can be exported as point cloud data through 3D modeling software (for example, 3ds Max, Maya, etc.). The point cloud data is in point cloud format (the suffix of a point cloud format file is .ply). Each point cloud record corresponds to a point, and each point corresponds to a model particle; each record can include the position and normal information of the corresponding point in the model, and each normal can point to the outside of the object model.
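  • As an illustration of consuming such data, the following Python sketch (an assumption, not part of the disclosure) reads model-particle positions and normals from an ASCII .ply file whose vertex rows carry x, y, z, nx, ny, nz; real exports may be binary or ordered differently:

```python
import numpy as np

def load_ply_point_cloud(path: str):
    """Read model-particle positions and normals from an ASCII .ply file.

    Assumes each vertex row carries the six properties x y z nx ny nz,
    which is one common export layout, not the only one.
    """
    with open(path, "r") as f:
        header = []
        line = f.readline().strip()
        while line != "end_header":
            header.append(line)
            line = f.readline().strip()
        # "element vertex N" in the header gives the number of points.
        n_vertices = next(int(h.split()[-1]) for h in header
                          if h.startswith("element vertex"))
        data = np.loadtxt(f, max_rows=n_vertices)
    positions, normals = data[:, 0:3], data[:, 3:6]
    return positions, normals
```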
  • For the fluid, the terminal device can simulate the movement of the fluid according to the magnitude of the external force on the fluid particles, estimate the position of each fluid particle after movement, and take the obtained estimated position as the position of each fluid particle corresponding to the fluid.
  • the terminal device may calculate the estimated position of each fluid particle through a Position Based Fluid (PBF) simulation method.
  • For a fluid particle that collides with a model particle, the terminal device can adjust the position of the fluid particle according to the position of the colliding model particle, and use the adjusted position as the position displayed on the user display interface after the fluid particle moves, so as to dynamically display the movement change of the fluid on the user display interface.
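  • The prediction half of such a PBF step can be sketched as follows. This is a minimal illustration: the full PBF method also solves density constraints, which are omitted here, and all names are ours:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def predict_fluid_positions(positions, velocities, external_force,
                            mass: float = 1.0, dt: float = 1.0 / 60.0):
    """PBF-style prediction: integrate external forces to obtain each
    fluid particle's estimated position before collision handling.

    positions, velocities : (N, 3) arrays of fluid-particle state
    external_force        : (3,) or (N, 3) force from the moving object
    """
    accel = GRAVITY + external_force / mass
    new_velocities = velocities + accel * dt
    estimated_positions = positions + new_velocities * dt  # the "estimated position"
    return estimated_positions, new_velocities
```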
  • model particles are distributed near the fluid particles.
  • For each fluid particle, according to the estimated position of the fluid particle and the position of each model particle, it can be determined which model particles collide with the fluid particle and which model particles do not.
  • Specifically, the distance between the fluid particle and each model particle can be calculated, and the adjacent model particle of the fluid particle can be determined according to the distance; the terminal device can take the model particle closest to the fluid particle as the adjacent model particle of the fluid particle. Since the adjacent model particle of the fluid particle is the model particle most likely to collide with the fluid particle, if the distance between the adjacent model particle and the fluid particle is less than the preset distance, the adjacent model particle is the model particle that collides with the fluid particle; the terminal device thereby obtains the position of the model particle that collides with the fluid particle.
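  • A sketch of this neighbor test follows, using a k-d tree for the nearest-model-particle query (the data structure is our choice; the disclosure only requires the distance comparison):

```python
import numpy as np
from scipy.spatial import cKDTree

def find_colliding_model_particles(fluid_pos: np.ndarray,
                                   model_pos: np.ndarray,
                                   r: float):
    """For each fluid particle, locate the closest model particle and
    flag a collision when that adjacent particle lies closer than the
    preset distance r. Returns (collided_mask, nearest_idx)."""
    tree = cKDTree(model_pos)
    dist, nearest_idx = tree.query(fluid_pos, k=1)  # adjacent model particle
    collided_mask = dist < r
    return collided_mask, nearest_idx
```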
  • For fluid particles that do not collide with model particles, the terminal device estimates their positions from the positions before the motion (for example, using the PBF algorithm) to obtain the estimated positions, and uses the estimated positions as the positions displayed on the user display interface after the fluid particles move. Therefore, the motion change process of this part of the fluid particles displayed in the user display interface is: moving from the position before the motion to the estimated position.
  • For example, when no external force acts on the fluid, the terminal device can estimate, according to the current position of a fluid particle, the position to which it moves under the action of inertia, so as to simulate the motion of the fluid particle and display it on the user display interface.
  • the terminal device can simulate the motion of fluid particles under the action of inertia by means of PBF.
  • adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle may include:
  • the positions of the fluid particles that collide with the model particles are adjusted, so as to dynamically display the motion change of the fluid on the user display interface.
  • In specific implementation, the position of the fluid particle and the position of the model particle that collides with the fluid particle can be position vectors in three-dimensional space. The terminal device calculates the difference between the two position vectors and determines the position correction amount of the fluid particle according to that difference; it then adjusts the position of the colliding fluid particle according to the position correction amount, and uses the adjusted position as the position of the fluid particle after the collision.
  • The user display interface shows the fluid particle moving from the position before the movement to the adjusted position, so as to present the dynamic change effect of the fluid in the user display interface.
  • the position correction amount of the fluid particles is determined according to the positions of the fluid particles and the positions of the model particles that collide with the fluid particles, which may include:
  • the position correction amount of the fluid particle is determined.
  • the terminal device exports the object model as point cloud data, where each point cloud record corresponds to a model particle, and the point cloud data includes the position and normal information of each model particle in the model; each normal can point to the exterior of the object model.
  • the terminal device may preconfigure the first weight and the second weight, wherein the first weight may be the weight corresponding to the normal information of the model particle that collides with the fluid particle, and the second weight may be the weight corresponding to the first distance between the fluid particle and the colliding model particle. The terminal device determines the position correction amount of the fluid particle based on the first distance between the fluid particle and the colliding model particle, the normal information, the first weight, the second weight, and the preset distance r.
  • The terminal device can perform coordinate transformation on the position and normal information of the model particles, transforming them into the coordinate system used for calculating the position correction amount of the fluid particles (also referred to as the fluid coordinate system), for example through formulas (3)-(4) (reconstructed here from the symbol definitions, since the original formula images are not reproduced in this text):

    p_w = R · p_m + T    (3)
    n_w = R · n_m    (4)

  • where p_w represents the position of each model particle in the fluid coordinate system; p_m represents the position of each model particle in the model coordinate system; n_w represents the normal vector of each model particle in the fluid coordinate system; n_m represents the normal vector of each model particle in the model coordinate system; R represents the rotation matrix; and T represents the translation vector. R and T can be pre-configured according to specific needs.
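  • A direct transcription of formulas (3)-(4) as reconstructed above (row-vector convention assumed):

```python
import numpy as np

def model_to_fluid_frame(p_m: np.ndarray, n_m: np.ndarray,
                         R: np.ndarray, T: np.ndarray):
    """Transform model-particle positions and normals into the fluid
    coordinate system: positions are rotated and translated per
    formula (3), normals are only rotated per formula (4)."""
    p_w = p_m @ R.T + T  # p_w = R p_m + T for each particle
    n_w = n_m @ R.T      # n_w = R n_m
    return p_w, n_w
```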
  • The position correction amount of the fluid particle can then be calculated by a formula of the following form (the original formula image is not reproduced in this text; the combination below is a reconstruction from the stated quantities and weights):

    Δp = ω1 · (r − ‖d‖) · n_w + ω2 · (r − ‖d‖) · d / ‖d‖    (5)

  • where Δp represents the position correction amount to be calculated; r represents the preset distance; d = p − x represents the difference between the position vectors, in three-dimensional space, of the fluid particle and the model particle that collides with it; ‖d‖ represents the first distance between the fluid particle and the colliding model particle; p represents the position vector of the position of the fluid particle; x represents the position vector of the position of the model particle that collided with the fluid particle; n_w represents the normal vector, in the fluid coordinate system, of the model particle that collided with the fluid particle; ω1 represents the first weight; and ω2 represents the second weight.
  • The adjusted position of the fluid particle is then

    p_{t+1} = p_t + Δp

  • where p_t represents the position of the fluid particle before the position adjustment (for example, the estimated position calculated by the PBF method); Δp represents the position correction amount; p_{t+1} represents the position of the fluid particle after the position adjustment; t represents the time corresponding to the position before the adjustment; and t+1 represents the time corresponding to the position after the adjustment.
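  • Putting the correction and the update together, a hedged sketch (the exact weighting in formula (5) is our reconstruction, as noted above):

```python
import numpy as np

def correct_collided_particles(est_pos, model_pos, model_normals,
                               collided_mask, nearest_idx,
                               r: float, w1: float, w2: float):
    """Adjust the estimated positions of colliding fluid particles.

    Applies the reconstructed correction of formula (5) and the update
    p_{t+1} = p_t + Δp to every particle flagged as colliding.
    """
    p = est_pos[collided_mask]                     # fluid-particle positions p
    x = model_pos[nearest_idx[collided_mask]]      # colliding model particles x
    n = model_normals[nearest_idx[collided_mask]]  # their normals n_w
    d = p - x                                      # offset vector d = p - x
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    push = np.maximum(r - dist, 0.0)               # only correct within radius r
    delta_p = w1 * push * n + w2 * push * d / np.maximum(dist, 1e-8)
    out = est_pos.copy()
    out[collided_mask] = p + delta_p               # p_{t+1} = p_t + Δp
    return out
```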
  • Step S201 is performed: when the terminal device detects the camera start instruction on the user display interface, the camera is turned on to capture video ("camera captures the picture" as shown in the figure);
  • Step S202 is performed: the change of the pose of the target object in the video is detected; if the target object is a human face, the change of the pose of the human face is detected ("face detection" as shown in the figure);
  • Step S203 is performed: when the human face appears on the user display interface for the first time ("first appearance" as shown in the figure), the initial display position of the first object in the user display interface is determined according to the display position of the human face detected for the first time in the user display interface, and the fluid is displayed in the first object (for example, the fluid is injected into the first object);
  • Step S204 is performed: when the face moves, the change of the pose of the face is obtained, and the change of the pose of the first object is determined according to the change of the pose of the face and the pose mapping relationship ("calculate the pose change of the first object" as shown in the figure). Since the movement of the first object also drives the fluid in the first object to move together, the terminal device can adjust the position of the fluid in the first object according to the change of the pose of the first object, and dynamically display the motion change of the fluid on the user display interface ("drive the fluid in the first object to move together" as shown in the figure);
  • Step S205 is performed: when the face does not move, the terminal device can determine the position of the fluid after its movement by means of PBF, simulating the state in which the fluid continues to move under the action of inertia ("fluid continues to flow under the action of inertia" as shown in the figure);
  • Step S206 is performed: the terminal device outputs the image to the screen.
  • To sum up, the interactive dynamic fluid effect processing method may include: collecting a video, and detecting a pose change of a target object in the video; acquiring the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface; determining the pose change of the object model according to the pose change of the target object and the pose mapping relationship; adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model; and dynamically displaying the movement change of the fluid on the user display interface.
  • an embodiment of the present disclosure further provides an interactive dynamic fluid effect processing apparatus 30.
  • the interactive dynamic fluid effect processing apparatus 30 may include:
  • the collection module 31 is used to collect video and detect the pose change of the target object in the video
  • an acquisition module 32 configured to acquire the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface
  • the determination module 33 is used for determining the pose change of the object model according to the pose change of the target object and the pose mapping relationship;
  • the adjustment module 34 is configured to adjust the position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the movement change of the fluid on the user display interface.
  • the apparatus 30 further includes a receiving module for:
  • the first object is displayed in the user display interface according to the initial display position.
  • the collection module 31 is specifically used for:
  • the determination module 33 is specifically used for:
  • the adjustment module 34 is specifically used for:
  • When the collection module 31 detects the variation of the pose of the target object in the video, it is configured to:
  • the pose mapping relationship includes a first mapping relationship between the position change of the target object and the position change of the object model, and a second mapping relationship between the attitude change of the target object and the attitude change of the object model.
  • When determining the variation of the pose of the object model according to the variation of the pose of the target object and the pose mapping relationship, the determination module 33 is configured to: determine the variation of the position of the object model according to the variation of the position of the target object and the first mapping relationship, and determine the variation of the attitude of the object model according to the variation of the attitude of the target object and the second mapping relationship.
  • when adjusting the position of the fluid displayed in the user display interface according to the change of the pose of the object model, the adjustment module 34 is configured to:
  • determine, according to the position of each model particle and the position of the fluid particle, the model particle that collides with the fluid particle;
  • the adjustment module 34 is specifically used to:
  • adjust, according to the position correction amount, the position of the fluid particles that collide with the model particles, so as to dynamically display the motion changes of the fluid on the user display interface.
  • when determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle, the adjustment module 34 is configured to:
  • the position correction amount of the fluid particle is determined.
  • the interactive dynamic fluid effect processing apparatus in the embodiments of the present disclosure can execute the interactive dynamic fluid effect processing method provided by the embodiments of the present disclosure, and the implementation principles are similar.
  • the actions performed by each module in the interactive dynamic fluid effect processing apparatus in the embodiments of the present disclosure correspond to the steps in the interactive dynamic fluid effect processing method in the embodiments of the present disclosure.
  • Through the apparatus, a video can be collected, and a pose change of a target object in the video can be detected; the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface can be acquired; the pose change of the object model can be determined according to the pose change of the target object and the pose mapping relationship; and the position of the fluid displayed in the user display interface can be adjusted according to the pose change of the object model, with the motion change of the fluid dynamically displayed on the user display interface.
  • Referring to FIG. 4, it shows a schematic structural diagram of an electronic device 400 suitable for implementing an embodiment of the present disclosure.
  • the execution body of the technical solutions of the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), vehicle-mounted terminals (such as in-vehicle navigation terminals), and wearable electronic devices, as well as stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 4 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device includes a memory and a processor, where the memory is used to store a program for executing the methods described in the foregoing method embodiments, and the processor is configured to execute the program stored in the memory, so as to implement the functions of the above-described embodiments of the present disclosure and/or other desired functions.
  • the processor here may be referred to as the processing device 401 described below, and the memory may include at least one of a read-only memory (ROM) 402, a random access memory (RAM) 403, and a storage device 408 described below, as shown below:
  • an electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401, which may execute various appropriate actions and processes according to a program stored in the read-only memory (ROM) 402 or a program loaded from the storage device 408 into the random access memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the electronic device 400 are also stored.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data.
  • While FIG. 4 shows the electronic device 400 having various means, it should be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods described in the various embodiments above.
  • the computer program may be downloaded and installed from the network via the communication device 409, or from the storage device 408, or from the ROM 402.
  • When the computer program is executed by the processing apparatus 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, electrical wire, optical fiber cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • In some embodiments, the client and the server can communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: collect a video, and detect the pose change of the target object in the video; obtain the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface; determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship; adjust the position of the fluid displayed in the user display interface according to the pose change of the object model; and dynamically display the motion change of the fluid on the user display interface.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the modules or units involved in the embodiments of the present disclosure may be implemented in software or hardware. Among them, the name of the module or unit does not constitute a limitation of the unit itself under certain circumstances.
  • exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the present disclosure provides an interactive dynamic fluid effect processing method, the method comprising:
  • the position of the fluid displayed in the user display interface is adjusted, and the motion change of the fluid is dynamically displayed on the user display interface.
  • the method further includes:
  • the first object is displayed in the user display interface according to the initial display position.
  • the detection of the pose change of the target object in the video includes:
  • the determining the pose change of the object model corresponding to the first object according to the pose change of the target object and the pose mapping relationship includes:
  • the adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model includes:
  • the position of the fluid displayed in the user display interface is adjusted according to the change amount of the pose of the object model.
  • the detection of changes in the pose of the target object in the video includes:
  • In one or more embodiments, the pose mapping relationship includes a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
  • In one or more embodiments, determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship includes: determining the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and determining the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
  • the adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model includes:
  • the position of the fluid particle is adjusted according to the position of the model particle that collided with the fluid particle.
  • adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle includes:
  • adjusting, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
  • the determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle includes:
  • a position correction amount of the fluid particle is determined.
  • the present disclosure provides an interactive dynamic fluid effect processing device, the device comprising:
  • the collection module is used to collect video and detect the pose change of the target object in the video;
  • an acquisition module configured to acquire the pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface
  • a determination module configured to determine the pose change of the object model according to the pose change of the target object and the pose mapping relationship
  • the adjustment module is configured to adjust the position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the movement change of the fluid on the user display interface.
  • the apparatus further includes a receiving module for:
  • the first object is displayed in the user display interface according to the initial display position.
  • the acquisition module is specifically used for:
  • the determining module is specifically used for:
  • the adjustment module is specifically used for:
  • the position of the fluid displayed in the user display interface is adjusted according to the change amount of the pose of the object model.
  • when detecting the variation of the pose of the target object in the video, the collection module is configured to:
  • the pose mapping relationship includes a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
  • when determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship, the determining module is configured to: determine the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and determine the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
  • when adjusting the position of the fluid displayed in the user display interface according to the change in the pose of the object model, the adjustment module is configured to:
  • the position of the fluid particle is adjusted according to the position of the model particle that collided with the fluid particle.
  • the adjustment module is specifically used to:
  • adjusting, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
  • when determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle, the adjustment module is configured to:
  • a position correction amount of the fluid particle is determined.
  • the present disclosure provides an electronic device, comprising:
  • one or more processors;
  • a memory storing one or more application programs, wherein, when the one or more application programs are executed by the one or more processors, the electronic device is caused to execute the interactive dynamic fluid effect processing method.
  • the present disclosure provides a computer-readable medium for storing computer instructions that, when executed by a computer, cause the computer to execute all The interactive dynamic fluid effect processing method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Architecture (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an interactive dynamic fluid effect processing method and apparatus, and an electronic device, relating to the field of computer technology. The method includes: capturing a video, and detecting a pose change of a target object in the video; acquiring a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface; determining a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and adjusting a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically displaying the motion change of the fluid on the user display interface. According to the technical solution provided by the present disclosure, the motion of the object model and the fluid in the user display interface is controlled by the pose change of the target object in the captured video, providing a novel and entertaining interaction.

Description

Interactive dynamic fluid effect processing method and apparatus, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202010796950.5, filed on August 10, 2020 and entitled "Interactive dynamic fluid effect processing method and apparatus, and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to an interactive dynamic fluid effect processing method and apparatus, and an electronic device.
Background
With the rapid development of computer and communication technologies, applications running on terminal devices have become ubiquitous and have greatly enriched people's daily lives. Users can entertain themselves and share their daily lives with other users through various applications. To make these applications more engaging, interaction features are commonly added to game or video-shooting applications to improve the user experience.
However, in the prior art, interactions on mobile terminals mostly rely on finger-on-touchscreen input, which offers limited gameplay and little fun.
Summary
The present disclosure provides an interactive dynamic fluid effect processing method and apparatus, and an electronic device, to solve the problems existing in the prior art.
In a first aspect, an interactive dynamic fluid effect processing method is provided, the method comprising:
capturing a video, and detecting a pose change of a target object in the video;
acquiring a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
determining a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
adjusting a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically displaying the motion change of the fluid on the user display interface.
In a second aspect, an interactive dynamic fluid effect processing apparatus is provided, the apparatus comprising:
a capture module configured to capture a video and detect a pose change of a target object in the video;
an acquisition module configured to acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
a determination module configured to determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
an adjustment module configured to adjust a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
In a third aspect, the present disclosure provides an electronic device, comprising:
one or more processors; and
a memory storing one or more application programs, wherein when the one or more application programs are executed by the one or more processors, the electronic device is caused to perform the operations corresponding to the interactive dynamic fluid effect processing method according to the first aspect of the present disclosure.
In a fourth aspect, the present disclosure provides a computer-readable medium storing computer instructions that, when executed by a computer, cause the computer to perform the interactive dynamic fluid effect processing method according to the first aspect of the present disclosure.
The beneficial effects of the technical solutions provided by the present disclosure may include the following:
In the interactive dynamic fluid effect processing method and apparatus and the electronic device provided by the embodiments of the present disclosure, a video is captured, and a pose change of a target object in the video is detected; a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface is acquired; a pose change of the object model is determined according to the pose change of the target object and the pose mapping relationship; and a position of a fluid displayed in the user display interface is adjusted according to the pose change of the object model, and the motion change of the fluid is dynamically displayed on the user display interface. With this technical solution, the motion of the object and the fluid in the user display interface is controlled by the pose change of the target object in the captured video, providing a novel and entertaining interaction that can improve the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present disclosure more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below.
FIG. 1 is a schematic flowchart of an interactive dynamic fluid effect processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of dynamic fluid effect processing via face detection provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an interactive dynamic fluid effect processing apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and variations thereof are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish apparatuses, modules, or units; they are not intended to require that these apparatuses, modules, or units be different apparatuses, modules, or units, nor to limit the order or interdependence of the functions they perform.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
The technical solutions of the present disclosure, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The technical solution of the present disclosure can be applied in applications involving the production, deployment, and use of dynamic fluid effects. It can be applied in a terminal device, which may be a mobile terminal or a computer device, where the mobile terminal may include, for example, a smartphone, a palmtop computer, a tablet computer, a wearable device with a display screen, etc., and the computer device may include, for example, a desktop computer, a laptop computer, an all-in-one computer, a smart TV, etc. With the technical solution of the present disclosure, the first object and the fluid are modeled in three-dimensional space, and the rendered images of the object model and the fluid model are displayed in a two-dimensional user display interface (for brevity, hereinafter, the rendered image of the model of the first object displayed in the user display interface is simply referred to as the "first object", and similarly, the rendered image of the fluid model displayed in the user display interface is simply referred to as the "fluid"). The first object may be in contact with the fluid in the interface; for example, the first object contains the fluid, and when the first object moves under an external force, the fluid contained therein moves accordingly; when the first object and the fluid collide, the motion change of the fluid can be dynamically displayed in the user display interface. As another example, the fluid is outside the first object; when the fluid moves under an external force, it collides with the first object, and the motion change of the fluid is dynamically displayed in the user display interface. Those skilled in the art should understand that the present disclosure does not limit the positions or the motion of the first object and the fluid.
FIG. 1 is a schematic flowchart of an interactive dynamic fluid effect processing method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method may include:
Step S101: capture a video, and detect a pose change of a target object in the video.
Specifically, when a video capture instruction is detected, the terminal device may start its video capture apparatus (e.g., a camera) to capture a video. The duration of video capture may be a preset time period, or may be determined according to a video capture start instruction and a video capture end instruction; the present disclosure does not limit this. After the video is captured, the terminal device detects a target object in the captured video, where the target object may be a specific object in the video, including but not limited to: a person's face, head, hand, etc. Optionally, when the target object is a person's face, a face detection algorithm may be used to detect the face in each frame of the video; when the target object is a person's head, a head detection algorithm may be used to detect the head in each frame of the video.
Detecting the pose change of the target object may specifically include detecting the pose change of key points in the target object, and determining the pose change of the target object according to the pose change of the key points.
In one embodiment, taking the target object being a human face as an example, the key points may include the center point of the face; the terminal device then detects the pose change of the center point of the face in each frame of the captured video, and determines the pose change of the face from the pose change of its center point.
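For illustration only, the displacement of the face center between successive frames might be measured as in the following minimal Python sketch (not part of the disclosure); the OpenCV Haar-cascade detector and the single-face assumption are simplifications chosen for brevity:

```python
import cv2

# Off-the-shelf OpenCV frontal-face detector; any face detector could be
# substituted here.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_center(frame):
    """Return the (x, y) center of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x + w / 2.0, y + h / 2.0)

cap = cv2.VideoCapture(0)  # start the video capture apparatus
prev_center = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    center = detect_face_center(frame)
    if center is not None and prev_center is not None:
        # Per-frame displacement of the key point (the face center), used as
        # the pose change of the target object in the steps that follow.
        dx, dy = center[0] - prev_center[0], center[1] - prev_center[1]
    prev_center = center
cap.release()
```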
Step S102: acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface.
The user display interface may be a display interface of an application. The solution provided by the embodiments of the present disclosure may be implemented, for example, as an application or as a functional plug-in of an application. When the terminal device detects a user's launch instruction for the application, it launches the application and presents the user display interface; or, when the terminal device detects a user's trigger instruction for the functional plug-in of the application (e.g., tapping a virtual button), it presents the user display interface, in which images of the first object and the fluid may also be displayed. In one embodiment, the terminal device may model the first object and the fluid in three-dimensional space and project the rendered images of the object model and the fluid model onto the two-dimensional user display interface, thereby displaying the first object and the fluid in the user display interface.
The first object may be an object whose shape and volume remain relatively stable during motion and under applied forces, for example, a rigid body or a soft body. The first object may be in contact with the fluid in the interface; for example, the fluid may be contained in the first object, and when the first object moves, the fluid contained therein moves accordingly, presenting a dynamic effect in the user display interface.
Specifically, the terminal device detects the target object from the video, and acquires the preconfigured pose mapping relationship between the target object and the object model corresponding to the first object displayed in the user display interface. Since the display position of the first object in the user display interface is related to the position of the target object in the user display interface, when the pose of the target object changes, the pose of the first object changes as well. The terminal device then determines the pose change of the object model according to the pose mapping relationship, thereby presenting in the user display interface the effect that the target object moves and the first object moves along with it.
Step S103: determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship.
Since the display position of the first object in the user display interface is related to the position of the target object in the user display interface, and the pose mapping relationship between the target object and the object model is preconfigured, the terminal device can determine the pose change of the object model from the pose change of the target object and the pose mapping relationship. The pose change here may be a change amount of the pose, or the pose after the change.
Step S104: adjust a position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
Taking the case where the first object contains the fluid as an example: when the pose of the target object changes, since the display position of the first object in the user display interface is related to the position of the target object in the user display interface, the pose of the first object changes accordingly and drives the fluid to move; the fluid is subjected to an external force, and its position changes as well. The terminal device may adjust the position of the fluid in the user display interface according to the pose change of the first object, and display in the user display interface the dynamic effect of the object model of the first object driving the motion of the fluid.
In one embodiment, taking the target object being a human face as an example: following the pose change of the face, the pose of the first object changes accordingly; the first object contains the fluid, and the pose change of the first object applies an external force to the fluid contained inside, so that the position of the fluid changes. The terminal device may determine the pose change of the first object from the pose change of the face, further adjust the position of the fluid according to the pose change of the first object, and display the dynamic effect of the fluid's motion in the user display interface.
When the terminal device displays the first object in the user display interface, the specific display position may be determined through the following embodiments.
In one possible implementation, the method includes:
receiving a display trigger operation from a user for the user display interface;
displaying the user display interface, and starting a video capture apparatus to capture a video;
detecting a target object in the video, and acquiring a position of the detected target object in the user display interface;
determining an initial display position of the first object in the user display interface according to the position of the target object in the user display interface; and
displaying the first object in the user display interface according to the initial display position.
In practical applications, when the terminal device initially performs video capture, it displays the user display interface according to the user's display trigger operation for the user display interface, and starts the video capture apparatus to capture a video. According to a position association relationship between the target object and the first object, the terminal device may, after determining the initial display position of the target object, determine the initial display position of the first object in the user display interface, and display the first object in the user display interface accordingly. In addition, the terminal device may display the fluid at the same time as the first object; for example, it may display, in the user display interface, the first object containing the fluid. In another embodiment, the terminal device may also display the fluid after displaying the first object, for example, displaying in the user display interface the dynamic effect of the fluid being poured into the first object. The present disclosure does not limit the display order or the specific display manner of the first object and the fluid.
The position association relationship may include coincidence of center points: the terminal device displays the center point of the target object and the center point of the first object at the same position in the user display interface. For example, still taking the case where the fluid is contained in the first object, the terminal device may associate the center point position of the target object with the center point position of the fluid-containing first object, and display the two with coinciding center points in the user display interface. In another embodiment, the position association relationship may also include keeping a specific distance d between the center point of the target object and the center point of the first object: the terminal device can determine the center point position of the first object from the position of the center point of the target object and the specific distance d, and display the first object in the user display interface at that position.
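As a toy illustration of the position association relationship described above (the offset direction is an arbitrary assumption, since only the distance d is fixed by the disclosure), the initial display position of the first object could be derived from the center of the target object as follows:

```python
def initial_display_position(target_center, d=0.0, direction=(0.0, 1.0)):
    """Center of the first object: coincident with the target's center when
    d == 0, otherwise kept at the specific distance d along an assumed unit
    direction (here, straight down in screen coordinates)."""
    tx, ty = target_center
    ux, uy = direction
    return (tx + d * ux, ty + d * uy)

# Center-coincidence case and fixed-distance case:
initial_display_position((320, 240))        # -> (320.0, 240.0)
initial_display_position((320, 240), d=40)  # -> (320.0, 280.0)
```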
In one embodiment, the object model of the first object may be obtained by modeling the first object in three-dimensional space according to feature information of the first object (for example, size, shape, color, etc.).
In one possible implementation, detecting the pose change of the target object in the video in step S101 may include:
detecting a change amount of the pose of the target object in the video.
Further, step S103 may include:
determining a change amount of the pose of the object model corresponding to the first object according to the change amount of the pose of the target object and the pose mapping relationship.
Accordingly, adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model may include:
adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model.
In practical applications, the terminal device may detect the change amount of the pose of the target object; determine the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship between the target object and the object model; determine the pose of the object model after the pose change according to the change amount of the pose of the object model; and adjust the position of the fluid displayed in the user display interface according to the changed pose of the object model.
In the embodiments of the present disclosure, the change amount of the pose of the object model corresponding to the first object can be determined from the change amount of the pose of the target object, and the position of the fluid is adjusted according to the change amount of the pose of the object model, which makes the adjusted position of the fluid more accurate and yields a better dynamic effect.
In one possible implementation, detecting the change amount of the pose of the target object in the video in step S101 includes: detecting a change amount of the position and a change amount of the attitude of the target object in the video. Here, the pose mapping relationship includes a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
Further, determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship may include:
determining the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and
determining the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
In practical applications, the pose may include a position and an attitude. The terminal device may determine the position and attitude of the target object in three-dimensional space from the two-dimensional image of the target object detected in the video. The attitude of the target object may be its rotation angles about the x-, y-, and z-axes, which may be called the azimuth (yaw), pitch, and roll angles, respectively. The terminal device may estimate the pose of the target object in three-dimensional space from its two-dimensional image; for example, a Head Pose Estimation algorithm may be used to estimate the pose of a person's head from a face image. The position of the target object may be the position coordinates or position vector of the target object in three-dimensional space, determined from the position of the target object in the two-dimensional image. The terminal device may establish a first mapping relationship between the change amount of the position of the target object in three-dimensional space and the change amount of the position of the object model corresponding to the first object, establish a second mapping relationship between the estimated change amount of the attitude of the target object in three-dimensional space and the change amount of the attitude of the object model, and determine the change amount of the position and the change amount of the attitude of the object model from the change amounts of the position and the attitude of the target object, respectively.
In one example, the change amount of the position and the change amount of the attitude of the object model of the first object in three-dimensional space may be determined from the estimated change amounts of the position and the attitude of the target object in three-dimensional space:
Δp_s = ω Δp_f    (1)
Δq_s = Δq_f    (2)
where Δp_f denotes the change amount of the position of the target object in three-dimensional space; Δp_s denotes the change amount of the position of the object model corresponding to the first object in three-dimensional space; ω denotes a scale parameter, which may be a preset value and can be used to tune the speed of the motion of the object model in three-dimensional space; Δq_f denotes the change amount of the attitude of the target object in three-dimensional space; and Δq_s denotes the change amount of the attitude of the object model in three-dimensional space. Formula (1) may serve as the first mapping relationship, and formula (2) as the second mapping relationship.
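As a minimal sketch of formulas (1) and (2) (the value of ω and the use of rotation angles for Δq are illustrative assumptions), the mapping could be implemented as follows:

```python
import numpy as np

def map_pose_change(dp_f, dq_f, omega=1.5):
    """Map the target object's pose change to the object model's pose change.
    omega is the preset scale parameter ω that tunes the model's motion speed."""
    dp_s = omega * np.asarray(dp_f, dtype=float)  # Δp_s = ω Δp_f   (1)
    dq_s = np.asarray(dq_f, dtype=float)          # Δq_s = Δq_f     (2)
    return dp_s, dq_s

# Example: the target moved 2 cm along x and yawed 5 degrees.
dp_s, dq_s = map_pose_change([0.02, 0.0, 0.0], [5.0, 0.0, 0.0])
```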
In the embodiments of the present disclosure, the change amount of the position and the change amount of the attitude of the object model can be determined from the change amounts of the position and the attitude of the target object, so that the dynamic effect of the object model moving along with the target object can be presented in the user display interface.
In one possible implementation, adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model in step S104 includes:
determining positions of model particles of the object model according to the change amount of the position of the object model; and
for each fluid particle in the fluid, performing the following steps:
acquiring the position of the fluid particle;
determining a model particle that collides with the fluid particle according to the positions of the model particles and the position of the fluid particle; and
adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle.
In practical applications, when the first object moves in the video along with the pose change of the target object, the fluid may collide with the first object. The terminal device may determine the changed position of the object model according to the change amount of its position, and determine the positions of the model particles of the object model from the changed position. Specifically, the object model corresponding to the first object may be exported as point cloud data through three-dimensional modeling software (for example, 3ds Max, Maya, etc.). The point cloud data is in a point cloud format (the suffix of a point cloud format file is .ply); each point cloud datum corresponds to one point, and each point corresponds to one model particle. Each point cloud datum may include the position of the point in the model and normal information, where the normal information may point toward the outside of the object model.
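As a sketch of this import step, assuming the exported .ply file exposes x, y, z vertex coordinates plus nx, ny, nz outward normals and using the third-party plyfile package, the model particles could be loaded as follows:

```python
import numpy as np
from plyfile import PlyData  # third-party reader for .ply point cloud files

def load_model_particles(path):
    """Read point cloud data exported from the modeling software: one model
    particle per point, with its position and outward-pointing normal."""
    vertex = PlyData.read(path)["vertex"]
    positions = np.column_stack([vertex["x"], vertex["y"], vertex["z"]])
    normals = np.column_stack([vertex["nx"], vertex["ny"], vertex["nz"]])
    return positions, normals
```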
When the first object moves, the fluid contained therein moves accordingly. The terminal device may simulate the motion of the fluid according to the magnitude of the external force acting on the fluid particles, estimate the post-motion positions of the fluid particles, and take the resulting estimated positions as the positions of the respective fluid particles of the fluid. Optionally, the terminal device may compute the estimated position of each fluid particle by means of Position Based Fluids (PBF) simulation.
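A full PBF solver is beyond the scope of this description, but the estimation step it relies on can be sketched as follows; gravity as the only external force and the fixed time step are assumptions for illustration, and the PBF density-constraint solve that would normally follow is omitted:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])  # assumed external force per unit mass

def estimate_positions(positions, velocities, dt=1.0 / 60.0):
    """Prediction step of a PBF-style solver: integrate the external force
    to obtain each fluid particle's estimated (pre-correction) position."""
    velocities = velocities + dt * GRAVITY
    estimated = positions + dt * velocities
    return estimated, velocities
```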
When the fluid moves along with the motion of the first object, some or all of the fluid particles may collide with model particles. For a fluid particle that collides with a model particle, the terminal device may adjust the position of the fluid particle according to the position of the model particle that collides with it, and take the adjusted position as the position at which the fluid particle is displayed in the user display interface after the motion, thereby dynamically displaying the motion change of the fluid on the user display interface.
When the fluid collides with the first object, model particles are distributed near the fluid particles. For each fluid particle, according to the estimated position of the fluid particle and the position of each model particle, it can be determined which model particles collide with the fluid particle and which do not.
For each fluid particle, the distances between the fluid particle and the model particles can be computed from the position of the fluid particle and the positions of the model particles, and the neighboring model particle of the fluid particle can be determined from these distances; the terminal device may take the model particle closest to the fluid particle as its neighboring model particle. Since the neighboring model particle is the model particle most likely to collide with the fluid particle, if the distance between the neighboring model particle and the fluid particle is less than a preset distance, the neighboring model particle is the model particle that collides with the fluid particle, and the terminal device thereby obtains the position of the model particle that collides with the fluid particle.
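One common way to find each fluid particle's nearest model particle and apply the preset-distance test is a k-d tree query; this is an implementation choice shown for illustration, not something mandated by the disclosure:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_colliding_pairs(fluid_positions, model_positions, preset_distance):
    """For each fluid particle, take the closest model particle as its
    neighboring model particle; a collision occurs when their distance is
    less than the preset distance."""
    tree = cKDTree(model_positions)
    dists, nearest = tree.query(fluid_positions, k=1)
    colliding = dists < preset_distance
    # Indices of the colliding fluid particles and the model particles they hit.
    return np.nonzero(colliding)[0], nearest[colliding]
```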
For a fluid particle that does not collide with any model particle, the terminal device estimates, from the particle's pre-motion position (for example, using the PBF algorithm), an estimated position, and takes the estimated position as the position at which the fluid particle is displayed in the user display interface after the motion. The motion change of these fluid particles displayed in the user display interface is therefore: moving from the pre-motion position to the estimated position.
Then, if the pose of the target object does not change further, the fluid continues to move under inertia. The terminal device may estimate, from the current positions of the fluid particles, their positions after moving under inertia, simulating the motion of the fluid particles and displaying it in the user display interface. Optionally, the terminal device may simulate the motion of the fluid particles under inertia by means of PBF.
In one possible implementation, for each fluid particle that collides with a model particle, adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle may include:
determining a position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle; and
adjusting, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
In practical applications, if some or all of the fluid particles collide with the object model, the collision changes the positions of these fluid particles; their positions are then no longer the estimated positions obtained by simulating the fluid motion, and need to be adjusted further. For each fluid particle that collides with a model particle, the position of the fluid particle and the position of the model particle that collides with it may both be position vectors in three-dimensional space. The terminal device computes the difference between these two position vectors, determines the position correction amount of the fluid particle from this difference, adjusts the position of the colliding fluid particle according to the position correction amount, takes the adjusted position as the post-collision position of the fluid particle, and displays in the user display interface the fluid particle moving from its pre-motion position to the adjusted position, thereby presenting the dynamic change effect of the fluid in the user display interface.
In one possible implementation, determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle may include:
acquiring normal information of the model particle that collides with the fluid particle;
acquiring a first weight corresponding to the normal information, and a second weight corresponding to a first distance between the fluid particle and the model particle that collides with the fluid particle; and
determining the position correction amount of the fluid particle based on the first distance, the normal information, the first weight, the second weight, and a preset distance r.
In practical applications, the terminal device exports the object model as point cloud data, where each point cloud datum corresponds to one model particle and includes the position of the model particle in the model and its normal information, which may point toward the outside of the object model. The terminal device may preconfigure the first weight and the second weight, where the first weight may be the weight corresponding to the normal information of the model particle that collides with the fluid particle, and the second weight may be the weight corresponding to the first distance between the fluid particle and the model particle that collides with it. The terminal device determines the position correction amount of the fluid particle based on the first distance between the fluid particle and the colliding model particle, the normal information, the first weight, the second weight, and the preset distance r.
In one embodiment, by modeling the first object as an object model and exporting the object model as point cloud data, the positions and normal information of the model particles in the model (which may also be called the model coordinate system) can be obtained. To compute the position correction amount of a fluid particle, the terminal device may apply a coordinate transformation to the positions and normal information of the model particles, transforming them into the coordinate system in which the position correction amount of the fluid particle is computed (which may also be called the fluid coordinate system). The model particles may be transformed through the following formulas (3)-(4):
P_ω = R P_m + T    (3)
n_ω = R n_m    (4)
where P_ω denotes the position of a model particle in the fluid coordinate system; P_m denotes the position of the model particle in the model coordinate system; n_ω denotes the normal vector of the model particle in the fluid coordinate system; n_m denotes the normal vector of the model particle in the model coordinate system; R denotes a rotation matrix; and T denotes a translation vector. R and T may be preconfigured as needed.
After the positions and normal information of the model particles are transformed into the fluid coordinate system, the position correction amount of the fluid particle is computed through the following formulas:
Δp = (r − ||d||) · abs(n_ω · d) · (−ω_1 n_ω + ω_2 d)    (5)
d = p − x    (6)
where Δp denotes the position correction amount to be computed; r denotes the preset distance; d denotes the difference between the position vectors, in three-dimensional space, of the fluid particle and of the model particle that collides with it; ||d|| denotes the distance, in three-dimensional space, between the fluid particle and the model particle that collides with it; p denotes the position vector of the fluid particle; x denotes the position vector of the model particle that collides with the fluid particle; n_ω denotes the normal vector, in the fluid coordinate system, of the model particle that collides with the fluid particle; ω_1 denotes the first weight; and ω_2 denotes the second weight. By using the above formulas (5) and (6), the position correction amount Δp of a fluid particle that collides with the first object can be obtained.
After the position correction amount of the fluid particle that collides with the model particle is obtained, the position of that fluid particle is adjusted using the position correction amount through the following formula:
p_(t+1) = p_t + Δp    (7)
where p_t denotes the position of the fluid particle before the adjustment (for example, the estimated position computed by the PBF method); Δp denotes the position correction amount; p_(t+1) denotes the position of the fluid particle after the adjustment; t denotes the time corresponding to the position before the adjustment; and t+1 denotes the time corresponding to the position after the adjustment.
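Putting formulas (3) through (7) together, a minimal NumPy sketch of the correction step might read as follows; R, T, the weights ω_1 and ω_2, and the preset distance r are assumed to be preconfigured values, and the sign in formula (5) follows the reconstruction given above:

```python
import numpy as np

def transform_model_particles(P_m, n_m, R, T):
    """Formulas (3)-(4): move model-particle positions and normals from the
    model coordinate system into the fluid coordinate system. P_m and n_m are
    (N, 3) arrays; R is a 3x3 rotation matrix and T a translation vector."""
    P_w = P_m @ R.T + T  # P_ω = R P_m + T   (3)
    n_w = n_m @ R.T      # n_ω = R n_m       (4)
    return P_w, n_w

def position_correction(p, x, n_w, r, w1, w2):
    """Formulas (5)-(6): correction amount for a fluid particle at p that
    collides with the model particle at x whose outward normal is n_w."""
    d = p - x  # d = p − x   (6)
    return (r - np.linalg.norm(d)) * abs(np.dot(n_w, d)) * (-w1 * n_w + w2 * d)  # (5)

def correct_position(p_t, dp):
    """Formula (7): position of the fluid particle after the adjustment."""
    return p_t + dp  # p_(t+1) = p_t + Δp   (7)
```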
Based on the technical solution of the present disclosure described above, the solution is illustrated below with a specific embodiment. This specific embodiment and its content are only intended to illustrate one possible implementation of the solution of the present disclosure, and do not represent all of its implementations.
As shown in FIG. 2, step S201 is performed: when the terminal device detects a camera start instruction on the user display interface, it starts the camera to capture a video (the camera-captured picture shown in the figure);
step S202 is performed: detect the pose change of the target object in the video; if the target object is a human face, detect the pose change of the face ("face detection" shown in the figure);
step S203 is performed: when the face appears in the user display interface for the first time ("first appearance" shown in the figure), determine the initial display position of the first object in the user display interface according to the display position at which the face is first detected, and display the fluid in the first object (for example, pour the fluid into the first object);
step S204 is performed: when the face moves, acquire the change of the pose of the face, and determine the change amount of the pose of the first object according to the change amount of the pose of the face and the pose mapping relationship ("compute the change amount of the pose of the first object" shown in the figure). Since the motion of the first object also drives the fluid inside it, the terminal device may adjust the position of the fluid in the first object according to the change amount of the pose of the first object, and dynamically display the motion change of the fluid on the user display interface ("drive the fluid in the first object to move together" shown in the figure);
step S205 is performed: when the face is still, the terminal device may determine the post-motion positions of the fluid by means of PBF, simulating the state of the fluid continuing to move under inertia ("the fluid continues to flow under inertia" shown in the figure).
In the above steps, the motion of the face, the motion of the first object, and the motion of the fluid are all displayed in the user display interface; in step S206 shown in the figure, the terminal device outputs the image to the screen.
The interactive dynamic fluid effect processing method provided by the embodiments of the present disclosure may include: capturing a video, and detecting a pose change of a target object in the video; acquiring a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface; determining a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and adjusting a position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically displaying the motion change of the fluid on the user display interface. With this technical solution, the motion of the object and the fluid in the user display interface is controlled by the pose change of the target object in the captured video, providing a novel and entertaining interaction that can improve the user experience.
Based on the same principle as the method shown in FIG. 1, an embodiment of the present disclosure further provides an interactive dynamic fluid effect processing apparatus 30. As shown in FIG. 3, the interactive dynamic fluid effect processing apparatus 30 may include:
a capture module 31 configured to capture a video and detect a pose change of a target object in the video;
an acquisition module 32 configured to acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
a determination module 33 configured to determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
an adjustment module 34 configured to adjust a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
In one possible implementation, the apparatus 30 further includes a receiving module configured to:
receive a display trigger operation from a user for the user display interface;
display the user display interface, and start a video capture apparatus to capture a video;
detect a target object in the video, and acquire a position of the detected target object in the user display interface;
determine an initial display position of the first object in the user display interface according to the position of the target object in the user display interface; and
display the first object in the user display interface according to the initial display position.
In one possible implementation, the capture module 31 is specifically configured to:
detect a change amount of the pose of the target object in the video;
the determination module 33 is specifically configured to:
determine a change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship; and
the adjustment module 34 is specifically configured to:
adjust the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model.
In one possible implementation, when detecting the change amount of the pose of the target object in the video, the capture module 31 is configured to:
detect a change amount of the position and a change amount of the attitude of the target object in the video;
wherein the pose mapping relationship includes a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
In one possible implementation, when determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship, the determination module 33 is configured to:
determine the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and
determine the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
In one possible implementation, when adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model, the adjustment module 34 is configured to:
determine positions of model particles of the object model according to the change amount of the position of the object model; and
for each fluid particle in the fluid, perform the following method:
acquire the position of the fluid particle;
determine a model particle that collides with the fluid particle according to the positions of the model particles and the position of the fluid particle; and
adjust the position of the fluid particle according to the position of the model particle that collides with the fluid particle.
In one possible implementation, for each fluid particle that collides with a model particle, the adjustment module 34 is specifically configured to:
determine a position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle; and
adjust, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
In one possible implementation, when determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle, the adjustment module 34 is configured to:
acquire normal information of the model particle that collides with the fluid particle;
acquire a first weight corresponding to the normal information, and a second weight corresponding to a first distance between the fluid particle and the model particle that collides with the fluid particle; and
determine the position correction amount of the fluid particle based on the first distance, the normal information, the first weight, the second weight, and a preset distance.
The interactive dynamic fluid effect processing apparatus of the embodiments of the present disclosure can perform the interactive dynamic fluid effect processing method provided by the embodiments of the present disclosure, and its implementation principles are similar. The actions performed by the modules of the interactive dynamic fluid effect processing apparatus in the embodiments of the present disclosure correspond to the steps of the interactive dynamic fluid effect processing method in the embodiments of the present disclosure. For detailed functional descriptions of the modules of the apparatus, reference may be made to the descriptions of the corresponding method above, which will not be repeated here.
The interactive dynamic fluid effect processing apparatus provided by the embodiments of the present disclosure can capture a video and detect a pose change of a target object in the video; acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface; determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and adjust a position of the fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface. With this technical solution, the motion of the object and the fluid in the user display interface is controlled by the pose change of the target object in the captured video, providing a novel and entertaining interaction that can improve the user experience.
Referring now to FIG. 4, a schematic structural diagram of an electronic device 400 suitable for implementing embodiments of the present disclosure is shown. The execution subject of the technical solutions of the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (e.g., vehicle navigation terminals), and wearable electronic devices, as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 4 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present disclosure.
The electronic device includes a memory and a processor, where the memory is configured to store a program for performing the methods described in the above method embodiments, and the processor is configured to execute the program stored in the memory to implement the functions of the embodiments of the present disclosure described above and/or other desired functions. The processor here may be referred to as the processing apparatus 401 described below, and the memory may include at least one of a read-only memory (ROM) 402, a random access memory (RAM) 403, and a storage apparatus 408 described below, specifically as follows:
As shown in FIG. 4, the electronic device 400 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the electronic device 400. The processing apparatus 401, the ROM 402, and the RAM 403 are connected to one another via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: input apparatuses 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output apparatuses 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage apparatuses 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 409. The communication apparatus 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 4 shows the electronic device 400 with various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods described in the above embodiments. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 409, installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed networks.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: capture a video, and detect a pose change of a target object in the video; acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface; determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and adjust a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a standalone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a module or unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an interactive dynamic fluid effect processing method, the method comprising:
capturing a video, and detecting a pose change of a target object in the video;
acquiring a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
determining a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
adjusting a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically displaying the motion change of the fluid on the user display interface.
In one possible implementation, the method further comprises:
receiving a display trigger operation from a user for the user display interface;
displaying the user display interface, and starting a video capture apparatus to capture a video;
detecting a target object in the video, and acquiring a position of the detected target object in the user display interface;
determining an initial display position of the first object in the user display interface according to the position of the target object in the user display interface; and
displaying the first object in the user display interface according to the initial display position.
In one possible implementation, detecting the pose change of the target object in the video comprises:
detecting a change amount of the pose of the target object in the video;
determining the pose change of the object model corresponding to the first object according to the pose change of the target object and the pose mapping relationship comprises:
determining a change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship; and
adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model comprises:
adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model.
In one possible implementation, detecting the change amount of the pose of the target object in the video comprises:
detecting a change amount of the position and a change amount of the attitude of the target object in the video;
wherein the pose mapping relationship comprises a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
In one possible implementation, determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship comprises:
determining the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and
determining the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
In one possible implementation, adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model comprises:
determining positions of model particles of the object model according to the change amount of the position of the object model; and
for each fluid particle in the fluid, performing the following method:
acquiring the position of the fluid particle;
determining a model particle that collides with the fluid particle according to the positions of the model particles and the position of the fluid particle; and
adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle.
In one possible implementation, for each fluid particle that collides with a model particle, adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle comprises:
determining a position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle; and
adjusting, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
In one possible implementation, determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle comprises:
acquiring normal information of the model particle that collides with the fluid particle;
acquiring a first weight corresponding to the normal information, and a second weight corresponding to a first distance between the fluid particle and the model particle that collides with the fluid particle; and
determining the position correction amount of the fluid particle based on the first distance, the normal information, the first weight, the second weight, and a preset distance.
According to one or more embodiments of the present disclosure, the present disclosure provides an interactive dynamic fluid effect processing apparatus, the apparatus comprising:
a capture module configured to capture a video and detect a pose change of a target object in the video;
an acquisition module configured to acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
a determination module configured to determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
an adjustment module configured to adjust a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
In one possible implementation, the apparatus further comprises a receiving module configured to:
receive a display trigger operation from a user for the user display interface;
display the user display interface, and start a video capture apparatus to capture a video;
detect a target object in the video, and acquire a position of the detected target object in the user display interface;
determine an initial display position of the first object in the user display interface according to the position of the target object in the user display interface; and
display the first object in the user display interface according to the initial display position.
In one possible implementation, the capture module is specifically configured to:
detect a change amount of the pose of the target object in the video;
the determination module is specifically configured to:
determine a change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship; and
the adjustment module is specifically configured to:
adjust the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model.
In one possible implementation, when detecting the change amount of the pose of the target object in the video, the capture module is configured to:
detect a change amount of the position and a change amount of the attitude of the target object in the video;
wherein the pose mapping relationship comprises a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
In one possible implementation, when determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship, the determination module is configured to:
determine the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and
determine the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
In one possible implementation, when adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model, the adjustment module is configured to:
determine positions of model particles of the object model according to the change amount of the position of the object model; and
for each fluid particle in the fluid, perform the following method:
acquire the position of the fluid particle;
determine a model particle that collides with the fluid particle according to the positions of the model particles and the position of the fluid particle; and
adjust the position of the fluid particle according to the position of the model particle that collides with the fluid particle.
In one possible implementation, for each fluid particle that collides with a model particle, the adjustment module is specifically configured to:
determine a position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle; and
adjust, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
In one possible implementation, when determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle, the adjustment module is configured to:
acquire normal information of the model particle that collides with the fluid particle;
acquire a first weight corresponding to the normal information, and a second weight corresponding to a first distance between the fluid particle and the model particle that collides with the fluid particle; and
determine the position correction amount of the fluid particle based on the first distance, the normal information, the first weight, the second weight, and a preset distance.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device, comprising:
one or more processors; and
a memory storing one or more application programs, wherein when the one or more application programs are executed by the one or more processors, the electronic device is caused to perform the interactive dynamic fluid effect processing method described above.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable medium storing computer instructions that, when executed by a computer, cause the computer to perform the interactive dynamic fluid effect processing method described above.
The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.
In addition, although the operations are depicted in a specific order, this should not be understood as requiring these operations to be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (11)

  1. An interactive dynamic fluid effect processing method, characterized in that the method comprises:
    capturing a video, and detecting a pose change of a target object in the video;
    acquiring a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
    determining a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
    adjusting a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically displaying the motion change of the fluid on the user display interface.
  2. The interactive dynamic fluid effect processing method according to claim 1, characterized in that the method further comprises:
    receiving a display trigger operation from a user for the user display interface;
    displaying the user display interface, and starting a video capture apparatus to capture a video;
    detecting a target object in the video, and acquiring a position of the detected target object in the user display interface;
    determining an initial display position of the first object in the user display interface according to the position of the target object in the user display interface; and
    displaying the first object in the user display interface according to the initial display position.
  3. The interactive dynamic fluid effect processing method according to claim 1, characterized in that detecting the pose change of the target object in the video comprises:
    detecting a change amount of the pose of the target object in the video;
    determining the pose change of the object model corresponding to the first object according to the pose change of the target object and the pose mapping relationship comprises:
    determining a change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship; and
    adjusting the position of the fluid displayed in the user display interface according to the pose change of the object model comprises:
    adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model.
  4. The interactive dynamic fluid effect processing method according to claim 3, characterized in that detecting the change amount of the pose of the target object in the video comprises:
    detecting a change amount of the position and a change amount of the attitude of the target object in the video;
    wherein the pose mapping relationship comprises a first mapping relationship between the change amount of the position of the target object and the change amount of the position of the object model, and a second mapping relationship between the change amount of the attitude of the target object and the change amount of the attitude of the object model.
  5. The interactive dynamic fluid effect processing method according to claim 4, characterized in that determining the change amount of the pose of the object model according to the change amount of the pose of the target object and the pose mapping relationship comprises:
    determining the change amount of the position of the object model according to the change amount of the position of the target object and the first mapping relationship; and
    determining the change amount of the attitude of the object model according to the change amount of the attitude of the target object and the second mapping relationship.
  6. The interactive dynamic fluid effect processing method according to claim 4, characterized in that adjusting the position of the fluid displayed in the user display interface according to the change amount of the pose of the object model comprises:
    determining positions of model particles of the object model according to the change amount of the position of the object model; and
    for each fluid particle in the fluid, performing the following method:
    acquiring the position of the fluid particle;
    determining a model particle that collides with the fluid particle according to the positions of the model particles and the position of the fluid particle; and
    adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle.
  7. The interactive dynamic fluid effect processing method according to claim 6, characterized in that, for each fluid particle that collides with a model particle, adjusting the position of the fluid particle according to the position of the model particle that collides with the fluid particle comprises:
    determining a position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle; and
    adjusting, according to the position correction amount, the position of the fluid particle that collides with the model particle, so as to dynamically display the motion change of the fluid on the user display interface.
  8. The interactive dynamic fluid effect processing method according to claim 7, characterized in that determining the position correction amount of the fluid particle according to the position of the fluid particle and the position of the model particle that collides with the fluid particle comprises:
    acquiring normal information of the model particle that collides with the fluid particle;
    acquiring a first weight corresponding to the normal information, and a second weight corresponding to a first distance between the fluid particle and the model particle that collides with the fluid particle; and
    determining the position correction amount of the fluid particle based on the first distance, the normal information, the first weight, the second weight, and a preset distance.
  9. An interactive dynamic fluid effect processing apparatus, characterized in that the apparatus comprises:
    a capture module configured to capture a video and detect a pose change of a target object in the video;
    an acquisition module configured to acquire a pose mapping relationship between the target object and an object model corresponding to a first object displayed in a user display interface;
    a determination module configured to determine a pose change of the object model according to the pose change of the target object and the pose mapping relationship; and
    an adjustment module configured to adjust a position of a fluid displayed in the user display interface according to the pose change of the object model, and dynamically display the motion change of the fluid on the user display interface.
  10. An electronic device, characterized by comprising:
    one or more processors; and
    a memory storing one or more application programs, wherein when the one or more application programs are executed by the one or more processors, the electronic device is caused to perform the interactive dynamic fluid effect processing method according to any one of claims 1 to 8.
  11. A computer-readable medium, characterized in that the computer-readable medium stores computer instructions that, when executed by a computer, cause the computer to perform the interactive dynamic fluid effect processing method according to any one of claims 1 to 8.
PCT/CN2021/111608 2020-08-10 2021-08-09 Interactive dynamic fluid effect processing method and apparatus, and electronic device WO2022033445A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/041,003 US20230368422A1 (en) 2020-08-10 2021-08-09 Interactive dynamic fluid effect processing method and device, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010796950.5A CN114116081B (zh) 2020-08-10 Interactive dynamic fluid effect processing method and apparatus, and electronic device
CN202010796950.5 2020-08-10

Publications (1)

Publication Number Publication Date
WO2022033445A1 true WO2022033445A1 (zh) 2022-02-17

Family

ID=80246953

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/111608 WO2022033445A1 (zh) 2020-08-10 2021-08-09 交互式动态流体效果处理方法、装置及电子设备

Country Status (3)

Country Link
US (1) US20230368422A1 (zh)
CN (1) CN114116081B (zh)
WO (1) WO2022033445A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937964B (zh) * 2022-06-27 2023-12-15 Beijing Zitiao Network Technology Co., Ltd. Pose estimation method, apparatus, device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070002386A1 (en) * 2005-06-29 2007-01-04 Ryunosuke Iijima Image capturing apparatus, control method thereof, program, and storage medium
CN104539795A (zh) * 2014-12-26 2015-04-22 Xiaomi Inc. Method and apparatus for displaying data traffic, and mobile terminal

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150077398A1 (en) * 2013-06-27 2015-03-19 Tactus Technology, Inc. Method for interacting with a dynamic tactile interface
US8479107B2 (en) * 2009-12-31 2013-07-02 Nokia Corporation Method and apparatus for fluid graphical user interface
CN107329671B (zh) * 2017-07-05 2020-06-30 Beijing Jingdong Shangke Information Technology Co., Ltd. Model display method and apparatus
CN108564600B (zh) * 2018-04-19 2019-12-24 Beijing HJIMI Technology Co., Ltd. Moving object pose tracking method and apparatus
CN108830928A (zh) * 2018-06-28 2018-11-16 Beijing Bytedance Network Technology Co., Ltd. Three-dimensional model mapping method and apparatus, terminal device, and readable storage medium
US10592087B1 (en) * 2018-10-22 2020-03-17 Typetura Llc System and method for creating fluid design keyframes on graphical user interface
CN109803165A (zh) * 2019-02-01 2019-05-24 Beijing Dajia Internet Information Technology Co., Ltd. Video processing method and apparatus, terminal, and storage medium
CN109885163A (zh) * 2019-02-18 2019-06-14 Guangzhou Zhuoyuan Virtual Reality Technology Co., Ltd. Virtual reality multi-user interactive collaboration method and system
CN110930487A (zh) * 2019-11-29 2020-03-27 Zhuhai Baoqu Technology Co., Ltd. Animation implementation method and apparatus
CN111107280B (zh) * 2019-12-12 2022-09-06 Beijing Bytedance Network Technology Co., Ltd. Special effect processing method and apparatus, electronic device, and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070002386A1 (en) * 2005-06-29 2007-01-04 Ryunosuke Iijima Image capturing apparatus, control method thereof, program, and storage medium
CN104539795A (zh) * 2014-12-26 2015-04-22 Xiaomi Inc. Method and apparatus for displaying data traffic, and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU MU, ZHU YOU: "How to shoot the Douyin Submarine Challenge", BAIDU, 12 March 2020 (2020-03-12), pages 1 - 3, XP055900392, Retrieved from the Internet <URL:https://jingyan.baidu.com/article/d3b74d648a534a5e77e609ea.html> [retrieved on 20220311] *

Also Published As

Publication number Publication date
US20230368422A1 (en) 2023-11-16
CN114116081B (zh) 2023-10-27
CN114116081A (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
US9779508B2 (en) Real-time three-dimensional reconstruction of a scene from a single camera
CN111738220A (zh) Three-dimensional human pose estimation method, apparatus, device, and medium
US20220148279A1 (en) Virtual object processing method and apparatus, and storage medium and electronic device
US20140009384A1 (en) Methods and systems for determining location of handheld device within 3d environment
WO2018233623A1 (zh) Image display method and apparatus
WO2022088928A1 (zh) Elastic object rendering method and apparatus, device, and storage medium
WO2022007627A1 (zh) Image special effect implementation method and apparatus, electronic device, and storage medium
US11776209B2 (en) Image processing method and apparatus, electronic device, and storage medium
US20140354779A1 (en) Electronic device for collaboration photographing and method of controlling the same
WO2022206335A1 (zh) Image display method and apparatus, device, and medium
WO2023151524A1 (zh) Image display method and apparatus, electronic device, and storage medium
WO2023051340A1 (zh) Animation display method, apparatus, and device
WO2019007372A1 (zh) Model display method and apparatus
CN113289327A (zh) Display control method and apparatus for a mobile terminal, storage medium, and electronic device
WO2022033444A1 (zh) Dynamic fluid effect processing method and apparatus, electronic device, and readable medium
WO2022033445A1 (zh) Interactive dynamic fluid effect processing method and apparatus, and electronic device
JP2022531186A (ja) Information processing method, apparatus, electronic device, storage medium, and program
WO2022048428A1 (zh) Target object control method and apparatus, electronic device, and storage medium
WO2023174087A1 (zh) Special effect video generation method and apparatus, device, and storage medium
WO2023151558A1 (zh) Method and apparatus for displaying images, and electronic device
WO2024016924A1 (zh) Video processing method and apparatus, electronic device, and storage medium
WO2022227918A1 (zh) Video processing method, device, and electronic device
WO2023279939A1 (zh) User handheld device with haptic interaction function, and haptic interaction method and apparatus
WO2022057576A1 (zh) Face image display method and apparatus, electronic device, and storage medium
US20230334801A1 (en) Facial model reconstruction method and apparatus, and medium and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21855497

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21855497

Country of ref document: EP

Kind code of ref document: A1