WO2023143120A1 - Material display method, apparatus, electronic device, storage medium, and program product - Google Patents

Material display method, apparatus, electronic device, storage medium, and program product

Info

Publication number
WO2023143120A1
WO2023143120A1 (PCT/CN2023/072057)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
special effect
trajectory
unit special
Prior art date
Application number
PCT/CN2023/072057
Other languages
English (en)
French (fr)
Inventor
梁雅涵
马佳欣
易安安
蒋俊
张晓旭
陈旭
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023143120A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present disclosure relates to the technical field of the Internet, and in particular to a material display method, device, electronic equipment, storage medium, and program product.
  • the application program can provide the user with special effects for the user to use and display.
  • a material display method including:
  • generating a target element according to the multiple target unit special effect images, and displaying the target element in the captured video image, wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces.
  • said obtaining a plurality of target unit special effect images according to said trajectory images includes:
  • Copying and mirroring are performed on the unit special effect images to obtain the plurality of target unit special effect images.
  • the superimposition of the trajectory image and the material image corresponding to the target element to obtain the unit special effect image includes:
  • the trajectory image and the mirror image are respectively superimposed on different regions of the material image corresponding to the target element to obtain the unit special effect image.
  • the obtaining the trajectory image according to the position information of the target object in the captured video image includes:
  • Drawing is performed according to the continuously captured video images until the drawing of the trajectory is completed, and the trajectory image is obtained.
  • the method also includes:
  • the size of the long side of the rectangular area is proportional to the length of the connecting line, and the long side of the rectangular area is a side parallel to the connecting line.
  • the blurring is Gaussian blurring.
  • the plurality of target unit special effect images in the tiled state are respectively moved along corresponding paths, so that the plurality of target unit special effect images are sequentially pasted onto the corresponding geometric surfaces to obtain the target material.
  • the displaying the target element in the captured video image includes:
  • the target element is rotated and displayed according to the rotation axis corresponding to the three-dimensional geometric structure as the rotation center.
  • the target element is a lantern.
  • the displaying the target element in the captured video image includes: displaying the target element in a subsequently captured video image.
  • the method further includes: adding, to the video images captured later, at least one of a foreground image, a filter, text, or a sticker that matches the target element.
  • a material display device including:
  • an image acquisition module configured to acquire captured video images;
  • a trajectory generating module configured to obtain a trajectory image according to the position information of the target recognition object in the captured video image;
  • a material generating module configured to obtain multiple target unit special effect images according to the trajectory image; and generate target elements according to the multiple target unit special effect images;
  • a display module configured to display the target element in the captured video image
  • the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces.
  • an electronic device including: a memory and a processor;
  • the memory is configured to store computer program instructions
  • the processor is configured to execute the computer program instructions, so that the electronic device implements the material display method according to any of the foregoing embodiments.
  • a readable storage medium including: computer program instructions
  • the computer program instructions are executed by at least one processor of the electronic device, so that the electronic device implements the material display method according to any of the foregoing embodiments.
  • a computer program product is provided, and when the computer program product is executed by a computer, the computer implements the material display method according to any of the foregoing embodiments.
  • a computer program including: instructions, and when the instructions are executed by a processor, the material display method according to any of the foregoing embodiments is implemented.
  • FIG. 1 is a schematic flowchart of a material display method provided by an embodiment of the present disclosure
  • FIG. 2A to FIG. 2F are special effect display renderings provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of the effect of a trajectory pattern provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a material display device provided by an embodiment of the present disclosure.
  • Fig. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the material display method provided in the present disclosure can be realized by the material display device provided in the present disclosure, and the material display device can be realized by any software and/or hardware.
  • the material display device may be an electronic device such as a tablet computer, a mobile phone (such as a foldable-screen phone or a large-screen phone), a wearable device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the present disclosure places no restriction on the specific type of electronic device.
  • an electronic device is taken as an example, and the material display method provided by the present disclosure is described in detail in combination with drawings and application scenarios.
  • Fig. 1 is a schematic flowchart of a material display method provided by some embodiments of the present disclosure. Referring to FIG. 1 , the method provided in this embodiment includes: steps S101-S104.
  • in step S101, a captured video image is acquired, and a trajectory image is obtained according to the position information of the target recognition object in the captured video image.
  • the captured video images are images within the corresponding image acquisition range that are currently captured or photographed by the electronic device through an image acquisition module (such as a camera).
  • the disclosure does not limit parameters such as the size and definition of the captured video images.
  • Each captured video image may include a target identification object.
  • the present disclosure does not limit the target recognition object.
  • the target recognition object may be a certain part of the human body, such as a finger, a nose in a human face, an eye, a mouth, and the like.
  • the electronic device obtains the captured video images, and can perform image recognition on each video image, determine whether each video image includes a target recognition object, and obtain position information of the target recognition object in each video image.
  • This disclosure does not limit the implementation of image recognition for electronic devices.
  • an image recognition model can be used in electronic devices to perform image recognition on each video image captured.
  • the image recognition model may be, but is not limited to, a neural network model, a convolutional network model, or another type of model.
  • the electronic device may also perform image recognition on each video image obtained by shooting in other ways, so as to obtain the position information of the target identification object in each video image.
  • the image collection module of the electronic device collects an image of a human face
  • the target recognition object is the nose in the face
  • after the electronic device obtains the captured video images, it can draw a circular material on the pre-built canvas according to the position information of the key points corresponding to the nose of the user's face in each video image, and then use a rectangular material to connect the newly drawn circular material with the circular material drawn based on the previous video image; the rectangular material and the circular materials may overlap. Since the video images captured by the electronic device are usually captured continuously, as the number of captured video images keeps increasing, more and more circular and rectangular materials are drawn in real time, dynamically presenting the trajectory image generated by following the user's movements.
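As a rough illustration of this drawing loop, the following minimal sketch (in Python, with PIL as one convenient rasterizer) stamps a circle at each tracked nose position on a persistent canvas and joins consecutive positions with a thick segment, which plays the role of the rectangular material; `detect_nose` is a hypothetical placeholder for whatever keypoint detector the device actually uses, and the fixed radius is an illustrative simplification (the text below lets it vary with movement speed).

```python
from PIL import Image, ImageDraw

def detect_nose(frame):
    """Hypothetical stand-in for the device's keypoint model: returns the
    nose-tip position (x, y) in canvas coordinates, or None if no face."""
    raise NotImplementedError

CANVAS_SIZE = (1024, 1024)               # related to the size of a geometric face
canvas = Image.new("L", CANVAS_SIZE, 0)  # black canvas, white strokes
draw = ImageDraw.Draw(canvas)
prev = None                              # nose position in the previous frame

def stamp(frame, radius=8):
    """Draw one circular material for this frame, plus a rectangular material
    (a segment of width 2*radius) connecting it to the previous circle.
    The canvas is never cleared, so the trajectory accumulates frame by frame."""
    global prev
    pos = detect_nose(frame)
    if pos is None:
        return
    x, y = pos
    draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill=255)
    if prev is not None:
        draw.line([prev, pos], fill=255, width=2 * radius)
    prev = pos
```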
  • the position information of the target recognition object in each video image can be represented by the coordinates of the target recognition object in that video image, and the corresponding coordinate system can be established according to actual needs; for example, a two-dimensional coordinate system can be established with the lower left corner of each captured video image as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis.
  • a two-dimensional coordinate system can also be established in other ways.
  • the canvas can be understood as the basis for carrying the trajectory image.
  • the present disclosure does not limit the size and color of the canvas and other parameters.
  • the size of the canvas can be related to the size of each surface of the target object included in the target 3D special effect, and the color of the canvas can be a preset color, for example, any color such as white, gray, or black.
  • in the initial state, the size (such as the diameter or radius) of the circular material can be a preset value; afterwards, the size of the circular material can change according to the movement speed of the nose in the face.
  • the initial state mentioned here refers to the state of the circular material drawn based on the first captured video image.
  • the colors of the circular material and the rectangular material are different from the color of the canvas, so that the drawn trajectory can be clearly presented to the user.
  • the color of the canvas is black, and the color of the circular material and the rectangular material can be white.
  • when drawing the trajectory image, the trajectory can be presented through a specific camera.
  • the specific camera mentioned here is a virtual camera, not a real physical camera; before starting to draw the trajectory image, the camera's ClearType is set to Amaz.CameraClearType.Done, so that the picture of the trajectory image drawn in the previous frame is not cleared, thus showing the effect of a trajectory drawn by a brush.
  • ClearType indicates the clearing mode corresponding to the virtual camera
  • Amaz.CameraClearType.Done indicates that the clear mode of the virtual camera is set to not clear, that is, the content of the previously drawn trajectory image is not cleared, so that the drawn trajectory image can be presented to the user.
  • the target element is an element included in the special effect to be realized
  • the body of the target element may be a solid geometric structure, such as a cone structure, a cylinder structure, a prism structure, a spherical structure including multiple arc surfaces, and the like.
  • the three-dimensional geometric structure usually includes multiple geometric surfaces, and the special effect image of the target unit is an image that needs to be displayed on each geometric surface included in the main body of the target element.
  • the size, shape and other parameters of multiple target unit special effect images may be the same or different.
  • the size and shape parameters of multiple target unit special effect images are related to the main structure of the target element and determined according to specific circumstances.
  • the material image corresponding to the target element can be superimposed on the track image to obtain a unit special effect image, and then the electronic device can copy and mirror the unit special effect image to obtain multiple target unit special effect images.
  • the overlay processing mentioned here refers to replacing the background of the trajectory image with the material image corresponding to the target element.
  • the target element is a lantern
  • the material image corresponding to the target element is the initial material image used to attach to each geometric surface of the lantern body
  • the initial material image is the material image that is not superimposed with the trajectory image drawn by the user.
  • copy processing refers to a processing method of generating the same image based on the unit special effect image
  • mirror image processing refers to a processing method of generating an image with an axisymmetric structure relative to a specific axis based on the unit special effect image.
  • the times of copying and mirroring can be determined according to the number of geometric faces of the solid geometric structure of the target object.
  • the electronic device can perform mirroring processing on the trajectory image to obtain a mirror image corresponding to the trajectory image, and then superimpose the trajectory image and the mirror image corresponding to the trajectory image on different regions of the material image to obtain a unit special effect image .
  • the two regions of the material image onto which the trajectory image and its mirror image are respectively superimposed can be in a mirrored state; in other words, the two regions are axisymmetric with respect to a specific axis (such as a horizontal axis).
  • the shapes and sizes of the multiple geometric surfaces of the main structure of the target element are not exactly the same, and the electronic device can pre-store the material images corresponding to the multiple geometric surfaces corresponding to the target element. After obtaining the trajectory image , the electronic device can respectively superimpose the trajectory image and multiple material images, so as to obtain multiple target unit special effect images.
  • the size of the trajectory image may not be consistent with the size of the material image, so the size of the trajectory image can be adjusted to the same size as the material image, so that the trajectory image and the material image are aligned.
  • the size of the trajectory image is 720*1280
  • the size of the material image is 1024*1024.
  • the size of the trajectory image can be adjusted to 1024*1024, and then the adjusted trajectory image and the material image are superimposed.
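A minimal compositing sketch of this resize-and-superimpose step, assuming (as in the lantern example below) that the trajectory is a grayscale stroke mask rendered as white strokes, that the drawn strokes go to the lower half of the material image, and that their vertical mirror goes to the upper half:

```python
from PIL import Image, ImageOps

def make_unit_effect(trajectory: Image.Image, material: Image.Image) -> Image.Image:
    """Superimpose the trajectory and its mirror image onto different regions
    of the material image, e.g. a 720*1280 trajectory onto a 1024*1024 material."""
    traj = trajectory.convert("L").resize(material.size)  # align the two sizes
    w, h = material.size
    bottom = traj.crop((0, h // 2, w, h))                 # strokes drawn by the user
    top = ImageOps.flip(bottom)                           # mirror about the horizontal axis
    out = material.convert("RGBA")
    white = Image.new("RGBA", (w, h // 2), (255, 255, 255, 255))
    out.paste(white, (0, 0), top)                         # mirrored strokes, upper region
    out.paste(white, (0, h // 2), bottom)                 # original strokes, lower region
    return out
```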
  • a target element is generated according to a plurality of target unit special effect images, wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, to display the multiple target unit special effect images on the multiple geometric surfaces.
  • the main body of the target element is a solid geometric structure, and the solid geometric structure may generally include multiple geometric surfaces.
  • the target unit special effect image can be understood as an image displayed on multiple geometric surfaces included in the main body of the target element.
  • the process of generating the target element based on multiple target unit special effect images can be understood as rotating, moving, scaling, or otherwise transforming the target unit special effect images so that each one fits the position and size of the corresponding geometric surface in the main structure of the target element, allowing the target unit special effect image to be displayed attached to that geometric surface of the three-dimensional geometric structure.
  • the process of merging and generating target elements can be presented in an animation manner.
  • the target element is a lantern
  • it can be realized using a preset lantern model. Specifically, bones are created following the patch frame structure of the lantern model, the corresponding skin is bound, and the lantern model is then split and pushed back step by step to the single-patch state; the animation is then played in reverse to present the process of merging into the target element.
  • each patch in the lantern model corresponds to a special effect image of the target unit.
  • in step S104, the target element is displayed in the captured video image.
  • after the target element is generated by the above method, the target element can be displayed in the captured video image using a preset display method.
  • the present disclosure does not limit the preset display method.
  • for example, the display method may be one or more of rotating, moving up and down, swinging left and right, swinging back and forth, and so on, and a specific display method can be set according to actual needs.
  • the preset display method can take the rotation axis corresponding to the three-dimensional geometric structure of the target element as the center of rotation and rotate the target element in the captured video image; during the rotating display, the rotation axis can also swing left and right.
  • the target element is a lantern
  • the lantern can be displayed superimposed on the top of the captured video image, and the central axis of the main body of the lantern is used as the rotation axis to rotate and display the various faces of the lantern.
  • the lantern can also swing left and right during the rotation.
  • the display position of the lantern can be set according to requirements; for example, the lantern can be displayed in an area close to the top of the display screen of the electronic device, so as to ensure that the lantern does not block the user's face in the captured video image while the lantern is displayed.
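One simple reading of this rotate-and-swing display is a steady yaw about the lantern's central axis combined with a sinusoidal left-right tilt of that axis; the sketch below computes both per frame, with illustrative rates and amplitude that the text does not fix:

```python
import math

def lantern_pose(t: float, yaw_speed: float = 90.0,
                 swing_amplitude: float = 10.0, swing_period: float = 2.0):
    """Return (yaw_deg, tilt_deg) at time t (seconds): a constant rotation
    about the vertical central axis while the axis itself swings left and right."""
    yaw = (yaw_speed * t) % 360.0                                        # spin around the axis
    tilt = swing_amplitude * math.sin(2.0 * math.pi * t / swing_period)  # left-right swing
    return yaw, tilt
```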
  • the foreground image, filter, text, texture, etc. that match the target element may be pre-configured.
  • the video image used to identify the target recognition object to generate the trajectory image and the video image superimposed to display the target element are different video images.
  • the electronic device recognizes the target recognition object based on the captured video image in real time during the shooting of the video segment 1, generates a trajectory image, and then generates the target element based on the trajectory image; after that, the electronic device captures the video segment 2 in real time, And display the target element in the video image included in video clip 2 according to the set animation method, video clip 1 and video clip 2 can be two video clips shot continuously, and the time of video clip 1 is earlier than the time of video clip 2 .
  • This solution draws a trajectory based on the user's actions and uses the drawn trajectory as one of the materials for generating the target element, so that users can participate in the design of the target element, which helps improve the user's interactive experience; in addition, the main body of the target element in the special effect generated by the above solution is a three-dimensional structure, which can enhance the visual expressiveness of the special effect.
  • taking the target three-dimensional special effect being the lantern special effect (that is, the target element being a lantern) and the target recognition object being the nose tip of a human face as an example, the following illustrates the process of drawing the trajectory image, generating the lantern special effect, and displaying the lantern special effect.
  • the application 1 may display the user interface 21 shown in FIG. 2A on the mobile phone, and display prompt information in the user interface 21 to prompt the user to start drawing a lantern.
  • it can be realized in one or more ways such as text, animation, sound, etc., and the present disclosure does not limit the display parameters such as font, size, and color of text.
  • the prompt text content "Draw a lantern" is displayed, and when the prompt text content is displayed, a lantern animation can be displayed.
  • the display duration of the prompt text content may be preset, for example, 1 second, 2 seconds and so on.
  • application 1 can recognize the nose tip of user 1's face and display a paintbrush at user 1's nose tip; for example, in the embodiment shown in FIG. 2B, a paintbrush is displayed at the tip of the nose, and the present disclosure does not limit the style of the paintbrush.
  • the main body of the lantern is spherical
  • the central axis of the spherical shape along the vertical direction is the rotation axis
  • the corresponding arc surface every 45 degrees around the rotation axis is a geometric surface.
  • the lantern patch corresponding to the curved surface can also be superimposed and displayed on the top of the captured video image, and the lantern patch is the material image corresponding to the lantern special effect.
  • application 1 can display the lantern patch, so that the user can genuinely experience the scene of drawing on the lantern patch, improving the user experience.
  • application 1 can also display part of the lantern patch, such as the lower half of the lantern patch (of course, it can also display the upper half of the lantern patch).
  • the undisplayed part of the lantern patch can be covered and displayed by means of a mask, as shown in the embodiment of FIG. 2B.
  • the lower half of the lantern patch displayed on the user interface 22 can also be understood as an effective drawing area of the trajectory image.
  • application 1 can mirror the drawn trajectory image to obtain the mirror image corresponding to the trajectory image, and superimpose the mirror image onto the upper half of the lantern patch (or the lower half of the lantern patch).
  • the pattern in the target lantern patch obtained in this way has a vertically symmetrical effect, which can provide users with rich visual effects and enhance the expressiveness of the lantern special effect.
  • the application 1 can display a complete lantern patch, and the user can draw a trajectory image in the area corresponding to the entire lantern patch. This method is simple and can be realized quickly by electronic devices.
  • the complete lantern patch displayed by Application 1 is the effective drawing area of the trajectory image.
  • the application 1 may not show the lantern patch to the user, but instead show the user a trajectory drawing area, which may have nothing to do with the shape of the lantern patch.
  • application 1 can display a complete lantern patch, and the user can draw in the area corresponding to the entire lantern patch, while the electronic device records only the trajectory drawn within the lower half (or upper half) of the lantern patch.
  • the remaining duration of drawing the trajectory can also be displayed in the user interface through a progress bar.
  • a progress bar may be displayed on the area near the top of the user interface 22 . The progress bar can try not to block the face area of user 1 and the lower half area of the lantern patch in the captured video image, so as to ensure the user's experience of drawing a lantern pattern.
  • the user 1 can move the nose tip by moving the face area, and the application 1 follows the movement of the nose tip of the user 1 and displays the drawn trajectory image on the mobile phone.
  • the trajectory image drawn by user 1 is as shown in Figure 2C.
  • the drawing duration can be set in advance, and the present disclosure does not limit the duration, for example, the drawing duration can be 5s, 6s, 7s, etc., which can be set according to the actual situation.
  • the lower half of the lantern picture is the effective drawing area.
  • the tip of the nose may move out of the lower half of the lantern picture, that is, out of the effective drawing area.
  • application 1 does not need to record trajectories outside the effective drawing area; it records only the trajectory drawn within the effective drawing area.
  • application 1 obtains the captured first frame of video image, recognizes and obtains the position of user 1's nose tip in the first frame of video image, and then draws a circular material s1 on the pre-built canvas.
  • the position of the circular material s1 on the canvas is determined according to the position of the tip of user 1's nose in the first frame of video image, and the diameter of the circular material s1 can be a preset value.
  • after application 1 obtains the second video frame, it recognizes the position of user 1's nose tip in the second frame, and then draws the circular material s2 and the rectangular material r1 on the canvas, with r1 connecting the circular materials s1 and s2.
  • the diameter of the circular material s2 can be determined according to the length of the connecting line between the center of the circular material s1 and the center of the circular material s2. If the length of the connecting line is longer, the thickness of the drawn track will be thinner; if the length of the connecting line is shorter, the thickness of the drawn track will be thicker.
  • by controlling the brush diameter (that is, the diameter of the circular material) according to how long the nose tip lingers at a position, the thickness of the brush stroke varies, coming closer to the stroke effect of actual painting.
  • suppose len represents the distance between the positions of user 1's nose tip in two adjacent video frames; Y and basescale respectively represent the first and second parameters related to the size of the circular material; and spotX represents the size of the circular material. Since the side of the rectangular material perpendicular to the drawing direction has the same size as the circular material, determining the size of the circular material also determines the size of the rectangular material.
  • the size of the circular material can be determined in the following manner.
  • the size of the circular material (such as diameter or radius) can be obtained by multiplying spotX with the size of the preset circular material.
  • the parameters a1 to a4 can be understood, for example, as scaling factors corresponding to the circular material.
  • to prevent abrupt changes in the thickness of the drawn trajectory and achieve a slow, soft, natural thickness variation, the present disclosure uses the preset value A2 as the judgment condition.
  • when Y exceeds the preset value A2, the size of the circular material increases at a rate of 0.02 (i.e., a1), with Y as the upper limit; when Y is less than or equal to A2, the size of the circular material decreases at a rate of 0.01 (i.e., a2), with Y as the lower limit.
  • application 1 can present the drawn trajectory through a specific camera; therefore, when the third video frame is captured and the circular material s3 and the rectangular material r2 are drawn according to it, the previously drawn circular materials s1 and s2 and rectangular material r1 can all be retained rather than cleared, thereby presenting the drawing process to the user.
  • as application 1 obtains more and more captured video images, more and more circular and rectangular materials can be drawn on the canvas in the above manner, so as to obtain the drawn trajectory image.
  • application 1 can generate corresponding textures based on the canvas and the circular and rectangular materials drawn on the canvas, and perform blur processing on the entire generated texture to achieve a luminous effect. It should be noted that after drawing a new circular material and rectangular material on the canvas based on the newly obtained video image, a new texture needs to be generated and blurred according to the new texture to realize the glow effect in real time.
  • the present disclosure does not limit the specific manner of blurring; it may be, for example but not limited to, Gaussian blur, mean blur, median blur, and so on.
  • the transparency of the track area (that is, all circular material areas and rectangular material areas) in the track image can also be set to a preset transparency value, for example, 50%, 60% and so on.
  • the present disclosure does not limit the preset transparency value.
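Under the same white-strokes-on-black assumption, the glow pass can be sketched as follows: the stroke texture is Gaussian-blurred into a halo, combined with the sharp strokes, and given the preset transparency as its alpha channel (OpenCV's GaussianBlur is one common choice; the sigma is illustrative):

```python
import cv2
import numpy as np

def glow(stroke_mask: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """stroke_mask: uint8 HxW canvas, white strokes on black.
    Returns an HxWx4 RGBA texture: a blurred halo plus semi-transparent strokes."""
    halo = cv2.GaussianBlur(stroke_mask, (0, 0), sigmaX=9)  # soft glow around the strokes
    lum = np.maximum(stroke_mask, halo)                     # keep the strokes themselves sharp
    a = (lum.astype(np.float32) * alpha).astype(np.uint8)   # preset transparency, e.g. 60%
    return np.dstack([lum, lum, lum, a])
```

The texture would be regenerated and re-blurred each time a new circular or rectangular material is drawn, as noted above.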
  • application 1 can mirror the trajectory image to obtain the mirror image corresponding to the trajectory image, superimpose the trajectory image onto the lower half of the lantern patch, and superimpose the mirror image onto the upper half of the lantern patch, to obtain a target lantern patch.
  • the application 1 can display the target lantern patch on the mobile phone, that is, jump from the user interface 23 shown in FIG. 2C to the user interface 24 shown in FIG. 2D .
  • the application 1 can display the target lantern patch in the upper area of the user interface 24 , and of course the target lantern patch can also be displayed in other positions.
  • the application 1 can display the target lantern patch in a preset manner, for example, the target lantern patch can be displayed in a gradually shrinking manner.
  • eight target lantern patches can be arranged horizontally in the area near the top of the user interface 25 .
  • due to the variety of lantern body shapes, the number of target lantern patches may be relatively large; owing to the limited display screen size of the mobile phone, only some of the target lantern patches may be displayed in a horizontal arrangement near the top of the user interface 25. For example, in the user interface 25 shown in FIG. 2E, five target lantern patches are displayed.
  • the merging parameters corresponding to the target lantern patch may include: a path parameter corresponding to the target lantern patch in three-dimensional space, a scaling size of the target lantern patch, and other parameters.
  • the three-dimensional space referred to here can be a three-dimensional coordinate system established based on the lantern body, and the path parameters corresponding to a target lantern patch can include the coordinate values of each pixel of the target lantern patch in the three-dimensional coordinate system, wherein the path can consist of multiple discrete points, and for each point on the path, each pixel in the target lantern patch can correspond to a set of coordinate values.
  • the application 1 can switch through a preset transition mode, and jump from the user interface 25 shown in FIG. 2E to the user interface 26 shown in FIG. 2F , thereby displaying the lantern special effect.
  • the central axis of the spherical body of the lantern can be used as the center of rotation, and the images on each curved surface of the lantern can be rotated and displayed.
  • pre-set filters such as glowing special effects, etc. can also be added to enhance the visual expression of the lantern special effect.
  • the duration of the rotating display can be preset.
  • the duration of the rotating display can be set to 3 seconds, 4 seconds, etc., and when the preset rotating display duration is reached, the shooting of the video ends.
  • the central axis of the main body of the lantern can also move, for example, move left and right in parallel, swing left and right, move back and forth, and so on.
  • foreground images, filters, text, stickers, etc. that match the special effects of the lantern can also be pre-set.
  • the elements included in the foreground image can be distributed mainly around the edges of the foreground image, so as to avoid blocking user 1's face in the captured video image as much as possible, and to avoid blocking the generated lantern special effect. For example, in the embodiment shown in FIG. 2F, a foreground image with lantern elements is superimposed on top of the captured video image, and the text "鸿运当头" ("great fortune is upon you") is displayed in the area near the bottom, wherein the foreground image, text, stickers, and so on can have animation effects.
  • Fig. 4 is a schematic structural diagram of a material display device provided by an embodiment of the present disclosure.
  • the material display device 400 provided in this embodiment includes: an image acquisition module 401 , a track generation module 402 , a material generation module 403 , and a display module 404 .
  • the image acquiring module 401 is configured to acquire captured video images.
  • the trajectory generating module 402 is configured to obtain a trajectory image according to the position information of the target recognition object in the captured video image.
  • the material generation module 403 is configured to obtain a plurality of target unit special effect images according to the trajectory image, and generate target elements according to the plurality of target unit special effect images.
  • a display module 404 configured to display the target element in the captured video image.
  • the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the special effect images of the multiple target units are pasted on the corresponding geometric surfaces, so as to display the multiple target units on the multiple geometric surfaces special effects image.
  • the material generation module 403 is specifically configured to superimpose the trajectory image and the material image corresponding to the target element to obtain a unit special effect image; perform copy processing and mirroring processing on the unit special effect image to obtain the Multiple target unit special effect images.
  • the material generation module 403 is specifically configured to perform mirroring processing on the trajectory image to obtain a mirror image corresponding to the trajectory image; and correspond the trajectory image and the mirror image to the target element respectively Different regions of the material image are superimposed to obtain the unit special effect image.
  • the trajectory generation module 402 is configured to: draw two circular areas on a pre-built canvas according to the positions of the target recognition object in two consecutively captured video images; draw a rectangular area on the canvas according to the line connecting the positions of the target recognition object in the two consecutive video images, wherein the width of the rectangular area equals the diameter of the circular areas, the width sides of the rectangular area are perpendicular to the connecting line, and the midpoint of each width side coincides with the center of the corresponding circular area; and continue drawing according to the continuously captured video images until the trajectory drawing ends, to obtain the trajectory image.
  • the trajectory generation module 402 is further configured to perform blurring processing on the trajectory image, so that the trajectory in the trajectory image has a luminous effect.
  • the size of the long side of the rectangular area is proportional to the length of the connecting line, and the long side of the rectangular area is the side parallel to the connecting line.
  • the blurring is Gaussian blurring.
  • the material generation module 403 is specifically configured to move the multiple target unit special effect images in the tiled state along corresponding paths, so that the multiple target unit special effect images are attached to the corresponding geometric surfaces, to obtain the target material.
  • the display module 404 is specifically configured to rotate and display the target element in the captured video image according to the rotation axis corresponding to the three-dimensional geometric structure as the rotation center.
  • the display module 404 is specifically configured to display the target element in a video image captured later.
  • the presentation module 404 is further configured to add at least one item of a foreground image, filter, text, and texture that matches the target element to the video image captured later.
  • the target element is a lantern.
  • the special effect including the target element is a lantern special effect.
  • the material display device provided in this embodiment can be used to implement the technical solutions of any of the foregoing method embodiments, and its implementation principles and technical effects are similar, and reference can be made to the detailed description of the foregoing method embodiments. For the sake of brevity, details are not repeated here.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • an electronic device 500 provided in this embodiment includes: a memory 501 and a processor 502 .
  • the memory 501 may be an independent physical unit, and may be connected with the processor 502 through the bus 503 .
  • the memory 501 and the processor 502 may also be integrated together, implemented by hardware, and the like.
  • the memory 501 is used to store program instructions, and the processor 502 invokes the program instructions to execute the operations of any one of the above method embodiments.
  • the foregoing electronic device 500 may also include only the processor 502 .
  • the memory 501 for storing programs is located outside the electronic device 500, and the processor 502 is connected to the memory through circuits/wires, and is used to read and execute the programs stored in the memory.
  • the processor 502 may be a central processing unit (central processing unit, CPU), a network processor (network processor, NP) or a combination of CPU and NP.
  • the processor 502 may further include a hardware chip.
  • the aforementioned hardware chip may be an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD) or a combination thereof.
  • the above-mentioned PLD can be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
  • the memory 501 may include a volatile memory, such as a random-access memory (RAM); the memory may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory.
  • An embodiment of the present disclosure also provides a readable storage medium, including: computer program instructions; when the computer program instructions are executed by at least one processor of the electronic device, the material display method shown in any of the above method embodiments is implemented.
  • An embodiment of the present disclosure further provides a computer program product, the program product includes a computer program, the computer program is stored in a readable storage medium, and at least one processor of the electronic device can read from the readable storage medium The computer program is read, and the at least one processor executes the computer program so that the electronic device implements the material display method as shown in any one of the above method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to a material display method, apparatus, electronic device, storage medium, and program product. The method obtains captured video images and derives a trajectory image based on the position of a target recognition object in the video images; the trajectory image is superimposed with the material image corresponding to a target element to obtain multiple target unit special effect images; the multiple target unit special effect images are merged to obtain the target element, wherein the main body of the target three-dimensional special effect is a three-dimensional geometric structure including multiple surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces of the three-dimensional geometric structure.

Description

Material display method, apparatus, electronic device, storage medium, and program product
CROSS-REFERENCE TO RELATED APPLICATION
This application is based on and claims priority to CN application No. 202210089981.6, filed on January 25, 2022, the disclosure of which is incorporated into this application by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates to the field of Internet technology, and in particular to a material display method, apparatus, electronic device, storage medium, and program product.
BACKGROUND
With the continuous development of Internet technology, human-computer interaction between smart terminals and users has become increasingly mature. Many applications involve human-computer interaction, and human-computer interaction in entertainment scenarios can bring users even more enjoyment.
In existing human-computer interaction scenarios, an application can provide special effects for the user to use and display.
SUMMARY
According to some embodiments of the present disclosure, a material display method is provided, including:
acquiring captured video images, and obtaining a trajectory image according to position information of a target recognition object in the captured video images;
obtaining multiple target unit special effect images according to the trajectory image; and
generating a target element according to the multiple target unit special effect images, and displaying the target element in the captured video images, wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces.
In some embodiments, obtaining multiple target unit special effect images according to the trajectory image includes:
superimposing the trajectory image with a material image corresponding to the target element to obtain a unit special effect image; and
copying and mirroring the unit special effect image to obtain the multiple target unit special effect images.
In some embodiments, superimposing the trajectory image with the material image corresponding to the target element to obtain a unit special effect image includes:
mirroring the trajectory image to obtain a mirror image corresponding to the trajectory image; and
superimposing the trajectory image and the mirror image respectively onto different regions of the material image corresponding to the target element to obtain the unit special effect image.
In some embodiments, obtaining a trajectory image according to the position information of the target recognition object in the captured video images includes:
drawing, on a pre-built canvas, two circular regions according to the positions of the target recognition object in two consecutively captured video images;
drawing a rectangular region on the canvas according to the line connecting the positions of the target recognition object in the two consecutively captured video images, wherein the width of the rectangular region is equal to the diameter of the circular regions, the width sides of the rectangular region are perpendicular to the connecting line, and the midpoint of each width side coincides with the center of the corresponding circular region; and
drawing according to the continuously captured video images until the trajectory drawing ends, to obtain the trajectory image.
In some embodiments, the method further includes:
blurring the trajectory image so that the trajectory in the trajectory image has a glow effect.
In some embodiments, the length of the long side of the rectangular region is proportional to the length of the connecting line, and the long side of the rectangular region is the side parallel to the connecting line.
In some embodiments, the blurring is Gaussian blurring.
In some embodiments, the multiple target unit special effect images in a tiled state are moved along corresponding paths, so that the multiple target unit special effect images are pasted in sequence onto the corresponding geometric surfaces to obtain the target material.
In some embodiments, displaying the target element in the captured video images includes:
in the captured video images, rotating and displaying the target element about the rotation axis corresponding to the three-dimensional geometric structure as the center of rotation.
In some embodiments, the target element is a lantern.
In some embodiments, displaying the target element in the captured video images includes: displaying the target element in subsequently captured video images.
In some embodiments, the method further includes: adding, to the subsequently captured video images, at least one of a foreground image, a filter, text, or a sticker that matches the target element.
According to other embodiments of the present disclosure, a material display apparatus is provided, including:
an image acquisition module configured to acquire captured video images;
a trajectory generation module configured to obtain a trajectory image according to position information of a target recognition object in the captured video images;
a material generation module configured to obtain multiple target unit special effect images according to the trajectory image, and to generate a target element according to the multiple target unit special effect images; and
a display module configured to display the target element in the captured video images;
wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces.
According to further embodiments of the present disclosure, an electronic device is provided, including: a memory and a processor;
the memory is configured to store computer program instructions; and
the processor is configured to execute the computer program instructions, so that the electronic device implements the material display method according to any of the foregoing embodiments.
According to still further embodiments of the present disclosure, a readable storage medium is provided, including: computer program instructions;
the computer program instructions, when executed by at least one processor of an electronic device, cause the electronic device to implement the material display method according to any of the foregoing embodiments.
According to yet other embodiments of the present disclosure, a computer program product is provided; when the computer program product is executed by a computer, the computer implements the material display method according to any of the foregoing embodiments.
According to still other embodiments of the present disclosure, a computer program is provided, including instructions that, when executed by a processor, implement the material display method according to any of the foregoing embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a material display method provided by an embodiment of the present disclosure;
FIG. 2A to FIG. 2F are renderings of a special effect display provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a trajectory pattern provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a material display apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
DETAILED DESCRIPTION
To make the above objects, features, and advantages of the present disclosure clearer, the solutions of the present disclosure are further described below. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in ways other than those described here; obviously, the embodiments in this specification are only a part, rather than all, of the embodiments of the present disclosure.
The material display method provided in the present disclosure can be implemented by the material display apparatus provided in the present disclosure, and the material display apparatus can be implemented by any software and/or hardware. In some embodiments, the material display apparatus may be an electronic device such as a tablet computer, a mobile phone (such as a foldable-screen phone or a large-screen phone), a wearable device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the present disclosure places no restriction on the specific type of the electronic device.
In the following embodiments, taking an electronic device as an example, the material display method provided by the present disclosure is described in detail with reference to the drawings and application scenarios.
FIG. 1 is a schematic flowchart of a material display method provided by some embodiments of the present disclosure. Referring to FIG. 1, the method provided in this embodiment includes steps S101 to S104.
In step S101, captured video images are acquired, and a trajectory image is obtained according to position information of a target recognition object in the captured video images.
The captured video images are images within the corresponding image acquisition range that are currently collected or captured by the electronic device through an image acquisition module (such as a camera). The present disclosure places no limitation on parameters such as the size and resolution of the captured video images.
Each captured video image may include a target recognition object. The present disclosure places no limitation on the target recognition object; for example, the target recognition object may be a part of the human body, such as a finger, or the nose, eyes, or mouth of a human face.
The electronic device obtains the captured video images and can perform image recognition on each video image, determine whether each video image includes the target recognition object, and obtain the position information of the target recognition object in each video image. The present disclosure places no limitation on how the electronic device implements image recognition; for example, the electronic device may use an image recognition model to perform image recognition on each captured video image, and the image recognition model may be, but is not limited to, a neural network model, a convolutional network model, or another type of model.
Of course, the electronic device may also perform image recognition on each captured video image in other ways to obtain the position information of the target recognition object in each video image.
In the following example, it is assumed that the user's face is within the image acquisition range corresponding to the image acquisition module of the electronic device, the image acquisition module collects images of the human face, and the target recognition object is the nose of the face; the example illustrates how the electronic device draws according to the position information of the target recognition object in each video image to obtain the trajectory image.
After the electronic device obtains the captured video images, it can draw a circular material on a pre-built canvas according to the position information of the key points corresponding to the nose of the user's face in each video image, and then use a rectangular material to connect the newly drawn circular material with the circular material drawn based on the previous video image; the rectangular material and the circular materials may overlap. Since the video images captured by the electronic device are usually captured continuously, as the number of captured video images keeps increasing, more and more circular and rectangular materials are drawn in real time, dynamically presenting the trajectory image generated by following the user's movements.
The position information of the target recognition object in each video image can be represented by the coordinates of the target recognition object in that video image, and the corresponding coordinate system can be established according to actual needs; for example, a two-dimensional coordinate system can be established with the lower left corner of each captured video image as the origin, the horizontal direction as the horizontal axis, and the vertical direction as the vertical axis; of course, a two-dimensional coordinate system can also be established in other ways.
The canvas can be understood as the base that carries the trajectory image. The present disclosure places no limitation on parameters such as the size and color of the canvas; for example, the size of the canvas can be related to the size of each surface of the target object included in the target three-dimensional special effect, and the color of the canvas can be a preset color, for example, any color such as white, gray, or black.
Circular regions (also called circular materials) and rectangular regions (also called rectangular materials) are drawn on the canvas. In the initial state, the size of the circular material (such as its diameter or radius) can be a preset value; afterwards, the size of the circular material can change according to the movement speed of the nose in the face. The initial state here refers to the state of the circular material drawn based on the first captured video image. When drawing circular and rectangular materials on the canvas, their colors differ from the color of the canvas, so that the drawn trajectory can be clearly presented to the user; for example, if the canvas is black, the circular and rectangular materials can be white.
In some embodiments, when drawing the trajectory image, the trajectory can be presented through a specific camera. It should be noted that the specific camera mentioned here is a virtual camera, not a real physical camera. Before starting to draw the trajectory image, the camera's ClearType is set to Amaz.CameraClearType.Done, so that the picture of the trajectory image drawn in the previous frame is not cleared, thereby showing the effect of a trajectory drawn by a brush. Here, ClearType indicates the clear mode corresponding to the virtual camera, and Amaz.CameraClearType.Done indicates that the clear mode of the virtual camera is set to not clear, that is, the content of the previously drawn trajectory image is not cleared, so that the drawn trajectory image can be presented to the user.
In step S102, multiple target unit special effect images are obtained according to the trajectory image.
In some embodiments, the target element is an element included in the special effect to be realized, and the main body of the target element may be a three-dimensional geometric structure, such as a cone, a cylinder, a prism, or a spherical structure including multiple arc surfaces. A three-dimensional geometric structure usually includes multiple geometric surfaces, and the target unit special effect images are the images to be pasted onto and displayed on the geometric surfaces included in the main body of the target element.
Parameters such as the sizes and shapes of the multiple target unit special effect images may be the same or different; these parameters are related to the main structure of the target element and are determined according to the specific situation.
In some embodiments, the material image corresponding to the target element can be superimposed with the trajectory image to obtain a unit special effect image; the electronic device can then copy and mirror the unit special effect image to obtain multiple target unit special effect images.
The superimposition mentioned here means replacing the background of the trajectory image with the material image corresponding to the target element. Assuming the target element is a lantern, the material image corresponding to the target element is the initial material image to be pasted onto each geometric surface of the lantern body, that is, the material image not yet superimposed with the trajectory image drawn by the user.
Copying refers to generating an identical image based on the unit special effect image; mirroring refers to generating an image that is axisymmetric with respect to a specific axis based on the unit special effect image. The numbers of copy and mirror operations can be determined according to the number of geometric surfaces of the three-dimensional geometric structure of the target object.
In some embodiments, the electronic device can mirror the trajectory image to obtain a mirror image corresponding to the trajectory image, and then superimpose the trajectory image and the mirror image respectively onto different regions of the material image, thereby obtaining the unit special effect image. The two regions of the material image onto which the trajectory image and its mirror image are respectively superimposed can be in a mirrored state; in other words, the two regions are axisymmetric with respect to a specific axis (such as a horizontal axis).
In other embodiments, the shapes and sizes of the multiple geometric surfaces of the main structure of the target element are not exactly the same; the electronic device can pre-store the material images corresponding to the multiple geometric surfaces of the target element, and after obtaining the trajectory image, superimpose the trajectory image with the multiple material images respectively, thereby obtaining multiple target unit special effect images.
It should be noted that when superimposing the trajectory image with the material image, the size of the trajectory image may not match the size of the material image; in that case, the trajectory image can be resized to the size of the material image so that the two are aligned. For example, if the size of the trajectory image is 720*1280 and the size of the material image is 1024*1024, the trajectory image can be resized to 1024*1024 and then superimposed with the material image.
In step S103, a target element is generated according to the multiple target unit special effect images, wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces.
As described in step S102, the main body of the target element is a three-dimensional geometric structure, which can usually include multiple geometric surfaces. The target unit special effect images can be understood as the images displayed on the multiple geometric surfaces included in the main body of the target element.
The process of generating the target element based on the multiple target unit special effect images can be understood as rotating, moving, scaling, or otherwise transforming the target unit special effect images so that each one fits the position and size of the corresponding geometric surface in the main structure of the target element, allowing the target unit special effect images to be displayed attached to the geometric surfaces of the three-dimensional geometric structure.
In addition, the process of merging to generate the target element can be presented as an animation. For example, when the target element is a lantern, this can be realized using a preset lantern model: specifically, bones are created following the patch frame structure of the lantern model, the corresponding skin is bound, and the lantern model is then split and pushed back step by step to the single-patch state; the animation is then played in reverse to present the process of merging into the target element. Each patch in the lantern model corresponds to one target unit special effect image.
In step S104, the target element is displayed in the captured video images.
After the target element is generated in the above manner, it can be displayed in the captured video images using a preset display method. The present disclosure places no limitation on the preset display method; for example, it may be one or more of rotating, moving up and down, swinging left and right, swinging back and forth, and so on, and a specific display method can be set according to actual needs.
In some embodiments, the preset display method can take the rotation axis corresponding to the three-dimensional geometric structure of the target element as the center of rotation and rotate the target element in the captured video images; during the rotating display, the rotation axis can also swing left and right. For example, when the target element is a lantern, the lantern can be displayed superimposed on top of the captured video images, with the central axis of the lantern body as the rotation axis, rotating to show the various faces of the lantern; in addition, the lantern can also swing left and right during the rotation. The display position of the lantern can be set according to requirements; for example, the lantern can be displayed in an area close to the top of the display screen of the electronic device, so as to ensure that the lantern does not block the user's face in the captured video images while it is displayed.
In addition, while displaying the target element in the captured video images, a foreground image matching the target element can be added to the captured video images, as well as filters, text, stickers, and so on matching the target element. The foreground image, filters, text, stickers, etc. matching the target element can be pre-configured.
It should be noted that the video images used to recognize the target recognition object and generate the trajectory image are different from the video images on which the target element is superimposed and displayed. In other words, the electronic device recognizes the target recognition object based on the captured video images in real time while shooting video clip 1, generates the trajectory image, and then generates the target element based on the trajectory image; afterwards, the electronic device shoots video clip 2 in real time and displays the target element in the video images included in video clip 2 according to the set animation method. Video clip 1 and video clip 2 can be two continuously shot video clips, with video clip 1 earlier in time than video clip 2.
In the method provided by the above embodiments, captured video images are acquired, a trajectory image is obtained based on the position of the target recognition object in each video image, and the trajectory image is superimposed with the material image corresponding to the target element to obtain multiple target unit special effect images; the multiple target unit special effect images are merged to obtain the target element, wherein the main body of the target element is a three-dimensional geometric structure including multiple geometric surfaces, and the multiple target unit special effect images are respectively pasted onto the corresponding geometric surfaces, so as to display the multiple target unit special effect images on the multiple geometric surfaces of the three-dimensional geometric structure. This solution draws a trajectory based on the user's actions and uses the drawn trajectory as one of the materials for generating the target element, letting the user participate in the design of the target element, which helps improve the user's interactive experience; in addition, the main body of the target element in the special effect generated by this solution is a three-dimensional structure, which can enhance the visual expressiveness of the special effect.
Based on the description of the embodiment shown in FIG. 1, and taking as an example an electronic device that is a mobile phone with a video editing application (hereinafter referred to as application 1) installed, the material display method provided by the present disclosure is introduced with reference to the scenarios shown in FIG. 2A to FIG. 2F. In the embodiments shown in FIG. 2A to FIG. 2F, the target three-dimensional special effect is the lantern special effect (that is, the target element is a lantern) and the target recognition object is the nose tip of a human face; the example illustrates the process of drawing the trajectory image, generating the lantern special effect, and displaying the lantern special effect.
After the user triggers the icon corresponding to the lantern special effect, application 1 can, by way of example, display on the mobile phone the user interface 21 shown in FIG. 2A, and display prompt information in the user interface 21 to prompt the user to start drawing a lantern. The prompt information can be realized, for example, in one or more ways such as text, animation, and sound, and the present disclosure places no limitation on display parameters such as the font, size, and color of the text. In some embodiments, as shown in FIG. 2A, the prompt text "画灯笼喽" ("let's draw a lantern") is displayed in the area near the top of the user interface 21, and a lantern animation can be shown while the prompt text is displayed. In addition, the display duration of the prompt text can be preset, for example, 1 second, 2 seconds, and so on.
When the display duration of the prompt text reaches the preset duration, application 1 can recognize the nose tip of user 1's face and display a paintbrush at user 1's nose tip; for example, in the embodiment shown in FIG. 2B, a paintbrush is displayed at the tip of the nose, and the present disclosure places no limitation on the style of the paintbrush.
In this embodiment, it is assumed that the main body of the lantern is spherical, the vertical central axis of the sphere is the rotation axis, and each arc surface corresponding to every 45 degrees around the rotation axis is one geometric surface. In addition, while the paintbrush is displayed, a lantern patch corresponding to the arc surface can also be superimposed and displayed on top of the captured video image; the lantern patch is the material image corresponding to the lantern special effect.
In this embodiment, since each 45-degree arc surface is one geometric surface, the eight geometric surfaces around the central axis of the sphere have the same shape and size; therefore, application 1 can display the lantern patch, so that the user can genuinely experience the scene of drawing on the lantern patch, improving the user experience.
In some possible cases, application 1 can also display part of the lantern patch, such as the lower half of the lantern patch (of course, the upper half can also be displayed); when implemented this way, the undisplayed part of the lantern patch can be covered by means of a mask, as shown in the embodiment of FIG. 2B. In the embodiment shown in FIG. 2B, the lower half of the lantern patch shown in the user interface 22 can also be understood as the effective drawing area of the trajectory image.
With this implementation, after the user draws a trajectory pattern in the lower half of the lantern patch, application 1 can mirror the drawn trajectory image to obtain the mirror image corresponding to the trajectory image, and superimpose the mirror image onto the upper half of the lantern patch (or the lower half of the lantern patch). The pattern in the target lantern patch obtained in this way has a vertically symmetrical effect, which can provide the user with rich visual effects and enhance the expressiveness of the lantern special effect.
In some embodiments, application 1 can display the complete lantern patch, and the user can draw the trajectory image in the area corresponding to the entire lantern patch; this method is simple and can be implemented quickly by the electronic device. In this case, the complete lantern patch displayed by application 1 is the effective drawing area of the trajectory image.
In some embodiments, application 1 may not show the lantern patch to the user, but instead show the user a trajectory drawing area, which may be unrelated to the shape of the lantern patch.
In some embodiments, application 1 can display the complete lantern patch, the user can draw in the area corresponding to the entire lantern patch, and the electronic device can record only the trajectory drawn within the lower half (or upper half) of the lantern patch.
In addition, in some cases, the duration of trajectory drawing can be limited; to let the user know the remaining drawing time, the remaining duration can be shown in the user interface through a progress bar. For example, still referring to FIG. 2B, a progress bar can be displayed in the area near the top of the user interface 22. The progress bar can, as far as possible, avoid blocking user 1's face and the lower half of the lantern patch in the captured video image, to preserve the user's experience of drawing the lantern pattern.
Afterwards, user 1 can move the nose tip by moving the face, and application 1 follows the movement of user 1's nose tip and displays the drawn trajectory image on the mobile phone. In some embodiments, the trajectory image drawn by user 1 is as shown in FIG. 2C.
In some possible cases, the drawing duration can be preset; the present disclosure places no limitation on this duration, which can be, for example, 5 s, 6 s, or 7 s, set according to the actual situation.
In connection with the embodiment shown in FIG. 2B, the lower half of the lantern patch is the effective drawing area; while user 1 moves the face, the nose tip may move out of the lower half of the lantern patch, that is, out of the effective drawing area. Application 1 may not record trajectories outside the effective drawing area and only record the trajectory drawn within the effective drawing area.
The following describes how the trajectory is drawn, through the embodiment shown in FIG. 3.
Referring to the embodiment shown in FIG. 3, suppose application 1 obtains the first captured video frame and recognizes the position of user 1's nose tip in the first frame; it then draws a circular material s1 on the pre-built canvas, where the position of s1 on the canvas is determined according to the position of user 1's nose tip in the first frame, and the diameter of s1 can be a preset value. After application 1 obtains the second video frame, it recognizes the position of user 1's nose tip in the second frame, and then draws a circular material s2 and a rectangular material r1 on the canvas, with r1 connecting s1 and s2. The diameter of s2 can be determined according to the length of the line connecting the center of s1 and the center of s2: the longer the connecting line, the thinner the drawn trajectory; the shorter the connecting line, the thicker the drawn trajectory.
By controlling the brush diameter (that is, the diameter of the circular material) according to how long the nose tip lingers at a position, the thickness of the brush stroke varies, coming closer to the stroke effect of actual painting.
Suppose len represents the distance between the positions of user 1's nose tip in two adjacent video frames; Y and basescale respectively represent a first parameter and a second parameter related to the size of the circular material; and spotX represents the size of the circular material. Since the side of the rectangular material perpendicular to the drawing direction has the same size as the circular material, determining the size of the circular material also determines the size of the rectangular material.
In some embodiments, the size of the circular material can be determined in the following manner.
Step a: substitute the distance len between the positions of user 1's nose tip in two adjacent video frames into the function Y = A1 - len (so that Y decreases as len increases) to obtain the value of the first parameter Y, where A1 is a preset constant, for example, A1 = 0.04.
If Y is greater than the preset value A2, the value of the second parameter basescale is determined through step b; if Y is less than or equal to the preset value A2, the value of basescale is determined through step c. For example, A2 = 0.01.
Step b: determine the value of the second parameter basescale according to the formula basescale = min(basescale + a1*Y, Y).
Step c: determine the value of the second parameter basescale according to the formula basescale = max(basescale - a2*Y, Y).
Here, a1 and a2 are preset constants, for example, a1 = 0.02 and a2 = 0.01; the values of a1 and a2 mainly affect how the thickness of the drawn trajectory changes, so they can be set according to the actual situation.
Step d: determine the value of spotX according to the formula spotX = max(a3, min(basescale, a4)).
Here, a3 and a4 are the preset minimum and maximum values of the parameter related to the size of the circular material, for example, a3 = 0.003 and a4 = 0.04.
After spotX is determined, it can be multiplied by the preset size of the circular material to obtain the size of the circular material (such as its diameter or radius).
In steps a to d above, the parameters a1 to a4 can be understood, for example, as scaling factors corresponding to the circular material.
In connection with the formulas shown in steps a to d, in order to prevent abrupt changes in the thickness of the drawn trajectory and achieve a slow, soft, natural variation, the present disclosure uses the preset value A2 as the judgment condition: when Y exceeds A2, the size of the circular material increases at a rate of 0.02 (i.e., a1), with Y as the upper limit; when Y is less than or equal to A2, the size of the circular material decreases at a rate of 0.01 (i.e., a2), with Y as the lower limit.
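As a rough illustration, steps a to d can be collected into one per-frame update; this is a minimal sketch that assumes len is measured in the same normalized units as the constants A1 through a4, with the final diameter obtained by multiplying spotX by a preset base size:

```python
def update_brush(length, basescale,
                 A1=0.04, A2=0.01, a1=0.02, a2=0.01, a3=0.003, a4=0.04):
    """One per-frame brush-size update following steps a to d.

    length:    distance between the nose-tip positions in two adjacent frames.
    basescale: second parameter, carried over from the previous frame.
    Returns (spotX, basescale); spotX times the preset circular-material size
    gives the circle's diameter (or radius) for this frame.
    """
    Y = A1 - length                              # step a: faster motion -> smaller Y
    if Y > A2:
        basescale = min(basescale + a1 * Y, Y)   # step b: grow, with Y as upper limit
    else:
        basescale = max(basescale - a2 * Y, Y)   # step c: shrink, with Y as lower limit
    spotX = max(a3, min(basescale, a4))          # step d: clamp to [a3, a4]
    return spotX, basescale
```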
As described in connection with the embodiment shown in FIG. 1, application 1 can present the drawn trajectory through a dedicated camera. Consequently, when the third video frame is captured and circular primitive s3 and rectangular primitive r2 are drawn from it, the previously drawn primitives s1, s2, and r1 are all retained rather than cleared, so that the drawing process is presented to the user.
By analogy, as application 1 obtains more and more captured video frames, more and more circular and rectangular primitives are drawn on the canvas in the above manner, yielding the drawn trajectory image.
To obtain richer visual effects, for example to give the trajectory in the trajectory image a glow, the trajectory image may also be blurred. For the blur, application 1 can generate a texture from the canvas and the circular and rectangular primitives drawn on it, and blur the entire generated texture to achieve the glow. Note that every time a new circular primitive and rectangular primitive are drawn on the canvas from a newly obtained video frame, a new texture must be generated and blurred according to the new texture, so that the glow is realized in real time.
The present disclosure does not limit the specific blur method; for example, Gaussian blur, box (mean) blur, median blur, and so on may be used, without limitation.
In addition, the transparency of the trajectory region in the trajectory image (i.e., all circular and rectangular primitive regions) may be set to a preset transparency value, e.g., 50% or 60%; the present disclosure does not limit the size of the preset value.
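By way of illustration only, the glow and the preset transparency may be sketched in Python as follows, again with Pillow standing in for the renderer; the blur radius and alpha value are assumptions for illustration.

```python
# One possible realization of the glow: blur a copy of the whole stroke
# texture, reduce the sharp stroke's alpha to the preset value, and
# composite the two.
from PIL import Image, ImageFilter

def apply_glow(stroke, blur_radius=8, alpha=0.6):
    """stroke: RGBA texture containing all drawn circles and rectangles."""
    halo = stroke.filter(ImageFilter.GaussianBlur(blur_radius))
    r, g, b, a = stroke.split()
    a = a.point(lambda v: int(v * alpha))   # preset transparency, e.g. 60%
    dimmed = Image.merge("RGBA", (r, g, b, a))
    return Image.alpha_composite(halo, dimmed)
```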
Building on the embodiment shown in FIG. 2C, after the user finishes drawing the trajectory image, application 1 can mirror the trajectory image to obtain its corresponding mirror image, overlay the trajectory image on the lower half of the lantern panel, and overlay the mirror image on the upper half, thereby obtaining one target lantern panel.
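By way of illustration only, the assembly of the target lantern panel may be sketched in Python as follows, with Pillow standing in for the renderer; equal image sizes and the use of a vertical flip for mirroring are assumptions for illustration.

```python
# Sketch of assembling the target panel: crop the trajectory drawn in the
# lower half, mirror it vertically, and composite both halves onto the
# panel material image.
from PIL import Image, ImageOps

def build_target_panel(panel, trajectory):
    """panel, trajectory: RGBA images of identical size."""
    w, h = panel.size
    lower = trajectory.crop((0, h // 2, w, h))  # trajectory from the lower half
    mirror = ImageOps.flip(lower)               # vertical mirror for the upper half
    out = panel.copy()
    out.alpha_composite(lower, (0, h // 2))
    out.alpha_composite(mirror, (0, 0))
    return out
```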
After obtaining the target lantern panel, application 1 can display it on the phone, i.e., jump from user interface 23 shown in FIG. 2C to user interface 24 shown in FIG. 2D. Referring to FIG. 2D, application 1 may display the target lantern panel in the upper region of user interface 24, though it may of course be displayed at other positions.
Application 1 may display the target lantern panel in a preset manner, for example by gradually shrinking it.
In this embodiment, since the lantern body is formed by 8 arc faces joined end to end, the target lantern panel shown in FIG. 2D can be replicated and mirrored to obtain 8 target lantern panels, which are then displayed tiled on the screen.
In some embodiments, referring to FIG. 2E, the 8 target lantern panels may be arranged horizontally near the top of user interface 25.
In some cases, given the variety of possible lantern body shapes, there may be a large number of target lantern panels; owing to the limits of the phone's screen size, only some of them may be displayed, arranged horizontally near the top of user interface 25. In user interface 25 shown in FIG. 2E, for example, 5 target lantern panels are displayed.
After the 8 target lantern panels are obtained, they can be merged according to their respective merge parameters, following the animation style of the lantern effect, to obtain the lantern effect. For example, the 8 horizontally tiled target lantern panels may be moved along preset paths so that they attach to the 8 arc faces of the lantern body; alternatively, the 8 target lantern panels may be merged onto the 8 arc faces together.
The merge parameters of a target lantern panel may include its path parameters in three-dimensional space, its scaling size, and other parameters. The three-dimensional space here may be a 3D coordinate system built on the lantern body; the panel's path parameters may include the coordinate values of each of its pixels in that coordinate system, where the path may consist of multiple discrete points and, for each point on the path, each pixel of the panel corresponds to one set of coordinate values.
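By way of illustration only, such merge parameters may be sketched in Python as follows: each panel k receives a yaw of k·45 degrees about the lantern's vertical axis, and its pose is sampled at discrete points along a straight path from the tiled layout to the final face pose. The Pose type, the unit radius, and the linear interpolation are assumptions; a real renderer would use its own scene-graph or mesh API.

```python
# Illustrative sketch of merge parameters for the 8 panels.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float          # position in the lantern's 3D coordinate system
    y: float
    z: float
    yaw: float        # rotation about the vertical axis, in radians
    scale: float

def face_pose(k, radius=1.0):
    """Final pose of panel k on the k-th 45-degree arc face (k = 0..7)."""
    yaw = k * math.pi / 4
    return Pose(radius * math.sin(yaw), 0.0, radius * math.cos(yaw), yaw, 1.0)

def path_point(start, end, t):
    """Pose at parameter t (0..1), one discrete point on the merge path."""
    f = lambda u, v: u + (v - u) * t
    return Pose(f(start.x, end.x), f(start.y, end.y), f(start.z, end.z),
                f(start.yaw, end.yaw), f(start.scale, end.scale))
```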
It should be noted that during merging, the other elements of the lantern, e.g., the end caps at both ends and decorative parts (such as a tassel attached to the bottom end cap), may also be merged with the lantern's body structure, producing a lively lantern effect.
Afterwards, application 1 can switch via a preset transition, jumping from user interface 25 shown in FIG. 2E to user interface 26 shown in FIG. 2F, thereby displaying the lantern effect. When displaying the lantern effect, the lantern may be rotated about the central axis of its spherical body, rotating to show the images on each of its arc faces in turn.
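By way of illustration only, the rotating display may be sketched in Python as follows; the frame rate and angular speed are assumptions for illustration.

```python
# Minimal sketch of the rotating display: advance a yaw angle on every
# rendered frame so the lantern spins about its central axis.
def lantern_yaw_degrees(frame_index, fps=30.0, deg_per_sec=90.0):
    return (frame_index / fps) * deg_per_sec % 360.0
```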
In addition, preset filters and display effects (such as a glow effect) may be added to enhance the visual expressiveness of the lantern effect.
The duration of the rotating display may also be preset; for example, it may be, but is not limited to, 3 seconds or 4 seconds, and when the preset duration is reached, video capture ends.
Furthermore, while the lantern effect is being rotated, the central axis of the lantern body may also move, e.g., translating from side to side, swaying left and right, or moving back and forth.
Foreground images, filters, text, stickers, and the like matching the lantern effect may also be preset; when jumping from user interface 25 to user interface 26, a foreground image matching the target three-dimensional effect may be added to the captured video image, along with matching filters, text, stickers, and so on. Taking the foreground image as an example, its elements may be distributed mainly along its outer edges, so as to avoid covering user 1's face region in the captured video image or the generated lantern effect. In the embodiment shown in FIG. 2F, for example, a foreground image with lantern elements is overlaid on the captured video image, and the text "鸿运当头" ("good fortune ahead") is displayed near the bottom; the foreground image, text, stickers, and so on may have animation effects.
As can be seen from the embodiments of FIG. 2A to FIG. 2F, obtaining the multiple target lantern panels is a two-dimensional animation process, while the subsequent merging of the panels into the lantern effect and the display of the lantern effect are a three-dimensional animation process; the junction between the 2D and 3D animation processes is realized by a preset transition, making the switch between frames natural and smooth and giving the user a good experience.
FIG. 4 is a schematic structural diagram of a material display apparatus provided by an embodiment of the present disclosure. Referring to FIG. 4, the material display apparatus 400 of this embodiment includes: an image acquisition module 401, a trajectory generation module 402, a material generation module 403, and a display module 404.
The image acquisition module 401 is configured to acquire captured video images.
The trajectory generation module 402 is configured to obtain a trajectory image according to position information of the target recognition object in the captured video images.
The material generation module 403 is configured to obtain multiple target unit special effect images according to the trajectory image, and to generate a target element according to the multiple target unit special effect images.
The display module 404 is configured to display the target element in captured video images.
The body of the target element is a solid geometric structure comprising multiple geometric faces, and the multiple target unit special effect images are attached to the corresponding faces so as to display them on the multiple faces.
In some embodiments, the material generation module 403 is specifically configured to overlay the trajectory image with the material image corresponding to the target element to obtain a unit special effect image, and to replicate and mirror the unit special effect image to obtain the multiple target unit special effect images.
In some embodiments, the material generation module 403 is specifically configured to mirror the trajectory image to obtain the mirror image corresponding to the trajectory image, and to overlay the trajectory image and the mirror image respectively onto different regions of the material image corresponding to the target element to obtain the unit special effect image.
In some embodiments, the trajectory generation module 402 is configured to draw, on a pre-constructed canvas, two circular regions according to the positions of the target recognition object in two consecutively captured video images; to draw a rectangular region on the canvas according to the line connecting the positions of the target recognition object in the two consecutively captured video images, where the width of the rectangular region equals the diameter of the circular regions, the width-wise sides of the rectangular region are perpendicular to the connecting line, and the midpoint of each width-wise side coincides with the center of the corresponding circular region; and to continue drawing from consecutively captured video images until trajectory drawing ends, obtaining the trajectory image.
In some embodiments, the trajectory generation module 402 is further configured to blur the trajectory image so that the trajectory in the trajectory image has a glow effect.
In some embodiments, the length of the rectangular region is proportional to the length of the connecting line, the length-wise sides of the rectangular region being the sides parallel to the connecting line.
In some embodiments, the blur is Gaussian blur.
In some embodiments, the material generation module 403 is specifically configured to move the multiple tiled target unit special effect images along their corresponding paths, so that they attach to the corresponding geometric faces, obtaining the target element.
In some embodiments, the display module 404 is specifically configured to rotate and display the target element in the captured video images, with the rotation axis corresponding to the solid geometric structure as the rotation center.
In some embodiments, the display module 404 is specifically configured to display the target element in subsequently captured video images.
In some embodiments, the display module 404 is further configured to add, to subsequently captured video images, at least one of a foreground image, a filter, text, or a sticker matching the target element.
In some embodiments, the target element is a lantern; correspondingly, the special effect containing the target element is a lantern effect.
The material display apparatus provided by this embodiment can be used to execute the technical solution of any of the preceding method embodiments; its implementation principles and technical effects are similar, and reference may be made to the detailed description of the preceding method embodiments, which is not repeated here for brevity.
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. Referring to FIG. 5, the electronic device 500 of this embodiment includes a memory 501 and a processor 502.
The memory 501 may be an independent physical unit connected to the processor 502 via a bus 503; alternatively, the memory 501 and the processor 502 may be integrated together, implemented in hardware, and so on.
The memory 501 is used to store program instructions, and the processor 502 invokes these program instructions to perform the operations of any of the above method embodiments.
Optionally, when some or all of the methods of the above embodiments are implemented in software, the electronic device 500 may include only the processor 502. In that case the memory 501 storing the program is located outside the electronic device 500, and the processor 502 is connected to the memory via circuits/wires to read and execute the program stored in the memory.
The processor 502 may be a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP.
The processor 502 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 501 may include volatile memory, such as random-access memory (RAM); it may also include non-volatile memory, such as flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); and it may also include a combination of the above types of memory.
An embodiment of the present disclosure further provides a readable storage medium comprising computer program instructions; when the computer program instructions are executed by at least one processor of an electronic device, the material display method shown in any of the above method embodiments is implemented.
An embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a readable storage medium; at least one processor of an electronic device can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the electronic device to implement the material display method shown in any of the above method embodiments.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Absent further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.
The above are merely specific embodiments of the present disclosure, enabling those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Accordingly, the present disclosure is not limited to the embodiments described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

  1. A material display method, comprising:
    acquiring captured video images, and obtaining a trajectory image according to position information of a target recognition object in the captured video images;
    obtaining a plurality of target unit special effect images according to the trajectory image; and
    generating a target element according to the plurality of target unit special effect images, and displaying the target element in captured video images, wherein a body of the target element is a solid geometric structure comprising a plurality of geometric faces, and the plurality of target unit special effect images are each attached to a corresponding geometric face, so as to display the plurality of target unit special effect images on the plurality of geometric faces.
  2. The material display method according to claim 1, wherein the obtaining a plurality of target unit special effect images according to the trajectory image comprises:
    overlaying the trajectory image with a material image corresponding to the target element to obtain a unit special effect image; and
    replicating and mirroring the unit special effect image to obtain the plurality of target unit special effect images.
  3. The material display method according to claim 2, wherein the overlaying the trajectory image with a material image corresponding to the target element to obtain a unit special effect image comprises:
    mirroring the trajectory image to obtain a mirror image corresponding to the trajectory image; and
    overlaying the trajectory image and the mirror image respectively onto different regions of the material image corresponding to the target element to obtain the unit special effect image.
  4. The material display method according to any one of claims 1 to 3, wherein the obtaining a trajectory image according to position information of the target recognition object in the captured video images comprises:
    drawing, on a pre-constructed canvas, two circular regions according to the positions of the target recognition object in two consecutively captured video images;
    drawing a rectangular region on the canvas according to a line connecting the positions of the target recognition object in the two consecutively captured video images, wherein the width of the rectangular region equals the diameter of the circular regions, the width-wise sides of the rectangular region are perpendicular to the connecting line, and the midpoint of each width-wise side coincides with the center of the corresponding circular region; and
    continuing to draw from consecutively captured video images until trajectory drawing ends, to obtain the trajectory image.
  5. The material display method according to claim 4, further comprising:
    blurring the trajectory image so that the trajectory in the trajectory image has a glow effect.
  6. The material display method according to claim 4, wherein the length of the rectangular region is proportional to the length of the connecting line, the length-wise sides of the rectangular region being the sides parallel to the connecting line.
  7. The material display method according to claim 5, wherein the blurring is Gaussian blurring.
  8. The material display method according to any one of claims 1 to 7, wherein the generating a target element according to the plurality of target unit special effect images comprises:
    moving the plurality of target unit special effect images, from a tiled state, along their corresponding paths, so that the plurality of target unit special effect images attach to the corresponding geometric faces, to obtain the target element.
  9. The material display method according to any one of claims 1 to 7, wherein the displaying the target element in captured video images comprises:
    rotating and displaying the target element in the captured video images, with the rotation axis corresponding to the solid geometric structure as the rotation center.
  10. The material display method according to any one of claims 1 to 8, wherein the displaying the target element in captured video images comprises:
    displaying the target element in subsequently captured video images.
  11. The material display method according to any one of claims 1 to 8, further comprising:
    adding, to subsequently captured video images, at least one of a foreground image, a filter, text, or a sticker matching the target element.
  12. A material display apparatus, comprising:
    an image acquisition module configured to acquire captured video images;
    a trajectory generation module configured to obtain a trajectory image according to position information of a target recognition object in the captured video images;
    a material generation module configured to obtain a plurality of target unit special effect images according to the trajectory image, and to generate a target element according to the plurality of target unit special effect images; and
    a display module configured to display the target element in captured video images;
    wherein a body of the target element is a solid geometric structure comprising a plurality of geometric faces, and the plurality of target unit special effect images are each attached to a corresponding geometric face, so as to display the plurality of target unit special effect images on the plurality of geometric faces.
  13. An electronic device, comprising a memory and a processor;
    wherein the memory is configured to store computer program instructions; and
    the processor is configured to execute the computer program instructions, so that the electronic device implements the material display method according to any one of claims 1 to 11.
  14. A readable storage medium, comprising computer program instructions;
    wherein the computer program instructions, when executed by at least one processor of an electronic device, cause the electronic device to implement the material display method according to any one of claims 1 to 11.
  15. A computer program product, wherein the computer program product, when executed by a computer, causes the computer to implement the material display method according to any one of claims 1 to 11.
  16. A computer program, comprising instructions which, when executed by a processor, implement the material display method according to any one of claims 1 to 11.
PCT/CN2023/072057 2022-01-25 2023-01-13 Material display method and apparatus, electronic device, storage medium, and program product WO2023143120A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210089981.6A CN114430466A (zh) 2022-01-25 2022-01-25 Material display method and apparatus, electronic device, storage medium, and program product
CN202210089981.6 2022-01-25

Publications (1)

Publication Number Publication Date
WO2023143120A1 (zh)

Family

ID=81312307

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072057 2022-01-25 2023-01-13 Material display method and apparatus, electronic device, storage medium, and program product

Country Status (2)

Country Link
CN (1) CN114430466A (zh)
WO (1) WO2023143120A1 (zh)

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN114430466A (zh) * 2022-01-25 2022-05-03 北京字跳网络技术有限公司 素材展示方法、装置、电子设备、存储介质及程序产品
CN115801978A (zh) * 2022-10-24 2023-03-14 网易(杭州)网络有限公司 特效视频制作方法、装置、电子设备及可读存储介质

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108447043A (zh) 2018-03-30 2018-08-24 腾讯科技(深圳)有限公司 Image synthesis method, device, and computer-readable medium
CN112672185A (zh) 2020-12-18 2021-04-16 脸萌有限公司 Augmented-reality-based display method, apparatus, device, and storage medium
CN112929582A (zh) 2021-02-04 2021-06-08 北京字跳网络技术有限公司 Special effect display method, apparatus, device, and medium
CN113888681A (zh) 2021-09-30 2022-01-04 完美世界(北京)软件科技发展有限公司 Virtual animation production method and apparatus, storage medium, and terminal
CN114430466A (zh) 2022-01-25 2022-05-03 北京字跳网络技术有限公司 Material display method and apparatus, electronic device, storage medium, and program product


Also Published As

Publication number Publication date
CN114430466A (zh) 2022-05-03


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 23746003; Country of ref document: EP; Kind code of ref document: A1.
ENP Entry into the national phase. Ref document number: 2023746003; Country of ref document: EP; Effective date: 2024-07-29.