CN111954055A - Video special effect display method and device, electronic equipment and storage medium - Google Patents


Publication number
CN111954055A
Authority
CN
China
Prior art keywords
tracked
frame image
key point
position information
current frame
Prior art date
Legal status
Granted
Application number
CN202010624935.2A
Other languages
Chinese (zh)
Other versions
CN111954055B (en)
Inventor
武珊珊
肖逸
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010624935.2A priority Critical patent/CN111954055B/en
Publication of CN111954055A publication Critical patent/CN111954055A/en
Priority to PCT/CN2021/103299 priority patent/WO2022002082A1/en
Application granted granted Critical
Publication of CN111954055B publication Critical patent/CN111954055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/485 End-user interface for client configuration

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a video special effect display method and apparatus, an electronic device, and a storage medium, in the technical field of software applications. The method includes: acquiring a current frame image, and a key point to be tracked in the current frame image, based on a key point track special effect opening instruction; acquiring position information of the key point to be tracked in the current frame image; generating auxiliary position information of the key point to be tracked according to its position information in the current frame image and in the previous frame image; performing key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame and in each frame image before it, together with the auxiliary position information generated each time; and displaying the rendered current frame image. Because the auxiliary position information is generated by linear interpolation, the key point track is smooth and continuous; and because the key points in each frame image are rendered into the track directly, fast rendering is achieved.

Description

Video special effect display method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a method and an apparatus for displaying a video special effect, an electronic device, and a storage medium.
Background
With the rapid development of mobile terminal technology, application software with a wide variety of functions has emerged, bringing users both convenience and entertainment. Among such functions, interactive special effects, and key point track special effects in particular, are very popular with users.
In the related art, the position of the key point in each frame image is recorded and stored in an array, and all points of the track are rendered into the picture for each frame image, so that the key point track special effect is displayed. However, in this existing display mode, the recorded key point track is incomplete, so the displayed track contains gaps; moreover, when many key points need to be rendered in each frame image, the per-frame rendering time is long, fast rendering cannot be achieved, and the user experience is seriously affected. How to quickly add a smooth, continuous key point track special effect to a video has therefore become an urgent problem to be solved.
Disclosure of Invention
The disclosure provides a video special effect display method and apparatus, an electronic device, and a storage medium, to at least solve the problem in the related art that a smooth, continuous key point track special effect cannot be quickly added to a video. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for displaying a video special effect is provided, which includes: acquiring a current frame image and a key point to be tracked in the current frame image based on a key point track special effect opening instruction; acquiring the position information of the key point to be tracked in the current frame image; generating auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image; performing key point track rendering on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame and the generated auxiliary position information of the key point to be tracked each time; and displaying the rendered current frame image.
According to an embodiment of the present disclosure, the generating auxiliary location information of the keypoint to be tracked according to the location information of the keypoint to be tracked in the current frame image and the location information of the keypoint to be tracked in the previous frame image includes: and performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image to generate auxiliary position information of the key point to be tracked.
According to an embodiment of the present disclosure, the performing linear interpolation processing on the position information of the to-be-tracked keypoint in the current frame image and the position information of the to-be-tracked keypoint in the previous frame image to generate auxiliary position information of the to-be-tracked keypoint includes: acquiring the pixel distance between the key point to be tracked in the current frame image and the key point to be tracked in the previous frame image according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image; determining the quantity of auxiliary position information of the key point to be tracked, which needs to be interpolated, according to the pixel distance and a preset interpolation gap; and determining the auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image, the position information of the key point to be tracked in the previous frame image and the quantity of the auxiliary position information of the key point to be tracked.
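The three interpolation sub-steps just described (pixel distance between the two positions, number of auxiliary positions determined by a preset interpolation gap, and the auxiliary positions themselves) can be sketched as follows. This is a minimal illustration; the function name and the default gap value are assumptions, not part of the disclosure:

```python
import math

def interpolate_keypoints(prev_pos, cur_pos, gap=5.0):
    """Generate auxiliary positions of the key point to be tracked between
    its previous-frame and current-frame positions (hypothetical sketch)."""
    # Pixel distance between the key point in the two frames.
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    dist = math.hypot(dx, dy)
    # Number of auxiliary positions needed for the preset interpolation gap.
    n = int(dist // gap)
    # Evenly spaced points strictly between the two positions.
    return [(prev_pos[0] + dx * i / (n + 1), prev_pos[1] + dy * i / (n + 1))
            for i in range(1, n + 1)]
```

For instance, with positions 20 pixels apart and a gap of 5, four auxiliary positions are generated, so consecutive track points are never spaced wider than the gap.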
According to an embodiment of the present disclosure, the performing, according to the current frame and the position information of the to-be-tracked keypoint in each frame of image before the current frame and the auxiliary position information of the to-be-tracked keypoint generated each time, keypoint track rendering on the current frame image includes: rendering the stored texture according to the position information of the key point to be tracked in the current frame image and the generated auxiliary position information of the key point to be tracked, wherein the stored texture is generated by rendering according to the position information of the key point to be tracked in each frame image before the current frame image and the generated auxiliary position information of the key point to be tracked each time before the current frame image; and rendering the rendered texture onto the current frame image.
According to an embodiment of the present disclosure, when the current frame image is a first frame image, the stored texture is a blank texture.
According to one embodiment of the present disclosure, the size of the stored texture corresponds to the size of the video.
According to an embodiment of the present disclosure, further comprising: based on a video recording starting instruction, removing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked, or replacing the stored texture with blank texture, and continuing to execute the steps of obtaining the current frame image and the key point to be tracked in the current frame image.
According to an embodiment of the present disclosure, further comprising: and based on the special effect clearing instruction, clearing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked, or replacing the stored texture with blank texture, and displaying the newly acquired current frame image.
According to an embodiment of the present disclosure, further comprising: and based on the special effect recovery instruction, continuously executing the step of obtaining the current frame image and the key points to be tracked in the current frame image.
According to a second aspect of the embodiments of the present disclosure, there is provided a display apparatus for video special effects, including: the system comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is configured to execute a key point track-based special effect opening instruction and acquire a current frame image and a key point to be tracked in the current frame image; a second obtaining unit configured to perform obtaining of position information of the key point to be tracked in the current frame image; the generating unit is configured to generate auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image; the rendering unit is configured to perform key point track rendering on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame and the generated auxiliary position information of the key point to be tracked each time; and a presentation unit configured to perform presentation of the rendered current frame image.
According to an embodiment of the present disclosure, the generating unit includes: a generating subunit, configured to perform linear interpolation processing on the position information of the to-be-tracked key point in the current frame image and the position information of the to-be-tracked key point in the previous frame image, and generate auxiliary position information of the to-be-tracked key point.
According to an embodiment of the present disclosure, the generating subunit includes: an obtaining module configured to perform obtaining a pixel distance between the key point to be tracked in the current frame image and the key point to be tracked in the previous frame image according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image; the first determination module is configured to determine the quantity of auxiliary position information of the key point to be tracked, which needs to be interpolated, according to the pixel distance and a preset interpolation gap; a second determining module configured to determine auxiliary position information of the keypoint to be tracked according to the position information of the keypoint to be tracked in the current frame image, the position information of the keypoint to be tracked in the previous frame image, and the number of auxiliary position information of the keypoint to be tracked.
According to an embodiment of the present disclosure, the rendering unit includes: a first rendering subunit, configured to perform rendering on a stored texture according to the position information of the to-be-tracked key point in the current frame image and the auxiliary position information of the to-be-tracked key point generated this time, where the stored texture is generated by performing rendering according to the position information of the to-be-tracked key point in each frame image before the current frame image and the auxiliary position information of the to-be-tracked key point generated each time before this time; and a second rendering subunit configured to perform rendering of the rendered texture onto the current frame image.
According to an embodiment of the present disclosure, when the current frame image is a first frame image, the stored texture is a blank texture.
According to one embodiment of the present disclosure, the size of the stored texture corresponds to the size of the video.
According to an embodiment of the present disclosure, the first obtaining unit is further configured to perform: based on a video recording starting instruction, removing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked, or replacing the stored texture with blank texture, and continuing to execute the steps of obtaining the current frame image and the key point to be tracked in the current frame image.
According to an embodiment of the present disclosure, the first obtaining unit is further configured to perform: and based on the special effect clearing instruction, clearing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked, or replacing the stored texture with blank texture, and displaying the newly acquired current frame image.
According to an embodiment of the present disclosure, the first obtaining unit is further configured to perform: and based on the special effect recovery instruction, continuously executing the step of obtaining the current frame image and the key points to be tracked in the current frame image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method for presenting a video special effect provided by the embodiment of the first aspect of the disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, where instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method for presenting video effects as provided in the embodiments of the first aspect of the present disclosure.
The technical solution provided by the embodiments of the disclosure brings at least the following beneficial effects: a current frame image and a key point to be tracked in it are acquired based on a key point track special effect opening instruction; auxiliary position information of the key point is generated by linear interpolation between its position information in the current frame image and in the previous frame image; key point track rendering is then performed on the current frame image according to the position information of the key point in the current frame and in each frame image before it, together with the auxiliary position information generated each time; and the rendered current frame image is displayed. Because the auxiliary position information is generated by linear interpolation, the displayed key point track is smooth and continuous, without gaps; and because the key points acquired in each frame image are rendered into the track directly, the track can be rendered quickly, rendering stalls are avoided, and the time consumed by the video special effect display process is greatly shortened.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a method for presenting video effects according to an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating a host interface in accordance with an exemplary embodiment.
FIG. 3 is a diagram illustrating a special effects preview interface, according to an example embodiment.
Fig. 4 is a diagram illustrating a display of a current frame image on a special effect preview interface according to an exemplary embodiment.
FIG. 5 is a diagram illustrating a target object selection result according to an example embodiment.
FIG. 6 is a schematic diagram illustrating a rendered current frame image according to an example embodiment.
Fig. 7 is a flow chart illustrating another method of presenting video effects according to an example embodiment.
Fig. 8 is a flowchart illustrating yet another method for presenting video effects according to an example embodiment.
FIG. 9 is a progressive schematic diagram illustrating a keypoint trajectory change process, according to an exemplary embodiment.
Fig. 10 is a flowchart illustrating still another method for presenting video effects according to an example embodiment.
Fig. 11 is a flow chart illustrating another method of presenting video effects according to an example embodiment.
Fig. 12 is a block diagram illustrating a video effects presentation apparatus according to an example embodiment.
Fig. 13 is a block diagram illustrating another apparatus for presenting video effects according to an example embodiment.
FIG. 14 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a method for presenting video special effects according to an exemplary embodiment. The execution subject of the method is a video special effect display apparatus. The method of the embodiments of the present disclosure may be executed by the video special effect display apparatus of the embodiments of the present disclosure, and the apparatus may specifically be a hardware device, or software in a hardware device, where the hardware device is, for example, a terminal device or a server. As shown in fig. 1, the method provided by this embodiment includes the following steps:
in step 101, a key point to be tracked in a current frame image and a current frame image is acquired based on a key point track special effect opening instruction.
The key point track special effect may be a special effect obtained by superimposing any display effect on a track along which the key point moves. For example, a yellow light beam or a white light ray may be superimposed along the trajectory of the keypoint movement.
The key point may be any key position of the target object in the current frame image. For example, the nose, eyes, or fingertips of the target object may be used as key points to be tracked. For another example, objects associated with the target object, such as glasses, a hand-held microphone, or a worn accessory, may be used as key points to be tracked.
In the embodiment of the disclosure, when a user wants to add a special effect to a video, the user may issue an add-special-effect instruction by clicking a special effect control, inputting voice information, or the like. Correspondingly, after the add-special-effect instruction issued by the user is detected, at least one special effect identifier may be loaded in the special effect preview interface so that the user can select one as required. The special effect identifiers include key point track special effects, such as magic nose, magic eyes, and magic finger.
As a possible implementation, when detecting an add-special-effect instruction, the position of the target operation performed by the user on the main interface may be acquired, and when the position of the target operation is detected to fall within the operation area for adding a special effect, it may be determined that the add-special-effect instruction has been detected.
Further, the instructions issued by the user can be continuously detected, and after the key point track special effect opening instruction is detected, the key point track special effect is added to the video.
As a possible implementation, when detecting a key point track special effect opening instruction, the position of the target operation performed by the user on the special effect preview interface may be acquired, and when the position of the target operation is detected to fall within the special effect identifier operation area of a key point track special effect, it may be determined that the key point track special effect opening instruction has been detected.
It should be noted that, in the present disclosure, in order to simplify the interaction process and improve the user experience, a plurality of functional controls may be displayed on the special effect preview interface, for example, a recording control may be displayed on the special effect preview interface, so that the user may directly perform an operation of adding a key point trajectory special effect to the recorded video after selecting a required special effect after previewing.
For example, as shown in fig. 2, a user may trigger the add-special-effect function by clicking the add-special-effect control 12 on the main interface 11; accordingly, after it is detected that the user has clicked the add-special-effect control 12, four special effect identifiers 14-1 to 14-4 together with a shooting control 15 may be loaded in the special effect preview interface 13, as shown in fig. 3. Further, when the position of the target operation performed by the user on the special effect preview interface is detected to be the special effect identifier operation area of a key point track special effect, it is determined that the key point track special effect opening instruction has been detected.
In the embodiment of the disclosure, the current frame image can be acquired through a video, image and other acquisition devices based on the key point track special effect opening instruction, and the acquired current frame image is displayed on the special effect preview interface.
For example, a current frame image 16 as shown in fig. 4 may be acquired by a camera, and the acquired current frame image 16 is displayed on the special effect preview interface 13.
It should be noted that, after the current frame image is acquired in the above manner, it may be first determined whether the acquired current frame image only includes one object, and if it is determined that the current frame image only includes one object, the object may be used as a target object; if it is recognized that the current frame image includes more than one object, one of the objects may be selected as the target object. The target object is an object which needs to be added with a video special effect subsequently.
In the present disclosure, the manner of selecting the target object is not limited and may be chosen according to the actual situation. For example, the target object may be selected according to the completeness and confidence of the recognized joint points of each object. As shown in fig. 5, if the current frame image is recognized to include two objects, 17-1 and 17-2, the object 17-1 may be selected as the target object.
Further, after the current frame image is obtained, the key points to be tracked in the current frame image can be obtained based on the characteristics of the target object.
It should be noted that, in the present disclosure, the manner of obtaining the key point to be tracked is not limited and may be chosen according to the actual situation. For example, straight-line-like regions may be detected from the contour image, and the key point to be tracked in the current frame image may be identified by combining straight-line regions of various types. For another example, the key point to be tracked may be described by color features and/or shape features, and then located in the current frame image by feature matching.
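As a deliberately simplified illustration of the feature-matching option, the sketch below locates the key point as the pixel whose color is closest to a stored color feature. The function name, data layout, and pure color-distance criterion are assumptions for illustration; a real implementation would combine richer features:

```python
def find_keypoint_by_color(image, target_color):
    """Locate the key point to be tracked as the pixel best matching a
    stored color feature. image: 2-D list of (r, g, b) tuples."""
    best, best_dist = None, float("inf")
    for row_idx, row in enumerate(image):
        for col_idx, pixel in enumerate(row):
            # Squared Euclidean distance in RGB space.
            d = sum((a - b) ** 2 for a, b in zip(pixel, target_color))
            if d < best_dist:
                best, best_dist = (row_idx, col_idx), d
    return best  # (row, col) of the best-matching pixel
```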
In step 102, position information of a key point to be tracked in a current frame image is acquired.
In the embodiment of the present disclosure, after the key point to be tracked in the current frame image is obtained, the position information of the key point to be tracked may be obtained.
As a possible implementation, the image position of the key point to be tracked in the current frame image may be determined, and a preset position mapping between the current frame image and the special effect preview interface may be queried with this image position, so as to obtain the position information of the key point to be tracked on the special effect preview interface.
Optionally, the coordinates of the key point to be tracked in the current frame image may be determined according to the obtained key point to be tracked in the current frame image, and the coordinates of the key point to be tracked in the special effect preview interface may be obtained based on the same reference coordinate system that is pre-established for the current frame image and the special effect preview interface, so as to obtain the position information of the key point to be tracked on the special effect preview interface.
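Both implementations above reduce to converting coordinates from the frame image's pixel space to the preview interface's coordinate space. A minimal proportional-scaling sketch follows; the function name and the purely proportional mapping are assumptions for illustration:

```python
def map_to_interface(point, image_size, interface_size):
    """Map a key point's (x, y) pixel coordinates in the current frame image
    onto the special effect preview interface via proportional scaling."""
    x, y = point
    img_w, img_h = image_size
    ui_w, ui_h = interface_size
    # Normalize against the image size, then scale to the interface size.
    return (x / img_w * ui_w, y / img_h * ui_h)
```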
In step 103, auxiliary position information of the keypoint to be tracked is generated according to the position information of the keypoint to be tracked in the current frame image and the position information of the keypoint to be tracked in the previous frame image.
The auxiliary position information refers to position information obtained after interpolation processing is carried out on the key points to be tracked.
In the embodiment of the disclosure, after the position information of the key point to be tracked in the current frame image is acquired, the position information of the key point to be tracked in the previous frame image may be read, and linear interpolation processing is performed on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image, so as to generate the auxiliary position information of the key point to be tracked.
In step 104, a key point track rendering is performed on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame and the auxiliary position information of the key point to be tracked generated each time.
The process of performing key point track rendering on the current frame image refers to generating a rendering result (i.e., the rendered current frame image) from information such as texture and illumination.
In the embodiment of the disclosure, when attempting to perform the key point track rendering on the current frame image, the key point track rendering may be performed on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame, the auxiliary position information of the key point to be tracked generated each time, and the pre-stored texture, so as to obtain the rendered current frame image.
Wherein the pre-stored texture is a texture having a size corresponding to the video size.
In step 105, the rendered current frame image is shown.
In the embodiment of the disclosure, after the current frame image is subjected to the key point track rendering, the rendered current frame image can be obtained, and the rendered current frame image is displayed on a screen, so that the display of the video special effect is realized.
For example, as shown in fig. 6, based on the key point track special effect opening instruction, a key point track special effect is added to the key point to be tracked "the nose 19 of the target user" in the current frame image 18-1, so as to obtain and display the rendered current frame image 18-2. The rendered current frame image 18-2 includes a black light 20 superimposed along the trajectory of the movement of the nose 19 of the target user.
It should be noted that the rendered current frame image is not displayed on the screen immediately after it is obtained; rather, after the background obtains a new texture through processing, that texture is superimposed on the current frame image, and the superimposed image is then displayed on the screen.
The method for displaying a video special effect provided by the embodiment of the disclosure acquires, based on a key point track special effect opening instruction, the current frame image and the key point to be tracked in the current frame image; acquires the position information of the key point to be tracked in the current frame image and in the previous frame image to generate the auxiliary position information of the key point to be tracked; performs key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame and in each frame image before the current frame, together with the auxiliary position information generated each time; and displays the rendered current frame image, thereby realizing the display of the video special effect. By performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and in the previous frame image to generate the auxiliary position information, and then rendering the key point track according to both the position information and the auxiliary position information, the displayed track of the key point in the rendered current frame image is smooth and continuous, with no gaps. Furthermore, because key point track rendering is performed directly on the key points acquired in each frame image, the key point track can be rendered quickly, the technical problem of rendering stutter is avoided, and the time consumed by the video special effect display process is greatly shortened.
It should be noted that, in the present disclosure, in order to ensure that the displayed track of the keypoints in the rendered current frame image is smooth and continuous and no longer has a gap, linear interpolation processing may be performed on the position information of the keypoint to be tracked in the current frame image and the position information of the keypoint to be tracked in the previous frame image, so as to generate auxiliary position information of the keypoint to be tracked.
As a possible implementation manner, as shown in fig. 7, on the basis of the foregoing embodiment, in the step S103, a process of performing linear interpolation processing on the position information of the to-be-tracked key point in the current frame image and the position information of the to-be-tracked key point in the previous frame image to generate auxiliary position information of the to-be-tracked key point specifically includes the following steps:
in step 201, a pixel distance between a key point to be tracked in a current frame image and a key point to be tracked in a previous frame image is obtained according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image.
The pixel distance may be a linear distance between a key point to be tracked in the current frame image and a key point to be tracked in the previous frame image.
In the embodiment of the present disclosure, based on the size information of the current frame image, according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image, the pixel distance is calculated according to the following formula:
d = sqrt((x1 - x0)^2 + (y1 - y0)^2)

where d is the pixel distance, (x0, y0) is the position information of the key point to be tracked in the current frame image, and (x1, y1) is the position information of the key point to be tracked in the previous frame image.
For example, if the position information (x0, y0) of the key point to be tracked in the current frame image is (2, 2) and the position information (x1, y1) of the key point to be tracked in the previous frame image is (8, 10), the pixel distance between the two key points is sqrt(6^2 + 8^2) = 10.
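As a brief illustration (the function name and plain-Python form are my own; the patent only gives the formula), the pixel distance computation can be sketched as:

```python
import math

def pixel_distance(p0, p1):
    """Straight-line distance in pixels between the key point's position
    p0 = (x0, y0) in the current frame image and p1 = (x1, y1) in the
    previous frame image."""
    x0, y0 = p0
    x1, y1 = p1
    return math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)

print(pixel_distance((2, 2), (8, 10)))  # 10.0, matching the example above
```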
In step 202, the number of auxiliary position information of the to-be-tracked key point to be interpolated is determined according to the pixel distance and the preset interpolation gap.
The interpolation gap may be a minimum interpolation distance for performing linear interpolation processing in a connecting line direction between a key point to be tracked in the current frame image and a key point to be tracked in the previous frame image.
In the embodiment of the present disclosure, after the pixel distance is obtained, the number of auxiliary position information of the to-be-tracked key point to be interpolated may be determined based on the interpolation gap. The interpolation gap can be set according to actual conditions. For example, the interpolation gap may be set to 5, 7, or the like.
For example, if the obtained pixel distance is 10 and the interpolation gap is 5, the number of the auxiliary position information of the to-be-tracked key point to be interpolated may be determined to be 1.
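The patent gives only the worked example (pixel distance 10, interpolation gap 5, one auxiliary point); it does not state the counting rule. The sketch below uses floor(distance / gap) - 1 interior points, which is one assumption consistent with that example:

```python
def interpolation_count(distance, gap):
    """Number of auxiliary positions to interpolate between the two key
    points, assuming one point every `gap` pixels along their segment.
    The rule floor(distance / gap) - 1 is an assumption consistent with
    the example in the text (distance 10, gap 5 -> 1), not a formula
    stated in the patent."""
    return max(int(distance // gap) - 1, 0)

print(interpolation_count(10, 5))  # 1, as in the example
print(interpolation_count(4, 5))   # 0: points closer than one gap need no filling
```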
In step 203, the auxiliary position information of the keypoint to be tracked is determined according to the position information of the keypoint to be tracked in the current frame image, the position information of the keypoint to be tracked in the previous frame image, and the number of the auxiliary position information of the keypoint to be tracked.
Wherein the auxiliary position information includes first auxiliary position information in a horizontal direction and second auxiliary position information in a vertical direction.
In the embodiment of the present disclosure, a component of the interpolation gap in the horizontal direction may be acquired according to the interpolation gap, and then the first auxiliary position information is acquired according to the acquired component, the position information of the to-be-tracked key point in the current frame image, and the position information of the to-be-tracked key point in the previous frame image. Further, the second auxiliary position information may be calculated according to the first auxiliary position information, the position information of the keypoint to be tracked in the current frame image, and the position information of the keypoint to be tracked in the previous frame image, by the following formula:
y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)

where x is the first auxiliary position information and y is the second auxiliary position information.
For example, if the position information of the key point to be tracked in the current frame image is (2, 2), the position information of the key point to be tracked in the previous frame image is (8, 10), the number of the auxiliary position information of the key point to be tracked is 1, and the interpolation gap is 5, the component of the interpolation gap in the horizontal direction may be 3, and then the first auxiliary position information x may be obtained as 5 according to the obtained component, the position information of the key point to be tracked in the current frame image, and the position information of the key point to be tracked in the previous frame image. Further, the second auxiliary position information y may be calculated to be 6 according to the first auxiliary position information, the position information of the key point to be tracked in the current frame image, and the position information of the key point to be tracked in the previous frame image. That is, the auxiliary location information of the keypoint to be tracked is (5, 6).
It should be noted that, if a decimal occurs in the process of generating the auxiliary position information of the key point to be tracked, the value may be rounded down to ensure that the auxiliary position information is an integer.
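Putting steps 201-203 together, a minimal sketch (the names and the counting rule are my assumptions; rounding down follows the note above, and a vertical segment with x1 == x0 would need a separate branch the patent does not describe):

```python
import math

def auxiliary_positions(p0, p1, gap):
    """Linearly interpolate auxiliary positions between the key point's
    position p0 = (x0, y0) in the current frame image and p1 = (x1, y1)
    in the previous frame image. The x step is the interpolation gap's
    horizontal component; y then follows from the line through p0 and p1.
    Decimals are rounded down so every position is an integer.
    Assumes x1 != x0."""
    x0, y0 = p0
    x1, y1 = p1
    d = math.hypot(x1 - x0, y1 - y0)
    n = max(int(d // gap) - 1, 0)   # assumed counting rule, consistent with the text
    dx = gap * (x1 - x0) / d        # horizontal component of one interpolation gap
    points = []
    for i in range(1, n + 1):
        x = x0 + i * dx
        y = y0 + (x - x0) * (y1 - y0) / (x1 - x0)
        points.append((math.floor(x), math.floor(y)))
    return points

print(auxiliary_positions((2, 2), (8, 10), 5))  # [(5, 6)], as in the example
```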
According to the method for displaying the video special effect, the number of auxiliary position information of the key points to be tracked, which need to be interpolated, can be determined by acquiring the pixel distance between the key point to be tracked in the current frame image and the key point to be tracked in the previous frame image and according to the pixel distance and the preset interpolation gap, and then the auxiliary position information of the key points to be tracked can be determined according to the position information of the key points to be tracked in the current frame image, the position information of the key points to be tracked in the previous frame image and the number of the auxiliary position information of the key points to be tracked. Therefore, the auxiliary position information is generated by performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image, the acquired auxiliary position information can be ensured to be capable of supplementing the track of the key point in the current frame image, and the track of the key point in the rendered current frame image is smooth and continuous, and no gap exists.
It should be noted that, in the present disclosure, in order to achieve fast rendering of the keypoint track, avoid the pause in rendering, and shorten the time consumption of the video special effect display process, the keypoint track can be directly rendered for the keypoint acquired in each frame of image, so as to obtain the rendered current frame of image.
As a possible implementation manner, as shown in fig. 8, on the basis of the foregoing embodiment, in the step S104, a process of performing a keypoint track rendering on the current frame image according to the current frame and position information of a keypoint to be tracked in each frame image before the current frame and auxiliary position information of the keypoint to be tracked, which is generated each time, specifically includes the following steps:
in step 301, a stored texture is rendered according to the position information of the to-be-tracked key point in the current frame image and the auxiliary position information of the to-be-tracked key point generated this time, where the stored texture is generated by rendering according to the position information of the to-be-tracked key point in each frame image before the current frame image and the auxiliary position information of the to-be-tracked key point generated each time before this time.
In the embodiment of the present disclosure, the track of the key point may be updated into the texture that was generated by rendering from the position information of the key point to be tracked in each frame image before the current frame, according to the position information in the current frame image and the auxiliary position information of the key point to be tracked generated this time, and the newly rendered texture may then be stored.
It should be noted that, if the current frame image is the first frame image, the stored texture is blank texture, and the size of the texture is consistent with the size of the video.
For example, the schematic diagram of the trajectory change process of the keypoint as shown in fig. 9 can be obtained by recording the process of rendering the stored texture according to the position information of the keypoint to be tracked in the current frame image and the auxiliary position information of the keypoint to be tracked generated this time.
In step 302, the rendered texture is rendered onto the current frame image.
In the embodiment of the present disclosure, the saved rendered texture may be rendered on the current frame image to obtain the rendered current frame image.
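Steps 301-302 can be sketched with CPU-side numpy arrays standing in for GPU textures (the function name, RGBA layout, and single-pixel drawing are my assumptions; a real implementation would render on the GPU):

```python
import numpy as np

def render_frame(frame_rgb, trail_rgba, points, color=(0, 0, 0)):
    """Step 301: draw the current key-point position and the auxiliary
    positions into the persistent trail texture (same size as the video,
    blank when the video starts). Step 302: composite that texture over
    the current frame image and return the rendered frame."""
    h, w, _ = frame_rgb.shape
    for x, y in points:
        if 0 <= x < w and 0 <= y < h:
            trail_rgba[y, x, :3] = color
            trail_rgba[y, x, 3] = 255      # alpha marks pixels already drawn
    out = frame_rgb.copy()
    drawn = trail_rgba[:, :, 3] > 0
    out[drawn] = trail_rgba[drawn][:, :3]  # overlay the accumulated trail
    return out

frame = np.full((4, 4, 3), 200, np.uint8)  # gray stand-in for a camera frame
trail = np.zeros((4, 4, 4), np.uint8)      # blank texture, sized to the video
out = render_frame(frame, trail, [(1, 1), (2, 2)])
print(out[1, 1], out[3, 3])                # trail pixels black, rest untouched
```

Because `trail` persists between calls, each new frame only adds this frame's points, which is why the per-frame rendering stays fast.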
It should be noted that the specific rendering process is known in the prior art and is not described again here.
According to the method for displaying the video special effect, the stored texture can be rendered according to the position information of the key point to be tracked in the current frame image and the auxiliary position information of the key point to be tracked generated this time, and the rendered texture is rendered on the current frame image, so that the rendered current frame image is obtained. Therefore, the method and the device can directly perform key point track rendering on the key points acquired in each frame of image, can realize the rapid rendering of the key point track, avoid the technical problem of rendering blockage, and greatly shorten the time consumption of the video special effect display process.
It should be noted that, in practical applications, a user may issue a special effect clearing instruction for various reasons, for example because the current key point movement track is unsatisfactory, because the user wants to try other key point movement modes, or because another object in the current frame image is to be selected as the target object to experience the key point track special effect.
As a possible implementation manner, as shown in fig. 10, on the basis of the foregoing embodiment, a process of displaying a rendered current frame image obtained again based on a special effect removal instruction specifically includes the following steps:
in step 401, based on the special effect clearing instruction, the newly acquired current frame image is displayed.
In the embodiment of the present disclosure, based on the special effect removal instruction, the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked may be removed, or the stored texture may be replaced with a blank texture, and the newly acquired current frame image may be displayed.
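A minimal sketch of step 401's state reset (the `state` dictionary and its field names are hypothetical, not from the patent):

```python
import numpy as np

def clear_effect(state):
    """Clear the stored key-point position information and auxiliary
    position information, and replace the stored trail texture with a
    blank one of the same (video) size."""
    state["positions"].clear()
    state["aux_positions"].clear()
    state["trail"][...] = 0   # blank texture; the next frame starts a fresh trail

state = {"positions": [(2, 2)], "aux_positions": [(5, 6)],
         "trail": np.full((4, 4, 4), 255, np.uint8)}
clear_effect(state)
print(state["positions"], int(state["trail"].sum()))  # [] 0
```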
It should be noted that, when trying to clear the key point track special effect in the current frame image, the user may issue a special effect clearing instruction in a plurality of ways, for example by double-clicking the screen or by inputting the voice information "please clear the current special effect". Correspondingly, after the special effect clearing instruction issued by the user is detected, the current special effect can be cleared based on the instruction, and the newly acquired current frame image can be displayed.
In step 402, based on the special effect restoration instruction, the current frame image and the key points to be tracked in the current frame image are obtained.
In the embodiment of the disclosure, the step of obtaining the current frame image and the key point to be tracked in the current frame image may be continuously executed based on the special effect restoration instruction.
It should be noted that, when trying to restore the key point track special effect in the current frame image, the user may issue a special effect restoration instruction in a plurality of ways, for example by double-clicking the screen or by inputting the voice information "please restore the current special effect". Correspondingly, after the special effect restoration instruction issued by the user is detected, the step of acquiring the current frame image and the key point to be tracked in the current frame image can be continued based on the instruction.
In step 403, position information of a key point to be tracked in the current frame image is obtained.
In step 404, auxiliary position information of the keypoint to be tracked is generated according to the position information of the keypoint to be tracked in the current frame image and the position information of the keypoint to be tracked in the previous frame image.
In step 405, a key point trajectory rendering is performed on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame and the auxiliary position information of the key point to be tracked generated each time.
In step 406, the rendered current frame image is shown.
The steps 403-406 are the same as the steps 102-105 in the embodiment shown in FIG. 1, and are not repeated here.
The method for displaying the video special effect provided by the embodiment of the disclosure can clear the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked based on the special effect clearing instruction, or replace the stored texture with the blank texture, display the newly acquired current frame image, then continue to execute the step of acquiring the current frame image and the key point to be tracked in the current frame image, provide a rendering effect of clearing the current frame key point track for a user, and preview a playing method for displaying the special effect of the key point track again.
Therefore, in the present disclosure, the key point track is supplemented based on linear interpolation processing, and key point track rendering is performed directly on the key points acquired in each frame image, so that after selecting the key point track special effect on the special effect preview interface, the user can quickly view a special effect display in which the key point track is smooth and continuous.
Further, after the user finishes previewing the effect of displaying the special effect of the track of the key point, a shooting instruction can be issued for shooting, so that a video added with the special effect of the track of the key point is obtained.
As a possible implementation manner, as shown in fig. 11, on the basis of the foregoing embodiment, a process of adding a special key point trajectory effect to a recorded video based on a video recording start instruction specifically includes the following steps:
in step 501, based on a video recording start instruction, the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked are cleared.
It should be noted that, when trying to record a video, the user may issue a video recording start instruction in a plurality of ways, for example by clicking a recording control on the special effect preview interface or by inputting the voice information "please start recording a video". Correspondingly, after the video recording start instruction issued by the user is detected, the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked can be cleared based on the instruction.
As a possible implementation manner, when attempting to record a video, the position of a target operation performed by the user on the special effect preview interface may be obtained, and when the position of the target operation is detected to fall within the recording control operation area, it may be determined that a video recording start instruction has been detected.
In step 502, a current frame image and a key point to be tracked in the current frame image are obtained.
In step 503, position information of the key point to be tracked in the current frame image is obtained.
In step 504, auxiliary position information of the keypoint to be tracked is generated according to the position information of the keypoint to be tracked in the current frame image and the position information of the keypoint to be tracked in the previous frame image.
In step 505, a keypoint trajectory rendering is performed on the current frame image according to the current frame and the position information of the keypoint to be tracked in each frame image before the current frame and the auxiliary position information of the keypoint to be tracked generated each time.
In step 506, the rendered current frame image is shown.
The steps 502-506 are the same as the steps 101-105 in the embodiment shown in FIG. 1, and are not described herein again.
According to the method for displaying the video special effect, the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked can be cleared based on the video recording starting instruction, or the stored texture is replaced by the blank texture, and the step of obtaining the current frame image and the key point to be tracked in the current frame image is continuously executed, so that a user can add the key point track special effect with smooth and continuous track to the video through simple interaction, and the user experience is further improved.
Fig. 12 to 13 are block diagrams of a video effect presentation apparatus according to an exemplary embodiment.
As shown in fig. 12, the apparatus 1000 includes a first obtaining unit 121, a second obtaining unit 122, a generating unit 123, a rendering unit 124, and a presentation unit 125.
The first obtaining unit 121 is configured to acquire, based on a key point track special effect opening instruction, a current frame image and a key point to be tracked in the current frame image;
the second obtaining unit 122 is configured to obtain the position information of the key point to be tracked in the current frame image;
the generating unit 123 is configured to generate auxiliary position information of the keypoint to be tracked according to the position information of the keypoint to be tracked in the current frame image and the position information of the keypoint to be tracked in the previous frame image;
the rendering unit 124 is configured to perform the key point trajectory rendering on the current frame image according to the current frame and the position information of the key point to be tracked in each frame image before the current frame and the auxiliary position information of the key point to be tracked generated each time;
the presentation unit 125 is configured to perform presentation of the rendered current frame image.
In an embodiment of the present disclosure, as shown in fig. 13, the generating unit 123 in fig. 12 includes: the generating subunit 1231 is configured to perform linear interpolation processing on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image, and generate auxiliary position information of the key point to be tracked.
In an embodiment of the present disclosure, as shown in fig. 13, the generating subunit 1231 includes: the obtaining module 12311 is configured to obtain, according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image, a pixel distance between the key point to be tracked in the current frame image and the key point to be tracked in the previous frame image; a first determining module 12312, configured to determine the number of auxiliary position information of the key point to be tracked to be interpolated according to the pixel distance and the preset interpolation gap; a second determining module 12313, configured to perform determining the auxiliary position information of the keypoint to be tracked according to the position information of the keypoint to be tracked in the current frame image, the position information of the keypoint to be tracked in the previous frame image, and the number of auxiliary position information of the keypoint to be tracked.
In an embodiment of the present disclosure, as shown in fig. 13, the rendering unit 124 in fig. 12 includes: a first rendering subunit 1241, configured to perform rendering on a stored texture according to the position information of the to-be-tracked key point in the current frame image and the auxiliary position information of the to-be-tracked key point generated this time, where the stored texture is generated by rendering according to the position information of the to-be-tracked key point in each frame image before the current frame image and the auxiliary position information of the to-be-tracked key point generated each time before this time; a second rendering subunit 1242 configured to perform rendering of the rendered texture onto the current frame image.
In an embodiment of the present disclosure, the first rendering subunit 1241 is configured such that, when the current frame image is the first frame image, the stored texture is a blank texture.
In an embodiment of the present disclosure, the first rendering subunit 1241 is configured such that the size of the stored texture is consistent with the size of the video.
In the embodiment of the present disclosure, the first obtaining unit 121 is further configured to perform the step of removing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked based on the video recording start instruction, or replacing the stored texture with a blank texture, and continuing to perform the step of obtaining the current frame image and the key point to be tracked in the current frame image.
In the embodiment of the present disclosure, the first obtaining unit 121 is further configured to execute, based on the special effect removal instruction, removing the stored position information of the key point to be tracked and the auxiliary position information of the key point to be tracked, or replacing the stored texture with a blank texture, and displaying the newly obtained current frame image.
In the embodiment of the present disclosure, the first obtaining unit 121 is further configured to execute the step of obtaining the current frame image and the to-be-tracked key point in the current frame image based on the special effect restoration instruction.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The video special effect display device provided by the embodiment of the disclosure acquires, based on a key point track special effect opening instruction, the current frame image and the key point to be tracked in the current frame image; acquires the position information of the key point to be tracked in the current frame image and in the previous frame image to generate the auxiliary position information of the key point to be tracked; performs key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame and in each frame image before the current frame, together with the auxiliary position information generated each time; and displays the rendered current frame image, thereby realizing the display of the video special effect. By performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and in the previous frame image to generate the auxiliary position information, and then rendering the key point track according to both the position information and the auxiliary position information, the device ensures that the displayed track of the key point in the rendered current frame image is smooth and continuous, with no gaps. Furthermore, because key point track rendering is performed directly on the key points acquired in each frame image, the key point track can be rendered quickly, the technical problem of rendering stutter is avoided, and the time consumed by the video special effect display process is greatly shortened.
In order to implement the above embodiments, the present disclosure further provides an electronic device, as shown in fig. 14, where the electronic device 8000 includes: a processor 801; one or more memories 802 for storing instructions executable by the processor 801; the processor 801 is configured to execute the method for presenting a video special effect according to the above embodiment. The processor 801 and the memory 802 are connected by a communication bus.
To implement the above embodiments, the present disclosure also provides a storage medium comprising instructions, such as the memory 802 comprising instructions, executable by the processor 801 of the apparatus 1000 to perform the above method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for displaying a video special effect, comprising the following steps:
acquiring a current frame image and a key point to be tracked in the current frame image based on a key point track special effect opening instruction;
acquiring position information of the key point to be tracked in the current frame image;
generating auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in a previous frame image;
performing key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame image and in each frame image before the current frame image, and the auxiliary position information of the key point to be tracked generated each time; and
displaying the rendered current frame image.
2. The method according to claim 1, wherein the generating auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image comprises:
performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image to generate the auxiliary position information of the key point to be tracked.
3. The method according to claim 2, wherein the performing linear interpolation processing on the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image to generate the auxiliary position information of the key point to be tracked comprises:
acquiring a pixel distance between the key point to be tracked in the current frame image and the key point to be tracked in the previous frame image according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in the previous frame image;
determining, according to the pixel distance and a preset interpolation gap, the number of pieces of auxiliary position information of the key point to be tracked that need to be interpolated; and
determining the auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image, the position information of the key point to be tracked in the previous frame image, and the number of pieces of auxiliary position information of the key point to be tracked.
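The interpolation described in claims 2–3 can be sketched as follows. This is a minimal illustration under stated assumptions: the function name and the preset interpolation gap `gap_px` are hypothetical, and positions are taken as `(x, y)` pixel tuples.

```python
import math

def auxiliary_positions(prev_pos, cur_pos, gap_px=4.0):
    """Linearly interpolate auxiliary positions between two key-point positions.

    The number of interpolated points is derived from the pixel distance
    between the two positions and a preset interpolation gap, so that fast
    key-point motion still yields a visually continuous trail.
    """
    (x0, y0), (x1, y1) = prev_pos, cur_pos
    dist = math.hypot(x1 - x0, y1 - y0)   # pixel distance between the two frames
    n = int(dist // gap_px)               # number of auxiliary positions to insert
    return [(x0 + (x1 - x0) * i / (n + 1),
             y0 + (y1 - y0) * i / (n + 1)) for i in range(1, n + 1)]
```

When the key point barely moves (distance below the gap), no auxiliary positions are generated; the faster it moves, the more points are interpolated along the segment.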
4. The method according to claim 1, wherein the performing key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame image and in each frame image before the current frame image, and the auxiliary position information of the key point to be tracked generated each time, comprises:
rendering a stored texture according to the position information of the key point to be tracked in the current frame image and the auxiliary position information of the key point to be tracked generated this time, wherein the stored texture is generated by rendering according to the position information of the key point to be tracked in each frame image before the current frame image and the auxiliary position information of the key point to be tracked generated each time before the current frame image; and
rendering the rendered texture onto the current frame image.
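The texture-accumulation rendering of claim 4 can be sketched with NumPy arrays standing in for GPU textures. This is an illustrative assumption, not the claimed implementation: names are hypothetical, trail points are stamped as single pixels, and compositing simply lets trail pixels override the frame.

```python
import numpy as np

def render_track(frame, stored_texture, new_points, color=(0, 255, 0)):
    """Stamp newly generated trail points into the persistent stored texture,
    then composite that texture over the current frame image."""
    h, w = stored_texture.shape[:2]
    for x, y in new_points:                  # current position + auxiliary positions
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            stored_texture[yi, xi] = color   # accumulate into the stored texture
    # non-zero texture pixels (the trail) override the frame pixels
    mask = stored_texture.any(axis=-1, keepdims=True)
    return np.where(mask, stored_texture, frame)
```

Because the stored texture persists between calls, each new frame only needs the newly generated positions; the trail from all earlier frames is already baked into the texture, which mirrors the claim's avoidance of re-rendering the full history per frame.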
5. The method according to claim 1 or 4, further comprising:
clearing, based on a video recording start instruction, the stored position information of the key point to be tracked and the stored auxiliary position information of the key point to be tracked, or replacing the stored texture with a blank texture, and continuing to perform the step of acquiring a current frame image and a key point to be tracked in the current frame image.
6. The method according to claim 1 or 4, further comprising:
clearing, based on a special effect clearing instruction, the stored position information of the key point to be tracked and the stored auxiliary position information of the key point to be tracked, or replacing the stored texture with a blank texture, and displaying a newly acquired current frame image.
7. The method according to claim 6, further comprising:
continuing to perform, based on a special effect recovery instruction, the step of acquiring a current frame image and a key point to be tracked in the current frame image.
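The state-clearing behaviour shared by claims 5–7 amounts to resetting either the stored position data or the stored texture. A minimal sketch, assuming a hypothetical `state` dictionary whose field names are illustrative:

```python
def clear_track_state(state, use_texture=False):
    """Reset the trail per claims 5-6: either drop the stored position
    information, or replace the stored texture with a blank one."""
    if use_texture:
        h = len(state["texture"])
        w = len(state["texture"][0])
        state["texture"] = [[0] * w for _ in range(h)]   # blank texture
    else:
        state["positions"].clear()
        state["aux_positions"].clear()
```

After the reset, the method simply resumes at the acquiring step (or, for claim 6, displays the newly acquired frame without a trail).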
8. A device for displaying a video special effect, comprising:
a first acquiring unit configured to acquire, based on a key point track special effect opening instruction, a current frame image and a key point to be tracked in the current frame image;
a second acquiring unit configured to acquire position information of the key point to be tracked in the current frame image;
a generating unit configured to generate auxiliary position information of the key point to be tracked according to the position information of the key point to be tracked in the current frame image and the position information of the key point to be tracked in a previous frame image;
a rendering unit configured to perform key point track rendering on the current frame image according to the position information of the key point to be tracked in the current frame image and in each frame image before the current frame image, and the auxiliary position information of the key point to be tracked generated each time; and
a display unit configured to display the rendered current frame image.
9. An electronic device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method for displaying a video special effect according to any one of claims 1 to 8.
10. A storage medium having instructions stored therein which, when executed by a processor of an electronic device, enable the electronic device to perform the method for displaying a video special effect according to any one of claims 1 to 8.
CN202010624935.2A 2020-07-01 2020-07-01 Video special effect display method and device, electronic equipment and storage medium Active CN111954055B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010624935.2A CN111954055B (en) 2020-07-01 2020-07-01 Video special effect display method and device, electronic equipment and storage medium
PCT/CN2021/103299 WO2022002082A1 (en) 2020-07-01 2021-06-29 Method and apparatus for displaying video special effect, and electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010624935.2A CN111954055B (en) 2020-07-01 2020-07-01 Video special effect display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111954055A true CN111954055A (en) 2020-11-17
CN111954055B CN111954055B (en) 2022-09-02

Family

ID=73337861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010624935.2A Active CN111954055B (en) 2020-07-01 2020-07-01 Video special effect display method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111954055B (en)
WO (1) WO2022002082A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016034008A1 (en) * 2014-09-04 2016-03-10 华为技术有限公司 Target tracking method and device
WO2018202089A1 (en) * 2017-05-05 2018-11-08 商汤集团有限公司 Key point detection method and device, storage medium and electronic device
CN109068053A (en) * 2018-07-27 2018-12-21 乐蜜有限公司 Image special effect methods of exhibiting, device and electronic equipment
CN109688346A (en) * 2018-12-28 2019-04-26 广州华多网络科技有限公司 A kind of hangover special efficacy rendering method, device, equipment and storage medium
WO2019109650A1 (en) * 2017-12-06 2019-06-13 香港乐蜜有限公司 Video playing method and apparatus, and electronic device
CN110399844A (en) * 2019-07-29 2019-11-01 南京图玩智能科技有限公司 It is a kind of to be identified and method for tracing and system applied to cross-platform face key point

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003058892A (en) * 2001-08-09 2003-02-28 Techno Scope:Kk Device and method for processing animation
JP2012253492A (en) * 2011-06-01 2012-12-20 Sony Corp Image processing apparatus, image processing method, and program
CN109102530B (en) * 2018-08-21 2020-09-04 北京字节跳动网络技术有限公司 Motion trail drawing method, device, equipment and storage medium
CN109646957B (en) * 2018-12-19 2022-03-25 北京像素软件科技股份有限公司 Method and device for realizing tailing special effect
CN110035236A (en) * 2019-03-26 2019-07-19 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110047124A (en) * 2019-04-23 2019-07-23 北京字节跳动网络技术有限公司 Method, apparatus, electronic equipment and the computer readable storage medium of render video
CN111954055B (en) * 2020-07-01 2022-09-02 北京达佳互联信息技术有限公司 Video special effect display method and device, electronic equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022002082A1 (en) * 2020-07-01 2022-01-06 北京达佳互联信息技术有限公司 Method and apparatus for displaying video special effect, and electronic device and storage medium
CN112929582A (en) * 2021-02-04 2021-06-08 北京字跳网络技术有限公司 Special effect display method, device, equipment and medium
WO2022166872A1 (en) * 2021-02-04 2022-08-11 北京字跳网络技术有限公司 Special-effect display method and apparatus, and device and medium
CN113160244A (en) * 2021-03-24 2021-07-23 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113160244B (en) * 2021-03-24 2024-03-15 北京达佳互联信息技术有限公司 Video processing method, device, electronic equipment and storage medium
WO2022199102A1 (en) * 2021-03-26 2022-09-29 北京达佳互联信息技术有限公司 Image processing method and device
WO2023098617A1 (en) * 2021-12-03 2023-06-08 北京字节跳动网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2022002082A1 (en) 2022-01-06
CN111954055B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN111954055B (en) Video special effect display method and device, electronic equipment and storage medium
US9681201B2 (en) Comment information generating apparatus and comment information generating method
US9398349B2 (en) Comment information generation device, and comment display device
GB2553991A (en) Tracking support apparatus, tracking support system, and tracking support method
CN108605115B (en) Tracking assistance device, tracking assistance system, and tracking assistance method
KR20130129458A (en) Dynamic template tracking
CN104077024A (en) Information processing apparatus, information processing method, and recording medium
CN112118395B (en) Video processing method, terminal and computer readable storage medium
WO2021254223A1 (en) Video processing method, apparatus and device, and storage medium
CN109960452B (en) Image processing method, image processing apparatus, and storage medium
US10846535B2 (en) Virtual reality causal summary content
WO2020236949A1 (en) Forensic video exploitation and analysis tools
US11334621B2 (en) Image search system, image search method and storage medium
CN111768433B (en) Method and device for realizing tracking of moving target and electronic equipment
CN113194253A (en) Shooting method and device for removing image reflection and electronic equipment
TWM506428U (en) Display system for video stream on augmented reality
KR101308184B1 (en) Augmented reality apparatus and method of windows form
JP6405606B2 (en) Image processing apparatus, image processing method, and image processing program
EP4191529A1 (en) Camera motion estimation method for augmented reality tracking algorithm and system therefor
JP3907344B2 (en) Movie anchor setting device
CN112367487A (en) Video recording method and electronic equipment
CN113949926A (en) Video frame insertion method, storage medium and terminal equipment
US10321089B2 (en) Image preproduction apparatus, method for controlling the same, and recording medium
CN116820251B (en) Gesture track interaction method, intelligent glasses and storage medium
CN110276841B (en) Motion trail determination method and device applied to augmented reality equipment and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant