CN113068072A - Video playing method, device and equipment - Google Patents

Video playing method, device and equipment

Info

Publication number
CN113068072A
CN113068072A
Authority
CN
China
Prior art keywords
video
image
target
playing
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110342298.4A
Other languages
Chinese (zh)
Inventor
汪谷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110342298.4A priority Critical patent/CN113068072A/en
Publication of CN113068072A publication Critical patent/CN113068072A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The disclosure relates to a video playing method, which includes: in response to an instruction to add an object to a video, displaying a region selection interface on the current frame image of the video; determining a selected region in the current frame image according to input information on the region selection interface; adding, in the selected region, the object used for presenting a preset display effect; and playing the video with the object added, so that the display effect can be presented locally during playback of the video.

Description

Video playing method, device and equipment
Technical Field
The present disclosure relates to the field of video processing, and in particular, to a method, an apparatus, and a device for playing a video.
Background
With the development of intelligent terminals and self-media, video software has become increasingly common. To enhance the appeal of videos, video software generally provides personalized editing materials and display effects for video publishers.
However, the video editing functions provided by current video software have several limitations. First, a material can only be applied to the entire picture; the user cannot choose the region in which it is used. Second, materials implemented from templates can only be placed at preset, fixed positions in the picture, so videos with added materials look overly templated. Third, the variety of materials and display effects is small, so a user may reuse the same material and display effect when publishing videos, which makes the resulting display effect overly simple.
As can be seen, the functions of video software remain to be further improved.
Disclosure of Invention
The present disclosure provides a video playing method, apparatus, and device, so as to at least solve the problem in the related art of how to improve the functions of video software. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for playing a video, including:
in response to an instruction to add an object to a video, displaying a region selection interface on a current frame image of the video, the region selection interface being used to select a region from the current frame image of the video;
determining a selected area in the current frame image according to the input information on the area selection interface;
adding the object in the selected area, wherein the object is used for presenting a preset display effect;
and playing the video added with the object.
Optionally, the adding the object in the selected area includes:
identifying a target in the selected region;
adding the object on the target.
Optionally, the identifying the target in the selected region includes:
when a plurality of targets are recognized from the selected region, taking the target having the largest display-area ratio in the selected region as the recognized target.
Optionally, before the playing the video to which the object is added, the method further includes:
determining, among the image frames of the video, image frames satisfying a preset condition as candidate image frames;
identifying the target of the selected region in the candidate image frames;
taking a candidate image frame in which the target is identified as a target image frame;
adding the object on the target in the target image frame.
Optionally, the determining, among the image frames of the video, an image frame that meets a preset condition as a candidate image frame includes:
taking the image frames of the video that correspond to a predetermined target time window as the candidate image frames.
Optionally, the determination manner of the target time window includes at least one of the following:
based on a trigger operation in which the user selects a time window;
based on the preset display duration of the object;
based on a display timestamp of the current frame image.
Optionally, the determining the target time window based on the display timestamp of the current frame image and the preset display duration of the object includes:
when the time length between the current frame image and the last frame image of the video is less than the preset display duration, determining the target time window based on the time length from the current frame image to the last frame image.
Optionally, after the adding the object in the selected area, the method further includes:
if the instruction to add an object to the video is received again, re-displaying the region selection interface on the current frame image.
Optionally, before the playing the video to which the object is added, the method further includes:
receiving configuration data of a video display style, wherein the configuration data is used for presenting the video in a preset style.
According to a second aspect of the embodiments of the present disclosure, there is provided a video playing apparatus, including:
a display unit configured to, in response to an instruction to add an object to a video, display, on the current frame image of the video, a region selection interface for selecting a region from the current frame image of the video;
the determining unit is configured to determine a selected area in the current frame image according to input information on the area selection interface;
an adding unit configured to add the object in the selected region, the object being used to present a preset display effect;
and the playing unit is configured to play the video added with the object.
Optionally, the adding unit is specifically configured to identify a target in the selected region and add the object on the target.
Optionally, the adding unit is specifically configured to, when a plurality of targets are recognized from the selected region, take the target having the largest display-area ratio in the selected region as the recognized target.
Optionally, the apparatus further comprises:
an identifying unit configured to, before the playing unit plays the video with the object added, determine, among the image frames of the video, image frames satisfying a preset condition as candidate image frames; identify the target of the selected region in the candidate image frames; and take a candidate image frame in which the target is identified as a target image frame;
the adding unit is further configured to:
adding the object on the target of the target image frame.
Optionally, the identifying unit is specifically configured to use an image frame in the video corresponding to a predetermined target time window as the candidate image frame.
Optionally, the apparatus further comprises:
a configuration unit configured to determine the target time window using at least one of:
based on a trigger operation in which the user selects a time window;
based on the preset display duration of the object;
based on a display timestamp of the current frame image.
Optionally, the configuration unit is specifically configured to, when a time length between the current frame image and a last frame image of the video is less than the preset display duration, determine the target time window based on the time length from the current frame image to the last frame image.
Optionally, the apparatus further includes:
an interface switching unit configured to, after the adding unit adds the object in the selected area, re-display the area selection interface on the current frame image if the instruction to add an object to a video is received again.
Optionally, the apparatus further comprises:
a receiving unit configured to receive configuration data of a video display style before the playing unit plays the video to which the object is added, wherein the configuration data is used for presenting the video in a preset style.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video playing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions of the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the aforementioned video playing method.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions, wherein the computer programs/instructions, when executed by a processor, implement the aforementioned video playing method.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the method comprises the steps of displaying an area selection interface in a current frame image of a video based on an instruction for adding an object to the video, determining a selected area in the current frame image according to input information on the area selection interface, adding the object for presenting a preset display effect into the selected area, and playing the video of the added object, so that the display effect can be locally presented in the playing process of the video.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a method of playing a video according to an exemplary embodiment;
FIG. 2 is an exemplary diagram of responding to an instruction to add an object in a video playing method disclosed herein;
FIG. 3 is a flow diagram illustrating a method of playing a video according to an exemplary embodiment;
FIG. 4 is a flow diagram illustrating a method of playing a video in accordance with an exemplary embodiment;
FIG. 5 is a flow diagram illustrating a method of playing a video in accordance with an exemplary embodiment;
FIGS. 6a-6e are exemplary diagrams illustrating interfaces in the flow of performing a video playing method according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating a video playback device in accordance with an exemplary embodiment;
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The video playing method disclosed in the embodiments of the present application can be applied to, but is not limited to, a client of video software, with the aim of giving the video software richer functions.
Fig. 1 is a flowchart illustrating a video playing method according to an exemplary embodiment, and as shown in fig. 1, the method includes the following steps:
In step S11, in response to an instruction to add an object to the video, a region selection interface is displayed on the current frame image of the video.
The area selection interface is used for selecting an area from a current frame image of the video.
It will be appreciated that a video is typically a time-ordered sequence of image frames; thus, the region selection interface may be used to select a region from at least one image frame. The current frame image is the frame image currently displayed. Generally, it defaults to the first frame image of the video or a randomly determined frame image, and it may also be any frame image designated by the user. In addition, this embodiment does not limit the form of the region selection interface; reference may be made to the prior art, and examples are given in the following embodiments.
In this embodiment, as shown in fig. 2, after receiving an instruction to add an object to a video, a region selection interface is displayed.
In step S12, the selected region in the current frame image is determined according to the input information on the region selection interface.
It is understood that the input information on the region selection interface may be information input by the user operating the region selection interface. For example, the user draws an outline on the region selection interface; the coordinates of the points on the outline are the input information, and the closed region enclosed by those points is the selected region. Determining the selected region in the current frame image may be understood as displaying the outline of the selected region; the display position and form of that outline are not limited.
In step S13, the object is added in the selected area.
In this embodiment, the object is used to present a preset display effect, which may be configured in advance by the user. For example, a sticker with a gust effect is added in the selected region. For the specific implementation of the preset display effect, refer to the prior art; it is not described in detail here.
In step S14, the video to which the object is added is played.
Because the object was added in the selected region in the above steps, during playback of the video the preset display effect presented by the object is shown in the selected region of the image frames to which the object was added.
Unlike prior-art methods of adding an object to a video, the playing method described in this embodiment allows a display effect to be added to a selected part of the image frame, which expands how videos can be played and gives the video software richer functions.
Fig. 3 shows a video playing method according to an exemplary embodiment. Compared with the above embodiment, a target is further identified within the selected region to achieve finer-grained addition of the object.
Fig. 3 includes the following steps:
and S31, responding to the instruction of adding the object to the video, and displaying an area selection interface in the current frame image of the video display.
And S32, determining the selected area in the current frame image according to the input information on the area selection interface.
And S33, identifying the target in the selected area.
An image may include a background and a target (for the definitions, refer to the prior art); therefore, in this embodiment, after the region is selected, the target may be further identified within the region. The aim is to select the display area for the display effect at a finer granularity, thereby achieving finer-grained control of the display effect.
It will be appreciated that the identified target may be displayed; further, the selected region and the identified target may be displayed simultaneously. For example, a mask is displayed on the image frame, and within the mask the selected region and the target within it are displayed distinctively.
It will be appreciated that the types of targets may be pre-configured, and the recognition algorithm may follow the prior art.
Optionally, when a plurality of targets are recognized from the selected region, the target having the largest display-area ratio in the selected region is taken as the recognized target. For example, when a plurality of birds are recognized from the selected region, the bird whose display-area ratio in the selected region is largest is taken as the recognized target. The aim is to add the object at a fine granularity and to avoid a cluttered display effect caused by adding the object.
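A minimal sketch of this selection rule, assuming the region and each candidate target are represented as sets of pixel coordinates (all names here are illustrative, not from the disclosure):

```python
def pick_target(region_mask, target_masks):
    """Return the name of the target whose display-area ratio within the
    selected region is largest; None if no target overlaps the region."""
    best_name, best_ratio = None, 0.0
    for name, mask in target_masks.items():
        # Pixels of this target that fall inside the selected region.
        overlap = len(mask & region_mask)
        ratio = overlap / len(region_mask)
        if ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name
```

In the bird example above, the bird covering the larger share of the drawn region would be returned as the recognized target.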
S34: the object is added on the target.
It will be appreciated that one common implementation of adding an object on a target is to superimpose the object on the target.
Alternatively, if no target is identified in the selected region, the selected region itself may be taken as the target.
S35: the video with the added object is played.
In this embodiment, since the target is further recognized from the selected region, a display effect can be added to the target in the video, achieving finer-grained control of the display effect than region-level control.
One example illustrating the difference between adding the object to the selected region and adding it to a target: adding the object to the selected region enables tracking of, say, a face region, so that the face region is displayed with a highlight effect in all image frames of the video; adding the object to a target allows the eyes in the face to be further recognized, so that the eyes are displayed with the highlight effect in all image frames, achieving fine-grained control of the display effect. Further, because the eye region is small, the user may find it difficult to select directly; as long as the target type is configured in advance, the target is recognized from the region, so the user only needs to select a region that includes the eyes, which makes the operation more convenient.
It will be appreciated that it is also possible to track both the selected region and the target. In that case, if the target appears outside the selected region, another display effect different from the above may be presented, making the software more functional.
As described above, a video is composed of a plurality of image frames; therefore, to facilitate user operation, recognition technology can be used to add the object to a plurality of image frames based on the region the user selected in a single image frame.
Fig. 4 shows a video playing method according to an exemplary embodiment, which includes the following steps:
in step S41, in response to an instruction to add an object to a video, an area selection interface is displayed in a current frame image of the video.
In step S42, the selected region in the current frame image is determined according to the input information on the region selection interface.
In step S43, the target in the selected region is identified.
In step S44, an image frame satisfying a preset condition among the image frames of the video is determined as a candidate image frame.
In this embodiment, the preset condition corresponds to a target time window; that is, during playback, the image frames played within the target time window are the candidate image frames. The target time window may be determined in advance.
Optionally, the target time window may be determined in any one or a combination of the following manners:
1. a trigger operation based on a user selected time window.
That is, an interactive interface may be displayed; the user selects a time window on the interactive interface, and the time window selected by the user is taken as the target time window. For example, a sliding window of fixed length is displayed on the interactive interface, and the user can slide the window over the video's image frame sequence to select its start and end positions, thereby determining the target time window. This approach determines the target time window through human-computer interaction and offers high flexibility.
2. Based on the preset display duration of the object.
Since the object is added to the video, the display duration of the object itself should be considered so that the object can be presented in full. Optionally, the length of the target time window must be no less than the display duration of the object itself (since the object is generated by pre-configuration, this duration may be called the preset display duration).
3. Based on the display time stamp of the current frame image.
Specifically, since it has been determined that the object is added from the current frame image, the current frame image may serve as the starting, middle, or end position of the object's presentation; that is, the current frame image may be the first, a middle, or the last frame image within the target time window, which improves the continuity of the object in the video. In practice, the position of the current frame image within the target time window may be configured in advance.
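As an illustrative sketch (not the patented implementation), the three anchoring choices can be expressed as a small helper; timestamps are in seconds, and all names are assumptions:

```python
def window_from_anchor(anchor_ts, duration, position="first"):
    """Compute (start, end) of the target time window from the display
    timestamp of the current frame image and the object's preset display
    duration. `position` says where the current frame sits in the window."""
    if position == "first":    # current frame is the first frame of the window
        return (anchor_ts, anchor_ts + duration)
    if position == "middle":   # current frame is centred in the window
        return (anchor_ts - duration / 2, anchor_ts + duration / 2)
    if position == "last":     # current frame is the last frame of the window
        return (anchor_ts - duration, anchor_ts)
    raise ValueError(f"unknown position: {position}")
```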
The above three points determine the target time window along three different dimensions. It can be understood that they may be used alone or in combination; several combinations are exemplified below:
1. based on the trigger operation of the time window selected by the user and the preset display duration of the object:
In this case, a lower limit must be set on the user-selected duration; that is, the user-selected duration must be no less than the preset display duration of the object.
For example, a sliding window with a fixed minimum length is displayed on the interactive interface, the minimum length being the preset display duration of the object. The user may slide the window over the video's image frame sequence to select its start and end positions; the image frames included in the window are the candidate image frames. In addition, the user may lengthen the window so that the target time window exceeds the preset display duration of the object, but may not shorten it below that duration. In summary, the user determines the target time window by operating a sliding window with a fixed minimum length.
2. Based on the preset display duration of the object and the display timestamp of the current frame image.
In this case, to ensure that the length of the target time window is not less than the preset display duration of the object, the time length between the current frame image and the last frame image of the video must be no less than the preset display duration of the object.
When the time length between the current frame image and the last frame image of the video is less than the preset display duration of the object, the target time window is determined based on the time length from the current frame image to the last frame image. One example: the display time from the current frame image to the last frame image is taken as the target time window. In this way, the object can still be added from the frame designated by the user while keeping the display duration adapted to the object.
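The end-of-video adaptation just described amounts to clamping the window; a hedged sketch, with illustrative function and parameter names:

```python
def clamp_window(anchor_ts, preset_duration, video_end_ts):
    """Target time window starting at the current frame image. If the time
    from the current frame to the last frame is shorter than the object's
    preset display duration, the window shrinks to what remains."""
    remaining = video_end_ts - anchor_ts
    if remaining < preset_duration:
        # Window runs from the current frame image to the last frame image.
        return (anchor_ts, video_end_ts)
    return (anchor_ts, anchor_ts + preset_duration)
```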
In step S45, the target identified in the selected region is identified in the candidate image frames.
In step S46, the candidate image frame in which the target is identified is set as the target image frame.
In step S47, an object is added to the target in the target image frame and the current frame image.
In step S48, the video to which the object is added is played.
The flow shown in Fig. 4 adds the object to the target in a plurality of image frames through tracking technology, achieving batch application of the display effect. In addition, the object can be added to a plurality of image frames with a single selection of the region, which is highly convenient.
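The batch-addition flow of Fig. 4 can be sketched as follows, with frames as (timestamp, image) pairs and a pluggable detector. `overlay` is a stand-in for real compositing, and all names are assumptions for illustration:

```python
def overlay(image, sticker, box):
    # Placeholder for real compositing: a client would alpha-blend the
    # sticker texture over the target's bounding box in the decoded frame.
    return f"{image}+{sticker}@{box}"

def add_sticker(frames, window, detect, sticker):
    """frames: list of (timestamp, image). Frames inside the target time
    window are the candidate frames; the sticker is added only to candidates
    in which the target is recognized (the target image frames). All other
    frames are passed through unchanged."""
    start, end = window
    out = []
    for ts, image in frames:
        box = detect(image) if start <= ts <= end else None
        out.append((ts, overlay(image, sticker, box) if box else image))
    return out
```

This mirrors the behavior described in the embodiment: candidate frames without the target, and frames outside the window, play as the original frames.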
In the following, the application of the video playing method disclosed herein to video software is described from the perspective of the software's interactive interface.
Specifically, a new type of sticker is added to the sticker function of existing video social software; for convenience of description, this new sticker is hereinafter referred to as a special-effect sticker. Before the following flow, it is assumed that, based on the user's operations, the software has been triggered to enter a sticker function interface in which icons of the various sticker categories are displayed, such as the "Special effects", "Trending", "Holiday", and "Record" categories shown in Fig. 6a. The user may select the icon of any category; after it is clicked, the software displays the icons of the stickers under that category. In this embodiment, it is assumed that the user clicks the special-effect sticker category. For example, in Fig. 6a, the stickers under the "Special effects" category are "antique style", "frozen time", "cartoon", "hand drawing", "haha mirror", and so on. The user may select the icon of any sticker; after it is clicked, the software receives an instruction to add that sticker, as the object, to the video. In this embodiment, it is assumed that the user clicks "antique style".
Fig. 5 shows a video playing method according to an exemplary embodiment, which includes the following steps:
in step S51, in response to an instruction to add a special effect sticker to a video, an area selection interface is displayed on a current frame image of the video.
Optionally, in Fig. 6b, a mask is displayed on the image frame, with guidance information displayed on the mask: "Draw the region you want the effect to act on in the picture." Further, a "Cancel" button is displayed in the lower-left corner of the mask; clicking it exits the drawing state and returns to the interface shown in Fig. 6a. A "Confirm add" button is displayed in the lower-right corner of the mask; it cannot be clicked while no pattern has been drawn.
Because the region selection interface is displayed on a single image frame, the content of the video playback area in the interface is static, which makes it convenient for the user to perform the selection operation.
As previously mentioned, in this embodiment, the user has selected the "antique" special effect sticker.
In step S52, according to the input information generated by the user drawing a region on the region selection interface, the selected region is displayed on the image frame, i.e., the selected region is determined, as shown in Fig. 6c.
It will be appreciated that drawing starts when the user's finger touches the screen and ends when the finger leaves the screen; if the user does not draw a closed pattern, the start point and the end point are connected to close it. After the user finishes a drawing, the drawn curve is displayed on the region selection interface. After the user clicks the "Confirm add" button, the following steps are executed; after the user clicks the "Cancel" button, the interface shown in Fig. 6a is returned to.
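The closing rule described here, sketched on a list of sampled touch points (names are illustrative):

```python
def close_contour(points):
    """Points sampled from when the finger touches the screen until it
    leaves. If the drawn stroke is not already closed, the end point is
    connected back to the start point."""
    pts = list(points)
    if pts and pts[0] != pts[-1]:
        pts.append(pts[0])
    return pts
```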
In step S53, the target is obtained by performing target recognition on the selected region.
The target identified in fig. 6c is a cat head.
It will be appreciated that the user may select the display effect either before the selected region is drawn, as shown in fig. 6b, or after it is drawn, as shown in fig. 6 c.
It will also be appreciated that, before proceeding to the next step, the user may change the sticker without the selected region changing.
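The target recognition of step S53, together with the multi-target case (taking the target with the largest display area ratio in the selected region, as in claim 3), can be sketched as follows. The axis-aligned box representation and the function names are illustrative assumptions; an actual embodiment would run a detector on the frame and use the drawn region's real shape rather than a bounding box.

```python
def pick_target(detections, region_box):
    """Among candidate detections, return the one whose display area
    inside the selected region is largest (the multi-target case).

    `detections` and `region_box` are axis-aligned boxes given as
    (x1, y1, x2, y2). Illustrative sketch only.
    """
    def overlap_area(a, b):
        # width/height of the intersection, clamped at zero
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(w, 0) * max(h, 0)

    best, best_area = None, 0
    for det in detections:
        area = overlap_area(det, region_box)
        if area > best_area:
            best, best_area = det, area
    return best  # None if no detection overlaps the region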
In step S54, a plurality of image frames in the target time window selected by the user are acquired as candidate image frames.
In fig. 6d, the image frame selection interface and the image frame with the sticker added are displayed simultaneously; the user determines a target time window through a sliding window in the image frame selection interface, thereby selecting a plurality of image frames. The configuration of the sliding window is as described in the above embodiments and is not repeated here.
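The selection of candidate image frames by a target time window, as in step S54, can be sketched as follows; the function name and the use of per-frame timestamps in seconds are illustrative assumptions.

```python
def frames_in_window(frame_timestamps, window_start, window_end):
    """Return indices of the image frames whose timestamps fall inside
    the user-selected target time window; these become the candidate
    image frames. Illustrative sketch only."""
    return [i for i, t in enumerate(frame_timestamps)
            if window_start <= t <= window_end]
```

Frames outside the window are simply not candidates and therefore play unmodified, consistent with step S56 below.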
In step S55, when the instruction to add an object to the video is received again, the area selection interface is redisplayed in the current frame image.
Taking fig. 6d as an example again: while the image frame with the antique sticker added is displayed, if the user clicks the antique sticker icon in the sticker list again, this indicates that the user needs to redraw the selected region. As can be seen from fig. 6d, by returning to the interface of fig. 6b, step S55 facilitates updating the display effect.
It will be appreciated that, in fig. 6d, if the user clicks on the selected region (and/or the target), the boundary line of the region may be displayed, as shown in fig. 6e; a delete marker may also be displayed, and the user clicks it to delete the special-effect sticker.
In step S56, after the user completes the setting of the special-effect sticker, the target is recognized in the candidate image frames; after the sticker is added to the target, the video with the special-effect sticker added is played.
Specifically, in the plurality of image frames selected by the user, the antique special effect is added to the cat-head region; if a certain one of the selected image frames contains no cat head, that original image frame is played. Image frames not selected by the user are likewise played as the original frames.
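The per-frame playback logic of step S56 described above can be sketched as follows; `apply_effect` stands for a hypothetical routine that draws the special-effect sticker on the recognized target, and all names here are illustrative assumptions.

```python
def render_frame(frame, in_window, target_found, apply_effect):
    """Per-frame playback decision of step S56: within the selected
    time window, frames in which the target is recognized receive the
    special effect; every other frame is played unchanged.
    Illustrative sketch only."""
    if in_window and target_found:
        return apply_effect(frame)
    return frame  # play the original image frame
```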
As can be seen from the flow shown in fig. 5, the video playing method described in this embodiment combines with the sticker function of existing video social software, so the sticker function is expanded and richer stickers are obtained.
Compared with existing stickers, the special-effect sticker provides special effects that existing stickers do not have.
It should be noted that, in the above embodiment, before playing the video to which the object is added, the method may further include: receiving configuration data of a video display style, the configuration data being used to present the video with a preset style. The preset style may be a style applied to the entire picture of the video.
The display effect may be an existing display effect that can be applied to a whole image frame. For example, each special effect under the special-effect sticker may be a special effect that the existing video social software applies to the whole image frame. In the present disclosure, these special effects can also be applied locally. On the one hand, this preserves the individuality of the video in addition to the special effects provided by the software. On the other hand, by combining the region or the target with various special effects, more display modes can be realized, such as a two-dimensional picture coexisting with a three-dimensional picture, or part of the picture remaining static while the rest moves. In a third aspect, display effects currently applied to the whole picture, including special effects, magic effects, props, and the like, can be used in combination with the selected region, so the method of the present disclosure has high compatibility with the prior art and is easy to implement.
In conclusion, the method of the present disclosure helps foster personalized videos, giving the software a better ecosystem and users a better experience.
Fig. 7 is a block diagram illustrating a video playback device according to an example embodiment. Referring to fig. 7, the apparatus includes a display unit 71, a determination unit 72, an adding unit 73, and a playing unit 74.
The display unit 71 is configured to display, in response to an instruction to add an object to a video, a region selection interface in a current frame image of the video, the region selection interface being used to select a region from the current frame image.
The determining unit 72 is configured to determine the selected region in the current frame image according to the input information on the region selection interface.
The adding unit 73 is configured to add the object for presenting a preset display effect in the selected region.
The playing unit 74 is configured to play the video to which the object is added.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The video playing device described above thus supports adding and displaying richer display effects.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment, which includes a processor 81 and a memory 82, and optionally, a communication interface 83 and a communication bus 84.
There is at least one of each of the processor 81, the communication interface 83, the memory 82, and the communication bus 84, and the processor 81, the communication interface 83, and the memory 82 communicate with one another through the communication bus 84.
The processor 81 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present disclosure.
The memory 82 may include a high-speed RAM memory and may further include a non-volatile memory, such as at least one disk memory. The memory is used to store the processor-executable instructions, and the processor is configured to execute the instructions to implement the video playing method according to the above embodiments.
In an exemplary embodiment, there is also provided a computer-readable storage medium comprising instructions, such as the memory 82 comprising instructions, which are executable by the processor 81 of the electronic device to perform the above-described method. Alternatively, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program/instructions which, when executed by a processor, implement the method of playing a video according to the above-described embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for playing a video, comprising:
in response to an instruction to add an object to a video, displaying a region selection interface in a current frame image of the video, the region selection interface being used for selecting a region from the current frame image;
determining a selected area in the current frame image according to the input information on the area selection interface;
adding the object in the selected area, wherein the object is used for presenting a preset display effect;
and playing the video added with the object.
2. The method of claim 1, wherein the adding the object in the selected region comprises:
identifying a target in the selected region;
adding the object on the target.
3. The method of claim 2, wherein the identifying the target in the selected region comprises:
when a plurality of targets are recognized from the selected region, taking the target having the largest display area ratio in the selected region as the recognized target.
4. The method according to claim 1, further comprising, before the playing the video after adding the object:
determining image frames meeting preset conditions as candidate image frames in the image frames of the video;
identifying objects in the selected region in the candidate image frames;
taking the candidate image frame identified to the target as a target image frame;
adding the object on the target of the target image frame.
5. The method according to claim 4, wherein the determining, among the image frames of the video, an image frame satisfying a preset condition as a candidate image frame comprises:
and taking the image frame corresponding to a predetermined target time window in the video as the candidate image frame.
6. The method of claim 5, wherein the target time window is determined in a manner that includes at least one of:
a trigger operation based on a user selected time window;
based on the preset display duration of the object;
based on a display timestamp of the current frame image.
7. A video playback apparatus, comprising:
a display unit configured to display, in response to an instruction to add an object to a video, a region selection interface in a current frame image of the video, the region selection interface being used for selecting a region from the current frame image;
the determining unit is configured to determine a selected area in the current frame image according to input information on the area selection interface;
an adding unit configured to add the object in the selected region, the object being used to present a preset display effect;
and the playing unit is configured to play the video added with the object.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of playing a video according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform a method of playing a video according to any one of claims 1 to 6.
10. A computer program product comprising computer programs/instructions, characterized in that said computer programs/instructions, when executed by a processor, implement the method of playing a video according to any of claims 1-6.
CN202110342298.4A 2021-03-30 2021-03-30 Video playing method, device and equipment Pending CN113068072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110342298.4A CN113068072A (en) 2021-03-30 2021-03-30 Video playing method, device and equipment


Publications (1)

Publication Number Publication Date
CN113068072A 2021-07-02


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060093309A1 (en) * 2004-10-05 2006-05-04 Magix Ag System and method for creating a photo movie
CN104796594A (en) * 2014-01-16 2015-07-22 中兴通讯股份有限公司 Preview interface special effect real-time presenting method and terminal equipment
CN106385591A (en) * 2016-10-17 2017-02-08 腾讯科技(上海)有限公司 Video processing method and video processing device
CN109754383A (en) * 2017-11-08 2019-05-14 中移(杭州)信息技术有限公司 A kind of generation method and equipment of special efficacy video
CN111242881A (en) * 2020-01-07 2020-06-05 北京字节跳动网络技术有限公司 Method, device, storage medium and electronic equipment for displaying special effects
CN111757175A (en) * 2020-06-08 2020-10-09 维沃移动通信有限公司 Video processing method and device
CN112040263A (en) * 2020-08-31 2020-12-04 腾讯科技(深圳)有限公司 Video processing method, video playing method, video processing device, video playing device, storage medium and equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210702