CN115002359A - Video processing method and device, electronic equipment and storage medium

Info

Publication number: CN115002359A
Application number: CN202210567327.1A
Authority: CN (China)
Prior art keywords: image, background plate, angle, displayed, target object
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 卢智雄, 王胜男
Current Assignee: Beijing Zitiao Network Technology Co Ltd
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210567327.1A
Publication of CN115002359A
Priority to PCT/CN2023/094315 (WO2023226814A1)
Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N 5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Abstract

The embodiment of the disclosure provides a video processing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a special effect trigger operation, extracting a target object in a video frame to be processed; and fusing the target object with an image background plate comprising at least one image to be displayed to obtain and display a special effect video frame, wherein the display content and/or the display angle of the image background plate relative to the target object changes dynamically. The technical solution provided by the embodiment of the disclosure solves the problems in the prior art that special-effect video content cannot meet users' personalized requirements, resulting in poor video picture content and poor user experience. Because the image background plate is generated based on the images to be displayed selected by the user, the personalized display of the background image is achieved, which in turn can improve the attractiveness of the application software to users and user stickiness.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of network technology, more and more applications have become part of users' lives; in particular, software for shooting short videos is widely favored by users.
To make video shooting more interesting, related application software can provide users with multiple special-effect video production functions. However, the special-effect functions currently provided are quite limited, the interest of the resulting special-effect videos still needs to be improved, and the personalized requirement of users who wish to change the background picture in the video is not considered, which reduces the user experience.
Disclosure of Invention
The present disclosure provides a video processing method and apparatus, an electronic device, and a storage medium, so as to enrich video content while allowing the background picture to meet personalized requirements.
In a first aspect, an embodiment of the present disclosure provides a video processing method, where the method includes:
responding to special effect triggering operation, and extracting a target object in a video frame to be processed;
generating an image background plate comprising at least one image to be displayed;
fusing the target object and the image background plate to obtain a special-effect video frame and displaying the special-effect video frame;
wherein the display content and/or the display angle of the image background plate relative to the target object are dynamically changed.
In a second aspect, an embodiment of the present disclosure further provides a video processing apparatus, including:
the object extraction module is used for responding to the special effect trigger operation and extracting a target object in the video frame to be processed;
the background plate generation module is used for generating an image background plate comprising at least one image to be displayed;
the video generation module is used for fusing the target object and the image background plate to obtain and display a special-effect video frame;
wherein the display content and/or the display angle of the image background plate relative to the target object are dynamically changed.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a video processing method as in any of the embodiments of the present disclosure.
In a fourth aspect, the embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the video processing method according to any one of the embodiments of the present disclosure.
According to the technical solution provided by the embodiment of the present disclosure, after the target object in the video frame to be processed is extracted in response to the special effect trigger operation and an image background plate comprising at least one image to be displayed is generated, the target object and the image background plate can be fused to obtain and display the special effect video frame until an operation of stopping special-effect video shooting is received. This solves the problems in the prior art that special-effect video content cannot meet users' personalized requirements, resulting in poor video picture content and poor user experience. Because the image background plate is generated based on the images to be displayed selected by the user, the personalized display of the background image is achieved, which in turn can improve the attractiveness of the application software to users and their satisfaction in use.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
fig. 2 is a schematic diagram illustrating an effect of a special effect video frame provided by an embodiment of the disclosure;
fig. 3 is a schematic flow chart of a video processing method according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will recognize that they should be understood as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require the acquisition and use of the user's personal information. The user can then autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation manner, in response to receiving an active request from the user, the manner of sending the prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in a text manner in the pop-up window. In addition, a selection control for providing personal information to the electronic device by the user's selection of "agreeing" or "disagreeing" can be carried in the pop-up window.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
Before the technical solution is introduced, an application scenario of the embodiments of the present disclosure may be described by way of example. For instance, when a user shoots a video through application software or makes a video call with another user, the user may want the captured video to be more interesting, and may also have personalized requirements for the picture of the special-effect video; for example, some users want to replace the background of the video picture with specific content. The background plate may be generated based on previously uploaded images or previously captured video frames. That is to say, the background plate is formed by splicing a plurality of images and can be understood as a photo wall.
Fig. 1 is a schematic flow chart of a video processing method provided by an embodiment of the present disclosure, where the embodiment of the present disclosure is applicable to a situation of generating a special-effect video, and the method may be executed by a video processing apparatus, and the apparatus may be implemented in a form of software and/or hardware, and optionally implemented by an electronic device, and the electronic device may be a mobile terminal, a PC terminal, or a server. As shown in fig. 1, the method includes:
and S110, responding to the special effect trigger operation, and extracting the target object in the video frame to be processed.
The apparatus for executing the video processing method provided by the embodiment of the present disclosure may be integrated into application software that supports a special-effect video processing function, and the software may be installed in an electronic device; optionally, the electronic device may be a mobile terminal, a PC terminal, or the like. The application software may be any type of image/video processing software, and the specific application software is not described further here, as long as it can implement image/video processing. It may also be a specially developed application program that implements the addition and display of special effects, or it may be integrated in a corresponding page, and the user can process the special-effect video through the page integrated in the PC terminal.
It should be noted that the technical solution of this embodiment may be executed during real-time shooting on a mobile terminal, or after the system receives video data actively uploaded by a user. For example, when a user shoots a video in real time based on the camera apparatus of the terminal device, the application software detects the special effect trigger operation and responds to it, acquiring the uploaded images and processing the video currently being shot to obtain the special effect video. Alternatively, when the user actively uploads image data through the application software and performs the special effect trigger operation, the application likewise responds to the operation and processes the actively uploaded image data after acquiring the uploaded images, so as to obtain the special effect video.
In the embodiment of the disclosure, responding to the special effect triggering operation comprises at least one of the following modes: triggering a shooting control corresponding to the special-effect video production; monitoring that the voice information comprises a special effect adding instruction; the face image is detected to be included in the display interface.
Specifically, a control for triggering the special-effect video production program may be developed in advance in the application software, namely the special-effect video production control. Alternatively, voice information may be acquired based on a microphone array deployed on the terminal device and then analyzed; if the processing result includes words related to special-effect video processing, the function of performing special-effect processing on the current video is triggered. Determining whether to perform special-effect video processing based on the content of the voice information has the advantage of avoiding interaction between the user and the display page, thereby improving the intelligence of special-effect video processing. In another implementation, whether the user's face image is included in the field of view may be determined according to the shooting field of view of the mobile terminal, and when a face image is detected, the application software may use this detection event as the trigger operation for performing special-effect processing on the video. Those skilled in the art should understand that the specific event used as the trigger for special-effect video processing may be set according to actual situations, and the embodiment of the present disclosure is not specifically limited here.
Generally, the application software is installed on a terminal device provided with a camera apparatus, and after responding to the special effect trigger operation, a video or an image can be captured based on the camera apparatus or the application software. If the video shooting control is triggered, the video can be captured based on the camera apparatus, and each captured frame is taken as a video frame to be processed; at this time, the target object may be included in the video frame to be processed. The target object may be dynamic or static, and there may be one or more target objects. For example, several specific users may be taken as target objects; on this basis, when the facial features of one or more of these users are recognized from the video picture taken in real time by a pre-trained image recognition model, the special effect video processing procedure of the embodiment of the disclosure may be executed. Alternatively, everything corresponding in the picture may be taken as the target object. The target object in the video frame can be obtained by a matting technique, or extracted by a limb and trunk recognition method. Specifically, when the application acquires the video shot by the user in real time and identifies the target object from the picture, the video can be parsed to obtain the to-be-processed video frame corresponding to the current moment, and the view corresponding to the target object is then extracted from that frame based on a pre-written matting program.
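As an illustration of this extraction step, the following is a minimal sketch assuming MediaPipe's selfie-segmentation model as a stand-in for the unspecified matting program; the function name, the 0.5 mask threshold and the webcam capture are assumptions added for illustration, not part of the disclosure.

```python
# Minimal sketch of extracting the target object (foreground person) from a
# to-be-processed video frame. MediaPipe selfie segmentation stands in for the
# matting model the patent leaves unspecified; names and thresholds are assumed.
import cv2
import numpy as np
import mediapipe as mp

_segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

def extract_target_object(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a BGRA image whose alpha channel keeps only the target object."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = _segmenter.process(rgb)
    # segmentation_mask is float32 in [0, 1]; treat values above 0.5 as foreground.
    alpha = (result.segmentation_mask > 0.5).astype(np.uint8) * 255
    bgra = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = alpha
    return bgra

cap = cv2.VideoCapture(0)          # real-time capture, as in the mobile-terminal case
ok, frame = cap.read()
if ok:
    target = extract_target_object(frame)
cap.release()
```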
On the basis of the technical schemes, after the image shooting control is triggered, the real-time video frame can be shot based on the camera device, when the condition that the preset condition is met is detected, the target object in the video frame to be processed is extracted, and the special effect video is determined based on the video frame to be processed and the subsequently fused special effect video frame.
Optionally, shooting a video frame to be processed corresponding to the current scene; when the condition that the special effect display condition is met is detected, continuously shooting the video frame to be processed so as to extract the target object in the video frame to be processed.
The current scene may be a scene where the target object is currently located, and the special effect display condition may be that a duration for continuously shooting the video frame to be processed reaches a preset duration threshold.
For example, after the image capturing control is detected to be triggered, the to-be-processed video frame corresponding to the current scene may be captured. When the countdown on the current display interface is detected to be 1, the video frame to be processed can be continuously shot, and the target object in the video frame to be processed is extracted, so that the target object and the image background can be fused, and the special effect video frame can be obtained.
It should be noted that the special effect display condition may include: detecting that the shooting duration reaches a preset shooting duration threshold; the received voice information including a special effect display wake word; or the target object in the video frame to be processed triggering a preset limb action.
Specifically, when the duration of continuously shooting the current scene reaches the preset shooting duration threshold, when acquired audio data includes the voice wake word, or when the target object in the video frame to be processed triggers the preset limb action, it indicates that the target object in the video frame to be processed needs to be extracted.
S120, generating an image background plate comprising at least one image to be displayed.
There may be a plurality of images to be displayed, and one or more photo walls can be generated based on these images and used as the image background plate. The image to be displayed can be a video frame to be processed captured by the camera apparatus, a pre-captured image, or a downloaded image. For example, images captured by the camera apparatus or images downloaded from the Internet may be stored in an image library or image repository; for instance, if a user is very fond of a certain actor, images of that actor may be downloaded to generate an image background plate, which is then fused with the target object to obtain the corresponding special effect video frames.
In this embodiment, before generating the image background plate including at least one image to be displayed, the method further includes: and jumping to an image resource library to determine at least one image to be displayed from the image resource library and uploading the image to be displayed, so as to determine the image background plate based on the at least one image to be displayed.
For example, when the user triggers the image uploading control, the application software may be triggered to call an image library on the mobile terminal, or the application software may be triggered to call a cloud image library associated with the image library, so as to determine the uploaded image according to a selection result of the user, and the application software may be triggered to call a relevant interface of the mobile terminal camera device, so as to obtain an image shot by the camera device, and use the image as the image to be displayed.
Illustratively, when a user shoots a video in real time with the camera apparatus of the mobile terminal and triggers the image upload frame displayed in the display interface, the application software can automatically open the album on the mobile terminal according to this trigger operation and display the images in the album on the display interface. When a trigger operation on a certain image is detected, it indicates that the user wants to use that image as the background of the special-effect video, i.e., as one of the stitched images in the image background plate; the image selected by the user can then be uploaded to the server or client corresponding to the application software, so that the application software can produce the image background plate based on the uploaded image. Alternatively, when the user shoots a video in real time with the camera apparatus of the mobile terminal and triggers the image upload frame displayed in the display interface, the application software can directly take the video frame at the current moment from the video shot in real time and use it as the image to be displayed.
In this embodiment, the images to be displayed may be stitched to obtain an image background plate including at least one image to be displayed, so as to achieve the effect of fusing the target object and the background image. Determining the image background plate may be: typesetting the at least one image to be displayed based on at least one image typesetting to obtain at least one background plate to be displayed; wherein the at least one image layout is preset and/or pre-uploaded; and determining the image background plate based on the at least one background plate to be displayed.
Here, the image layout can be understood as how the images to be displayed are arranged: imagine a wall on which the user can place the images in any arrangement, and the arrangement adopted is the image layout. There may be a plurality of image layouts, and the user may select one or more of them; alternatively, the client or the server automatically selects image layouts according to the number of images to be displayed, or automatically generates an image layout corresponding to the images to be displayed and arranges them based on it. The background plate to be displayed is the background plate obtained after the images to be displayed are typeset based on an image layout. The number of background plates to be displayed corresponds to the number of determined image layouts; of course, the same image to be displayed may be arranged based on different image layouts, i.e., one image to be displayed may appear in different background plates to be displayed. Each background plate to be displayed can be used as an image background plate, or the background plates to be displayed can be stitched to obtain the image background plate.
It should be noted that the image layout may be preset or may be a layout uploaded by a user in advance, so that an effect of automatically determining the display position of the image to be displayed based on the layout is achieved, and the convenience of determining the background board to be displayed is improved.
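For illustration, the sketch below typesets images to be displayed into one background plate using a preset layout expressed as normalized cell rectangles; this layout representation and the 2x2 example are assumptions, since the disclosure does not fix a concrete data format for an image layout.

```python
# Sketch of building one to-be-displayed background plate from a preset image
# layout. The layout format (normalized cell rectangles on a canvas) is an
# assumption used for illustration; the patent only requires "image typesetting".
import cv2
import numpy as np

def compose_background_plate(images, layout, canvas_size=(1280, 720)):
    """images: list of BGR arrays; layout: list of (x, y, w, h) in 0..1."""
    w, h = canvas_size
    plate = np.zeros((h, w, 3), dtype=np.uint8)
    for img, (cx, cy, cw, ch) in zip(images, layout):
        px, py = int(cx * w), int(cy * h)
        pw, ph = int(cw * w), int(ch * h)
        plate[py:py + ph, px:px + pw] = cv2.resize(img, (pw, ph))
    return plate

# A simple 2x2 "photo wall" layout; each tuple is (x, y, width, height).
layout_2x2 = [(0.0, 0.0, 0.5, 0.5), (0.5, 0.0, 0.5, 0.5),
              (0.0, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5, 0.5)]
```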
The following describes in detail how to determine the background plate to be displayed based on the image layout. Optionally, the image layout includes a plurality of horizontal grids and vertical grids, and typesetting the at least one image to be displayed based on at least one image layout to obtain at least one background plate to be displayed includes:
determining a transverse grid and a longitudinal grid corresponding to at least one image to be displayed according to a shooting mode of the at least one image to be displayed; and determining the image to be typeset corresponding to the at least one image to be displayed according to the cutting proportion corresponding to the image shooting mode, and typesetting the at least one image to be displayed to obtain the at least one background plate to be displayed.
The horizontal grid and the vertical grid are relative, and are mainly determined according to the horizontal-vertical proportion of the grid, for example, the horizontal-vertical proportion greater than or equal to 1 may be called as the horizontal grid, and the horizontal-vertical proportion less than 1 may be called as the vertical grid. That is, one image layout may include a plurality of horizontal grids or a plurality of vertical grids, and the image to be displayed and the horizontal or vertical grids may be arranged. The image to be displayed can be directly filled according to the proportion corresponding to the horizontal grid and the vertical grid, and a background plate to be displayed is obtained. In order to further improve the degree of fusion between each image to be displayed in the background plate to be displayed and the corresponding grid, the background plate to be displayed may be determined based on the shooting mode of each image to be displayed.
It should be noted that the shooting modes may include a horizontal screen shooting mode and a vertical screen shooting mode, and the display effects of the images to be displayed in different shooting modes are different. The horizontal screen shooting mode to-be-displayed image can be correspondingly displayed in the horizontal arrangement grids, the vertical screen shooting mode to-be-displayed image is correspondingly displayed in the vertical arrangement grids, so that the purpose that the to-be-displayed image and the corresponding grids are completely overlapped is achieved, and the problem that the display effect is poor due to the fact that black edges appear on the edges of the grids is avoided.
In practical applications, the shooting mode of an image to be displayed may not completely match the grid type, or black edges may still appear even if the image is placed in the corresponding grid. In such cases, the image to be displayed can be further processed so that it completely matches its grid in the image layout, giving a better background plate effect.
Optionally, determining an image to be typeset corresponding to the at least one image to be displayed according to the cutting proportion corresponding to the shooting mode; and respectively placing the at least one image to be typeset into the corresponding longitudinal arrangement grids or the transverse arrangement grids to obtain the background plate to be displayed corresponding to the image typesetting.
After the shooting mode is determined, the cutting proportion corresponding to the image to be displayed can be determined according to the corresponding shooting mode and the proportion information of the horizontal and vertical grids, then the corresponding image to be displayed is cut based on the cutting proportion, and the cut image to be displayed is used as the image to be typeset. Each image to be typeset can be placed in the corresponding longitudinal arrangement grids or the transverse arrangement grids to obtain the background plate to be displayed corresponding to the image typeset.
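The cropping step described above can be sketched as follows; the choice of a center crop and the landscape/portrait heuristic are assumptions, since the disclosure only specifies that a cutting proportion corresponding to the shooting mode is applied before the image is placed into its grid.

```python
# Sketch of the cropping step: cut a to-be-displayed image to the aspect ratio
# of its grid cell so it fills the cell without black edges. Center cropping is
# an assumption; the patent only specifies a "cutting proportion" per shooting mode.
import numpy as np

def is_landscape(img: np.ndarray) -> bool:
    """Treat width >= height as horizontal-screen (landscape) shooting mode."""
    h, w = img.shape[:2]
    return w >= h

def crop_to_cell_ratio(img: np.ndarray, cell_w: int, cell_h: int) -> np.ndarray:
    h, w = img.shape[:2]
    target_ratio = cell_w / cell_h
    if w / h > target_ratio:          # image wider than cell: trim left/right
        new_w = int(h * target_ratio)
        x0 = (w - new_w) // 2
        return img[:, x0:x0 + new_w]
    else:                             # image taller than cell: trim top/bottom
        new_h = int(w / target_ratio)
        y0 = (h - new_h) // 2
        return img[y0:y0 + new_h, :]
```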
On the basis of the above technical solution, the determining of the image background plate based on at least one background plate to be displayed may specifically be: and determining the size of a display interface corresponding to each background plate to be displayed so as to determine the image background plates based on the size of the display interface, or performing annular splicing on at least one background plate to be displayed to obtain the image background plates.
It will be appreciated that each background board to be displayed may be used as an image background board. According to the display size of the display interface, the display proportion of the image background plate in the display interface can be determined, and then the corresponding image background plate is adjusted based on the display proportion. Or, the background plates to be displayed are spliced in a ring shape, so that a circular or semicircular image background plate can be obtained. Or, each background plate to be displayed is embedded into a preset 3D surrounding model to obtain a rotatable image background plate.
It should be further noted that, in the process of displaying the image background plates, in order to achieve a better display effect, the background plates to be displayed may be displayed in a loop on the display interface, or the surrounding image background plate may be controlled to play (rotate) at a certain rate.
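As a rough sketch of the annular splicing, the background plates can be assigned evenly spaced positions on a virtual cylinder around the viewer; the equal angular spacing and the radius value below are assumptions.

```python
# Sketch of "annular splicing": place each to-be-displayed background plate on a
# virtual cylinder around the viewer. Equal angular spacing and the radius value
# are assumptions; the patent only requires a ring-shaped or surrounding plate.
import math

def surround_placements(num_plates: int, radius: float = 2.0):
    """Yield (yaw_degrees, x, z) for each plate, evenly spaced on a circle."""
    for i in range(num_plates):
        yaw = 360.0 * i / num_plates
        rad = math.radians(yaw)
        yield yaw, radius * math.sin(rad), radius * math.cos(rad)

for yaw, x, z in surround_placements(6):
    print(f"plate at yaw={yaw:.1f} deg, position=({x:.2f}, {z:.2f})")
```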
S130, fusing the target object and the image background plate to obtain a special effect video frame and displaying the special effect video frame.
For example, referring to fig. 2, the image background plate may be displayed as a background image, and the target object may be displayed as a foreground image.
It should be noted that the display content of the image background plate in each video frame and the relative display angle with the target object may be changed.
In this embodiment, the fusion process of the target object and the image background plate may be: updating the display size of the image background plate on the display interface according to the relative distance information between the target object and the display interface; and fusing the target object and the image background plate with the updated display size to obtain a special effect video frame. In practical application, a front camera or a rear camera can be adopted to shoot corresponding video frames to be processed. When the shooting modes are different, the relative display distances of the target object and the display interface are different, and the display size of the background image in the display interface can be determined according to the relative display distances. And then, fusing the target object with the corresponding image background plate to obtain a special effect video frame.
In a specific application, the target object and the image background plate are fused, and the fusion processing may further be: determining the scaling of the target object according to the display size of the image background plate; and fusing the target object and the image background plate according to the scaling to obtain the special-effect video frame.
It can be understood that, in order to make the target object and the image background plate have a better fusion effect, the target object may be reduced or enlarged according to the display size of the image background plate to achieve a natural fusion effect, so as to obtain a corresponding special-effect video frame.
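The fusion with scaling can be sketched as follows, assuming the target object is the BGRA output of the extraction step; the 0.6 height ratio and the bottom-center anchoring are illustrative assumptions, since the disclosure only requires the scaling to be derived from the display size of the image background plate.

```python
# Sketch of fusing the target object (BGRA, from the extraction step) with an
# image background plate. The 0.6 height ratio and bottom-center anchoring are
# illustrative assumptions; the scaled object is assumed to fit inside the plate.
import cv2
import numpy as np

def fuse(target_bgra: np.ndarray, plate_bgr: np.ndarray,
         height_ratio: float = 0.6) -> np.ndarray:
    ph, pw = plate_bgr.shape[:2]
    th, tw = target_bgra.shape[:2]
    scale = (ph * height_ratio) / th           # scaling derived from the plate's display size
    target = cv2.resize(target_bgra, (int(tw * scale), int(th * scale)))
    th, tw = target.shape[:2]
    x0 = (pw - tw) // 2                        # bottom-center placement
    y0 = ph - th
    alpha = target[:, :, 3:4].astype(np.float32) / 255.0
    roi = plate_bgr[y0:y0 + th, x0:x0 + tw].astype(np.float32)
    fused = alpha * target[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    out = plate_bgr.copy()
    out[y0:y0 + th, x0:x0 + tw] = fused.astype(np.uint8)
    return out
```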
On the basis of the above technical solution, it should be further noted that the image background plates are played in a loop on the display interface. For example, there may be a plurality of image background plates, and a display duration can be set for each of them; when the display time of an image background plate reaches the preset display duration threshold, the next image background plate is displayed. The image background plates can thus be played in a loop and fused with the target object in the video frames to be processed to obtain the special effect video frames.
To avoid a blank screen during the switching of image background plates, which would degrade the user experience, a corresponding transition special effect can be set when the background plates are switched. For example, the transition can be a gradual fade-in/fade-out of the image background plates, and when switching to the next image background plate, a preset animation special effect can be played to fill the display interface, thereby improving the viewing experience of the user.
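A minimal sketch of the looped playback with a fade-style transition is given below; the 3-second display duration and 0.5-second cross-fade are assumed values, and any other preset transition effect could be substituted.

```python
# Sketch of looping the image background plates and applying a simple cross-fade
# transition when switching, so no blank frame appears. The 3 s display duration
# and 0.5 s fade are assumptions; all plates are assumed to share one size/dtype.
import cv2

def plate_at(plates, t, show_s=3.0, fade_s=0.5):
    """Return the plate to display at time t (seconds), cross-fading at switches."""
    n = len(plates)
    idx = int(t // show_s) % n
    t_in = t % show_s
    if t_in > show_s - fade_s:                       # inside the transition window
        w = (t_in - (show_s - fade_s)) / fade_s      # 0 -> 1 over the fade
        nxt = plates[(idx + 1) % n]
        return cv2.addWeighted(plates[idx], 1.0 - w, nxt, w, 0.0)
    return plates[idx]
```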
On the basis of the above technical solution, it should be further explained that the image background plate may also be a surrounding background plate obtained by splicing a plurality of background plates to be displayed. At this time, the target object and the image background plate are fused to obtain a special effect video frame, which may be: determining curvature information of the surrounding background plate, and determining the scaling of the target object according to the curvature information; and fusing the target object and the surrounding background plate based on the scaling to obtain a special-effect video frame.
The curvature information may be a rotation rate of a tangential direction angle of a certain point or a certain area on the curved surface to an arc length, and may be understood as a degree of curvature of the annular image background plate. The scaling can be determined according to the curvature information and the distance between the target object and the image background plate or the distance information of the target object corresponding to the surrounding background plate central point, so that the image fusion is carried out based on the scaling, the target object and the background information are well fused, and the effect of the special effect image reality degree is improved.
It can be understood that, in order to fuse the target object with the image background plate, curvature information of the surrounding background plate can be acquired, an optimal display position in the display interface and the target object can be determined according to the curvature information, and the target object can be scaled based on the optimal display position.
When the corresponding image background plates are displayed, in order to produce a visually striking picture, the image to be displayed that needs to be enlarged in each image background plate can be determined, a video can be displayed in an enlarged manner, or the images in the image background plates can be enlarged and displayed in sequence at the center position.
It is understood that the target magnified rendering image may be any image in the image background plate, or may be an image of the central position in the image background plate. When the image is displayed in an enlarged manner, the enlarged display image can be displayed in an enlarged manner at the center of the image background plate.
To further enhance the sense of technology, a mirror plane corresponding to the image background plate in the display interface can be determined during the generation of the special effect video frames. The mirror plane is a plane perpendicular to the lower edge of the image background plate in the display interface. The advantage of determining the mirror plane is that the image background plate can be displayed with a reflection, thereby enriching the picture content. Meanwhile, the surrounding image background plate can rotate at a preset rotation rate.
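One way to realize the reflection implied by the mirror plane is sketched below: the plate is mirrored about its lower edge and faded out; the reflection height and fade profile are assumptions.

```python
# Sketch of the mirror-plane reflection: flip the background plate vertically
# below its lower edge and fade it out, giving a reflection-style effect. The
# reflection height and fade profile are assumptions; the patent only requires a
# plane perpendicular to the plate's lower edge on which the plate is reflected.
import cv2
import numpy as np

def add_reflection(plate_bgr: np.ndarray, reflect_ratio: float = 0.35) -> np.ndarray:
    h, w = plate_bgr.shape[:2]
    rh = int(h * reflect_ratio)
    mirrored = cv2.flip(plate_bgr, 0)[:rh]            # rows just below the lower edge, mirrored
    fade = np.linspace(0.5, 0.0, rh, dtype=np.float32)[:, None, None]
    reflection = (mirrored.astype(np.float32) * fade).astype(np.uint8)
    return np.vstack([plate_bgr, reflection])
```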
Based on the above, the image background plate played in a loop or the surrounding image background plate can be displayed in a preset manner.
In this embodiment, when the application detects an operation to stop special-effect video shooting, the above processing steps of the embodiment of the present disclosure are no longer executed. The operation of stopping special-effect video shooting includes at least one of the following: detecting that the stop-shooting control is triggered; detecting that the shooting duration of the special-effect video reaches a preset shooting duration; detecting a wake word that triggers the stop of shooting; and detecting a limb action that triggers the stop of shooting. These conditions are described below.
Specifically, for the first operation of stopping the special effect video shooting, a control may be developed in advance in the application software, and at the same time, a program for terminating the special effect video processing is associated with the control, where the control is a shooting stop control. Based on this, when it is detected that the user triggers the control, the application software may invoke the relevant program, so as to terminate the processing operation on the current time and each video frame to be processed after the current time, it can be understood that there are various ways for the user to trigger the control.
For the second operation of stopping the special-effect video shooting, the application may preset a time length as a preset shooting time length, record the time length for the user to shoot the video, further compare the recorded result with the preset shooting time length, and terminate the processing operation on the current time and each video frame to be processed after the current time when it is determined that the shooting time length of the user has reached the preset shooting time length.
For the third operation of stopping special-effect video shooting, specific information may be preset in the application software as a wake word for stopping shooting; for example, one or more of the words "stop", "stop shooting", and "stop processing" may be used as stop-shooting wake words. On this basis, after the application software receives the voice information sent by the user, it may recognize the voice information with a pre-trained speech recognition model and determine whether the recognition result includes one or more of the preset stop-shooting wake words; if so, the application may terminate the processing of the video frames to be processed at and after the current moment.
For the fourth operation of stopping the special-effect video shooting, the action information of a plurality of persons may be entered in the application software, and the action information is used as the preset action information, for example, information reflecting the action of lifting both hands of the person is used as the preset action information, based on which, when the application receives an image or a video actively uploaded by a user or collected by the camera device in real time, the application may recognize the picture in the image or each video frame based on a pre-trained body action information recognition algorithm, and when the recognition result shows that the body action information of the target object in the current picture is consistent with the preset action information, the application may terminate the processing operation on the current time and each video frame to be processed after the current time.
It should be noted that the above stop conditions may all be active in the application software at the same time, or only one or more of them may be selected to take effect, which is not specifically limited in the embodiment of the present disclosure. A combined check of these conditions is sketched below.
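The sketch combines the four stop conditions into a single predicate; the helper inputs, wake-word list and duration value are hypothetical placeholders for the detectors described above.

```python
# Sketch combining the four stop-shooting conditions into one check. The inputs
# (stop_control_triggered, recognized_speech, stop_gesture_detected) are
# hypothetical placeholders; any subset of the conditions may be enabled.
import time

STOP_WAKE_WORDS = {"stop", "stop shooting", "stop processing"}   # assumed examples
MAX_DURATION_S = 60.0                                            # assumed preset duration

def should_stop(start_time: float,
                stop_control_triggered: bool,
                recognized_speech: str,
                stop_gesture_detected: bool) -> bool:
    if stop_control_triggered:                         # 1. stop-shooting control
        return True
    if time.time() - start_time >= MAX_DURATION_S:     # 2. preset shooting duration reached
        return True
    if recognized_speech.lower() in STOP_WAKE_WORDS:   # 3. stop wake word heard
        return True
    if stop_gesture_detected:                          # 4. preset limb action triggered
        return True
    return False
```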
According to the technical solution provided by the embodiment of the present disclosure, after the target object in the video frame to be processed is extracted in response to the special effect trigger operation and an image background plate comprising at least one image to be displayed is generated, the target object and the image background plate can be fused to obtain and display the special effect video frame until an operation of stopping special-effect video shooting is received. This solves the problems in the prior art that special-effect video content cannot meet users' personalized requirements, resulting in poor video picture content and poor user experience; because the image background plate is generated based on the images to be displayed selected by the user, the personalized display of the background image is achieved, which in turn improves the attractiveness of the application software to users and user stickiness. On the basis of the above technical solution, in order to further improve the interest and visual quality of the special effect video, when it is detected that the target object meets a freeze-frame display condition, the target object can be displayed as a freeze frame in the image background plate to obtain the special effect video frame.
The freeze-frame display condition may be that the target object triggers a target action; for example, the target action may be holding hands, making a heart gesture, hugging, kissing, or the like. It may also be that the target object triggers a corresponding freeze-frame wake word. Alternatively, a timer and a task may be set, the task being triggered when a preset timing duration is reached, where the task is to freeze and display the target object in the special-effect video frame.
Specifically, when the freeze-frame display condition is met, the target object at the moment the condition is met can be displayed as a freeze frame while the image background plate continues to play, enhancing the watchability of the video picture.
It should be noted that, in the process of generating the special effect video, the edges of the picture may be cropped according to the shooting mode used for shooting the special effect video, for example the front mode or the rear mode, so that the video picture fills the entire display interface. Alternatively, the display proportion of the special effect video in the display interface may be determined according to a certain ratio.
Fig. 3 is a schematic flow chart of a video processing method according to an embodiment of the present disclosure, based on the foregoing embodiment, it can be understood that an image background board may be displayed based on a set display manner, and in order to further achieve that the image background board changes with a change in a shooting angle of a terminal device, that is, to further enrich interactivity between a video picture and a user, a shooting angle of a shooting terminal may be obtained in real time or periodically in a process of fusing a target object and the image background board, and then display information of the image background board is determined based on the shooting angle, so as to enhance an effect of interactivity between picture content and the user. The same or corresponding terms as those in the above embodiments are not described herein again.
As shown in fig. 3, the method includes:
s210, responding to the special effect trigger operation, and extracting the target object in the video frame to be processed.
S220, generating an image background plate comprising at least one image to be displayed.
S230, determining the shooting angle of the shooting device, determining the display angle of the image background plate based on the shooting angle, determining the display content in the image background plate based on the display angle, and fusing the target object and the display content to obtain the special-effect video frame.
To further improve intelligence, the display information of the image background plate can be determined based on the shooting angle of the camera apparatus, i.e., the camera apparatus of the terminal device: the rotation angle information of the camera apparatus is acquired and used to determine the content displayed in the image background plate.
That is to say, the image background plate may be displayed dynamically or statically in the display interface. If it is displayed dynamically, the display information of the image background plate may be determined during its playback by combining the preset playback effect with the shooting angle of the camera apparatus. If the image background plate is displayed statically on the display interface, the shooting angle of the camera apparatus can be acquired in real time or periodically, so that the display information of the image background plate is determined based on the shooting angle and the special effect video frame is obtained.
Optionally, if the display content of the image background board is determined based on the shooting angle of the camera, acquiring a current video shooting mode; determining a scene angle to be adjusted corresponding to the image background plate according to the current shooting angle and the current shooting mode of a shooting device; and determining a target scene angle based on the scene angle to be adjusted and the initial scene angle so as to determine a display angle of the image background based on the target scene angle.
The angle information of the capture device can be determined based on a gyroscope built into the terminal device. The gyroscope provides three values for the terminal device, eulerX, eulerY and eulerZ, representing the angles around the three coordinate axes. When the special effect video is shot for the first time, these three acquired values are used as the initial scene angle. The current shooting mode may be the front shooting mode or the rear shooting mode, and corresponding functions can be used under different shooting modes to determine the corresponding scene angle. That is, the scene angle to be adjusted is the angle corresponding to the image background plate, obtained after the current shooting angle is processed by the corresponding function. It should be noted that the manner of determining the scene angle to be adjusted differs between the front and rear shooting modes, as described below.
Optionally, if the current shooting mode is a front-end shooting mode, negating a first direction angle and a second direction angle in the current shooting angle to obtain a scene angle to be adjusted corresponding to the image background plate.
The first direction angle, the second direction angle and the third direction angle are relative: taking the world coordinate system, the X-axis direction is the first direction, the Y-axis direction is the second direction and the Z-axis direction is the third direction, and the angles around these axes are the first, second and third direction angles respectively. For example, in the front shooting mode, with the current shooting angle recorded as eulerX, eulerY and eulerZ, the scene angle to be adjusted may be:
newEulerX = -eulerX
newEulerY = -eulerY
newEulerZ = eulerZ
where newEulerX is the first direction angle in the scene angle to be adjusted, newEulerY is the second direction angle, and newEulerZ is the third direction angle.
Optionally, if the current shooting mode is a post-shooting mode, determining an angle range of a first direction angle in the current shooting angle, and determining a first angle to be adjusted based on an objective function corresponding to the angle range; and determining the scene angle to be adjusted based on the first angle to be adjusted and other direction angles in the current shooting angle.
For example, the picture captured by the rear camera at eulerX = 0 is generally the same as the picture captured by the front camera at eulerX = 15, and the pictures at eulerX = 90 and eulerX = 270 (with the capture device held horizontally) should remain unchanged, so that the user's perception is not distorted. Therefore, a linear mapping can be applied to eulerX. For example, if 270 < eulerX < 360, the first direction angle in the scene angle to be adjusted may be determined as newEulerX = (eulerX - 270) / 90 × 105 + 270; if 0 < eulerX < 90, then newEulerX = eulerX / 90 × 75 + 15. For instance, eulerX = 45 gives newEulerX = 45 / 90 × 75 + 15 = 52.5. The second and third direction angles in the scene angle to be adjusted keep the values of the collected data.
The advantage of determining the angle of the scene to be adjusted in the above manner is that the display content of the image background plate most suitable for the scene can be determined according to the shooting mode.
After the corresponding scene angle to be adjusted is obtained in the above manner, it still needs to be adapted to the initial shooting angle of the capture device, i.e., of the terminal device. Specifically, this may be: determining a target angle corresponding to a second angle to be adjusted based on the initial scene angle, an ideal scene angle, and the second angle to be adjusted among the scene angles to be adjusted; and determining the target scene angle based on the other angles to be adjusted and the target angle.
It can be understood that the angle of the scene to be adjusted is adjusted by combining the initial angle of the terminal device and the ideal initial angle of each image to be adjusted, so as to obtain the angle of the target scene of the background board for displaying the image.
For example, given the ideal initial angle idealEulerY and the initial angle startEulerY recorded when the effect is initialized, the target scene angle can be determined based on the following formulas:
newEulerX = newEulerX
newEulerY = newEulerY - startEulerY + idealEulerY
newEulerZ = newEulerZ
Substituting the determined angles in each direction of the scene angle to be adjusted into the above formulas yields the angles in each direction of the updated target scene angle. The display content in the image background plate is then determined based on the target scene angle and displayed.
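The angle formulas above can be collected into a small helper, shown below as a hedged sketch; the function and variable names are assumptions, while the arithmetic follows the formulas in the text (front mode negates eulerX and eulerY; rear mode linearly remaps eulerX; eulerY is then shifted by the initial and ideal angles).

```python
# Hedged Python rendering of the angle-mapping formulas above. eulerX/Y/Z are the
# gyroscope readings of the capture device; function and variable names are
# assumptions, and the arithmetic follows the formulas given in the text.

def scene_angle_to_adjust(euler_x, euler_y, euler_z, front_mode: bool):
    """Map the current shooting angle to the to-be-adjusted scene angle."""
    if front_mode:
        # Front shooting mode: negate the first and second direction angles.
        return -euler_x, -euler_y, euler_z
    # Rear shooting mode: piecewise linear remapping of eulerX, other axes unchanged.
    if 270 < euler_x < 360:
        new_x = (euler_x - 270) / 90 * 105 + 270
    elif 0 < euler_x < 90:
        new_x = euler_x / 90 * 75 + 15
    else:
        new_x = euler_x
    return new_x, euler_y, euler_z

def target_scene_angle(new_x, new_y, new_z, start_euler_y, ideal_euler_y):
    """Adapt the to-be-adjusted angle to the device's initial angle and the ideal angle."""
    return new_x, new_y - start_euler_y + ideal_euler_y, new_z

# Example: rear mode, eulerX = 45 -> 45 / 90 * 75 + 15 = 52.5 degrees.
print(scene_angle_to_adjust(45, 10, 0, front_mode=False))
```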
On the basis of the technical scheme, the method further comprises the following steps: and playing the preset audio special effect in the process of playing the special effect video frame.
It can be understood that, during the process of playing the special effect video frame, there may be a background sound effect, for example, the background sound effect may be background music, and the background music may be determined according to the image content of each image to be displayed in the image background board. For example, if the image content of the image to be displayed is mostly children, the background music that can be played is a children song, and if the content of the image to be displayed is mostly two target objects, and the relationship of the target objects is close, another type of background music can be played. And the user can set the background music according to actual requirements.
Based on the technical scheme provided by the embodiment of the disclosure, the corresponding target object can be blended on the basis of making the photo wall, and the technical effect of video content richness is achieved.
According to the technical solution provided by the embodiment of the present disclosure, if the display content of the image background plate is tied to the shooting angle of the terminal device, the shooting angle and shooting mode of the terminal device can be acquired in real time or periodically, the target scene angle corresponding to the shooting angle can be determined, and the corresponding image background plate can then be displayed based on the target scene angle, so that the displayed video content changes with the user's viewing angle, further improving the match between the video content and the user.
Fig. 4 is a schematic structural diagram of a video processing apparatus according to an embodiment of the disclosure, and as shown in fig. 4, the apparatus includes: an object extraction module 310, a background board generation module 320, and a video generation module 330.
The object extracting module 310 is configured to, in response to a special effect triggering operation, extract a target object in a video frame to be processed; a background board generating module 320, configured to generate an image background board including at least one image to be displayed; and the video generation module 330 is configured to perform fusion processing on the target object and the image background plate to obtain a special-effect video frame and display the special-effect video frame until receiving an operation of stopping shooting the special-effect video.
On the basis of the above technical solution, the background board generation module is further configured to: jump to an image resource library, determine at least one image to be displayed from the image resource library and upload the image to be displayed, so as to determine the image background plate based on the at least one image to be displayed.
On the basis of the above technical solutions, the background board generation module includes:
the background board to be displayed generating unit is used for typesetting the at least one image to be displayed based on at least one image typesetting to obtain at least one background board to be displayed; wherein the at least one image layout is preset and/or pre-uploaded;
and the background plate generating unit is used for determining the image background plate based on the at least one background plate to be displayed.
On the basis of the above technical solutions, the image layout includes horizontal grids and vertical grids for placing the image to be displayed, and the background board to be displayed generating unit is further configured to:
and determining the transverse grids and the longitudinal grids corresponding to at least one image to be displayed according to the shooting mode of the at least one image to be displayed, and typesetting the at least one image to be displayed to obtain the at least one background plate to be displayed.
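A minimal sketch of this routing step, assuming the shooting mode can be read from the image dimensions (landscape versus portrait); the dictionary keys are illustrative only:

# Assign each image to be displayed to a transverse (horizontal) or longitudinal
# (vertical) grid according to its shooting mode.
def assign_grids(images):
    horizontal, vertical = [], []
    for img in images:
        if img["width"] >= img["height"]:   # landscape shooting mode -> transverse grid
            horizontal.append(img)
        else:                               # portrait shooting mode -> longitudinal grid
            vertical.append(img)
    return horizontal, vertical

print(assign_grids([{"width": 1920, "height": 1080}, {"width": 1080, "height": 1920}]))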
On the basis of the above technical solutions, the apparatus further includes:
the image to be typeset determining module is used for determining the image to be typeset corresponding to the at least one image to be displayed according to the cutting proportion corresponding to the shooting mode;
and the background plate determining module is used for respectively placing the at least one image to be typeset into the corresponding longitudinal grids or the transverse grids to obtain the background plate to be displayed corresponding to the image typesetting.
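As an illustration of the cutting step, the following sketch crops an image to the ratio implied by its grid cell before placement; the specific ratios are assumptions, not values from the disclosure:

# Centered crop (width, height) matching target_ratio = width / height.
def crop_to_ratio(width, height, target_ratio):
    if width / height > target_ratio:
        return int(height * target_ratio), height
    return width, int(width / target_ratio)

print(crop_to_ratio(1920, 1080, 3 / 4))    # portrait (longitudinal) cell -> (810, 1080)
print(crop_to_ratio(1080, 1920, 16 / 9))   # landscape (transverse) cell  -> (1080, 607)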
On the basis of the above technical solutions, the background board generation module is further configured to:
determining the size of a display interface corresponding to each background plate to be displayed, and determining the image background plate based on the size of the display interface; or,
and carrying out annular splicing on the at least one background plate to be displayed to obtain the image background plate.
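The annular splicing can be pictured as placing the background plates on a ring around the target object; the following sketch is illustrative, and the radius and coordinate convention are assumptions:

import math

# Place num_plates background plates evenly on a ring and rotate each to face the center.
def ring_layout(num_plates, radius=3.0):
    placements = []
    for i in range(num_plates):
        theta = 2 * math.pi * i / num_plates
        x, z = radius * math.sin(theta), radius * math.cos(theta)
        placements.append({"position": (x, 0.0, z), "yaw_deg": math.degrees(theta)})
    return placements

for p in ring_layout(4):
    print(p)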
On the basis of the above technical solutions, the object extraction module is further configured to:
shooting a video frame to be processed corresponding to a current scene;
when the special effect display condition is met, continuously shooting the video frame to be processed so as to extract the target object in the video frame to be processed.
On the basis of the above technical solutions, the video generation module includes:
the display size determining unit is used for updating the display size of the image background plate on the display interface according to the relative distance information between the target object and the display interface;
and the video frame determining unit is used for fusing the target object and the image background plate with the updated display size to obtain a special effect video frame.
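For illustration, the distance-dependent resizing could be implemented as below; the direction of the mapping (closer target, larger plate) and the clamping range are assumptions:

# Map the relative distance between the target object and the display interface
# to a display scale for the image background plate.
def plate_scale(relative_distance, reference_distance=1.0, min_scale=0.5, max_scale=2.0):
    scale = reference_distance / max(relative_distance, 1e-6)
    return max(min_scale, min(max_scale, scale))

print(plate_scale(0.5))   # close target   -> 2.0 (clamped)
print(plate_scale(2.0))   # distant target -> 0.5 (clamped)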
On the basis of the above technical solutions, the video generation module includes:
the scale determining unit is used for determining the scaling of the target object according to the display size of the image background plate;
and the video frame determining unit is used for fusing the target object and the image background plate according to the scaling to obtain the special-effect video frame.
On the basis of the above technical solutions, the apparatus further includes: the cyclic display module, used for cyclically displaying each image background plate according to the cyclic display duration of the image background plates; wherein the number of the image background plates is consistent with the number of the image layouts.
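A one-line sketch of the cyclic display, with an illustrative cycle duration that is not taken from the disclosure:

# Which image background plate should be on screen after elapsed_seconds of playback.
def current_plate_index(elapsed_seconds, num_plates, cycle_seconds=3.0):
    return int(elapsed_seconds // cycle_seconds) % num_plates

print(current_plate_index(7.5, num_plates=4))  # -> 2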
On the basis of the above technical solutions, the apparatus further includes: and the transition special effect processing module is used for displaying the transition special effect when the image background plates are switched so as to display the next image background plate based on the transition special effect.
On the basis of the above technical solutions, the apparatus further includes: the image detection module, used for displaying the target object in a freeze frame manner in the image background plate when it is detected that the target object meets the freeze frame display condition, so as to obtain the special effect video frame.
On the basis of the above technical solutions, the image background plate is a surrounding background plate obtained by splicing a plurality of background plates to be displayed, and the video generation module includes:
the scale determining unit is used for determining curvature information of the surrounding background plate and determining the scaling of the target object according to the curvature information;
and the video frame generating unit is used for carrying out fusion processing on the target object and the surrounding background plate based on the scaling to obtain a special effect video frame.
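A hedged sketch of the curvature-dependent scaling; the disclosure does not give the concrete mapping, so the formula below (a more curved plate yields a smaller object scale) is an assumption:

# curvature = 1 / radius of the surrounding background plate; 0 means a flat plate.
def object_scale_from_curvature(curvature, base_scale=1.0, sensitivity=0.5):
    return base_scale / (1.0 + sensitivity * curvature)

print(object_scale_from_curvature(0.0))   # flat plate -> 1.0
print(object_scale_from_curvature(1.0))   # radius 1.0 -> ~0.67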
On the basis of the above technical solutions, the apparatus further includes: the image amplification display module, used for determining a target amplification display image in the image background plate in the process of displaying the image background plate, so as to amplify and display the target amplification display image.
On the basis of the above technical solutions, the apparatus further includes: and the mirror image processing module is used for determining a mirror image plane so as to display the image background plate based on the mirror image plane in a mirror image mode.
On the basis of the above technical solutions, the apparatus further includes:
the shooting mode determining module is used for acquiring a current video shooting mode if the display content of the image background plate is determined based on a camera shooting angle;
the scene angle to be adjusted determining module is used for determining the scene angle to be adjusted corresponding to the image background plate according to the current shooting angle and the current shooting mode of the shooting device;
and the target scene angle determining module is used for determining a target scene angle based on the to-be-adjusted scene angle and the initial scene angle so as to determine a display angle of the image background based on the target scene angle.
On the basis of the above technical solutions, the shooting mode determining module is further configured to, if the current shooting mode is a front-end shooting mode, negate a first direction angle and a second direction angle in the current shooting angle to obtain a to-be-adjusted scene angle corresponding to the image background plate.
On the basis of the above technical solutions, the shooting mode determining module is further configured to determine an angle range of a first direction angle in the current shooting angle if the current shooting mode is a post-shooting mode, and determine a first angle to be adjusted based on an objective function corresponding to the angle range; and determining the scene angle to be adjusted based on the first angle to be adjusted and other direction angles in the current shooting angle.
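The two branches can be summarized in the sketch below; the piecewise function used for the rear shooting mode is a placeholder, since its concrete form is not given here:

# shooting_angle = (first, second, third) direction angles in degrees.
def scene_angle_to_adjust(shooting_angle, front_camera):
    first, second, third = shooting_angle
    if front_camera:
        # front shooting mode: negate the first and second direction angles
        return (-first, -second, third)
    # rear shooting mode: map the first direction angle through a range-dependent function
    if -90.0 <= first <= 90.0:
        adjusted_first = first            # placeholder branch
    else:
        adjusted_first = 180.0 - first    # placeholder branch
    return (adjusted_first, second, third)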
On the basis of the technical solutions, the target scene angle determining module is further configured to determine a target angle corresponding to a second angle to be adjusted based on the initial scene angle, the ideal scene angle, and the second angle to be adjusted in the scene angles to be adjusted; and determining the target scene angle based on other angles to be adjusted and the target angle.
On the basis of the above technical solutions, the apparatus further includes an audio special effect processing module, used for playing the preset audio special effect in the process of playing the special effect video frame.
On the basis of the above technical solutions, the operation of stopping the special effect video shooting includes at least one of the following:
detecting a trigger stop shooting control;
detecting that the audio information triggers to stop shooting keywords;
a gesture that triggers stopping shooting is detected.
According to the technical scheme provided by the embodiment of the disclosure, in response to a special effect triggering operation, a target object in a video frame to be processed is extracted and an image background plate including at least one image to be displayed is generated; the target object and the image background plate are then fused to obtain and display the special effect video frame until an operation of stopping the special effect video shooting is received. This solves the problems in the prior art that special effect video content cannot meet the personalized requirements of users, resulting in poor video picture content and poor user experience. Because the image background plate is generated based on the images to be displayed selected by the user, a personalized display effect of the background image is achieved, which can improve the attractiveness of the application software to users and further improve user stickiness. The video processing apparatus provided by the embodiment of the disclosure can execute the video processing method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are also only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 5, a schematic diagram of an electronic device 400 (e.g., a terminal device or a server) suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 5 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by the embodiment of the present disclosure and the video processing method provided by the above embodiment belong to the same inventive concept, and technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the above embodiment.
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the video processing method provided by the above-described embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
responding to special effect triggering operation, and extracting a target object in a video frame to be processed;
generating an image background plate comprising at least one image to be displayed;
fusing the target object and the image background plate to obtain a special effect video frame and displaying the special effect video frame;
wherein the display content and/or the display angle of the image background plate relative to the target object are dynamically changed.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [ example one ] there is provided a video processing method, the method comprising:
responding to special effect trigger operation, and extracting a target object in a video frame to be processed;
generating an image background plate comprising at least one image to be displayed;
and fusing the target object and the image background plate to obtain a special effect video frame and displaying the special effect video frame until receiving the operation of stopping the special effect video shooting.
According to one or more embodiments of the present disclosure, [ example two ] there is provided a video processing method, the method comprising:
optionally, before generating the image background plate including at least one image to be displayed, the method further includes:
and jumping to an image resource library to determine at least one image to be displayed from the image resource library and uploading the image to be displayed, so as to determine the image background plate based on the at least one image to be displayed.
According to one or more embodiments of the present disclosure, [ example three ] there is provided a video processing method, the method comprising:
optionally, the generating an image background plate including at least one image to be displayed includes:
typesetting the at least one image to be displayed based on at least one image typesetting to obtain at least one background plate to be displayed; wherein, the at least one image typesetting is preset and/or uploaded in advance;
and determining the image background plate based on the at least one background plate to be displayed.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a video processing method, the method comprising:
optionally, the image layout includes a plurality of image horizontal grids and a plurality of image vertical grids, and the layout processing on the at least one image to be displayed based on at least one image layout to obtain at least one background board to be displayed includes:
and determining the transverse grids and the longitudinal grids corresponding to at least one image to be displayed according to the shooting mode of the at least one image to be displayed, and typesetting the at least one image to be displayed to obtain the at least one background plate to be displayed.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a video processing method comprising:
optionally, determining an image to be typeset corresponding to the at least one image to be displayed according to the cutting proportion corresponding to the shooting mode;
and respectively placing the at least one image to be typeset into the corresponding longitudinal grids or the transverse grids to obtain the background plate to be displayed corresponding to the image typesetting.
According to one or more embodiments of the present disclosure, [ example six ] there is provided a video processing method comprising:
optionally, the determining the image background plate based on the at least one background plate to be displayed includes:
determining the size of a display interface corresponding to each background plate to be displayed, and determining the image background plate based on the size of the display interface; or,
and carrying out annular splicing on the at least one background plate to be displayed to obtain the image background plate.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a video processing method, the method comprising:
optionally, before extracting the target object in the video frame to be processed, the method further includes:
shooting a video frame to be processed corresponding to a current scene;
when it is detected that the special effect display condition is met, continuously shooting the video frame to be processed so as to extract the target object in the video frame to be processed.
According to one or more embodiments of the present disclosure, [ example eight ] there is provided a video processing method, the method comprising:
optionally, the fusing the target object and the image background plate to obtain a special-effect video frame includes:
updating the display size of the image background plate on the display interface according to the relative distance information between the target object and the display interface;
and fusing the target object and the image background plate with the updated display size to obtain a special effect video frame.
According to one or more embodiments of the present disclosure, [ example nine ] there is provided a video processing method, the method comprising:
optionally, the fusing the target object and the image background plate to obtain a special-effect video frame includes:
determining the scaling of the target object according to the display size of the image background plate;
and fusing the target object and the image background plate according to the scaling to obtain the special-effect video frame.
According to one or more embodiments of the present disclosure, [ example ten ] there is provided a video processing method comprising:
optionally, in the process of fusing the target object with the background plate, the method further includes:
circularly displaying each image background plate according to the cyclic display duration of the image background plates;
wherein the number of the image background plates is consistent with the number of the image layouts.
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a video processing method, the method comprising:
optionally, in the process of sequentially displaying the image background plates, the method further includes:
and when the image background plate is switched, displaying the transition special effect so as to display the next image background plate based on the transition special effect.
According to one or more embodiments of the present disclosure, [ example twelve ] there is provided a video processing method, the method comprising:
optionally, when it is detected that the target object meets the stop motion display condition, the target object is stop motion displayed in the image background plate, so as to obtain the special-effect video frame.
According to one or more embodiments of the present disclosure, [ example thirteen ] there is provided a video processing method, the method comprising:
optionally, the image background plate is a surrounding background plate obtained by splicing a plurality of background plates to be displayed, and the process of fusing the target object and the image background plate including at least one image to be displayed to obtain a special-effect video frame includes:
determining curvature information of the surrounding background plate, and determining the scaling of the target object according to the curvature information;
and fusing the target object and the surrounding background plate based on the scaling to obtain a special-effect video frame.
According to one or more embodiments of the present disclosure, [ example fourteen ] there is provided a video processing method, the method comprising:
optionally, in the process of displaying the image background plate, an enlarged display image of the target in the image background plate is determined, so as to display the enlarged display image of the target in an enlarged manner.
According to one or more embodiments of the present disclosure, [ example fifteen ] there is provided a video processing method, the method comprising:
optionally, a mirror plane is determined to display the image background plate based on the mirror plane.
According to one or more embodiments of the present disclosure, [ example sixteen ] there is provided a video processing method, the method comprising:
optionally, if the display content of the image background board is determined based on the shooting angle of the camera, acquiring a current video shooting mode;
determining a scene angle to be adjusted corresponding to the image background plate according to the current shooting angle and the current shooting mode of a shooting device;
and determining a target scene angle based on the scene angle to be adjusted and the initial scene angle so as to determine a display angle of the image background based on the target scene angle.
According to one or more embodiments of the present disclosure, [ example seventeen ] there is provided a video processing method, the method comprising:
optionally, the determining, according to the current shooting angle and the current shooting mode of the shooting device, a scene angle to be adjusted corresponding to the image background plate includes:
and if the current shooting mode is a front-mounted shooting mode, negating a first direction angle and a second direction angle in the current shooting angle to obtain a scene angle to be adjusted corresponding to the image background plate.
According to one or more embodiments of the present disclosure, [ example eighteen ] there is provided a video processing method, the method comprising:
optionally, the determining, according to the current shooting angle and the current shooting mode of the shooting device, a scene angle to be adjusted corresponding to the image background plate includes:
if the current shooting mode is a post shooting mode, determining an angle range of a first direction angle in the current shooting angle, and determining a first angle to be adjusted based on a target function corresponding to the angle range; and determining the scene angle to be adjusted based on the first angle to be adjusted and other direction angles in the current shooting angle.
According to one or more embodiments of the present disclosure, [ example nineteen ] there is provided a video processing method comprising:
optionally, the determining a target scene angle based on the to-be-adjusted scene angle and the initial scene angle includes:
determining a target angle corresponding to a second angle to be adjusted based on the initial scene angle, the ideal scene angle and the second angle to be adjusted in the scene angles to be adjusted;
and determining the target scene angle based on other angles to be adjusted and the target angle.
According to one or more embodiments of the present disclosure, [ example twenty ] there is provided a video processing method, the method comprising:
optionally, a preset audio special effect is played in the process of playing the special-effect video frame.
According to one or more embodiments of the present disclosure, [ example twenty-one ] there is provided a video processing method comprising:
optionally, the operation of stopping special effect video shooting includes at least one of the following:
detecting a trigger stop shooting control;
detecting that the audio information triggers to stop shooting keywords;
a gesture that triggers stopping shooting is detected.
According to one or more embodiments of the present disclosure, [ example twenty-two ] there is provided a video processing apparatus comprising:
optionally, the object extracting module is configured to, in response to the special effect triggering operation, extract a target object in the video frame to be processed;
the background plate generation module is used for generating an image background plate comprising at least one image to be displayed;
and the video generation module is used for fusing the target object and the image background plate to obtain a special effect video frame and displaying the special effect video frame until receiving the operation of stopping the special effect video shooting.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features may be replaced with (but are not limited to) technical features having similar functions disclosed in the present disclosure to form a technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A video processing method, comprising:
responding to special effect triggering operation, and extracting a target object in a video frame to be processed;
generating an image background plate comprising at least one image to be displayed;
fusing the target object and the image background plate to obtain a special effect video frame and displaying the special effect video frame;
wherein the display content and/or the display angle of the image background plate relative to the target object are dynamically changed.
2. The method of claim 1, wherein generating an image background plate including at least one image to be displayed comprises:
typesetting the at least one image to be displayed based on at least one image layout to obtain at least one background plate to be displayed; wherein the at least one image layout is preset and/or pre-uploaded; the image to be displayed is determined from an image resource library;
and determining the image background plate based on the at least one background plate to be displayed.
3. The method of claim 2, wherein the image layout includes a horizontal grid and a vertical grid for placing the images to be displayed, and the typesetting the at least one image to be displayed based on the at least one image layout to obtain at least one background plate to be displayed comprises:
determining an image to be typeset corresponding to the at least one image to be displayed according to the cutting proportion corresponding to the shooting mode;
and respectively placing the at least one image to be typeset into the corresponding longitudinal grids or the transverse grids to obtain the background plate to be displayed corresponding to the image typesetting.
4. The method of claim 2, wherein determining the image background plate based on the at least one background plate to be displayed comprises:
determining the size of a display interface corresponding to each background plate to be displayed, and determining the image background plate based on the size of the display interface; or,
and annularly splicing the at least one background plate to be displayed to obtain the image background plate.
5. The method according to claim 1, further comprising, before extracting the target object in the video frame to be processed:
shooting a video frame to be processed corresponding to a current scene;
when the special effect display condition is met, continuously shooting the video frame to be processed so as to extract the target object in the video frame to be processed.
6. The method according to claim 1, wherein the fusing the target object with the image background plate to obtain a special effect video frame comprises:
updating the display size of the image background plate on the display interface according to the relative distance information between the target object and the display interface;
and fusing the target object and the image background plate with the updated display size to obtain a special effect video frame.
7. The method according to claim 1, wherein the fusing the target object with the image background plate to obtain a special effect video frame comprises:
determining the scaling of the target object according to the display size of the image background plate;
and fusing the target object and the image background plate according to the scaling to obtain the special-effect video frame.
8. The method according to claim 1, wherein during the process of fusing the target object with the background plate, further comprising:
and circularly displaying each image background plate according to the cyclic display duration of the image background plates.
9. The method of claim 8, wherein in the process of sequentially displaying each image background plate, further comprising:
and when the image background plate is switched, displaying the transition special effect so as to display the next image background plate based on the transition special effect.
10. The method of claim 1, further comprising:
and when the target object is detected to meet the stop motion display condition, displaying the target object in a stop motion mode in the image background plate to obtain the special effect video frame.
11. The method according to claim 1, wherein the image background plate is a surrounding background plate obtained by splicing a plurality of background plates to be displayed, and the process of fusing the target object and the image background plate including at least one image to be displayed to obtain a special effect video frame comprises:
determining curvature information of the surrounding background plate, and determining the scaling of the target object according to the curvature information;
and fusing the target object and the surrounding background plate based on the scaling to obtain a special-effect video frame.
12. The method of claim 1, further comprising:
in the process of displaying the image background plate, determining a target amplification display image in the image background plate so as to amplify and display the target amplification display image.
13. The method of claim 1, further comprising:
determining a mirror plane to display the image background plate based on the mirror plane.
14. The method of claim 1, further comprising:
if the display content of the image background plate is determined based on the shooting angle of the camera, acquiring a current video shooting mode;
determining a scene angle to be adjusted corresponding to the image background plate according to the current shooting angle and the current shooting mode of a shooting device;
and determining a target scene angle based on the scene angle to be adjusted and the initial scene angle so as to determine a display angle of the image background based on the target scene angle.
15. The method of claim 14, wherein determining the scene angle to be adjusted corresponding to the image background plate according to the current shooting angle of the shooting device and the current shooting mode comprises:
and if the current shooting mode is a front shooting mode, negating a first direction angle and a second direction angle in the current shooting angle to obtain a scene angle to be adjusted corresponding to the image background plate.
16. The method of claim 14, wherein determining the scene angle to be adjusted corresponding to the image background plate according to the current shooting angle of the shooting device and the current shooting mode comprises:
if the current shooting mode is a post shooting mode, determining an angle range of a first direction angle in the current shooting angle, and determining a first angle to be adjusted based on a target function corresponding to the angle range; and determining the scene angle to be adjusted based on the first angle to be adjusted and other direction angles in the current shooting angle.
17. The method of claim 14, wherein determining a target scene angle based on the to-be-adjusted scene angle and an initial scene angle comprises:
determining a target angle corresponding to a second angle to be adjusted based on the initial scene angle, the ideal scene angle and the second angle to be adjusted in the scene angles to be adjusted;
and determining the target scene angle based on other angles to be adjusted and the target angle.
18. A video processing apparatus, comprising:
the object extraction module is used for responding to the special effect triggering operation and extracting a target object in the video frame to be processed;
the background plate generating module is used for generating an image background plate comprising at least one image to be displayed;
the video generation module is used for fusing the target object and the image background plate to obtain and display a special-effect video frame;
wherein the display content and/or the display angle of the image background plate relative to the target object are dynamically changed.
19. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the video processing method of any one of claims 1-17.
20. A storage medium containing computer-executable instructions for performing the video processing method of any of claims 1-17 when executed by a computer processor.
CN202210567327.1A 2022-05-23 2022-05-23 Video processing method and device, electronic equipment and storage medium Pending CN115002359A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210567327.1A CN115002359A (en) 2022-05-23 2022-05-23 Video processing method and device, electronic equipment and storage medium
PCT/CN2023/094315 WO2023226814A1 (en) 2022-05-23 2023-05-15 Video processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210567327.1A CN115002359A (en) 2022-05-23 2022-05-23 Video processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115002359A true CN115002359A (en) 2022-09-02

Family

ID=83027408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210567327.1A Pending CN115002359A (en) 2022-05-23 2022-05-23 Video processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115002359A (en)
WO (1) WO2023226814A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023226814A1 (en) * 2022-05-23 2023-11-30 北京字跳网络技术有限公司 Video processing method and apparatus, electronic device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101309389A (en) * 2008-06-19 2008-11-19 深圳华为通信技术有限公司 Method, apparatus and terminal synthesizing visual images
JP6357387B2 (en) * 2014-08-26 2018-07-11 任天堂株式会社 Information processing apparatus, information processing system, information processing program, and information processing method
CN105872448A (en) * 2016-05-31 2016-08-17 宇龙计算机通信科技(深圳)有限公司 Display method and device of video images in video calls
KR101843018B1 (en) * 2016-12-15 2018-03-28 (주)잼투고 System and Method for Video Composition
CN108234825A (en) * 2018-01-12 2018-06-29 广州市百果园信息技术有限公司 Method for processing video frequency and computer storage media, terminal
CN113038036A (en) * 2019-12-24 2021-06-25 西安诺瓦星云科技股份有限公司 Background image display method, video processing equipment, display system and main control card
CN112822542A (en) * 2020-08-27 2021-05-18 腾讯科技(深圳)有限公司 Video synthesis method and device, computer equipment and storage medium
CN113422914B (en) * 2021-06-24 2023-11-21 脸萌有限公司 Video generation method, device, equipment and medium
CN113973190A (en) * 2021-10-28 2022-01-25 联想(北京)有限公司 Video virtual background image processing method and device and computer equipment
CN115002359A (en) * 2022-05-23 2022-09-02 北京字跳网络技术有限公司 Video processing method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
WO2023226814A1 (en) 2023-11-30

Similar Documents

Publication Publication Date Title
CN113475092B (en) Video processing method and mobile device
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112165632B (en) Video processing method, device and equipment
WO2023051185A1 (en) Image processing method and apparatus, and electronic device and storage medium
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN113225483B (en) Image fusion method and device, electronic equipment and storage medium
WO2023169305A1 (en) Special effect video generating method and apparatus, electronic device, and storage medium
CN111970571A (en) Video production method, device, equipment and storage medium
CN114630057B (en) Method and device for determining special effect video, electronic equipment and storage medium
CN114581566A (en) Animation special effect generation method, device, equipment and medium
CN113490010A (en) Interaction method, device and equipment based on live video and storage medium
CN112906553B (en) Image processing method, apparatus, device and medium
CN116934577A (en) Method, device, equipment and medium for generating style image
CN108845741B (en) AR expression generation method, client, terminal and storage medium
CN114445600A (en) Method, device and equipment for displaying special effect prop and storage medium
WO2023226814A1 (en) Video processing method and apparatus, electronic device, and storage medium
CN107197339B (en) Display control method and device of film bullet screen and head-mounted display equipment
CN117244249A (en) Multimedia data generation method and device, readable medium and electronic equipment
CN114697568B (en) Special effect video determining method and device, electronic equipment and storage medium
CN115082368A (en) Image processing method, device, equipment and storage medium
CN115278107A (en) Video processing method and device, electronic equipment and storage medium
CN114926326A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114666622A (en) Special effect video determination method and device, electronic equipment and storage medium
WO2022213798A1 (en) Image processing method and apparatus, and electronic device and storage medium
CN111367598A (en) Action instruction processing method and device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination