CN113810627B - Video processing method, device, mobile terminal and readable storage medium - Google Patents


Info

Publication number
CN113810627B
CN113810627B (application CN202110933095.2A)
Authority
CN
China
Prior art keywords
video data
camera
video
view
frame area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110933095.2A
Other languages
Chinese (zh)
Other versions
CN113810627A (en)
Inventor
许玉新
曾剑青
张永兴
张刘哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huizhou TCL Cloud Internet Corp Technology Co Ltd
Original Assignee
Huizhou TCL Cloud Internet Corp Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huizhou TCL Cloud Internet Corp Technology Co Ltd filed Critical Huizhou TCL Cloud Internet Corp Technology Co Ltd
Priority to CN202110933095.2A priority Critical patent/CN113810627B/en
Publication of CN113810627A publication Critical patent/CN113810627A/en
Priority to PCT/CN2022/106791 priority patent/WO2023016214A1/en
Application granted granted Critical
Publication of CN113810627B publication Critical patent/CN113810627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

The invention discloses a video processing method, a device and a mobile terminal. The method comprises: acquiring first video data shot by a first camera; analyzing the first video data and identifying in it a viewfinder area that satisfies a frame-forming condition; acquiring second video data shot by a second camera, processing the second video data, and superimposing it onto the viewfinder area of the first video data so that the display size of the second video data matches the size of the viewfinder area; and generating, from the superimposed video, a composite video file carrying the auxiliary video image in the viewfinder. The invention adds a new function to the mobile terminal: it can locate a viewfinder that satisfies a preset rule in the shooting preview data of the main camera, automatically fit the footage of the other camera into that viewfinder, and synthesize the two into one video, which is convenient for users.

Description

Video processing method, device, mobile terminal and readable storage medium
Technical Field
The present invention relates to the technical field of mobile terminal video processing, and in particular to a video processing method, a device, a mobile terminal and a readable storage medium.
Background
With the development of technology and the continuous improvement of living standards, mobile terminals such as mobile phones are becoming more and more popular, and the mobile phone has become an indispensable communication tool in daily life.
Smartphones keep gaining features, and a camera has become a standard function of every handset; as user demands grow, most models now carry both front and rear cameras. In the prior art, however, the video shot by each camera of a mobile terminal is an independent video, so the video shooting function is limited and in some cases inconvenient for users.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The technical problem to be solved by the invention is, in view of the above defects of the prior art, to provide a video processing method, a device, a mobile terminal and a storage medium. The invention adds a new function to the mobile terminal: it can locate a viewfinder that satisfies a preset rule in the shooting preview data of the main camera, automatically fit the other video data into that viewfinder, and perform video synthesis, which is convenient for users.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
A video processing method, wherein the method comprises:
acquiring first video data shot by a first camera;
analyzing the first video data shot by the first camera, and identifying from it a viewfinder area that satisfies a frame-forming condition;
acquiring second video data shot by a second camera, processing the second video data, and superimposing it onto the viewfinder area of the first video data so that the display size of the second video data matches the size of the viewfinder area;
and generating, from the superimposed video, a composite video file carrying the auxiliary video image in the viewfinder.
The video processing method, wherein the step of acquiring the first video data shot by the first camera comprises:
setting a dual-camera dual-shooting function on the mobile terminal in advance; when the dual-camera dual-shooting function is started, controlling the first camera and the second camera, which face different directions, to shoot simultaneously, and superimposing the data shot by the two cameras onto one interface for display.
The video processing method, wherein the step of acquiring the first video data shot by the first camera comprises:
when the mobile terminal detects that the dual-camera dual-shooting function is started, controlling the first camera and the second camera, located in different directions on the mobile terminal, to start shooting simultaneously;
previewing and displaying the video data shot by the first camera as the first video data, and taking the video data shot by the second camera as the second video data;
acquiring the first video data shot by the first camera;
and acquiring the second video data shot by the second camera as auxiliary preview data.
The video processing method, wherein the step of analyzing the first video data shot by the first camera and identifying from it a viewfinder area that satisfies the frame-forming condition comprises:
analyzing the first video data shot by the first camera, and automatically identifying, according to a preset AI algorithm, whether a quadrilateral or circular window satisfying the frame-forming condition exists in the main preview interface generated from the first video data, so as to confirm whether the main preview interface contains a viewfinder area;
and when such a quadrilateral or circular window exists in the main preview interface formed from the first video data, taking the quadrilateral or circular window area as the viewfinder area in the main preview interface.
The video processing method, wherein the step of acquiring the second video data shot by the second camera, processing it and superimposing it onto the viewfinder area of the first video data so that its display size matches the size of the viewfinder area comprises:
acquiring the second video data shot by the second camera as auxiliary preview data, the first camera being a rear camera of the mobile terminal and the second camera being a front camera of the mobile terminal;
automatically editing the second video data, based on the viewfinder area in the main preview interface generated from the first video data, into video data whose size automatically matches that of the viewfinder area;
and controlling the edited second video data to be superimposed on the viewfinder area and displayed together with it.
The video processing method, wherein the step of automatically editing the second video data, based on the viewfinder area in the main preview interface generated from the first video data, into video data whose size automatically matches that of the viewfinder area comprises:
taking the second video data shot by the second camera as auxiliary preview data, and cropping or rotating the second video data so that its display size automatically matches the size of the viewfinder area, thereby editing it into video data matched to the viewfinder area.
The video processing method, wherein the step of acquiring the second video data shot by the second camera further comprises:
receiving an operation instruction and taking an existing video file as the second video data.
The video processing method, wherein the step of superimposing the processed second video data onto the viewfinder area of the first video data comprises:
extracting from the second video a portrait video with the background ignored;
and superimposing the extracted portrait video onto the viewfinder area of the first video data.
A video processing apparatus, wherein the apparatus comprises:
a first acquisition module, configured to acquire first video data shot by a first camera;
a viewfinder area identification module, configured to analyze the first video data shot by the first camera and identify from it a viewfinder area that satisfies a frame-forming condition;
and a video synthesis processing module, configured to acquire second video data shot by a second camera, process the second video data and superimpose it onto the viewfinder area of the first video data so that its display size matches the size of the viewfinder area, and to generate, from the superimposed video, a composite video file carrying the auxiliary video image in the viewfinder.
A mobile terminal, comprising a memory, a processor, and a video processing program stored in the memory and executable on the processor, wherein the processor, when executing the video processing program, implements the steps of any one of the above video processing methods.
A computer-readable storage medium having stored thereon a video processing program which, when executed by a processor, implements the steps of any one of the above video processing methods.
Beneficial effects: compared with the prior art, the invention provides a video processing method that analyzes the image data in the video shot by a main camera to find a regular quadrilateral or circular viewfinder, such as a parallelogram or an approximately parallelogram-shaped area, and designates that area as the preview display area of another video. The invention adds a new function to the mobile terminal: it can locate a viewfinder that satisfies a preset rule in the capture preview data of the main camera, automatically fit the other video data into that viewfinder, and perform dual-capture video synthesis.
Drawings
Fig. 1 is a flowchart of a specific implementation of a video processing method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of viewfinder area recognition performed on quadrilaterals and circles according to an embodiment of the present invention.
Fig. 3 is a functional effect diagram of using image quality equalization for an auxiliary video image according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart of synthesizing the videos shot by the front and rear cameras according to a second embodiment of the present invention.
Fig. 5 is a schematic flow chart of synthesizing the rear-camera video with an existing video file according to a third embodiment of the present invention.
Fig. 6 is a schematic flow chart of post-processing a composite video according to a third embodiment of the present invention.
Fig. 7 is a schematic block diagram of a video processing apparatus according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of an internal structure of a terminal device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that, if directional indications (such as up, down, left, right, front and rear) are involved in the embodiments of the present invention, they are merely used to explain the relative positional relationship, movement conditions, etc. between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, descriptions such as "first" and "second" in the embodiments of the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but only on the basis that a person skilled in the art can realize the combination; when technical solutions contradict each other or a combination cannot be realized, that combination should be considered absent and outside the scope of protection claimed by the present invention.
With the improvement of living standards, people increasingly like to record and share the interesting moments of their lives. Mobile terminal manufacturers, such as mobile phone makers, have responded to this demand: the camera pixel counts of newly developed phones increase year by year, and with the rise of short-video applications, more and more users shoot with mobile terminals such as phones and upload the results to the network. Video special effects have developed from early beautification to today's AI face swapping, decoration and other special effects and applications related to phone cameras.
However, in the prior art, when a user wants to shoot the scenery in front of the rear camera and the user's own image from the front camera at the same time, the only options are to rotate the phone and shoot the two views separately, or to let a software algorithm splice the front and rear pictures together in real time. Video shot in either way is monotonous, stiff and jarring: the scenery from the rear camera and the content from the front camera are not blended skillfully, so a natural and interesting shooting effect cannot be achieved.
To solve the above problems, in the video processing method of this embodiment, when a user starts the preset dual-camera dual-shooting function, the mobile terminal intelligently analyzes the first video data of the first camera, searches it for a viewfinder area and generates that area automatically, automatically fits and superimposes the content shot by the second camera into the viewfinder area, and finally synthesizes a natural and interesting video combining the front and rear cameras. Whenever the user needs to shoot the content of the front and rear cameras at the same time, a natural, interesting composite video that skillfully combines the two can be obtained.
Exemplary method
First embodiment
As shown in fig. 1, an embodiment of the present invention provides a video processing method that may be used in mobile devices such as mobile phones and tablet computers. The method in the embodiment of the invention comprises the following steps:
step S100, acquiring first video data shot by a first camera;
the method comprises the following steps of:
the mobile terminal is provided with a double-camera double-shooting function in advance, and the double-camera double-shooting function is used for controlling the first camera and the second camera which are started in different directions to shoot simultaneously when the double-camera double-shooting function is started, and data shot by the first camera and the second camera respectively are overlapped to an interface for display, and the method is described in detail below.
In this embodiment, a dual-camera dual-shooting function is configured in advance on the mobile terminal used by the user. When the user wakes the function up with a tap or another operation, the mobile terminal starts two cameras facing different directions to shoot simultaneously. When the mobile terminal has more than two cameras, the application interface of the dual-camera dual-shooting function additionally provides a selection interface on which the user manually chooses the first camera and the second camera, so that any camera can serve as either one and the user can shoot a greater variety of videos according to their own ideas. The data shot by the first and second cameras are superimposed into one display interface for the user to record.
Therefore, after the user starts the dual-camera dual-shooting function, the mobile terminal automatically takes the data shot by the default (or user-selected) first camera as the first video data, and takes the video data shot by the second camera as auxiliary preview data.
For example, besides a front camera and a rear camera, the mobile phone A of user A is further equipped with a retractable rotary periscope camera at its upper edge. After user A opens the preset dual-camera dual-shooting function, phone A by default turns on the front and rear cameras to acquire camera pictures, treats the rear camera as the first camera, and acquires the first video data it shoots. When user A wants the periscope camera to be the first camera and the rear camera to be the second camera, the assignment is changed through a camera-switching option preset in the dual-camera dual-shooting function: the periscope camera rises and starts shooting after receiving the capture-start instruction, and phone A acquires the first video data collected by the periscope camera while acquiring the second video data collected by the rear camera as auxiliary preview data. This way of freely assigning the first and second cameras lets user A customize the shooting angle when using the dual-camera dual-shooting function.
Further, in step S200, the first video data shot by the first camera is analyzed, and a viewfinder area satisfying the frame-forming condition is automatically identified in the main preview interface generated from the first video data;
in this embodiment, the first video data, which is the shooting data of the selected first camera, is analyzed, the mobile terminal controls the first video data to generate a main preview interface, and automatically identifies a viewfinder area conforming to a frame forming condition in the main preview interface through an intelligent algorithm such as AI, where the frame forming condition includes a closed area formed by lines appearing in the video, for example, when the shooting object is glasses, the edge wire frames of the glasses lens on each side can form a closed area, that is, the shot glasses can obtain two viewfinder areas; or automatically identifying objects available as a viewfinder area by intelligent algorithms such as AI, for example, billboards, mirrors and unfolded white paper including roadsides, etc., when such objects are identified by the mobile terminal, they can be automatically used as viewfinder areas; or identifying whether a quadrilateral or circular window is contained in the main preview interface, for example, a wall is provided with dot wallpaper with different sizes and different colors, and each dot in the wallpaper can be regarded as a circular window and is controlled to generate a view-finding frame area. The method has the advantages that when the object or composition shot by the user is too complex, the selection of the view-finding frame area can be carried out on the image in the main preview interface in various modes, and when a plurality of view-finding frame areas are identified in the image shot by the user, the selection of the view-finding frame area can be switched by the user in a clicking mode, so that more intelligent and diversified composite videos are provided for the user, the composite videos have no offence and sense, the interestingness of the videos is increased, and the fun and the viscosity of the user on shooting are enhanced.
For example, user A is indoors and starts the preset dual-camera dual-shooting function of phone A; the rear camera of phone A is the first camera and the front camera is the second camera. The main preview interface obtained by the first camera contains three main subjects: a pair of sunglasses, a round window and a hanging picture. Analysis by the AI algorithm finds that the frames on the two sides of the sunglasses each form a closed area, so the main preview interface is judged to contain a first viewfinder area associated with the sunglasses; the window is found to be round and to satisfy the preset condition, so it is judged to be a second viewfinder area; and the hanging picture in the video is recognized by the AI algorithm as a subject usable as a viewfinder area and judged to be a third viewfinder area. Finally, phone A takes the hanging picture as the default viewfinder area and processes the video to generate it.
If user A is not satisfied with the hanging-picture viewfinder area selected by phone A, the viewfinder area can be modified directly on the main preview interface: for example, double-tapping the round window in the main preview picture makes phone A cancel the original video processing operation and take the round window as the new viewfinder area for video processing.
In addition, as shown in fig. 2, in an application that uses a regular figure as the viewfinder area, the picture shot by user A through the rear camera of phone A is a billboard. Phone A finds through the AI algorithm that the picture contains a quadrilateral, namely the shape of the billboard, and determines that its side length (or, for a circle, its diameter) is greater than 1/4 of the screen width; it therefore decides that the quadrilateral billboard area can serve as the viewfinder area and controls its generation.
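The 1/4-screen-width size gate in this example can be sketched as follows. This is a minimal illustration with assumed names; the patent only states the threshold, not how the measurement is made:

```python
import math

def quad_max_side(vertices):
    """Longest edge of a quadrilateral given four (x, y) corners in order."""
    return max(
        math.dist(vertices[i], vertices[(i + 1) % 4]) for i in range(4)
    )

def is_large_enough(shape, screen_width):
    """Apply the 1/4-screen-width gate to a ('quad', corners) or
    ('circle', (cx, cy, radius)) candidate region."""
    kind, data = shape
    if kind == "quad":
        return quad_max_side(data) > screen_width / 4
    if kind == "circle":
        return 2 * data[2] > screen_width / 4   # diameter vs. threshold
    return False
```

On a 1080-pixel-wide screen, for instance, a billboard with a 300-pixel edge passes the gate while a circle of radius 100 (diameter 200 < 270) does not.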
Further, in step S300, the second video data shot by the second camera is acquired, processed and superimposed onto the viewfinder area of the first video data so that its display size matches the size of the viewfinder area, and the superimposed video is turned into a composite video file carrying the auxiliary video image in the viewfinder.
In this embodiment, the mobile terminal acquires the second video data, i.e. the auxiliary preview data shot by the second camera that is to be superimposed on the viewfinder area. The mobile terminal controls the second video data, serving as auxiliary preview data, to be superimposed in the viewfinder area, automatically matches the display size and orientation of the auxiliary preview data to the main preview interface, and generates a composite file carrying the auxiliary video image in the viewfinder according to a shooting instruction.
Regarding the implementation of superimposing the second video data as auxiliary preview data in the viewfinder area, one option is to first crop the second video data to the same size as the viewfinder area and then superimpose the cropped data in the viewfinder area to generate the composite video file; this approach is simple, convenient and easy to realize.
Alternatively, the second video data may first be superimposed on the viewfinder area and then the part exceeding the viewfinder area cut away to generate the composite video file; cropping is then even more convenient, since the boundary of the viewfinder area can be used directly as the cutting edge.
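One plausible way to compute the crop in the first variant is to take the largest center crop of the auxiliary frame that has the viewfinder's aspect ratio, so scaling it to the viewfinder size needs no distortion. A hedged sketch under that assumption, with illustrative names:

```python
def center_crop_for_viewfinder(src_w, src_h, vf_w, vf_h):
    """Return (x, y, w, h) of the largest center crop of a src_w x src_h
    frame whose aspect ratio matches a vf_w x vf_h viewfinder."""
    target_ratio = vf_w / vf_h
    if src_w / src_h > target_ratio:
        # source is wider than the viewfinder: keep full height, trim sides
        crop_h = src_h
        crop_w = int(round(src_h * target_ratio))
    else:
        # source is taller: keep full width, trim top and bottom
        crop_w = src_w
        crop_h = int(round(src_w / target_ratio))
    x = (src_w - crop_w) // 2
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h
```

For a 1920x1080 auxiliary frame and a square 500x500 viewfinder, this yields a 1080x1080 crop centered horizontally, which is then scaled down to the viewfinder size.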
The automatic matching method comprises one or more of rotating, zooming, translating, cropping and deforming the auxiliary video image. When the mobile terminal cannot automatically place the main subject of the auxiliary video image at the center of the viewfinder area, or at the position the user wants to display, the subject can be translated by dragging, which is convenient for the user.
Further, to make the superimposed composite video more natural, before the auxiliary video image is processed, an AI algorithm identifies whether the viewfinder area is tilted relative to the mobile terminal's camera, i.e. whether the surface in the viewfinder area squarely faces the camera, using the image in the viewfinder area and/or the shadow information of the surrounding environment. When it is judged not to face the camera squarely, the tilt angle relative to the camera is further analyzed and the auxiliary video image is deformed so as to display the same tilt. Video spliced this way is not a simple superposition of two videos; it simulates the effect of the second camera's auxiliary image being attached to the real object in the viewfinder area, which makes the video look more natural and improves the user experience.
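A rough sketch of one piece of this tilt estimate: if the viewfinder's top edge is not horizontal in the preview, the auxiliary image can at least be rotated by the same angle before compositing. Full perspective correction would need a homography between the viewfinder quadrilateral and a rectangle; this illustrates only the in-plane angle, and all names are assumptions of the sketch:

```python
import math

def top_edge_tilt_degrees(quad):
    """Angle of the quad's top edge versus horizontal, for four corners
    ordered top-left, top-right, bottom-right, bottom-left.
    Positive means the right corner sits lower in image coordinates."""
    (x1, y1), (x2, y2) = quad[0], quad[1]
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```

A level billboard yields 0 degrees; a frame whose top edge rises 1 pixel per pixel of width yields 45 degrees, and the auxiliary image would be rotated accordingly.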
In addition, when the user moves the first camera while using the dual-camera dual-shooting function and the viewfinder area deforms, the same method can transform the auxiliary video image to the same tilt angle and size in real time, further eliminating any jarring, unnatural feel in the composite video.
Meanwhile, an image-quality equalization function can be enabled as required: the first video data of the first camera is analyzed for parameters such as resolution, frame rate, blur level and filter effect, and the same image-quality processing is applied to the auxiliary video image so that it blends further into the picture. As shown in fig. 3, with the wide-angle lens on, the first camera is aimed at a car's rearview mirror while the second camera shoots one frame of the car's rear seat. Because the blur level of the first video data shot by the first camera is high, the mobile terminal automatically blurs the edges of the auxiliary video image, making the composite picture more natural and artistic.
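As a toy illustration of the equalization idea (the patent names the parameters but not the processing), a simple box blur can stand in for the softening applied to the auxiliary image when the main video is blurrier. The 1-D grayscale row and all names below are assumptions of the sketch:

```python
def box_blur_row(pixels, radius):
    """Box-blur a 1-D list of grayscale values with the given radius;
    a larger radius would be chosen when the main video is blurrier."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A sharp spike `[0, 0, 100, 0, 0]` blurred with radius 1 spreads to roughly a third of its height across three pixels, mimicking how the auxiliary image's hard edges are softened to match the main video.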
In a further embodiment, the step of superimposing the processed second video data onto the viewfinder area of the first video data may further comprise: extracting from the second video a portrait video with the background ignored, and superimposing the extracted portrait video onto the viewfinder area of the first video data. That is, the second video data shot by the second camera can be processed as required to extract its portrait content (ignoring the background); the extracted portrait video is then superimposed onto the viewfinder area of the first video data, and the superimposed video is turned into a composite video file carrying the auxiliary video image in the viewfinder. This synthesis better fits user needs and spares the user any extra processing of the auxiliary video, which is convenient.
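The portrait-overlay step can be sketched as masked copying: given a per-pixel foreground mask from whatever segmentation method is used (the patent does not specify one), portrait pixels replace the viewfinder region of the main frame and the ignored background is left untouched. Frames are plain nested lists here, and all names are illustrative:

```python
def overlay_portrait(main, portrait, mask, top, left):
    """Copy portrait pixels (where mask is True) into a copy of main,
    anchored at row `top`, column `left`; masked-out background pixels
    of the portrait leave the main frame visible."""
    out = [row[:] for row in main]          # work on a copy of main
    for r, mask_row in enumerate(mask):
        for c, is_fg in enumerate(mask_row):
            if is_fg:
                out[top + r][left + c] = portrait[r][c]
    return out
```

Because only foreground pixels are copied, the first camera's scenery shows through wherever the portrait's background was ignored, which is exactly the effect this embodiment describes.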
Further, the auxiliary video data need not come only from the second camera; it also includes history video files stored on the mobile terminal, i.e. a history video file chosen by the user can be imported into the viewfinder area, giving the user more room to compose synthetic videos and improving the user experience.
Second embodiment
The process according to the invention is described in further detail below by way of a specific application example:
In this embodiment, a mobile phone is taken as an example of the mobile terminal. As shown in fig. 4, the video processing method in this embodiment includes the following steps:
step S10, starting and entering step S11;
step S11, starting a rear camera for previewing, and entering step S12;
step S12, controlling to start an AI algorithm, and entering step S13;
step S13, detecting quadrilaterals and/or circles in the rear camera preview data, and entering step S14;
step S14, analyzing whether a quadrilateral and/or circular area exists in the rear camera preview data; entering step S15 when such an area exists, and returning to step S14 when it does not;
step S15, calculating the position information of the identified quadrilateral and/or circular area, including vertices, center points, radii and the like, and entering step S16;
step S16, transmitting the position information to the front camera for previewing, establishing a preview window, and entering step S17;
step S17, superimposing the front camera preview data over the qualifying window area of the rear camera preview, and entering step S18;
step S18, starting video recording, and entering step S19;
step S19, closing video recording, and entering step S20;
and step S20, ending.
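The shape detection in steps S13-S15 could be realized with OpenCV (contour approximation for quadrilaterals, a Hough transform for circles). The dependency-free sketch below instead classifies a single bright blob by area heuristics, purely to illustrate the kind of position information (vertices, center, radius) those steps compute; all thresholds are hypothetical:

```python
import numpy as np

def detect_region(gray, thresh=128):
    # Steps S13-S15: find a bright blob in the preview frame and decide whether
    # it is quadrilateral-like or circle-like, returning its position info.
    ys, xs = np.nonzero(gray > thresh)
    if ys.size == 0:
        return None                            # step S14: nothing found, keep scanning
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
    area = ys.size
    box_area = (bottom - top + 1) * (right - left + 1)
    center = (ys.mean(), xs.mean())
    r = (bottom - top + 1) / 2.0               # radius estimate from blob height
    if area / box_area > 0.95:                 # blob fills its bounding box: quad
        return {"shape": "quad",
                "vertices": [(top, left), (top, right),
                             (bottom, right), (bottom, left)]}
    if abs(np.pi * r * r - area) / area < 0.1: # blob area matches a disc: circle
        return {"shape": "circle", "center": center, "radius": r}
    return None

frame = np.zeros((80, 80), np.uint8)
frame[20:50, 30:70] = 255                      # a bright rectangular "window"
info = detect_region(frame)
```

A `None` result corresponds to the loop back to step S14; a dictionary result carries exactly the vertex/center/radius information that step S16 hands to the front camera preview.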
In the second embodiment of the present invention, the user starts the front and rear cameras simultaneously and stitches the dual-camera, dual-shot video on a single mobile phone, so that the composite video is more natural and attractive and the user experience is improved.
As shown in fig. 4, after a user B starts the dual-camera dual-shooting function of mobile phone B, the phone starts the rear camera and previews the video it captures, and at the same time starts an AI algorithm to analyze the rear camera preview data and determine whether it contains quadrilateral and/or circular images. When a quadrilateral and/or circular image area is detected, the phone calculates its position information, including the vertex coordinates, center coordinates, radius and so on. The position information is sent to the front camera preview, where the qualifying area is marked so that the user can check whether the content captured by the front camera falls inside it. Once the user confirms the placement of the front camera image, mobile phone B superimposes the front camera preview data over the qualifying window of the rear camera and starts recording; after the user taps the stop-recording button, recording ends and a natural-looking composite video is generated.
Third embodiment
In this embodiment, a tablet computer is taken as an example of the mobile terminal. As shown in fig. 5, the video processing method in this embodiment includes the following steps:
step S30, starting and proceeding to step S31;
step S31, starting a rear camera for previewing, and entering step S32;
step S32, controlling to start an AI algorithm, and entering step S33;
step S33, detecting quadrilaterals and/or circles in the rear camera preview data, and entering step S34;
step S34, analyzing whether a quadrilateral and/or circular area exists in the rear camera preview data; entering step S35 when such an area exists, and returning to step S34 when it does not;
step S35, calculating the position information of the identified quadrilateral and/or circular area, including vertices, center points, radii and the like, and entering step S36;
step S36, upon the user clicking the area, jumping to the file system of the tablet computer and opening the selected video file, and entering step S37;
step S37, superimposing the floating playback frame of the opened video over the detected window of the rear camera, and entering step S38;
step S38, starting video recording, and entering step S39;
Step S39, closing video recording, and entering step S40;
and step S40, ending.
From the above, in the third specific application embodiment of the present invention, video stitching with an existing video through the rear camera is achieved on a single tablet computer, which makes the video more interesting and strengthens the user's enthusiasm for, and engagement with, shooting.
As shown in fig. 5, the main flow from step S30 to step S35 is the same as in the second embodiment and is not repeated here. After the tablet computer identifies the position information of the quadrilateral and/or circular area, it opens its file system (for example the system gallery) in response to the user clicking the area, receives the user's selection of a video file, and displays the selected file in a floating frame. After the user starts recording, the front camera records the live picture while the floating frame plays the pre-stored video at its original speed; when the user presses the stop-recording button, recording ends.
After the recording is finished, as shown in fig. 6, the recorded video is further processed. The method comprises the following steps:
Step S50, starting video post-processing, and proceeding to step S51 and step S61;
step S51, loading a first video file shot by a rear camera, and entering step S52;
step S52, analyzing each frame of the first video file by utilizing a video editing algorithm, and entering step S53;
step S53, the tablet personal computer starts an AI algorithm, and the step S54 is entered;
step S54, the tablet pc controls detection of quadrangles and/or circles for each frame in the first video file, and the step S55 is entered;
step S55, judging whether a quadrilateral and/or circular area is detected, if yes, entering step S56, and if not, entering step S54;
step S56, calculating the position information of the detected area, including vertices, center points, radii and the like, and proceeding to step S63;
step S61, loading a second video file pre-stored in the file system of the tablet computer, and entering step S62;
step S62, analyzing each frame of the second video file by utilizing a video editing algorithm, and entering step S63;
step S63, the tablet personal computer processes the first video file and the second video file in a cutting and/or rotating mode according to the position information and the image data, and the step S64 is entered;
step S64, re-encoding the superimposed single-frame images into a new video file, storing the new video file, and proceeding to step S70;
step S70, end.
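Steps S50 to S70 above amount to a frame-by-frame compositing loop. In the sketch below, decoded videos are modeled as lists of numpy arrays and the resize is nearest-neighbour; real code would decode, resize and re-encode with a video library such as OpenCV or FFmpeg, and the region tuple and sizes are hypothetical:

```python
import numpy as np

def resize_nearest(img, h, w):
    # Nearest-neighbour scaling; production code would use a proper resampler.
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def compose(first_frames, second_frames, region):
    # Steps S63-S64: fit each second-video frame into the detected region of
    # the matching first-video frame; the returned frames would then be
    # re-encoded (e.g. to H.264) into the new video file.
    top, left, h, w = region
    merged = []
    for f1, f2 in zip(first_frames, second_frames):
        frame = f1.copy()
        frame[top:top + h, left:left + w] = resize_nearest(f2, h, w)
        merged.append(frame)
    return merged

first = [np.zeros((48, 64, 3), np.uint8) for _ in range(3)]       # rear-camera frames
second = [np.full((24, 24, 3), 255, np.uint8) for _ in range(3)]  # stored video frames
merged = compose(first, second, region=(10, 20, 16, 16))
```

Pairing frames with `zip` also captures the implicit assumption that the two clips are composited frame-for-frame; a real editor would additionally handle differing lengths and frame rates.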
From the above, the video post-processing method in this embodiment further refines the video frames previewed by the user, which improves the image quality of the composite video and enhances the user experience.
As shown in fig. 6, after the user finishes recording, the tablet computer post-processes the first video file captured by its rear camera together with the original second video file in the file system. On the one hand, it analyzes each frame of the first video file, starts the AI algorithm to detect quadrilaterals and/or circles in each frame, and, when a qualifying area is detected, calculates its position information, including parameters such as vertices, center points and radii. On the other hand, it analyzes each frame of the loaded second video file and, using the obtained vertex, center and radius parameters, superimposes the first and second video files through the cropping or rotation of an editing tool. Finally, the superimposed frame images are re-encoded into a new video file; the encoded video formats include, but are not limited to, H.264, MP4, MKV and the like.
Exemplary apparatus
As shown in fig. 7, an embodiment of the present invention provides a video processing apparatus including: a first acquisition module 710, a viewfinder area identification module 720, and a video composition processing module 730. Specifically, the first acquisition module 710 is configured to acquire first video data captured by the first camera; the viewfinder area identification module 720 is configured to analyze the first video data captured by the first camera and automatically identify a viewfinder area that meets the frame forming condition from the main preview interface generated by the first video data; and the video composition processing module 730 is configured to acquire second video data captured by the second camera, control the second video data to be superimposed on the viewfinder area, automatically match the display size of the second video data with the viewfinder area, and generate a composite video file with the viewfinder auxiliary video image according to the capturing instruction.
Based on the above embodiment, the present invention also provides a terminal device, a functional block diagram of which may be shown in fig. 8. The terminal device comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the terminal device provides computing and control capabilities. The memory of the terminal device comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for their operation. The network interface of the terminal device is used to communicate with an external terminal through a network connection. The computer program is executed by the processor to implement a video processing method. The display screen of the terminal device may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the schematic block diagram of fig. 8 is merely a block diagram of a portion of the structure related to the present invention, and does not constitute a limitation of the terminal device to which the present invention is applied, and that a specific terminal device may include more or less components than those shown, or may combine some components or have a different arrangement of components.
In one embodiment, a terminal device is provided, the terminal device including a memory, a processor, and a video processing program stored in the memory and executable on the processor; when executed, the program performs the following steps:
acquiring first video data shot by a first camera;
analyzing first video data shot by a first camera, and automatically identifying a view-finding frame area meeting frame forming conditions from a main preview interface generated by the first video data;
and acquiring second video data shot by a second camera, controlling the second video data to be overlapped on the view-finding frame area, automatically matching the display size of the second video data with the size of the view-finding frame area, and generating a composite video file with a view-finding frame auxiliary video image according to a shooting instruction.
The step of obtaining the first video data shot by the first camera comprises the following steps:
The method comprises the steps of setting a double-camera double-shooting function on the mobile terminal in advance, controlling the first camera and the second camera which are started in different directions to shoot simultaneously when the double-camera double-shooting function is started, and overlapping data shot by the first camera and the second camera to an interface for displaying.
The step of obtaining the first video data shot by the first camera comprises the following steps:
when the mobile terminal detects that the double-camera double-shooting function is started, the mobile terminal controls the first camera and the second camera in different directions of the mobile terminal to start shooting at the same time;
previewing and displaying the video data shot by the first camera as first video data, and taking the video data shot by the second camera as second video data;
acquiring first video data shot by a first camera;
and acquiring second video data shot by the second camera, and taking the second video data shot by the second camera as auxiliary preview data.
The step of analyzing the first video data shot by the first camera and automatically identifying the view-finding frame area meeting the frame forming condition from the main preview interface generated by the first video data comprises the following steps:
Analyzing first video data shot by a first camera, and identifying whether a quadrilateral or circular window conforming to a frame forming condition exists or not from the first video data according to a preset AI algorithm so as to confirm whether a main preview interface generated by the first video data comprises a view finding frame area or not;
and when a quadrilateral or circular window which meets the conditions exists in the main preview interface formed by the first video data, taking the quadrilateral or circular window area as a view-finding frame area in the main preview interface generated by the first video data.
The step of obtaining second video data shot by a second camera, controlling the second video data to be overlapped to the view-finding frame area, automatically matching the display size of the second video data with the size of the view-finding frame area, and generating a synthesized video file with a view-finding frame auxiliary video image according to a shooting instruction comprises the following steps:
acquiring auxiliary preview data shot by a second camera as second video data; the first camera is a rear camera of the mobile terminal, and the second camera is a front camera of the mobile terminal;
automatically editing second video data based on a view-finding frame area in a main preview interface generated by the first video data, and editing the second video data into video data with the size automatically matched with that of the view-finding frame area;
Controlling the edited second video data to be overlapped on the view-finding frame area to be displayed together; and generating a composite video file with the auxiliary video image of the view frame according to the shooting instruction.
The step of automatically editing the second video data into video data with the size automatically matched with the size of the view frame area comprises the following steps of:
and taking the auxiliary preview data shot by the second camera as second video data, cutting or rotating the second video data, automatically matching the display size of the second video data with the size of the view-finding frame area, and editing the second video data into video data which is automatically matched with the size of the view-finding frame area.
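The crop-or-rotate fitting described above can be sketched as follows; the rotation test and the centre-crop policy are illustrative assumptions about how "automatically matched" might be implemented:

```python
import numpy as np

def fit_to_viewfinder(frame, vf_h, vf_w):
    # Rotate on a portrait/landscape mismatch, centre-crop to the viewfinder
    # aspect ratio, then scale to the exact viewfinder size.
    h, w = frame.shape[:2]
    if (h > w) != (vf_h > vf_w):                # orientation mismatch: rotate 90 degrees
        frame = np.rot90(frame)
        h, w = frame.shape[:2]
    target = vf_w / vf_h
    if w / h > target:                          # too wide: crop columns
        new_w = int(h * target)
        x0 = (w - new_w) // 2
        frame = frame[:, x0:x0 + new_w]
    else:                                       # too tall: crop rows
        new_h = int(w / target)
        y0 = (h - new_h) // 2
        frame = frame[y0:y0 + new_h]
    ys = np.arange(vf_h) * frame.shape[0] // vf_h   # nearest-neighbour scale
    xs = np.arange(vf_w) * frame.shape[1] // vf_w
    return frame[ys][:, xs]

clip = np.zeros((90, 60, 3), np.uint8)          # portrait second-camera frame
fitted = fit_to_viewfinder(clip, 30, 40)        # landscape viewfinder area
```

Centre-cropping before scaling preserves the subject's aspect ratio, which is why the claim distinguishes cutting from plain resizing.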
The step of obtaining the second video data shot by the second camera further comprises the following steps:
and receiving an operation instruction from the existing video file as second video data.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-transitory computer readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM), among others.
In summary, the invention discloses a video processing method, a device and a mobile terminal, wherein the method comprises the following steps: acquiring first video data shot by a first camera and second video data shot by a second camera; analyzing first video data shot by a main camera, and finding out a view finding frame area in a main preview interface generated by the first video data; and controlling to superimpose auxiliary preview data shot by a second camera on the view-finding frame area in the main preview interface generated based on the first video data, and automatically matching the display size of the auxiliary preview data with the size of the view-finding frame area. The invention adds new functions to the mobile terminal: the method can find out the view finding frame meeting the preset rule from the shooting preview data of the main camera, automatically match the shooting data of the other camera to the view finding frame, and perform video synthesis, thereby providing convenience for users.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method of video processing, the method comprising:
acquiring first video data shot by a first camera;
analyzing first video data shot by a first camera, and identifying a view-finding frame area conforming to a frame forming condition from the first video data;
acquiring second video data shot by a second camera, processing the second video data, and overlapping the second video data to a view-finding frame area of the first video data to enable the display size of the second video data to be matched with the size of the view-finding frame area;
generating a composite video file with auxiliary video images of a view frame from the superimposed video;
the method for identifying the view frame area meeting the frame forming conditions from the first video data comprises the following steps of:
analyzing first video data shot by a first camera, and automatically identifying whether a quadrilateral or circular window conforming to a frame forming condition exists in a main preview interface generated by the first video data according to a preset AI algorithm so as to confirm whether a view finding frame area is included in the main preview interface generated by the first video data;
when a quadrilateral or circular window meeting the conditions exists in a main preview interface formed by the first video data, taking the quadrilateral or circular window area as a view-finding frame area in the main preview interface generated by the first video data;
When a plurality of viewfinder areas are identified in the main preview interface generated from the first video data captured by the first camera, selecting one of them as the viewfinder area, and when the selected viewfinder area does not meet the preset requirement, re-selecting another as the viewfinder area;
the step of obtaining the second video data shot by the second camera further comprises the following steps:
receiving an operation instruction from an existing video file to serve as second video data;
the step of superimposing the second video data after processing to the viewfinder area of the first video data includes:
extracting a portrait video, with the background ignored, from the second video data;
superimposing the extracted portrait video to a viewfinder area of the first video data;
the step of obtaining second video data shot by a second camera, processing the second video data, and then overlapping the second video data to a view-finding frame area of the first video data, so that the display size of the second video data is matched with the size of the view-finding frame area comprises the following steps:
acquiring second video data shot by a second camera as auxiliary preview data; the first camera is a rear camera of the mobile terminal, and the second camera is a front camera of the mobile terminal;
Automatically editing second video data based on a view-finding frame area in a main preview interface generated by the first video data, and editing the second video data into video data with the size automatically matched with that of the view-finding frame area;
controlling the edited second video data to be overlapped on the view-finding frame area to be displayed together;
the step of automatically editing the second video data into video data with the size automatically matched with the size of the view-finding frame area specifically comprises the following steps of:
cutting the second video data into the same size as the view-finding frame area, and then superposing the cut second video data on the view-finding frame area to generate a synthesized video file with the view-finding frame auxiliary video image;
the step of generating the composite video file with the viewfinder auxiliary video image from the superimposed video includes: before processing the auxiliary video image, identifying, from the light and shadow information of the image in the viewfinder area and/or the surrounding environment, whether the image in the viewfinder directly faces the second camera; and, when it is determined that the image does not directly face the second camera, further analyzing the inclination angle of the image relative to the second camera and deforming the auxiliary video image so that it exhibits the same inclination angle.
2. The video processing method according to claim 1, wherein the step of acquiring the first video data captured by the first camera includes, before:
the method comprises the steps of setting a double-camera double-shooting function on the mobile terminal in advance, controlling the first camera and the second camera which are started in different directions to shoot simultaneously when the double-camera double-shooting function is started, and overlapping data shot by the first camera and the second camera to an interface for displaying.
3. The video processing method according to claim 2, wherein the step of acquiring the first video data captured by the first camera includes:
when the mobile terminal detects that the double-camera double-shooting function is started, the mobile terminal controls the first camera and the second camera in different directions of the mobile terminal to start shooting at the same time;
previewing and displaying the video data shot by the first camera as first video data, and taking the video data shot by the second camera as second video data;
acquiring first video data shot by a first camera;
and acquiring second video data shot by the second camera, and taking the second video data shot by the second camera as auxiliary preview data.
4. The video processing method according to claim 1, wherein the step of automatically editing second video data into video data automatically matching the size of the frame area based on the frame area in the main preview interface generated from the first video data includes:
and taking the second video data shot by the second camera as auxiliary preview data, cutting or rotating the second video data, automatically matching the display size of the second video data with the size of the view-finding frame area, and editing the second video data into video data which is automatically matched with the size of the view-finding frame area.
5. A video processing apparatus, the apparatus comprising:
the first acquisition module is used for acquiring first video data shot by the first camera;
the device comprises a view frame region identification module, a display module and a display module, wherein the view frame region identification module is used for analyzing first video data shot by a first camera and identifying a view frame region which meets the frame forming condition from the first video data;
the video synthesis processing module is used for acquiring second video data shot by a second camera, processing the second video data and then overlapping the second video data to a view-finding frame area of the first video data so as to enable the display size of the second video data to be matched with the size of the view-finding frame area; generating a composite video file with auxiliary video images of a view frame from the superimposed video;
The method for identifying the view frame area meeting the frame forming conditions from the first video data comprises the following steps of:
analyzing first video data shot by a first camera, and automatically identifying whether a quadrilateral or circular window conforming to a frame forming condition exists in a main preview interface generated by the first video data according to a preset AI algorithm so as to confirm whether a view finding frame area is included in the main preview interface generated by the first video data;
when a quadrilateral or circular window meeting the conditions exists in a main preview interface formed by the first video data, taking the quadrilateral or circular window area as a view-finding frame area in the main preview interface generated by the first video data;
when a plurality of viewfinder areas are identified in the main preview interface generated from the first video data captured by the first camera, selecting one of them as the viewfinder area, and when the selected viewfinder area does not meet the preset requirement, re-selecting another as the viewfinder area;
the step of obtaining the second video data shot by the second camera further comprises the following steps:
Receiving an operation instruction from an existing video file to serve as second video data;
the step of superimposing the second video data after processing to the viewfinder area of the first video data includes:
extracting a portrait video, with the background ignored, from the second video data;
superimposing the extracted portrait video to a viewfinder area of the first video data;
the step of obtaining second video data shot by a second camera, processing the second video data, and then overlapping the second video data to a view-finding frame area of the first video data, so that the display size of the second video data is matched with the size of the view-finding frame area comprises the following steps:
acquiring second video data shot by a second camera as auxiliary preview data; the first camera is a rear camera of the mobile terminal, and the second camera is a front camera of the mobile terminal;
automatically editing second video data based on a view-finding frame area in a main preview interface generated by the first video data, and editing the second video data into video data with the size automatically matched with that of the view-finding frame area;
controlling the edited second video data to be overlapped on the view-finding frame area to be displayed together;
The step of automatically editing the second video data into video data with the size automatically matched with the size of the view-finding frame area specifically comprises the following steps of:
cutting the second video data into the same size as the view-finding frame area, and then superposing the cut second video data on the view-finding frame area to generate a synthesized video file with the view-finding frame auxiliary video image;
the step of generating the composite video file with the auxiliary video image with the view frame from the superimposed video specifically comprises the following steps:
before processing the auxiliary video image, identifying, from the light and shadow information of the image in the viewfinder area and/or the surrounding environment, whether the image in the viewfinder directly faces the camera of the mobile terminal; and, when it is determined that the image does not directly face the second camera, further analyzing the inclination angle of the image relative to the second camera and deforming the auxiliary video image so that it exhibits the same inclination angle.
6. A mobile terminal comprising a memory, a processor and a video processing program stored in the memory and executable on the processor, the processor implementing the steps of the video processing method according to any of claims 1-4 when the video processing program is executed by the processor.
7. A computer readable storage medium, wherein a video processing program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the video processing method according to any of claims 1-4.
CN202110933095.2A 2021-08-12 2021-08-12 Video processing method, device, mobile terminal and readable storage medium Active CN113810627B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110933095.2A CN113810627B (en) 2021-08-12 2021-08-12 Video processing method, device, mobile terminal and readable storage medium
PCT/CN2022/106791 WO2023016214A1 (en) 2021-08-12 2022-07-20 Video processing method and apparatus, and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110933095.2A CN113810627B (en) 2021-08-12 2021-08-12 Video processing method, device, mobile terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN113810627A CN113810627A (en) 2021-12-17
CN113810627B true CN113810627B (en) 2023-12-19

Family

ID=78942952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110933095.2A Active CN113810627B (en) 2021-08-12 2021-08-12 Video processing method, device, mobile terminal and readable storage medium

Country Status (2)

Country Link
CN (1) CN113810627B (en)
WO (1) WO2023016214A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810627B (en) * 2021-08-12 2023-12-19 惠州Tcl云创科技有限公司 Video processing method, device, mobile terminal and readable storage medium
CN117411982A (en) * 2022-07-05 2024-01-16 摩托罗拉移动有限责任公司 Augmenting live content

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105847636A (en) * 2016-06-08 2016-08-10 维沃移动通信有限公司 Video recording method and mobile terminal
CN108184052A (en) * 2017-12-27 2018-06-19 努比亚技术有限公司 A kind of method of video record, mobile terminal and computer readable storage medium
CN108234891A (en) * 2018-04-04 2018-06-29 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN109035191A (en) * 2018-08-01 2018-12-18 Oppo(重庆)智能科技有限公司 Image processing method, picture processing unit and terminal device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
CN104580910B (en) * 2015-01-09 2018-07-24 宇龙计算机通信科技(深圳)有限公司 Image combining method based on forward and backward camera and system
CN106027900A (en) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
JP6765054B2 (en) * 2016-08-05 2020-10-07 フリュー株式会社 Photo creation game console and display method
CN111200686A (en) * 2018-11-19 2020-05-26 中兴通讯股份有限公司 Photographed image synthesizing method, terminal, and computer-readable storage medium
CN112804458B (en) * 2021-01-11 2022-03-01 广东小天才科技有限公司 Shooting view finding method and device, terminal equipment and storage medium
CN113824871A (en) * 2021-08-02 2021-12-21 惠州Tcl云创科技有限公司 Method and device for processing superposed photographing of front camera and rear camera of mobile terminal and mobile terminal
CN113810627B (en) * 2021-08-12 2023-12-19 惠州Tcl云创科技有限公司 Video processing method, device, mobile terminal and readable storage medium


Also Published As

Publication number Publication date
CN113810627A (en) 2021-12-17
WO2023016214A1 (en) 2023-02-16

Similar Documents

Publication Publication Date Title
CN113810627B (en) Video processing method, device, mobile terminal and readable storage medium
US20150077591A1 (en) Information processing device and information processing method
US7876334B2 (en) Photography with embedded graphical objects
WO2022227393A1 (en) Image photographing method and apparatus, electronic device, and computer readable storage medium
CN107197144A (en) Filming control method and device, computer installation and readable storage medium storing program for executing
WO2022110837A1 (en) Image processing method and device
US11403789B2 (en) Method and electronic device for processing images
WO2022042776A1 (en) Photographing method and terminal
US11024090B2 (en) Virtual frame for guided image composition
CN106791390B (en) Wide-angle self-timer real-time preview method and user terminal
KR20170027266A (en) Image capture apparatus and method for operating the image capture apparatus
KR20130112578A (en) Appratus and method for providing augmented reality information based on user
CN113840070A (en) Shooting method, shooting device, electronic equipment and medium
CN114598819A (en) Video recording method and device and electronic equipment
CN111200686A (en) Photographed image synthesizing method, terminal, and computer-readable storage medium
JP2009187387A (en) Image processing unit, image data communication system, and image processing program
CN108600614B (en) Image processing method and device
CN110177216A (en) Image processing method, device, mobile terminal and storage medium
JP3983623B2 (en) Image composition apparatus, image composition method, image composition program, and recording medium on which image composition program is recorded
CN114697530B (en) Photographing method and device for intelligent view finding recommendation
KR102022559B1 (en) Method and computer program for photographing image without background and taking composite photograph using digital dual-camera
KR101738896B1 (en) Fitting virtual system using pattern copy and method therefor
US11276241B2 (en) Augmented reality custom face filter
US11252341B2 (en) Method and device for shooting image, and storage medium
CN113824871A (en) Method and device for processing superposed photographing of front camera and rear camera of mobile terminal and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant