WO2023016214A1 - Video processing method and apparatus, and mobile terminal - Google Patents

Publication number: WO2023016214A1
Application number: PCT/CN2022/106791
Authority: WIPO (PCT)
Prior art keywords: video data, camera, video, frame area, viewfinder frame
Other languages: French (fr), Chinese (zh)
Inventors: 许玉新, 曾剑青, 张永兴, 张刘哲
Applicant: 惠州Tcl云创科技有限公司

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; studio devices; studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Definitions

  • the present application relates to the technical field of mobile terminal video processing, in particular to a video processing method, device, and mobile terminal.
  • the video shot by each camera of the mobile terminal is an independent video, and the video shooting function is single-purpose, which is inconvenient for users.
  • a video processing method wherein the method includes:
  • the superimposed video is generated into a composite video file with the auxiliary video image inside the viewfinder frame.
  • the video processing method, wherein, before the step of obtaining the first video data captured by the first camera, the method includes:
  • the dual-camera dual-shooting function is pre-set on the mobile terminal; it is used, when the dual-camera dual-shooting function is turned on, to control the first camera and the second camera facing different directions to shoot simultaneously, and to superimpose the data captured by the first camera and the second camera onto one interface for display.
  • the video processing method wherein the step of acquiring the first video data captured by the first camera includes:
  • the mobile terminal detects that the dual-camera dual-shooting function is enabled, and then controls the first camera and the second camera in different directions of the mobile terminal to start shooting at the same time;
  • the video data previewed by the first camera is displayed as the first video data, and the video data captured by the second camera is used as the second video data;
  • the video processing method described above, wherein the step of analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame-forming conditions includes:
  • the video processing method, wherein the step of acquiring the second video data captured by the second camera, processing the second video data, and superimposing it on the viewfinder frame area of the first video data so that the display size of the second video data matches the size of the viewfinder frame area includes:
  • the second video data captured by the second camera is used as auxiliary preview data; the first camera is the rear camera of the mobile terminal, and the second camera is the front camera of the mobile terminal;
  • the second video data is automatically edited to be video data that automatically matches the size of the viewfinder frame area;
  • the video processing method, wherein the step of automatically editing the second video data for the viewfinder frame area in the main preview interface generated from the first video data, editing it into video data that automatically matches the size of the viewfinder frame area, includes:
  • the second video data captured by the second camera is used as the auxiliary preview data, and the second video data is cropped or rotated so that its display size automatically matches the size of the viewfinder frame area.
  • the video processing method wherein the step of acquiring the second video data captured by the second camera further includes:
  • an operation instruction is received to select an existing video file as the second video data.
  • the video processing method wherein the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data includes:
  • a video processing device wherein the device includes:
  • the first obtaining module is used to obtain the first video data captured by the first camera
  • the viewfinder frame area identification module is used to analyze the first video data captured by the first camera, and identify the viewfinder frame area that meets the frame forming conditions from the first video data;
  • the video synthesis processing module is used to obtain the second video data taken by the second camera, and to superimpose the second video data, after processing, on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area; the superimposed video is generated into a composite video file with the auxiliary video image inside the viewfinder frame.
  • a mobile terminal, wherein the mobile terminal includes a memory, a processor, and a video processing program stored in the memory and operable on the processor; when the processor executes the video processing program, the steps of any one of the video processing methods described above are implemented.
  • a computer-readable storage medium wherein a video processing program is stored on the computer-readable storage medium, and when the video processing program is executed by a processor, the steps of any one of the video processing methods described above are realized.
  • the present application provides a video processing method.
  • the present application analyzes the image data in the video data captured by the main camera and finds a regular quadrilateral area, such as a parallelogram or an approximate parallelogram, or a circular area to serve as the viewfinder frame, and divides this area into the preview display area of another video file; this application adds a new function to the mobile terminal: it can identify a viewfinder frame that meets predetermined rules from the camera preview data of the main camera, and another stream of video data is automatically matched to the viewfinder frame to perform dual-camera video synthesis.
  • this application automatically identifies the viewfinder frame in the video data captured by the main camera; the position of the viewfinder frame differs for each shooting subject, and it can be identified and synthesized automatically. Shooting with this application is simple and convenient, which provides convenience for users.
  • FIG. 1 is a flow chart of a specific implementation of a video processing method provided in an embodiment of the present application.
  • FIG. 2 is a schematic diagram of identifying quadrilateral and circular viewfinder frame areas provided by an embodiment of the present application.
  • FIG. 3 is an effect diagram of using an image quality assimilation function for auxiliary video images according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of synthesizing videos captured by front and rear cameras according to the second embodiment of the present application.
  • FIG. 5 is a schematic flowchart of synthesizing a rear camera and a pre-stored video file according to a third embodiment of the present application.
  • FIG. 6 is a schematic flow chart of post-processing the synthesized video provided by the third embodiment of the present application.
  • FIG. 7 is a functional block diagram of a video processing device provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an internal structure of a terminal device provided in an embodiment of the present application.
  • an embodiment of the present application provides a video processing method.
  • the mobile terminal intelligently analyzes the first video data of the first camera, searches for and automatically generates a viewfinder frame area in the first video data, automatically matches and superimposes the content captured by the second camera onto the viewfinder frame area, and finally, through video synthesis, generates a natural and interesting composite video combining the front and rear cameras. When users need to shoot the content of the front camera and the rear camera at the same time, they obtain a natural and interesting composite video that cleverly combines the two.
  • an embodiment of the present application provides a video processing method, and the video processing method can be used in mobile devices such as mobile phones and tablet computers.
  • the method described in the embodiment of this application includes the following steps:
  • Step S100, acquiring the first video data captured by the first camera;
  • step of obtaining the first video data captured by the first camera includes:
  • the dual-camera dual-shooting function is pre-set on the mobile terminal; it is used, when the dual-camera dual-shooting function is turned on, to control the first camera and the second camera facing different directions to shoot simultaneously, and to superimpose the data captured by the two cameras onto one interface for display, as described below.
  • the dual-camera dual-shooting function is pre-set in the mobile terminal used by the user.
  • the mobile terminal turns on the two cameras set in different orientations on the mobile terminal, and the two cameras shoot at the same time.
  • a selection operation interface for the user to manually select the first camera and the second camera is also set in the application interface of the dual-camera dual-shooting function.
  • the mobile terminal automatically obtains the data captured by the default first camera as the first video data, or obtains the data captured by a user-designated first camera as the first video data, and uses the second video data captured by the second camera as auxiliary preview data.
  • the mobile phone A is also provided with a telescoping and rotating periscope camera placed on the upper edge of the mobile phone A.
  • the mobile phone A defaults to enabling the front camera and the rear camera to collect camera images, using the rear camera as the second camera by default and the first camera as the camera for acquiring the first video data.
  • the first camera can be changed to the periscope camera, and the second camera to the rear camera; the periscope camera rises and shoots after receiving the command to start capturing images, and mobile phone A obtains the first video data collected by the periscope camera while simultaneously obtaining the second video data collected by the rear camera as auxiliary preview data.
  • the user A can customize the shooting angle when using the dual-camera dual-shooting function.
  • Step S200, analyzing the first video data captured by the first camera, and automatically identifying the viewfinder frame area meeting the frame-forming condition from the main preview interface generated by the first video data;
  • the selected shooting data of the first camera, i.e., the first video data;
  • the mobile terminal controls the first video data to generate a main preview interface, and automatically identifies, in the main preview interface through AI and other intelligent algorithms, the viewfinder frame area that meets the conditions for forming a frame.
  • the conditions for forming a frame include a closed area composed of lines appearing in the video; for example, when the subject is a pair of glasses, the edge lines of each lens form a closed area, so shooting the glasses yields two viewfinder frame areas. Alternatively, AI and other intelligent algorithms automatically identify objects that can serve as viewfinder frame areas, such as roadside billboards, mirrors, and spread-out sheets of white paper; when the mobile terminal recognizes such objects, it automatically uses them as viewfinder frame areas. The terminal may also recognize quadrilateral or circular windows in the main preview interface; for example, if a wall bears a polka-dot wallpaper with dots of different sizes and colors, each dot can be regarded as a circular window, and the frame area it generates can be controlled.
  • the viewfinder frame area of the image in the main preview interface can be selected in various ways; when multiple viewfinder frame areas are recognized in the image taken by the user, the user can switch the selected viewfinder frame area by tapping. This provides users with more intelligent and diversified composite videos; the composite videos look natural, increase the interest of the video, and enhance the user's enjoyment of shooting and engagement.
  • the rear camera of the mobile phone A is used as the first camera, and the front camera as the second camera.
  • the main preview interface captured by the first camera includes sunglasses, a circular window, and a hanging picture. Mobile phone A determines through AI-algorithm analysis that the frames on both sides of the sunglasses form closed areas, so the main preview interface contains a first viewfinder frame area associated with the sunglasses; analysis shows that the window is circular and meets the pre-set conditions, so the circular window is judged to be the second viewfinder frame area; the hanging picture in the video is recognized by the AI algorithm as a viewfinder-frame subject, so the hanging picture is judged to be the third viewfinder frame area.
  • the mobile phone A uses the hanging picture as the default viewfinder frame area and processes the video to generate its viewfinder frame area.
  • the viewfinder frame area can be changed directly through the main preview interface: for example, when the user double-taps the circular window in the main preview screen, mobile phone A cancels the original video processing operation, uses the circular window as the new viewfinder frame area for video processing, and generates it.
  • the image captured by user A through the rear camera of mobile phone A is a billboard; mobile phone A finds that the image contains a quadrilateral, namely the shape of the billboard, and judges that its side length (or diameter) is greater than 1/4 of the screen width; it therefore determines that the quadrilateral area of the billboard can serve as the viewfinder frame area and controls its generation.
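The size threshold described above (side length or diameter greater than 1/4 of the screen width) can be sketched in Python as follows; this is an illustrative rendering with hypothetical names, not the patent's actual implementation:

```python
# Hedged sketch: accept a detected quadrilateral or circle as a viewfinder
# frame area only if its side length (or diameter) exceeds 1/4 of the
# screen width. The region dict layout is an assumption for illustration.

def qualifies_as_viewfinder(region: dict, screen_width: int) -> bool:
    """Return True if the detected region is large enough to act as a frame."""
    if region["shape"] == "quadrilateral":
        # Use the longest side as the representative dimension.
        xs, ys = zip(*region["vertices"])
        sides = [
            ((xs[i] - xs[(i + 1) % 4]) ** 2 + (ys[i] - ys[(i + 1) % 4]) ** 2) ** 0.5
            for i in range(4)
        ]
        size = max(sides)
    elif region["shape"] == "circle":
        size = 2 * region["radius"]  # diameter
    else:
        return False
    return size > screen_width / 4

# Example: a 400 x 300 px billboard on a 1080 px-wide screen qualifies.
billboard = {"shape": "quadrilateral",
             "vertices": [(100, 100), (500, 100), (500, 400), (100, 400)]}
print(qualifies_as_viewfinder(billboard, 1080))  # True: longest side 400 > 270
```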
  • Step S300, obtaining the second video data captured by the second camera, processing the second video data and superimposing it on the viewfinder frame area of the first video data so that the display size of the second video data matches the size of the viewfinder frame area, and generating, from the superimposed video, a composite video file with the auxiliary video image inside the viewfinder frame.
  • the mobile terminal acquires the second video data captured by the second camera that needs to be superimposed on the viewfinder frame area, that is, auxiliary preview data.
  • the mobile terminal superimposes the second video data, as auxiliary preview data, on the viewfinder frame area, automatically matches the display size and orientation of the auxiliary preview data to the viewfinder frame area, and generates the composite video file according to the shooting instruction.
  • the second video data can be cropped to the same size as the viewfinder frame area, and the cropped second video data is then superimposed on the viewfinder frame area to generate a composite video file with the auxiliary video image inside the viewfinder frame; this is simple, convenient, and easy to implement. Alternatively, the second video data can first be superimposed on the viewfinder frame area and the portion exceeding the area then clipped away, which makes cropping more convenient, since the edges of the viewfinder frame area can be used directly as the cropping boundary.
  • the automatic matching method includes performing one or more of the following operations on the image of the auxiliary video data: rotating, zooming, translating, cropping, or deforming.
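The zoom-and-crop part of this matching can be sketched as a "cover and center-crop" computation; the function below is an assumed illustration (the function name and the cover strategy are ours, not specified by the patent):

```python
# Hedged sketch: scale the auxiliary frame so it fully covers the viewfinder
# area while preserving aspect ratio, then center-crop the overflow.

def fit_to_frame(src_w, src_h, dst_w, dst_h):
    """Return (scale, crop_x, crop_y): the scale factor applied to the source
    and the crop offsets (in scaled pixels) so the result is dst_w x dst_h."""
    scale = max(dst_w / src_w, dst_h / src_h)  # cover the area, don't letterbox
    scaled_w, scaled_h = src_w * scale, src_h * scale
    crop_x = (scaled_w - dst_w) / 2            # trim equally on both sides
    crop_y = (scaled_h - dst_h) / 2
    return scale, crop_x, crop_y

# Example: a 1920x1080 front-camera stream matched to the 400x400 bounding
# box of a circular window: scale by height, then crop the excess width.
scale, cx, cy = fit_to_frame(1920, 1080, 400, 400)
```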
  • an AI algorithm is used to identify whether the viewfinder frame area is tilted relative to the camera of the mobile terminal: the images in the viewfinder frame area and/or the light-and-shadow information of the surrounding environment are used to determine whether the surface in the viewfinder frame is facing the camera; when it is judged not to be facing the camera, its inclination angle relative to the camera is further analyzed, and the auxiliary video image is deformed so that it exhibits the same tilt angle.
  • the video spliced by this method is not a simple superposition of two videos but simulates the effect of attaching the auxiliary video image of the second camera to the real object in the viewfinder frame area and then shooting it, which achieves a more natural video effect and improves the user experience.
  • this method can also transform the auxiliary video image in real time with the same tilt angle and size, further eliminating the incongruity of the synthesized video.
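One simple way to estimate such a tilt angle from foreshortening, assuming the physical frame is rectangular, is sketched below; this approximation is our assumption for illustration, not the patent's stated algorithm:

```python
import math

# Hedged sketch: if the viewfinder frame is physically rectangular but its
# two vertical edges appear with different pixel lengths, the surface is
# turned away from the camera; the foreshortening ratio approximates the
# cosine of the tilt angle to apply to the auxiliary video image.

def estimate_tilt_deg(left_edge_px: float, right_edge_px: float) -> float:
    """Approximate tilt: shorter/longer edge ratio ~ cos(tilt angle)."""
    ratio = min(left_edge_px, right_edge_px) / max(left_edge_px, right_edge_px)
    return math.degrees(math.acos(ratio))

print(round(estimate_tilt_deg(300, 300)))  # 0: equal edges, frame faces camera
print(round(estimate_tilt_deg(260, 300)))  # 30: frame noticeably turned away
```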
  • the image quality assimilation function can be enabled according to the user's needs: the first video data of the first camera is analyzed for parameters such as resolution, frame rate, degree of blur, and filter effects, and the same image-quality processing is applied to the auxiliary video image so that it blends further into the picture.
  • the first camera is aimed at the rearview mirror of the car, and the second camera shoots the rear seat of the car, one frame of which is shown. Because the first video data captured by the first camera has a high degree of blur, the mobile terminal automatically blurs the edges of the auxiliary video image, making the synthesized image more natural and artistic.
  • the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data may further include: extracting, from the second video data, a portrait video with the background removed, and superimposing the extracted portrait video on the viewfinder frame area of the first video data. That is, in the embodiment of the present application, the second video data captured by the second camera may optionally be processed as required to extract the portrait video (ignoring the background), the extracted portrait video is superimposed on the viewfinder frame area of the first video data, and the superimposed video is then generated into a composite video file with the auxiliary video image inside the viewfinder frame. This synthesis better matches user needs and does not require users to process the auxiliary video separately, which provides convenience for users.
  • auxiliary video data is not only provided by the second camera but can also come from historical video files stored in the mobile terminal; that is, a historical video file can be imported into the viewfinder frame area at the user's selection, giving the user more room for composing videos and improving the user experience.
  • the mobile terminal takes a mobile phone as an example, as shown in Figure 4, the video processing method in this specific application embodiment includes the following steps:
  • Step S10, start; proceed to step S11;
  • Step S11, open the rear-camera preview; proceed to step S12;
  • Step S12, start the AI algorithm; proceed to step S13;
  • Step S13, perform quadrilateral and/or circle detection on the preview data of the rear camera; proceed to step S14;
  • Step S14, analyze whether a quadrilateral and/or circular area exists in the preview data of the rear camera; if so, proceed to step S15, otherwise return to step S14;
  • Step S15, calculate the position information of the identified quadrilateral and/or circular area, including vertices, center points, and radii; proceed to step S16;
  • Step S16, transmit the position information to the front-camera preview and build a preview window; proceed to step S17;
  • Step S17, superimpose the preview data of the front camera on top of the rear-camera window that meets the area conditions; proceed to step S18;
  • Step S18, start video recording; proceed to step S19;
  • Step S19, stop video recording; proceed to step S20;
  • the user completes, through a mobile phone, the simultaneous opening of the front camera and the rear camera and the splicing of the dual-camera dual-shooting video, achieving a more natural and beautiful synthesized video and improving the user experience.
  • the mobile phone B turns on the rear camera, previews the video it captures, and simultaneously starts the AI algorithm to analyze the rear camera's preview data, checking whether quadrilateral and/or circular images exist in the video data. When a quadrilateral and/or circular image area is detected, its position information is calculated, including the vertex coordinates, center coordinates, and radius of the area; the position information is sent to the front camera's preview screen, where the qualifying area is rendered so that the user can check whether the content captured by the front camera falls within it. When the user confirms the position of the front-camera image, mobile phone B superimposes the preview data of the front camera on top of the qualifying area window of the rear camera and starts recording; after the user taps the function button to stop recording, a natural-looking composite video is generated.
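The "area position information" computed in this flow (vertex coordinates, center coordinates, radius) could be derived from a detected contour roughly as follows; the centroid/mean-distance circle estimate is an illustrative assumption, not the patent's exact method:

```python
# Hedged sketch: estimate the center and radius of a roughly circular
# contour from its sampled points (contour detection itself is out of scope).

def circle_info(points):
    """Return ((cx, cy), radius) for a list of (x, y) contour points."""
    n = len(points)
    cx = sum(x for x, _ in points) / n                 # centroid x
    cy = sum(y for _, y in points) / n                 # centroid y
    radius = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                 for x, y in points) / n               # mean distance
    return (cx, cy), radius

# Four points sampled on a circle of radius 50 centered at (100, 100):
center, r = circle_info([(150, 100), (100, 150), (50, 100), (100, 50)])
print(center, r)  # (100.0, 100.0) 50.0
```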
  • the mobile terminal takes a tablet computer as an example, as shown in Figure 5, the video processing method in this specific application embodiment includes the following steps:
  • Step S30, start; proceed to step S31;
  • Step S31, open the rear-camera preview; proceed to step S32;
  • Step S32, start the AI algorithm; proceed to step S33;
  • Step S33, perform quadrilateral and/or circle detection on the preview data of the rear camera; proceed to step S34;
  • Step S34, analyze whether a quadrilateral and/or circular area exists in the preview data of the rear camera; if so, proceed to step S35, otherwise return to step S34;
  • Step S35, calculate the position information of the identified quadrilateral and/or circular area, including vertices, center points, and radii; proceed to step S36;
  • Step S36, upon a tap on the area, jump to the file system of the tablet computer, open the selected video file, and proceed to step S37;
  • Step S37, superimpose the floating playback frame of the opened video above the detection window of the rear camera; proceed to step S38;
  • Step S38, start video recording; proceed to step S39;
  • Step S39, stop video recording; proceed to step S40;
  • through a tablet computer, video splicing of the rear camera with an existing video is realized, which increases the fun of the video and enhances the user's enjoyment of shooting and engagement.
  • the main flow from step S30 to step S35 is the same as that of the second embodiment and will not be repeated here.
  • the file system of the tablet computer (such as the system gallery) is opened upon receiving the user's tap on the quadrilateral and/or circular area; the user's selection of a video file is received, and the selected video file is displayed in a floating frame. The front camera then records the live picture while the floating frame plays the pre-stored video at its original speed; when the user presses the stop-recording button, the recording ends.
  • Step S50, start video post-processing; proceed to step S51 and step S61;
  • Step S51, load the first video file captured by the rear camera; proceed to step S52;
  • Step S52, use a video editing algorithm to analyze each frame of the first video file; proceed to step S53;
  • Step S53, the tablet computer starts the AI algorithm; proceed to step S54;
  • Step S54, the tablet computer performs quadrilateral and/or circle detection on each frame of the first video file; proceed to step S55;
  • Step S55, judge whether a quadrilateral and/or circular area is detected; if so, proceed to step S56, otherwise return to step S54;
  • Step S56, calculate the position information of the detected region, including vertices, center points, and radii; proceed to step S63;
  • Step S61, load the second video file pre-stored in the file system of the tablet computer; proceed to step S62;
  • Step S62, use a video editing algorithm to analyze each frame of the second video file; proceed to step S63;
  • Step S63, the tablet computer cuts and/or rotates the first video file and the second video file according to the position information and image data; proceed to step S64;
  • Step S64, re-encode the superimposed single-frame images into a new video file, store it, and proceed to step S70;
  • the video picture previewed by the user is further finely processed, which improves the image quality of the synthesized video and the user experience.
  • the tablet computer separately post-processes the first video file captured by its rear camera and the original second video file in its file system.
  • each frame of the first video file is analyzed: the AI algorithm is started to detect quadrilaterals and/or circles in each frame image, and when a qualifying region is detected, its position information is calculated, including parameters such as vertices, center points, and radii. Meanwhile, each frame of the loaded second video file is analyzed; using the obtained vertex, center, and radius parameters, editing tools cut or rotate the second video file, the first video file and the second video file are superimposed together, and the superimposed frame images are finally re-encoded into a new video file. The encoded video format includes but is not limited to H.264, MP4, MKV, and other video formats.
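The per-frame superposition preceding re-encoding can be sketched with frames represented as plain 2D lists of pixel values (an assumed simplification of real image buffers; real code would operate on decoded image planes):

```python
# Hedged sketch: paste an edited auxiliary frame into the detected region
# of a main frame, producing the composite frame to be re-encoded.

def superimpose(main_frame, aux_frame, top, left):
    """Overwrite the region of main_frame at (top, left) with aux_frame."""
    out = [row[:] for row in main_frame]          # copy; keep the original
    for r, aux_row in enumerate(aux_frame):
        for c, pixel in enumerate(aux_row):
            out[top + r][left + c] = pixel
    return out

main = [[0] * 4 for _ in range(4)]                # 4x4 background frame
aux = [[9, 9], [9, 9]]                            # 2x2 auxiliary frame
print(superimpose(main, aux, 1, 1))
# Rows 1-2, columns 1-2 now hold the auxiliary pixels.
```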
  • an embodiment of the present application provides a video processing device, which includes: a first acquisition module 710 , a viewfinder frame area identification module 720 , and a video synthesis module 730 .
  • the first acquisition module 710 is configured to acquire the first video data captured by the first camera;
  • the viewfinder frame area identification module 720 is configured to analyze the first video data captured by the first camera and to automatically identify, in the main preview interface generated from the first video data, the viewfinder frame area that meets the frame-forming conditions;
  • the video synthesis processing module 730 is configured to obtain the second video data captured by the second camera, superimpose the second video data on the viewfinder frame area, automatically match the display size of the second video data to the size of the viewfinder frame area, and generate, according to the shooting instruction, a composite video file with the auxiliary video image inside the viewfinder frame.
  • the present application further provides a terminal device, the functional block diagram of which may be shown in FIG. 8 .
  • the terminal device includes a processor, a memory, a network interface, and a display screen connected through a system bus.
  • the processor of the terminal device is used to provide calculation and control capabilities.
  • the memory of the terminal device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer programs.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the network interface of the terminal device is used to communicate with external terminals through a network connection.
  • the computer program implements a video processing method when executed by the processor.
  • the display screen of the terminal device may be a liquid crystal display screen or an electronic ink display screen.
  • FIG. 8 is only a block diagram of a partial structure related to the solution of this application and does not constitute a limitation on the terminal device to which the solution is applied; the specific terminal device may include more or fewer components than shown in the diagram, combine certain components, or have a different arrangement of components.
  • a terminal device, in one embodiment, includes a memory, a processor, and a video processing program stored in the memory and operable on the processor; when the processor executes the video processing program, the following steps are performed:
  • step of obtaining the first video data captured by the first camera includes:
  • the dual-camera dual-shooting function is pre-set on the mobile terminal; it is used, when the dual-camera dual-shooting function is turned on, to control the first camera and the second camera facing different directions to shoot simultaneously, and to superimpose the data captured by the first camera and the second camera onto one interface for display.
  • the step of obtaining the first video data taken by the first camera includes:
  • the mobile terminal detects that the dual-camera dual-shooting function is enabled, and then controls the first camera and the second camera in different directions of the mobile terminal to start shooting at the same time;
  • the video data previewed by the first camera is displayed as the first video data, and the video data captured by the second camera is used as the second video data;
  • the step of analyzing the first video data captured by the first camera, and automatically identifying the viewfinder frame area that meets the frame forming conditions from the main preview interface generated by the first video data includes:
  • the acquisition of the second video data captured by the second camera controls the superimposition of the second video data on the viewfinder frame area, and automatically matches the display size of the second video data with the size of the viewfinder frame area, according to
  • the steps of generating a synthetic video file with auxiliary video images of the viewfinder frame by the shooting instruction include:
  • the auxiliary preview data taken by the second camera is the second video data;
  • the first camera is the rear camera of the mobile terminal, and the second camera is the front camera of the mobile terminal;
  • the second video data is automatically edited to be video data that automatically matches the size of the viewfinder frame area;
  • the second video data is automatically edited based on the framing area in the main preview interface generated by the first video data, and the step of editing the video data automatically matching the size of the framing area includes:
  • the auxiliary preview data shot by the second camera is used as the second video data, and the second video data is cut or rotated, and the display size of the second video data is automatically matched with the size of the viewfinder frame area, and edited to be the same as the second video data.
  • the size of the frame area automatically matches the video data.
  • the step of obtaining the second video data taken by the second camera also includes:
  • the operation instruction is received from the existing video file as the second video data.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
  • the present application discloses a video processing method, device, and mobile terminal.
  • the method includes: acquiring first video data captured by a first camera, and acquiring second video data captured by a second camera; analyzing the first video data to find the viewfinder frame area in the main preview interface generated from the first video data; and superimposing the auxiliary preview data on the viewfinder frame area so that the display size of the auxiliary preview data automatically matches the size of the viewfinder frame area.
  • this application adds a new function to the mobile terminal: a viewfinder frame meeting predetermined rules can be found in the camera preview data of the main camera, the data captured by another camera is automatically matched to the viewfinder frame, and video synthesis is performed, providing convenience for the user.

Abstract

The present application discloses a video processing method and apparatus, and a mobile terminal. The method comprises: in first video data captured by a first camera, identifying a view finding frame area conforming to a frame forming condition; processing second video data captured by a second camera and then superimposing said data on the view finding frame area of the first video data, so that the display size of the second video data matches the size of the view finding frame area; and, according to the video after superimposition, generating a synthesized video file carrying an auxiliary video image of the view finding frame. The present application can provide convenience for a user performing video synthesis.

Description

Video processing method, device, and mobile terminal
This application claims priority to the Chinese patent application filed on August 12, 2021 with application No. 2021109330952 and entitled "Video processing method, device, and mobile terminal", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of video processing on mobile terminals, and in particular to a video processing method, a video processing device, and a mobile terminal.
Background
With the development of science and technology and the continuous improvement of people's living standards, mobile terminals such as mobile phones have become increasingly popular, and mobile phones have become an indispensable communication tool in people's lives.
As smartphones proliferate, a camera has become a standard feature of every mobile phone. As user demands grow, cameras offer more and more functions, for example front and rear cameras. However, in prior-art mobile terminals the video captured by each camera is a separate video; the video shooting function is limited and sometimes inconvenient for users.
Therefore, the prior art still needs to be improved.
Technical Problem
The video captured by each camera of a mobile terminal is a separate video; the video shooting function is limited, which is inconvenient for users.
Technical Solution
To solve the above technical problem, the technical solution adopted by the present application is as follows:
A video processing method, wherein the method includes:
acquiring first video data captured by a first camera;
analyzing the first video data captured by the first camera, and identifying, from the first video data, a viewfinder frame area that meets a frame forming condition;
acquiring second video data captured by a second camera, and processing the second video data and superimposing it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area;
generating, from the superimposed video, a composite video file with an auxiliary video image in the viewfinder frame.
In the video processing method, before the step of acquiring the first video data captured by the first camera, the method includes:
presetting a dual-camera dual-shooting function on the mobile terminal, which, when enabled, controls a first camera and a second camera facing different directions to shoot simultaneously, and superimposes the data captured by the first camera and the second camera onto one display interface.
In the video processing method, the step of acquiring the first video data captured by the first camera includes:
when the mobile terminal detects that the dual-camera dual-shooting function is enabled, controlling the first camera and the second camera facing different directions of the mobile terminal to start shooting simultaneously;
displaying the video data captured by the first camera in preview as the first video data, and using the video data captured by the second camera as the second video data;
acquiring the first video data captured by the first camera;
and acquiring the second video data captured by the second camera, and using the second video data captured by the second camera as auxiliary preview data.
In the video processing method, the step of analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame forming condition includes:
analyzing the first video data captured by the first camera, and automatically identifying, according to a preset AI algorithm, whether the main preview interface generated from the first video data contains a quadrilateral or circular window that meets the frame forming condition, so as to confirm whether the main preview interface generated from the first video data includes a viewfinder frame area;
when the main preview interface formed from the first video data contains a qualifying quadrilateral or circular window, using the quadrilateral or circular window area as the viewfinder frame area in the main preview interface generated from the first video data.
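The identification step above can be illustrated with a much-simplified sketch. Instead of the AI algorithm the application describes, the function below merely takes a boolean mask marking candidate "window" pixels (assumed to come from some upstream detector) and returns the bounding box that would serve as the viewfinder frame area; the function name and the mask-based interface are illustrative, not the patented implementation.

```python
import numpy as np

def find_viewfinder_area(window_mask: np.ndarray):
    """Return (top, left, height, width) of the candidate window region,
    or None when the preview contains no region meeting the condition.
    A simplified stand-in for the AI quadrilateral/circle detection."""
    ys, xs = np.nonzero(window_mask)
    if ys.size == 0:
        return None
    top, left = int(ys.min()), int(xs.min())
    height = int(ys.max()) - top + 1
    width = int(xs.max()) - left + 1
    return top, left, height, width

# A 12x16 "main preview" whose billboard occupies rows 3-8, cols 5-12.
preview_mask = np.zeros((12, 16), dtype=bool)
preview_mask[3:9, 5:13] = True
print(find_viewfinder_area(preview_mask))  # → (3, 5, 6, 8)
```

A real implementation would instead run contour or shape detection on each preview frame; the bounding-box reduction here only shows where the viewfinder frame coordinates come from.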
In the video processing method, the step of acquiring the second video data captured by the second camera, processing the second video data and superimposing it on the viewfinder frame area of the first video data so that the display size of the second video data matches the size of the viewfinder frame area includes:
acquiring the second video data captured by the second camera as auxiliary preview data, wherein the first camera is a rear camera of the mobile terminal and the second camera is a front camera of the mobile terminal;
based on the viewfinder frame area in the main preview interface generated from the first video data, automatically editing the second video data into video data that automatically matches the size of the viewfinder frame area;
controlling the edited second video data to be superimposed on the viewfinder frame area and displayed together with it.
In the video processing method, the step of automatically editing the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, into video data that automatically matches the size of the viewfinder frame area includes:
using the second video data captured by the second camera as auxiliary preview data, and cropping or rotating the second video data so that its display size automatically matches the size of the viewfinder frame area, thereby editing it into video data that automatically matches the size of the viewfinder frame area.
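The crop-and-match step can be sketched minimally, assuming each frame of the second video is a NumPy array: centre-crop to the viewfinder's aspect ratio, then rescale with nearest-neighbour sampling to the viewfinder size. The function name is invented for illustration, and rotation handling is omitted.

```python
import numpy as np

def edit_to_fit(frame: np.ndarray, fh: int, fw: int) -> np.ndarray:
    """Centre-crop `frame` to the viewfinder's aspect ratio, then rescale
    (nearest neighbour) so its display size matches the fh x fw viewfinder
    frame area. A sketch; a real implementation would also rotate."""
    h, w = frame.shape[:2]
    if h * fw > w * fh:                      # too tall -> trim rows
        new_h = w * fh // fw
        top = (h - new_h) // 2
        frame = frame[top:top + new_h]
    else:                                    # too wide -> trim columns
        new_w = h * fw // fh
        left = (w - new_w) // 2
        frame = frame[:, left:left + new_w]
    # nearest-neighbour rescale to exactly (fh, fw)
    rows = np.arange(fh) * frame.shape[0] // fh
    cols = np.arange(fw) * frame.shape[1] // fw
    return frame[rows][:, cols]

aux = np.arange(480 * 640 * 3).reshape(480, 640, 3)  # mock camera frame
print(edit_to_fit(aux, 100, 100).shape)  # → (100, 100, 3)
```

In practice this would run per frame in the camera pipeline; nearest-neighbour sampling is chosen here only to keep the sketch dependency-free.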
In the video processing method, the step of acquiring the second video data captured by the second camera further includes:
receiving an operation instruction to select an existing video file as the second video data.
In the video processing method, the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data includes:
extracting, from the second video data, a portrait video with the background removed;
and superimposing the extracted portrait video on the viewfinder frame area of the first video data.
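A minimal sketch of this portrait superimposition, assuming a separate segmentation step has already produced a boolean person mask (the mask source, names, and single-channel frames are all illustrative):

```python
import numpy as np

def overlay_portrait(main_frame, portrait, person_mask, top, left):
    """Superimpose only the masked (person) pixels of `portrait` onto the
    viewfinder frame area of `main_frame`; background pixels of the
    portrait frame are ignored, leaving the main video visible there."""
    out = main_frame.copy()
    h, w = portrait.shape[:2]
    region = out[top:top + h, left:left + w]   # a view into `out`
    region[person_mask] = portrait[person_mask]
    return out

main = np.zeros((8, 8), dtype=np.uint8)        # mock main preview frame
portrait = np.full((4, 4), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                          # "person" in the centre
out = overlay_portrait(main, portrait, mask, 2, 2)
print(int(out[3, 3]), int(out[0, 0]))  # → 200 0
```

The boolean-mask assignment writes through the slice view, so only person pixels replace the main frame inside the viewfinder area.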
A video processing device, wherein the device includes:
a first acquisition module, configured to acquire first video data captured by a first camera;
a viewfinder frame area identification module, configured to analyze the first video data captured by the first camera and identify, from the first video data, a viewfinder frame area that meets a frame forming condition;
a video synthesis processing module, configured to acquire second video data captured by a second camera, process the second video data and superimpose it on the viewfinder frame area of the first video data so that the display size of the second video data matches the size of the viewfinder frame area, and generate, from the superimposed video, a composite video file with an auxiliary video image in the viewfinder frame.
A mobile terminal, wherein the mobile terminal includes a memory, a processor, and a video processing program stored in the memory and operable on the processor; when the processor executes the video processing program, the steps of any one of the video processing methods described above are implemented.
A computer-readable storage medium, wherein a video processing program is stored on the computer-readable storage medium; when the video processing program is executed by a processor, the steps of any one of the video processing methods described above are implemented.
Beneficial Effects
Compared with the prior art, the present application provides a video processing method. The present application analyzes the image data in the video data captured by the main camera to find a regular quadrilateral or circular viewfinder frame, such as a parallelogram or an approximately parallelogram-shaped area, and designates that area as the preview display area of another video. The present application adds a new function to the mobile terminal: a viewfinder frame meeting predetermined rules can be found in the camera preview data of the main camera, the other video data is automatically matched to the viewfinder frame, and dual-camera video synthesis is performed. The present application automatically identifies the viewfinder frame in the video data captured by the main camera; since the position of the viewfinder frame differs for each subject, it can be automatically identified and synthesized. Shooting with the present application is simple and convenient, providing convenience for users.
Description of Drawings
FIG. 1 is a flowchart of a specific implementation of the video processing method provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of identifying viewfinder frame areas from quadrilaterals and circles according to an embodiment of the present application.
FIG. 3 is an effect diagram of applying the image quality assimilation function to the auxiliary video image according to an embodiment of the present application.
FIG. 4 is a schematic flowchart of synthesizing the videos captured by the front and rear cameras according to a second embodiment of the present application.
FIG. 5 is a schematic flowchart of synthesizing the rear camera video and a pre-stored video file according to a third embodiment of the present application.
FIG. 6 is a schematic flowchart of post-processing the composite video according to the third embodiment of the present application.
FIG. 7 is a functional block diagram of the video processing device provided by an embodiment of the present application.
FIG. 8 is a schematic diagram of the internal structure of a terminal device provided by an embodiment of the present application.
Embodiments of the Present Application
To make the purpose, technical solution, and effects of the present application clearer and more definite, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, not to limit it.
It should be noted that if the embodiments of the present application involve directional indications (such as up, down, left, right, front, back, ...), these directional indications are only used to explain the relative positional relationships, movements, etc. of the components in a certain posture (as shown in the accompanying drawings); if that posture changes, the directional indications change accordingly.
In addition, if the embodiments of the present application involve descriptions such as "first" and "second", these descriptions are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and not to be within the scope of protection claimed by the present application.
With the improvement of people's living standards, people are increasingly fond of recording and sharing their daily lives. Accordingly, the camera resolution of mobile phones developed by mobile terminal manufacturers has risen year by year. With the rise of short-video applications, more and more users shoot with mobile terminals such as mobile phones and upload the results to the Internet, and video special effects have evolved from simple beautification to AI face swapping, decoration, blue-line effects, and many other camera effects and applications.
In the prior art, however, when a user wants to simultaneously capture the scene from the rear camera and the user's own image from the front camera, the only options are to rotate the phone and shoot separately, or to stitch the front and rear camera feeds together in real time through a simple software algorithm. Either way, the resulting video appears monotonous, stiff, and unnatural; the scene from the rear camera and the content from the front camera are not skillfully blended, and a natural, interesting shooting effect is not achieved.
To solve the above problems, an embodiment of the present application provides a video processing method. According to the video processing method of this embodiment, when the user enables the preset dual-camera dual-shooting function, the mobile terminal intelligently analyzes the first video data from the first camera, finds and automatically generates a viewfinder frame area in the first video data, automatically matches and superimposes the content captured by the second camera onto the viewfinder frame area, and finally generates a natural, interesting composite video of the front and rear cameras through video synthesis. When the user needs to shoot the front-camera and rear-camera content at the same time, the two are cleverly combined into a natural and interesting composite video.
Exemplary Method
First Embodiment
As shown in FIG. 1, an embodiment of the present application provides a video processing method, which can be used in mobile devices such as mobile phones and tablet computers. In this embodiment of the application, the method includes the following steps:
Step S100: acquiring first video data captured by a first camera.
Before the step of acquiring the first video data captured by the first camera, the method includes:
presetting a dual-camera dual-shooting function on the mobile terminal, which, when enabled, controls a first camera and a second camera facing different directions to shoot simultaneously and superimposes the data captured by the first camera and the second camera onto one display interface, as described in detail below.
In this embodiment, the dual-camera dual-shooting function is preset on the mobile terminal used by the user. When the user wakes up this function by tapping or another operation, the mobile terminal starts two cameras facing different directions to shoot simultaneously. When the mobile terminal has more than two cameras, the application interface of the dual-camera dual-shooting function also provides a selection interface through which the user manually chooses the first camera and the second camera, so that any camera can serve as the first or second camera and the user can shoot a greater variety of videos according to their own ideas; the data captured by the first camera and the second camera are then superimposed onto one display interface for the user to record.
Therefore, when the user enables the dual-camera dual-shooting function, the mobile terminal automatically acquires the data captured by the default first camera, or by the user-selected first camera, as the first video data, and uses the second video data captured by the second camera as auxiliary preview data.
For example, in addition to a front camera and a rear camera, user A's mobile phone A is also provided with a retractable, rotatable periscope camera on its upper edge. When user A enables the preset dual-camera dual-shooting function, phone A by default starts the front camera and the rear camera to capture images, uses the rear camera as the first camera, and acquires the first video data captured by it. When user A wants the periscope camera to serve as the first camera and the rear camera as the second camera, the camera switching option preset in the dual-camera dual-shooting function changes the first camera to the periscope camera and the second camera to the rear camera; upon receiving the instruction to start capturing images, the periscope camera rises and begins shooting, while phone A acquires the first video data collected by the periscope camera and, at the same time, the second video data collected by the rear camera as auxiliary preview data. By freely setting the first and second cameras, user A can customize the shooting angles when using the dual-camera dual-shooting function.
Further, Step S200: analyzing the first video data captured by the first camera, and automatically identifying, from the main preview interface generated from the first video data, a viewfinder frame area that meets the frame forming condition.
In this embodiment, the shooting data of the selected first camera, i.e., the first video data, is analyzed. The mobile terminal generates a main preview interface from the first video data and, in the main preview interface, automatically identifies a viewfinder frame area meeting the frame forming condition through an intelligent algorithm such as AI. The frame forming condition includes a closed area formed by lines appearing in the video; for example, when the subject is a pair of glasses, the frame around each lens forms a closed area, so the glasses yield two viewfinder frame areas. Alternatively, an intelligent algorithm such as AI automatically identifies objects that can serve as a viewfinder frame area, such as roadside billboards, mirrors, or a spread-out sheet of white paper; when the mobile terminal recognizes such an object, it can automatically use it as the viewfinder frame area. Alternatively, the terminal identifies whether the main preview interface contains a quadrilateral or circular window; for example, if a wall bears dot wallpaper with dots of different sizes and colors, each dot in the wallpaper can be regarded as a circular window from which a viewfinder frame area is generated.
The advantage is that when the subject or composition is too complex, the viewfinder frame area can be selected from the image in the main preview interface in various ways, and when multiple viewfinder frame areas are recognized in the image, the user can switch the selection by tapping. This provides the user with smarter, more diverse composite videos that not only look natural but also make the video more interesting, enhancing the user's enjoyment of shooting and engagement.
For example, user A is indoors and has enabled the preset dual-camera dual-shooting function of phone A, with the rear camera as the first camera and the front camera as the second camera. The main preview interface captured by the first camera contains three subjects: sunglasses, a circular window, and a hanging picture. Through AI algorithm analysis, phone A determines that the frames on both sides of the sunglasses each form a closed area, so the main preview interface contains a first viewfinder frame area related to the sunglasses; the window is determined to be a circular window meeting the preset condition, so the circular window is a second viewfinder frame area; and the hanging picture is recognized by the AI algorithm as an object that can serve as a viewfinder frame area, so it is a third viewfinder frame area. Finally, phone A uses the hanging picture as the default viewfinder frame area and processes the video to generate that area.
If user A is dissatisfied with the hanging-picture viewfinder frame area selected by phone A, the viewfinder frame area can be modified directly through the main preview interface; for example, by double-tapping the circular window in the main preview screen, phone A cancels the original video processing operation and uses the circular window as the new viewfinder frame area for video processing.
In addition, as shown in FIG. 2, in an application where a regular shape serves as the viewfinder frame area, the image captured by user A through the rear camera of phone A is a billboard. Phone A finds, through the AI algorithm, that the image contains a quadrilateral, i.e., the billboard outline, and determines that its side length or diameter is greater than 1/4 of the screen width; it therefore determines that the quadrilateral billboard area can serve as the viewfinder frame area and controls the generation of the viewfinder frame area.
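The size rule in this example (a detected side length or diameter must exceed 1/4 of the screen width) reduces to a one-line check. The function name and pixel units below are illustrative only:

```python
def meets_frame_condition(side_or_diameter: int, screen_width: int) -> bool:
    """Frame forming condition from this embodiment: a detected quadrilateral
    or circle qualifies as a viewfinder frame area when its side length or
    diameter exceeds 1/4 of the screen width (sizes in pixels)."""
    return side_or_diameter > screen_width / 4

# On a 1080 px wide screen, a 300 px billboard qualifies; a 200 px one does not.
print(meets_frame_condition(300, 1080), meets_frame_condition(200, 1080))
# → True False
```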
进一步地,步骤S300、获取第二摄像头拍摄的第二视频数据,将所述第二视频数据处理后叠加到所述第一视频数据的取景框区域,使第二视频数据显示大小与所述取景框区域大小匹配,将叠加后的视频生成带取景框辅视频图像的合成视频文件。Further, in step S300, obtain the second video data captured by the second camera, process the second video data and superimpose it on the viewfinder frame area of the first video data, so that the display size of the second video data is the same as that of the viewfinder The size of the frame area is matched, and the superimposed video is generated into a composite video file with an auxiliary video image of the viewfinder frame.
In this embodiment, the mobile terminal acquires the second video data captured by the second camera, that is, the auxiliary preview data, which is to be superimposed on the viewfinder frame area. The mobile terminal superimposes the second video data, as auxiliary preview data, on the viewfinder frame area, automatically matches the display size and orientation of the auxiliary preview data according to the size of the main preview interface, and generates a composite file of the auxiliary video image with the viewfinder frame according to a shooting instruction.
In the embodiment of the present application, when superimposing the second video data as auxiliary preview data on the viewfinder frame area, the second video data may first be cropped to the same size as the viewfinder frame area, and the cropped second video data may then be superimposed on the viewfinder frame area to generate the composite video file with an auxiliary video image in the viewfinder frame. This approach is simple, convenient, and easy to implement.
Alternatively, the second video data may be superimposed on the viewfinder frame area first, and the portion of the second video data that extends beyond the viewfinder frame area may then be cropped away to generate the composite video file. Cropping is more convenient in this case, because the boundary of the viewfinder frame area can be used directly as the cropping edge.
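The crop-then-overlay strategy can be sketched as follows, using small nested lists as stand-in video frames. The function name and the nearest-neighbour resize are illustrative assumptions, not the patent's implementation.

```python
def overlay_into_frame_area(main_frame, aux_frame, x, y, w, h):
    """Resize aux_frame (list of rows) to w*h by nearest neighbour and
    paste it into the viewfinder rectangle (x, y, w, h) of main_frame."""
    ah, aw = len(aux_frame), len(aux_frame[0])
    out = [row[:] for row in main_frame]      # leave the original untouched
    for r in range(h):
        src_row = aux_frame[r * ah // h]
        for c in range(w):
            out[y + r][x + c] = src_row[c * aw // w]
    return out

main = [[0] * 8 for _ in range(6)]   # 8x6 "main" frame of zeros
aux = [[9] * 4 for _ in range(4)]    # 4x4 auxiliary frame of nines
composite = overlay_into_frame_area(main, aux, x=2, y=1, w=4, h=3)
assert composite[1][2] == 9          # inside the viewfinder area
assert composite[0][0] == 0          # outside the area is unchanged
```

The overlay-then-clip variant would instead paste the full auxiliary frame and discard pixels outside the rectangle; both yield the same final region.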
The automatic matching method includes applying one or more of the following processing operations to the image of the auxiliary video data: rotation, scaling, translation, cropping, or deformation. When the mobile terminal cannot automatically place the main subject of the auxiliary video image at the center of the viewfinder frame area or at the position the user wishes to display, the user can translate it by dragging, which is convenient to use.
Further, to make the superimposed composite video more natural, before the auxiliary video image is processed, an AI algorithm identifies whether the viewfinder frame area is tilted relative to the camera of the mobile terminal; that is, the light and shadow information of the image inside the viewfinder frame area and/or its surrounding environment is used to identify whether the image inside the frame directly faces the camera. When it is judged that the image does not directly face the camera, its tilt angle relative to the camera is further analyzed, and the auxiliary video image is deformed so that it exhibits the same tilt angle. A video stitched by this method is not a simple superimposition of two videos; it simulates the effect of attaching the auxiliary video image of the second camera to the physical object in the viewfinder frame area and then shooting it, which makes the video more natural and improves the user experience.
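The deformation step can be approximated with a standard planar homography: solve for the 3x3 projective transform that maps the auxiliary image's corners onto the detected tilted quadrilateral. This is a hedged sketch under the assumption that the tilt is representable as a homography; the corner coordinates below are made up for illustration.

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve the 8-unknown projective mapping src -> dst from 4 corner pairs
    (direct linear transform with h33 fixed to 1)."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

src = [(0, 0), (100, 0), (100, 100), (0, 100)]   # aux image corners
dst = [(10, 20), (90, 10), (95, 110), (5, 100)]  # tilted quad corners
H = homography_from_corners(src, dst)

u, v = warp_point(H, 100, 0)   # each source corner lands on its target
assert abs(u - 90) < 1e-6 and abs(v - 10) < 1e-6
```

Warping every pixel of the auxiliary frame through `H` (or its inverse, for backward mapping) then produces the tilted overlay described in the text.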
In addition, when the user moves the first camera while using the dual-camera dual-shooting function and the viewfinder frame area is deformed as a result, this method can transform the auxiliary video image in real time with the same tilt angle and size, further eliminating any sense of incongruity in the composite video.
At the same time, an image-quality assimilation function can be configured according to the user's needs: the first video data of the first camera is analyzed for parameters such as resolution, frame rate, blur, and filter effects, and the same image-quality processing is applied to the auxiliary video image so that it blends further into the picture. Fig. 3 shows one frame of a video shot with the wide-angle lens enabled, in which the first camera is aimed at a car's rearview mirror and the second camera at the car's back seat. Because the first video data captured by the first camera has a high degree of blur, the mobile terminal automatically blurs the edges of the auxiliary video image, making the composited image more natural and artistic.
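The blur-matching part of the image-quality assimilation can be illustrated with a toy one-dimensional moving-average blur; a real implementation would estimate the main video's blur and apply a comparable two-dimensional blur, which the patent does not specify.

```python
def box_blur(row, radius):
    """Simple moving-average blur over a 1-D row of pixel values,
    with borders clamped to the available neighbourhood."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

edge = [0, 0, 255, 255, 255, 0, 0]   # a hard edge of the aux image
blurred = box_blur(edge, radius=1)
assert blurred[1] == (0 + 0 + 255) / 3   # the edge is softened
```

Applying such a blur only near the pasted region's border, with a radius chosen from the main video's measured blur, produces the edge-softening effect attributed to Fig. 3.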
As a further embodiment of the video processing method, the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data may also include: extracting from the second video a portrait video with the background removed; and superimposing the extracted portrait video on the viewfinder frame area of the first video data. That is, in the embodiment of the present application, the second video data captured by the second camera may, as needed, be processed by extracting the portrait video in the second video data (ignoring the background) and superimposing the extracted portrait video on the viewfinder frame area of the first video data, after which the superimposed video is generated into a composite video file with an auxiliary video image in the viewfinder frame. This synthesis better matches user needs and spares the user any additional processing of the auxiliary video, providing convenience.
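Once a segmentation mask is available, the background-removed portrait overlay reduces to per-pixel mask compositing. The mask here is hard-coded; the patent attributes the segmentation itself to an unspecified AI algorithm.

```python
def composite_portrait(main_row, aux_row, mask_row):
    """Keep aux pixels where the mask marks the person, else keep main pixels."""
    return [a if m else b for b, a, m in zip(main_row, aux_row, mask_row)]

main = [1, 1, 1, 1]     # one row of the viewfinder frame area
aux = [7, 7, 7, 7]      # one row of the second camera's frame
mask = [0, 1, 1, 0]     # 1 = portrait pixel, 0 = background
assert composite_portrait(main, aux, mask) == [1, 7, 7, 1]
```

Running this row by row over the viewfinder rectangle yields the "portrait only" overlay: background pixels of the second video never reach the composite.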
Further, the auxiliary video data need not be provided only by the second camera; it may also include historical video files already stored on the mobile terminal. That is, through the user's selection, a historical video file can be imported into the viewfinder frame area, giving the user more room to compose videos and improving the user experience.
Second embodiment
The method of the present application is described in further detail below through a specific application embodiment:
In this specific application embodiment, the mobile terminal is a mobile phone. As shown in Fig. 4, the video processing method of this specific application embodiment includes the following steps:
Step S10: start; proceed to step S11.
Step S11: open the rear-camera preview; proceed to step S12.
Step S12: start the AI algorithm; proceed to step S13.
Step S13: perform quadrilateral and/or circle detection on the rear-camera preview data; proceed to step S14.
Step S14: analyze whether a quadrilateral and/or circular area exists in the rear-camera preview data; if so, proceed to step S15; if not, return to step S14.
Step S15: calculate the position information of the recognized quadrilateral and/or circular area, including the vertices, center point, and radius; proceed to step S16.
Step S16: pass the position information to the front-camera preview and establish a preview window; proceed to step S17.
Step S17: superimpose the front-camera preview data over the rear-camera window that meets the area condition; proceed to step S18.
Step S18: start recording; proceed to step S19.
Step S19: stop recording; proceed to step S20.
Step S20: end.
As can be seen from the above, in this second specific application embodiment of the present application, the user opens the front camera and the rear camera simultaneously on a single mobile phone and stitches the videos shot by the two cameras, making the composite video more natural and attractive and improving the user experience.
As shown in Fig. 4, when user B enables the dual-camera dual-shooting function of mobile phone B, mobile phone B opens the rear camera, previews the video it captures, and at the same time starts the AI algorithm to analyze the preview data captured by the rear camera, determining whether the video data contains quadrilateral and/or circular images. When a quadrilateral and/or circular image area is detected in the video data, its position information is calculated, including the vertex coordinates, center coordinates, and radius of the area. The position information is then sent to the preview screen of the front camera, where the qualifying area is drawn, so that the user can check whether the content captured by the front camera lies within the qualifying area. Once the user has selected the position of the front-camera image, mobile phone B superimposes the front-camera preview data over the window of the qualifying rear-camera area and starts recording; when the user taps the stop-recording button, recording ends and a seamless composite video is generated.
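The S10 to S20 flow above can be sketched as a small detection-and-overlay loop. The detector, frames, and region tuple are stand-in stubs; a real device would drive the camera APIs and an on-device detection model.

```python
def dual_capture_pipeline(rear_frames, detect_shape, front_frame):
    """Scan rear preview frames until a frame area is found, then describe
    the overlay of the front preview over that area."""
    for frame in rear_frames:          # S11-S14: preview + detection loop
        region = detect_shape(frame)
        if region is not None:         # S15: position info (x, y, w, h)
            x, y, w, h = region
            return {"region": region,  # S16-S17: overlay front preview
                    "overlay": (front_frame, x, y, w, h)}
    return None                        # no qualifying area found

frames = ["no-shape", "no-shape", "billboard"]
detect = lambda f: (10, 20, 40, 30) if f == "billboard" else None
result = dual_capture_pipeline(frames, detect, front_frame="selfie")
assert result["region"] == (10, 20, 40, 30)
```

Recording (S18/S19) would then repeatedly composite the two live streams using the returned region until the user stops capture.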
Third embodiment
In this specific application embodiment, the mobile terminal is a tablet computer. As shown in Fig. 5, the video processing method of this specific application embodiment includes the following steps:
Step S30: start; proceed to step S31.
Step S31: open the rear-camera preview; proceed to step S32.
Step S32: start the AI algorithm; proceed to step S33.
Step S33: perform quadrilateral and/or circle detection on the rear-camera preview data; proceed to step S34.
Step S34: analyze whether a quadrilateral and/or circular area exists in the rear-camera preview data; if so, proceed to step S35; if not, return to step S34.
Step S35: calculate the position information of the recognized quadrilateral and/or circular area, including the vertices, center point, and radius; proceed to step S36.
Step S36: upon a tap on the area, jump to the tablet's file system and open the selected video file; proceed to step S37.
Step S37: superimpose the floating playback frame of the opened video over the detection window of the rear camera; proceed to step S38.
Step S38: start recording; proceed to step S39.
Step S39: stop recording; proceed to step S40.
Step S40: end.
As can be seen from the above, in this third specific application embodiment of the present application, a tablet computer stitches video captured by the rear camera together with an existing video, making the video more entertaining and increasing the user's enjoyment of, and engagement with, shooting video.
As shown in Fig. 5, steps S30 to S35 are the same as the main flow of the second embodiment and are not repeated here. After the tablet recognizes the position information of the quadrilateral and/or circular area, it opens the tablet's file system, for example the system gallery, upon receiving the user's tap on the quadrilateral and/or circular area, receives the user's selection of a video file, and displays the selected video file in a floating frame. After the user starts the recording function, the rear camera provides the live-recorded picture while the video-file floating frame plays the pre-stored video at its original speed; when the user presses the stop-recording button, recording ends.
After recording ends, the recorded video is further processed as shown in Fig. 6, which includes the following steps:
Step S50: start video post-processing; proceed to step S51 and step S61.
Step S51: load the first video file captured by the rear camera; proceed to step S52.
Step S52: parse each frame of the first video file using a video-editing algorithm; proceed to step S53.
Step S53: the tablet starts the AI algorithm; proceed to step S54.
Step S54: the tablet performs quadrilateral and/or circle detection on each frame of the first video file; proceed to step S55.
Step S55: judge whether a quadrilateral and/or circular area is detected; if so, proceed to step S56; if not, return to step S54.
Step S56: calculate the position information of the detected area, including the vertices, center point, and radius; proceed to step S63.
Step S61: load the second video file pre-stored in the tablet's file system; proceed to step S62.
Step S62: parse each frame of the second video file using a video-editing algorithm; proceed to step S63.
Step S63: the tablet processes the first video file and the second video file by cropping and/or rotating according to the position information and the image data; proceed to step S64.
Step S64: re-encode the superimposed single-frame images into a new video file and store it; proceed to step S70.
Step S70: end.
As can be seen from the above, the video post-processing method of this embodiment further refines the video picture the user previewed, improving the image quality of the composite video and the user experience.
As shown in Fig. 6, after the user finishes recording, the tablet post-processes the first video file captured by the tablet's rear camera and the original second video file in the file system, respectively. Each frame of the first video file is analyzed, and the AI algorithm is started to detect quadrilaterals and/or circles in each frame image; when a qualifying area is detected, its position information is calculated, including parameters such as vertices, center point, and radius. Meanwhile, each frame of the loaded second video file is analyzed and, combined with the obtained vertex, center, and radius information, the first and second video files are superimposed by cropping or rotating with an editing tool. Finally, the superimposed single-frame images are re-encoded into a new video file; the encoded video formats include, but are not limited to, H.264, MP4, and MKV.
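The per-frame post-processing described above can be sketched as a loop that detects the frame area in each main-video frame and pastes the corresponding auxiliary frame. Re-encoding into H.264/MP4/MKV is stubbed out, since it would require a real codec; the `detect` and `paste` callables are illustrative stand-ins.

```python
def post_process(main_frames, aux_frames, detect_region, paste):
    """Compose each aux frame into the detected region of each main frame;
    the returned list is what would be fed to a video encoder."""
    out = []
    for main, aux in zip(main_frames, aux_frames):   # S52/S62: per-frame parse
        region = detect_region(main)                 # S54-S56: detection
        out.append(paste(main, aux, region) if region else main)
    return out                                       # S64: frames to re-encode

detect = lambda f: "rect" if "shape" in f else None
paste = lambda m, a, r: f"{m}+{a}"
frames = post_process(["shape1", "plain"], ["a1", "a2"], detect, paste)
assert frames == ["shape1+a1", "plain"]
```

Frames in which no quadrilateral or circle is found pass through unchanged, matching the S55 branch that loops back without compositing.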
Exemplary device
As shown in Fig. 7, an embodiment of the present application provides a video processing apparatus, which includes a first acquisition module 710, a viewfinder frame area identification module 720, and a video synthesis module 730. Specifically, the first acquisition module 710 is configured to acquire the first video data captured by the first camera; the viewfinder frame area identification module 720 is configured to analyze the first video data captured by the first camera and automatically identify, from the main preview interface generated from the first video data, a viewfinder frame area that meets the frame-forming condition; the video synthesis module 730 is configured to acquire the second video data captured by the second camera, superimpose the second video data on the viewfinder frame area, automatically match the display size of the second video data to the size of the viewfinder frame area, and generate, according to a shooting instruction, a composite video file with an auxiliary video image in the viewfinder frame.
Based on the above embodiments, the present application further provides a terminal device, whose functional block diagram may be as shown in Fig. 8. The terminal device includes a processor, a memory, a network interface, and a display screen connected through a system bus. The processor of the terminal device provides computing and control capabilities. The memory of the terminal device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the terminal device is used to communicate with external terminals through a network connection. When executed by the processor, the computer program implements a video processing method. The display screen of the terminal device may be a liquid-crystal display or an electronic-ink display.
Those skilled in the art will understand that the block diagram in Fig. 8 is only a block diagram of part of the structure related to the solution of the present application and does not limit the terminal device to which the solution of the present application is applied; a specific terminal device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a terminal device is provided. The terminal device includes a memory, a processor, and a video processing program stored in the memory and executable on the processor; when executing the program, the processor performs the following steps:
acquiring first video data captured by a first camera;
analyzing the first video data captured by the first camera, and automatically identifying, from the main preview interface generated from the first video data, a viewfinder frame area that meets the frame-forming condition;
acquiring second video data captured by a second camera, superimposing the second video data on the viewfinder frame area, automatically matching the display size of the second video data to the size of the viewfinder frame area, and generating, according to a shooting instruction, a composite video file with an auxiliary video image in the viewfinder frame.
Before the step of acquiring the first video data captured by the first camera, the method includes:
presetting a dual-camera dual-shooting function on the mobile terminal which, when enabled, controls a first camera and a second camera facing different directions to shoot simultaneously and superimposes the data respectively captured by the first camera and the second camera into a single display interface.
The step of acquiring the first video data captured by the first camera includes:
when the mobile terminal detects that the dual-camera dual-shooting function is enabled, controlling the first camera and the second camera, which face different directions on the mobile terminal, to start shooting simultaneously;
displaying a preview of the video data captured by the first camera as the first video data, and taking the video data captured by the second camera as the second video data;
acquiring the first video data captured by the first camera;
and acquiring the second video data captured by the second camera, taking the auxiliary preview data captured by the second camera as the second video data.
The step of analyzing the first video data captured by the first camera and automatically identifying, from the main preview interface generated from the first video data, a viewfinder frame area that meets the frame-forming condition includes:
analyzing the first video data captured by the first camera, and identifying from the first video data, according to a preset AI algorithm, whether there is a quadrilateral or circular window that meets the frame-forming condition, so as to confirm whether the main preview interface generated from the first video data includes a viewfinder frame area;
when the main preview interface formed from the first video data contains a qualifying quadrilateral or circular window, taking the quadrilateral or circular window area as the viewfinder frame area in the main preview interface generated from the first video data.
The step of acquiring the second video data captured by the second camera, superimposing the second video data on the viewfinder frame area, automatically matching the display size of the second video data to the size of the viewfinder frame area, and generating, according to a shooting instruction, a composite video file with an auxiliary video image in the viewfinder frame includes:
acquiring the auxiliary preview data captured by the second camera as the second video data, where the first camera is the rear camera of the mobile terminal and the second camera is the front camera of the mobile terminal;
automatically editing the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, into video data whose size automatically matches the viewfinder frame area;
superimposing the edited second video data on the viewfinder frame area for joint display, and generating, according to a shooting instruction, a composite video file with an auxiliary video image in the viewfinder frame.
The step of automatically editing the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, into video data whose size automatically matches the viewfinder frame area includes:
taking the auxiliary preview data captured by the second camera as the second video data, cropping or rotating the second video data, and automatically matching the display size of the second video data to the size of the viewfinder frame area, thereby editing it into video data that automatically matches the size of the viewfinder frame area.
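The cropping half of this size-matching edit can be illustrated by a center crop to the viewfinder area's aspect ratio before scaling; the function and parameter names are assumptions introduced for the example, not the patent's terminology.

```python
def center_crop_to_aspect(w, h, target_w, target_h):
    """Return (x, y, crop_w, crop_h): the largest centered rectangle inside
    a w*h frame that has the target_w:target_h aspect ratio."""
    if w * target_h > h * target_w:          # source too wide: trim the sides
        crop_w, crop_h = h * target_w // target_h, h
    else:                                    # source too tall: trim top/bottom
        crop_w, crop_h = w, w * target_h // target_w
    return ((w - crop_w) // 2, (h - crop_h) // 2, crop_w, crop_h)

# Fit a 640x480 auxiliary frame to a square viewfinder area.
assert center_crop_to_aspect(640, 480, 100, 100) == (80, 0, 480, 480)
```

After cropping to the matching aspect ratio, a plain scale (and, if needed, a rotation) brings the auxiliary frame to the viewfinder area's exact pixel size.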
The step of acquiring the second video data captured by the second camera further includes:
receiving an operation instruction to select, from existing video files, a file to serve as the second video data.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present application discloses a video processing method, apparatus, and mobile terminal. The method includes: acquiring first video data captured by a first camera and second video data captured by a second camera; analyzing the first video data captured by the main camera to find the viewfinder frame area in the main preview interface generated from the first video data; and, based on the viewfinder frame area in the main preview interface generated from the first video data, superimposing the auxiliary preview data captured by the second camera on the viewfinder frame area and automatically matching the display size of the auxiliary preview data to the size of the viewfinder frame area. The present application adds a new capability to the mobile terminal: a viewfinder frame that meets predetermined rules can be found from the main camera's preview data, the data shot by another camera can be automatically matched to that viewfinder frame, and the videos can be composited, providing convenience for the user.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. A video processing method, wherein the method comprises:
    acquiring first video data captured by a first camera;
    analyzing the first video data captured by the first camera, and identifying, from the first video data, a viewfinder frame area that meets a frame-forming condition;
    acquiring second video data captured by a second camera, processing the second video data and superimposing it on the viewfinder frame area of the first video data so that a display size of the second video data matches a size of the viewfinder frame area;
    generating, from the superimposed video, a composite video file with an auxiliary video image in the viewfinder frame.
  2. The video processing method according to claim 1, wherein before the step of acquiring the first video data captured by the first camera, the method comprises:
    presetting a dual-camera dual-shooting function on a mobile terminal, configured to, when enabled, control a first camera and a second camera facing different directions to start shooting simultaneously and to superimpose the data respectively captured by the first camera and the second camera into a single display interface.
  3. The video processing method according to claim 2, wherein the step of obtaining the first video data captured by the first camera comprises:
    when it is detected that the dual-camera dual-shooting function is enabled, controlling the first camera and the second camera, facing different directions on the mobile terminal, to start shooting simultaneously;
    using the video data captured by the first camera as the first video data;
    and the step of obtaining the second video data captured by the second camera comprises:
    using the video data captured by the second camera as the second video data.
  4. The video processing method according to claim 3, wherein the mobile terminal comprises more than two cameras, and before controlling the first camera and the second camera, facing different directions on the mobile terminal, to start shooting simultaneously when it is detected that the dual-camera dual-shooting function is enabled, the method further comprises:
    displaying a selection operation interface for the first camera and the second camera;
    in response to a selection operation on the selection operation interface, using the cameras corresponding to the selection operation as the first camera and the second camera.
  5. The video processing method according to claim 1, wherein the step of analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame-forming condition comprises:
    analyzing the first video data captured by the first camera, and automatically identifying, according to a preset AI algorithm, whether there is a quadrilateral or circular window meeting the frame-forming condition in a main preview interface generated from the first video data, so as to confirm whether the main preview interface generated from the first video data includes a viewfinder frame area;
    when there is a qualifying quadrilateral or circular window in the main preview interface formed from the first video data, using the quadrilateral or circular window area as the viewfinder frame area in the main preview interface generated from the first video data.
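Claim 5 leaves the window-detection algorithm open. In practice, candidate quadrilaterals are commonly found with contour detection and polygon approximation (e.g. OpenCV's findContours and approxPolyDP); once four corner points are proposed, a cheap plausibility test is that they form a convex quadrilateral. The sketch below implements only that convexity check, in pure Python, as one illustrative building block — it is an assumption, not the claimed AI algorithm.

```python
def is_convex_quad(pts):
    """Return True if four corner points, given in traversal order,
    form a convex quadrilateral (all cross products share one sign)."""
    def cross(o, a, b):
        # z-component of the cross product of vectors o->a and o->b
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    signs = []
    for i in range(4):
        o, a, b = pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4]
        signs.append(cross(o, a, b) > 0)
    # all turns clockwise, or all counter-clockwise
    return all(signs) or not any(signs)
```

A detected picture frame on a wall would pass this test; a self-intersecting or dented contour would not.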
  6. The video processing method according to claim 1, wherein the step of analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame-forming condition comprises:
    when an object usable as a viewfinder frame area is recognized from the first video data by an intelligent algorithm such as AI, using the area where the object is located as the viewfinder frame area.
  7. The video processing method according to claim 1, wherein the step of analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame-forming condition comprises:
    analyzing the first video data captured by the first camera to obtain a closed area formed by lines in the first video data;
    using the closed area as the viewfinder frame area meeting the frame-forming condition.
  8. The video processing method according to claim 1, wherein analyzing the first video data captured by the first camera and identifying, from the first video data, a viewfinder frame area that meets the frame-forming condition comprises:
    analyzing the first video data captured by the first camera, and identifying, from the first video data, a plurality of viewfinder frame areas meeting the frame-forming condition;
    selecting a default viewfinder frame area from the plurality of viewfinder frame areas;
    and processing the second video data and superimposing it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area, comprises:
    processing the second video data and superimposing it on the default viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the default viewfinder frame area.
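Claim 8 requires selecting one default area from several candidates but does not fix the criterion. Largest area is one plausible heuristic (a more central or more frontal region would be others); the one-liner below is a sketch under that assumption, with an illustrative (x, y, w, h) rectangle convention.

```python
def default_frame_area(regions):
    """From candidate frame areas given as (x, y, w, h) rectangles,
    pick a default. Criterion assumed here: largest pixel area."""
    return max(regions, key=lambda r: r[2] * r[3])
```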
  9. The video processing method according to claim 1, wherein the first camera is a rear camera of a mobile terminal, the second camera is a front camera of the mobile terminal, and the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area, comprises:
    automatically editing the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, into video data automatically matched to the size of the viewfinder frame area;
    controlling the edited second video data to be superimposed on the viewfinder frame area and displayed together with it.
  10. The video processing method according to claim 9, wherein the step of automatically editing the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, into video data automatically matched to the size of the viewfinder frame area comprises:
    cropping or rotating the second video data, based on the viewfinder frame area in the main preview interface generated from the first video data, so as to edit the second video data into video data automatically matched to the size of the viewfinder frame area.
  11. The video processing method according to claim 1, wherein processing the second video data and superimposing it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area, comprises:
    superimposing the second video data on the viewfinder frame area;
    cropping the part of the second video data that exceeds the viewfinder frame area, so that the display size of the second video data matches the size of the viewfinder frame area.
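Claim 11's cropping step amounts to intersecting the overlay rectangle with the frame rectangle and discarding whatever falls outside. A minimal sketch of that rectangle intersection, with an assumed (x, y, w, h) convention in a shared coordinate system:

```python
def clip_to_frame(frame, overlay):
    """Clip an overlay rectangle to the viewfinder frame rectangle.
    Both rectangles are (x, y, w, h). Returns the visible part of the
    overlay, or None if it lies entirely outside the frame area."""
    fx, fy, fw, fh = frame
    ox, oy, ow, oh = overlay
    x1, y1 = max(fx, ox), max(fy, oy)
    x2, y2 = min(fx + fw, ox + ow), min(fy + fh, oy + oh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)
```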
  12. The video processing method according to claim 1, wherein the step of obtaining the second video data captured by the second camera comprises:
    receiving an operation instruction;
    obtaining, according to the operation instruction, the second video data from an existing video file.
  13. The video processing method according to claim 1, wherein the step of processing the second video data and superimposing it on the viewfinder frame area of the first video data comprises:
    extracting, from the second video data, a portrait video with the background removed;
    superimposing the extracted portrait video on the viewfinder frame area of the first video data.
  14. The video processing method according to claim 1, wherein processing the second video data and superimposing it on the viewfinder frame area of the first video data comprises:
    determining a tilt angle of the viewfinder frame area relative to the first camera;
    rotating the second video data by the tilt angle;
    superimposing the second video data, rotated by the tilt angle, on the viewfinder frame area of the first video data.
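Claim 14's rotation step can be illustrated by rotating the corner points of the second video by the measured tilt angle about a chosen centre — the standard 2D rotation. An image pipeline would do this per pixel (e.g. OpenCV's getRotationMatrix2D and warpAffine); the stdlib sketch below shows only the corner geometry, with illustrative names.

```python
import math

def rotate_corners(corners, angle_deg, center=(0.0, 0.0)):
    """Rotate (x, y) corner points by angle_deg degrees,
    counter-clockwise, about the given centre point."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = []
    for x, y in corners:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * math.cos(a) - dy * math.sin(a),
                    cy + dx * math.sin(a) + dy * math.cos(a)))
    return out
```

Rotating the point (1, 0) by 90 degrees about the origin moves it to (0, 1), up to floating-point error.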
  15. The video processing method according to claim 1, wherein processing the second video data and superimposing it on the viewfinder frame area of the first video data comprises:
    obtaining the resolution, frame rate, blur degree, and filter effect of the first video data;
    performing image quality processing on the second video data according to the resolution, frame rate, blur degree, and filter effect of the first video data;
    superimposing the quality-processed second video data on the viewfinder frame area of the first video data.
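Of the quality parameters claim 15 lists, resolution matching is the simplest to sketch: resample the second video's pixel grid to the first video's dimensions. The nearest-neighbour resampler below, on a row-major list-of-lists grid, is a deliberately minimal assumption; blur and filter matching would need real image filtering (e.g. Gaussian blur and colour LUTs in an image library).

```python
def match_resolution(pixels, target_w, target_h):
    """Nearest-neighbour resample of a row-major pixel grid
    (pixels[y][x]) to target_w x target_h."""
    src_h, src_w = len(pixels), len(pixels[0])
    return [[pixels[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)]
```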
  16. A video processing apparatus, wherein the apparatus comprises:
    a first obtaining module, configured to obtain first video data captured by a first camera;
    a viewfinder frame area identification module, configured to analyze the first video data captured by the first camera, and identify, from the first video data, a viewfinder frame area that meets a frame-forming condition;
    a video synthesis processing module, configured to obtain second video data captured by a second camera, process the second video data and superimpose it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area, and generate, from the superimposed video, a composite video file with an auxiliary video image in the viewfinder frame.
  17. A mobile terminal, wherein the mobile terminal comprises a memory, a processor, and a video processing program stored in the memory and executable on the processor, and when the processor executes the video processing program, the following is implemented:
    obtaining first video data captured by a first camera;
    analyzing the first video data captured by the first camera, and identifying, from the first video data, a viewfinder frame area that meets a frame-forming condition;
    obtaining second video data captured by a second camera, processing the second video data, and superimposing it on the viewfinder frame area of the first video data, so that the display size of the second video data matches the size of the viewfinder frame area;
    generating, from the superimposed video, a composite video file with an auxiliary video image in the viewfinder frame.
  18. The mobile terminal according to claim 17, wherein when the processor executes the video processing program, the following is further implemented:
    when it is detected that the dual-camera dual-shooting function is enabled, controlling the first camera and the second camera, facing different directions on the mobile terminal, to start shooting simultaneously;
    using the video data captured by the first camera as the first video data;
    and obtaining the second video data captured by the second camera comprises:
    using the video data captured by the second camera as the second video data.
  19. The mobile terminal according to claim 17, wherein when the processor executes the video processing program, the following is further implemented:
    displaying a selection operation interface for the first camera and the second camera;
    in response to a selection operation on the selection operation interface, using the cameras corresponding to the selection operation as the first camera and the second camera.
  20. A computer-readable storage medium, wherein a video processing program is stored on the computer-readable storage medium, and when the video processing program is executed by a processor, the steps of the video processing method according to any one of claims 1 to 16 are implemented.
PCT/CN2022/106791 2021-08-12 2022-07-20 Video processing method and apparatus, and mobile terminal WO2023016214A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110933095.2 2021-08-12
CN202110933095.2A CN113810627B (en) 2021-08-12 2021-08-12 Video processing method, device, mobile terminal and readable storage medium

Publications (1)

Publication Number Publication Date
WO2023016214A1 true WO2023016214A1 (en) 2023-02-16

Family

ID=78942952

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/106791 WO2023016214A1 (en) 2021-08-12 2022-07-20 Video processing method and apparatus, and mobile terminal

Country Status (2)

Country Link
CN (1) CN113810627B (en)
WO (1) WO2023016214A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810627B (en) * 2021-08-12 2023-12-19 惠州Tcl云创科技有限公司 Video processing method, device, mobile terminal and readable storage medium
CN117411982A (en) * 2022-07-05 2024-01-16 摩托罗拉移动有限责任公司 Augmenting live content

Citations (7)

Publication number Priority date Publication date Assignee Title
CN104580910A (en) * 2015-01-09 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Image synthesis method and system based on front camera and rear camera
CN106027900A (en) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
JP2018023069A (en) * 2016-08-05 2018-02-08 フリュー株式会社 Game player for creating photograph and display method
CN111200686A (en) * 2018-11-19 2020-05-26 中兴通讯股份有限公司 Photographed image synthesizing method, terminal, and computer-readable storage medium
CN112804458A (en) * 2021-01-11 2021-05-14 广东小天才科技有限公司 Shooting view finding method and device, terminal equipment and storage medium
CN113810627A (en) * 2021-08-12 2021-12-17 惠州Tcl云创科技有限公司 Video processing method and device and mobile terminal
CN113824871A (en) * 2021-08-02 2021-12-21 惠州Tcl云创科技有限公司 Method and device for processing superposed photographing of front camera and rear camera of mobile terminal and mobile terminal

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN105847636B (en) * 2016-06-08 2018-10-16 维沃移动通信有限公司 A kind of video capture method and mobile terminal
CN108184052A (en) * 2017-12-27 2018-06-19 努比亚技术有限公司 A kind of method of video record, mobile terminal and computer readable storage medium
CN108234891B (en) * 2018-04-04 2019-11-05 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN109035191A (en) * 2018-08-01 2018-12-18 Oppo(重庆)智能科技有限公司 Image processing method, picture processing unit and terminal device


Also Published As

Publication number Publication date
CN113810627A (en) 2021-12-17
CN113810627B (en) 2023-12-19

Similar Documents

Publication Publication Date Title
WO2023016214A1 (en) Video processing method and apparatus, and mobile terminal
US9692964B2 (en) Modification of post-viewing parameters for digital images using image region or feature information
US7120461B2 (en) Camera phone and photographing method for a camera phone
US7349020B2 (en) System and method for displaying an image composition template
JP4865038B2 (en) Digital image processing using face detection and skin tone information
US7492375B2 (en) High dynamic range image viewing on low dynamic range displays
US8717412B2 (en) Panoramic image production
US9129381B2 (en) Modification of post-viewing parameters for digital images using image region or feature information
CN112492209B (en) Shooting method, shooting device and electronic equipment
US20150077591A1 (en) Information processing device and information processing method
US8704929B2 (en) System and method for user guidance of photographic composition in image acquisition systems
CN110738595A (en) Picture processing method, device and equipment and computer storage medium
CN106231195A (en) A kind for the treatment of method and apparatus of taking pictures of intelligent terminal
WO2023151611A1 (en) Video recording method and apparatus, and electronic device
KR20170027266A (en) Image capture apparatus and method for operating the image capture apparatus
CN106791390B (en) Wide-angle self-timer real-time preview method and user terminal
CN113709545A (en) Video processing method and device, computer equipment and storage medium
CN108965699A (en) Parameter regulation means and device, terminal, the readable storage medium storing program for executing of reference object
CN114598819A (en) Video recording method and device and electronic equipment
CN111200686A (en) Photographed image synthesizing method, terminal, and computer-readable storage medium
Chang et al. Panoramic human structure maintenance based on invariant features of video frames
WO2016131226A1 (en) Intelligent terminal and image processing method and apparatus therefor
US20230217067A1 (en) Producing and adapting video images for presentation displays with different aspect ratios
KR102022559B1 (en) Method and computer program for photographing image without background and taking composite photograph using digital dual-camera
CN115225806A (en) Cinematic image framing for wide field of view (FOV) cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22855197

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE