CN107786827B - Video shooting method, video playing method and device and mobile terminal - Google Patents


Info

Publication number
CN107786827B
Authority
CN
China
Prior art keywords
video
camera
target
shooting
playing
Prior art date
Legal status
Active
Application number
CN201711083066.1A
Other languages
Chinese (zh)
Other versions
CN107786827A (en)
Inventor
刘秋菊 (Liu Qiuju)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201711083066.1A priority Critical patent/CN107786827B/en
Publication of CN107786827A publication Critical patent/CN107786827A/en
Application granted granted Critical
Publication of CN107786827B publication Critical patent/CN107786827B/en


Classifications

    All entries fall under H (ELECTRICITY) > H04 (ELECTRIC COMMUNICATION TECHNIQUE) > H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION) > H04N 5/00 (Details of television systems):

    • H04N 5/76 Television signal recording
    • H04N 5/232 Devices for controlling television cameras, e.g. remote control; control of cameras comprising an electronic image sensor:
        • H04N 5/23216 Control of parameters, e.g. field or angle of view of camera, via graphical user interface, e.g. touchscreen
        • H04N 5/23245 Operation mode switching of cameras, e.g. between still/video, sport/normal or high/low resolution mode
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects:
        • H04N 5/265 Mixing
    • H04N 5/445 Receiver circuitry for displaying additional information:
        • H04N 5/45 Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen

Abstract

The invention discloses a video shooting method, a video playing method and device, and a mobile terminal. In the disclosed technical scheme, one camera shoots video in a normal recording mode to obtain a normal video, while another camera tracks and shoots a target object in a slow-lens (slow-motion) recording mode to obtain a slow-lens video, and the slow-lens video is merged into the normal video. During playback, the normal video is played by default; when the user is detected to select the slow-lens video containing the target object, the current playback is paused and the slow-lens video is played, and playback returns to the normal video interface after the slow-lens video finishes. The user can thus watch the complete video at normal playback speed while also seeing the details of the video content, which enriches the modes of video shooting and playing and meets users' diverse video-watching needs.

Description

Video shooting method, video playing method and device and mobile terminal
Technical Field
The embodiment of the invention relates to the technical field of video processing, and in particular to a video shooting method, a video playing method and device, and a mobile terminal.
Background
As living standards improve, more and more users enjoy high-speed performances with rich picture content and large scenes, such as magic shows, acrobatics and racing events, as part of their entertainment. When viewing such high-speed performances, users want to see more details of the performance content.
In the prior art, a terminal device controls a camera to record the entire video of a high-speed performance in a slow-lens recording mode, and then plays the whole video back at normal speed.
Disclosure of Invention
The embodiment of the invention provides a video shooting method, a video playing method and device, and a mobile terminal, so as to solve the technical problem that video shooting and playing modes in the prior art are limited and cannot meet users' diverse video-watching needs.
To solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video shooting method, where the method includes:
when a video shooting instruction is received, controlling a first camera to shoot a video in a common video recording mode according to the video shooting instruction to obtain a first video;
selecting a target object from a framing picture of the first camera;
controlling a second camera to perform tracking shooting on the target object in a slow-lens video recording mode to obtain a second video;
and merging the first video shot by the first camera and the second video shot by the second camera to obtain a target video, and marking the second video in the target video.
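The four steps of the first aspect can be outlined in code. The sketch below is purely illustrative (names such as `shoot_target_video` and the list-of-frames representation are assumptions, not part of the claimed implementation); it focuses on the merge-and-mark bookkeeping of the final step:

```python
from dataclasses import dataclass, field

@dataclass
class TargetVideo:
    frames: list = field(default_factory=list)  # merged frame sequence
    marks: list = field(default_factory=list)   # (start_index, length) of each slow-lens clip

def shoot_target_video(first_video, second_video):
    """Merge the two shot videos into a target video and mark the second."""
    merged = TargetVideo(frames=list(first_video) + list(second_video))
    merged.marks.append((len(first_video), len(second_video)))
    return merged

# Steps 1-3 (normal recording, target selection, slow-lens tracking) are
# represented here by plain frame lists for illustration.
normal_clip = ["n0", "n1", "n2", "n3"]  # first camera, normal recording mode
slow_clip = ["s0", "s1"]                # second camera, slow-lens tracking
video = shoot_target_video(normal_clip, slow_clip)
```

Recording the start index and length of the second video inside the merged sequence is what later lets a player distinguish the slow-lens clip during playback.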
In a second aspect, an embodiment of the present invention provides a video playing method, configured to play a target video, where the method includes:
detecting whether a video playing instruction triggered by a second video in the target video is received or not in the playing process of the target video; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording mode, and the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording mode;
when a video playing instruction triggered by the second video is received, pausing a currently played video picture and playing the second video;
and when the second video is detected to be played completely, continuing playing the paused video picture.
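A minimal sketch of the second-aspect playback flow, with hypothetical names and short frame lists standing in for real decoded video: on a play instruction triggered by the second video, the current picture is paused, the slow-lens clip plays in full, and then the paused picture resumes.

```python
def play_target_video(normal_frames, slow_frames, slow_request_at):
    """Play normal frames in order; when the slow-lens play instruction
    arrives, pause the current picture, play the second video in full,
    then continue from the paused frame. Returns the playback order."""
    order = []
    i = 0
    while i < len(normal_frames):
        if i == slow_request_at:       # play instruction triggered by the second video
            order.extend(slow_frames)  # pause current picture, play second video
        order.append(normal_frames[i])  # continue playing the paused picture
        i += 1
    return order

seq = play_target_video(["n0", "n1", "n2"], ["s0", "s1"], slow_request_at=1)
```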
In a third aspect, an embodiment of the present invention provides a video shooting apparatus, including:
the first control unit is used for controlling the first camera to carry out video shooting in a common video recording mode according to a video shooting instruction when the video shooting instruction is received, so that a first video is obtained;
a selection unit configured to select a target object from a framing picture of the first camera;
the second control unit is used for controlling the second camera to carry out tracking shooting on the target object in a slow-lens video recording mode to obtain a second video;
the processing unit is used for merging the first video shot by the first camera and the second video shot by the second camera to obtain a target video;
a marking unit, configured to mark the second video in the target video.
In a fourth aspect, an embodiment of the present invention provides a video playing apparatus, configured to play a target video, where the apparatus includes:
the second detection unit is used for detecting whether a video playing instruction triggered by a second video in the target video is received or not in the playing process of the target video; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording mode, and the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording mode;
the pause unit is used for pausing the currently played video picture when receiving a video playing instruction triggered by the second video;
a first playing unit for playing the second video;
and the second playing unit is used for continuously playing the paused video picture when the second video playing is detected to be finished.
In a fifth aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video shooting method.
In a sixth aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the video playing method.
In the embodiment of the invention, when a video is shot, the two cameras of the terminal device shoot in different modes within the same viewing range: one camera shoots in a normal recording mode to obtain a normal video, while the other camera, after a target object is confirmed, tracks the target and shoots in a slow-lens recording mode to obtain a slow-lens video. The slow-lens video is then fused into the normal video through a video fusion technique. During playback, the normal video plays by default; when the user is detected to select the slow-lens video containing the target object, the current playback is paused, the slow-lens video is played, and playback returns to the normal video interface once the slow-lens video finishes. Compared with the prior art, when shooting a high-speed performance, the object of interest can be shot in slow motion while the rest of the scene is shot normally; when watching the result, the user replays only the object of interest in slow motion, and can thus watch the complete video at normal speed while still seeing more details of the performance content. This enriches the modes of video shooting and playing and meets users' diverse video-watching needs.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow diagram of a video capture method of one embodiment of the present invention;
FIG. 2 is a flow diagram of a video capture method of another embodiment of the present invention;
FIG. 3 is a flow chart of a video playback method of one embodiment of the present invention;
FIG. 4-1 is a scene diagram of a video playing method according to an embodiment of the present invention;
fig. 4-2 is a scene diagram of a video playing method according to another embodiment of the present invention;
fig. 4-3 are scene diagrams of a video playing method according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of a video camera according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video playback device according to an embodiment of the present invention;
fig. 7 is a schematic hardware structure diagram of a mobile terminal implementing various embodiments of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a video shooting method, a video playing device and a mobile terminal.
First, a video shooting method according to an embodiment of the present invention will be described below.
It should be noted that the video shooting method provided by the embodiment of the present invention is applicable to a terminal device, and in practical application, the terminal device may include: a mobile terminal such as a smart phone and a tablet computer, a camera, a notebook computer or a desktop computer, etc., which are not limited in the embodiments of the present invention.
Fig. 1 is a flow chart of a video capture method of one embodiment of the present invention, which, as shown in fig. 1, may include the steps of:
in step 101, when a video shooting instruction is received, the first camera is controlled to shoot a video in a normal video mode according to the video shooting instruction, so as to obtain a first video.
For ease of understanding, some concepts involved in the embodiments of the present invention will be explained first.
The normal video recording mode refers to a video recording mode in which video shooting is performed at a normal shooting speed.
The slow-lens (slow-motion) video recording mode works by capturing many more picture frames per second than normal, through high-speed photography. Because the capture frame rate far exceeds the normal recording frame rate, more picture information is recorded than the human eye perceives in real time; when these frames are then played back at the normal playback speed, the user can see more details of object motion than the naked eye could.
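To make the principle concrete with hypothetical numbers (the frame rates below are illustrative, not from the patent): capturing at a high frame rate and playing back at a normal frame rate stretches the action in time, which is what reveals the extra motion detail.

```python
def slow_motion_factor(capture_fps, playback_fps):
    """How many times slower the motion appears on screen."""
    return capture_fps / playback_fps

def playback_duration(real_seconds, capture_fps, playback_fps):
    """On-screen duration of an event captured in slow-lens mode."""
    frames_recorded = real_seconds * capture_fps
    return frames_recorded / playback_fps

# A 240 fps recording played back at 30 fps appears 8x slower:
factor = slow_motion_factor(240, 30)
# so a 2-second trick fills 16 seconds of screen time:
seconds = playback_duration(2, 240, 30)
```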
In this embodiment of the present invention, the first camera and the second camera may be integrated in the terminal device, or the first camera and the second camera may also be externally connected to the terminal device.
In the embodiment of the invention, in the process of shooting the video by the first camera, the framing picture of the first camera can be displayed on the corresponding terminal equipment screen; specifically, the viewfinder of the first camera may be displayed on the screen in a full screen mode, or may be displayed on the screen in a small window mode.
For convenience of understanding, a dual-camera mobile phone is taken as an example to explain the technical scheme of the invention.
After a user clicks a button (or an icon) for triggering video shooting, the double-camera mobile phone starts a double-camera independent video recording mode, one camera in the double cameras is used for carrying out common video recording, and a framing picture of the camera is displayed on a screen of the mobile phone in a full screen mode, so that the user can know the current video picture which is recorded at any time.
In step 102, a target object is selected from a viewfinder screen of the first camera.
In the embodiment of the present invention, the target object is generally an object of interest to a user, and the target object may be manually selected by the user or automatically selected by a terminal device.
When the target object is manually selected by the user, in an alternative embodiment, the step 102 may include: s11 and S12, wherein,
in S11, receiving an object selection instruction triggered by a framing picture of a first camera, wherein the framing picture of the first camera contains at least one object;
in S12, the object pointed to by the object selection instruction is determined as the target object.
In the embodiment of the present invention, an interface for selecting the target object may be provided to the user. The interface may be placed on the finder screen of the first camera or in an area outside the finder screen, and in practice may take the form of a button, an icon, or a specific interactive area. When the interface is a button or an icon, the user can trigger an object selection instruction by clicking it; when the interface is an interactive area, the user can trigger an object selection instruction by tapping the area or performing a gesture on it.
As can be seen from the above embodiments, the embodiment may provide an interface for a user to select a target object, so that the user can select the target object autonomously, and since the object selected autonomously by the user is usually an object of interest to the user, the embodiment may sufficiently meet the object selection requirement of the user.
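One possible way to resolve such an object selection instruction, sketched under the assumption that the viewfinder already has detected object bounding boxes (the boxes, names and coordinates below are illustrative, not from the patent):

```python
def object_at_tap(tap_x, tap_y, detected_objects):
    """Return the first detected object whose bounding box contains the tap.
    Each object is (name, x, y, width, height) in viewfinder coordinates."""
    for name, x, y, w, h in detected_objects:
        if x <= tap_x <= x + w and y <= tap_y <= y + h:
            return name
    return None  # tap landed outside every detected object

objects = [("dancer", 100, 50, 80, 200), ("juggler", 300, 60, 90, 180)]
target = object_at_tap(330, 120, objects)  # user taps inside the juggler's box
```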
When the target object is automatically selected by the terminal device, in another alternative embodiment, the step 102 may include: s13 and S14, wherein,
in S13, a target region of interest in the finder screen of the first camera is determined.
In this embodiment of the present invention, the target attention area may include the user's eyeball focus area and/or a scene focus area in the video shooting scene. The eyeball focus area is the area where the focal point of the user's gaze falls while observing the scene; the scene focus area is the area where a key object in the video shooting scene is located. The key object may be static or dynamic, for example a stage, or a particular actor on the stage.
It should be noted that, when a user watches a performance or a video, the video content in the eye attention area of the user is generally the content that the user is interested in; in addition, the video content in the scene focus area in the video shooting scene is also the content that is often more interesting to the user, for example, when the video shooting scene is a star concert, the scene focus area of the scene is a stage, the star mainly performs on the stage, and the user is often more interesting to the video content corresponding to the stage area.
In the embodiment of the invention, the eyeball attention area of the user can be determined by the front camera of the terminal equipment and the eyeball tracking technology; the scene focus area may be determined in conjunction with the video capture scene according to preset area determination rules.
In S14, the object in the target attention area is determined as the target object.
For example, when watching a ball game, the eyeball focus area of the user mainly falls on the ball star, and at this time, the ball star may be determined as the target object.
As can be seen from the foregoing embodiments, the embodiments can determine a region of interest of a user, and determine an object in the region of interest as a target object, and since the region of interest of the user is usually an object of interest of the user, the embodiments can meet the object selection requirement of the user to some extent.
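A hedged sketch of S13/S14: given an attention area (for example, the eyeball focus area) and detected object boxes, pick the object that overlaps the area the most. All rectangles and names below are illustrative assumptions:

```python
def overlap_area(a, b):
    """Intersection area of two rectangles given as (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return w * h if w > 0 and h > 0 else 0

def object_in_attention_area(attention_rect, objects):
    """Pick the detected object overlapping the attention area the most;
    objects are (name, box) pairs. Returns None if nothing overlaps."""
    best = max(objects, key=lambda o: overlap_area(attention_rect, o[1]))
    return best[0] if overlap_area(attention_rect, best[1]) > 0 else None

gaze = (200, 100, 120, 120)  # hypothetical eyeball focus area
objs = [("goalkeeper", (0, 80, 100, 150)), ("striker", (250, 90, 80, 160))]
picked = object_in_attention_area(gaze, objs)
```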
When the target object is automatically selected by the terminal device, in another alternative embodiment, the step 102 may include: s15 and S16, wherein,
in S15, a preset object selection condition is acquired, where the object selection condition records characteristics of the target object;
in S16, the object having the above-described characteristics in the through-view screen of the first camera is determined as the target object.
In the embodiment of the invention, some characteristics can be preset by a user, and when video shooting is carried out, an object with the characteristics in a framing picture is determined as a target object.
As can be seen from the foregoing embodiments, according to the feature requirement of the user for the target object, the target object meeting the feature requirement can be selected from the viewing screen, so that the present embodiment can meet the object selection requirement of the user to some extent.
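S15/S16 can be sketched as a simple feature match. The attribute dictionaries below are illustrative assumptions; the patent does not specify how the preset characteristics are represented:

```python
def select_by_features(preset_features, objects):
    """Return the objects in the viewfinder matching every preset
    characteristic. `objects` maps object name to attribute dict."""
    return [
        name for name, attrs in objects.items()
        if all(attrs.get(k) == v for k, v in preset_features.items())
    ]

viewfinder = {
    "car_a": {"color": "red", "moving": True},
    "car_b": {"color": "blue", "moving": True},
}
wanted = {"color": "red", "moving": True}  # user-preset characteristics
matches = select_by_features(wanted, viewfinder)
```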
In step 103, the second camera is controlled to perform tracking shooting on the target object in a slow-lens video recording manner, so as to obtain a second video.
In the embodiment of the invention, after the target object is selected, the target object is tracked and positioned, and the second camera is started to track and shoot the target object in a slow-lens video mode. In the process of shooting a video by the second camera, the framing picture of the second camera can be displayed on the screen of the corresponding terminal device, specifically, the framing picture of the second camera can be displayed on the screen in a full screen mode, and the framing picture of the second camera can also be displayed on the screen in a small window mode.
In order to facilitate a user to know currently recorded video pictures (including a video picture currently recorded by a first camera and a video picture currently recorded by a second camera) at any time, in the embodiment of the invention, in the process of shooting videos by the first camera and the second camera, a framing picture of the first camera can be displayed on a screen in a full screen mode, and a framing picture of the second camera is displayed on the framing picture of the first camera in a small window mode; or, in the process of shooting videos by the first camera and the second camera, the framing picture of the second camera can be displayed on the screen in a full screen mode, and the framing picture of the first camera can be displayed on the framing picture of the second camera in a small window mode.
In step 104, the first video shot by the first camera and the second video shot by the second camera are merged to obtain a target video, and the second video in the target video is marked.
In the embodiment of the invention, in the process of shooting the video of the first camera, the first video shot by the first camera and the second video shot by the second camera can be merged to obtain the target video, namely, the shot videos are merged while being shot; or after the first camera finishes video shooting, merging the first video shot by the first camera and the second video shot by the second camera to obtain the target video, namely merging the shot videos after stopping shooting the videos.
In this embodiment of the present invention, merging the first video and the second video may include: merging the initial frame of the second video into the first video through picture fusion; or, the first video and the second video are simply merged and stored, and do not relate to processing of video frame pictures, which is not limited in the embodiment of the present invention.
In the embodiment of the invention, the marking of the second video in the target video can be completed by recording the position of the starting frame of the second video in the video sequence of the target video, so that the second video in the target video can be distinguished when the target video is played.
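The marking scheme described above, recording the position of the second video's starting frame in the target video's sequence, can be sketched as follows; the dictionary layout is an illustrative assumption:

```python
def mark_second_video(marks, start_frame, length):
    """Record where a slow-lens clip begins in the target video's sequence."""
    marks.append({"start": start_frame, "length": length})

def clip_at_frame(marks, frame_index):
    """During playback, find the marked clip (if any) covering a frame index,
    so the player can distinguish the second video inside the target video."""
    for m in marks:
        if m["start"] <= frame_index < m["start"] + m["length"]:
            return m
    return None

marks = []
mark_second_video(marks, start_frame=120, length=60)
hit = clip_at_frame(marks, 150)   # inside the marked slow-lens clip
miss = clip_at_frame(marks, 30)   # ordinary part of the target video
```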
As can be seen from the above, in this embodiment the two cameras of the terminal device can be used within the same viewing range to shoot in different modes: one camera shoots in a normal recording mode to obtain a normal video, while the other, after a target object is confirmed, tracks the target and shoots in a slow-lens recording mode to obtain a slow-lens video. The slow-lens video is then fused into the normal video through a video fusion technique and marked, so that during playback, when the user is detected to select the slow-lens video containing the target object, the current playback is paused, the slow-lens video is played, and playback returns to the normal video interface after it finishes. Compared with the prior art, when shooting a high-speed performance, the object of interest can be shot in slow motion while the rest of the scene is shot normally; when watching, the user replays only the object of interest in slow motion, and can watch the complete video at normal speed while still seeing more details of the performance content. This enriches the modes of video shooting and playing and meets users' diverse video-watching needs.
Fig. 2 is a flowchart of a video capture method according to another embodiment of the present invention, which may include the steps of, as shown in fig. 2:
in step 201, when a video shooting instruction is received, the first camera is controlled to shoot a video in a normal video mode according to the video shooting instruction, so as to obtain a first video.
In step 202, a target object is selected from a finder screen of the first camera.
In step 203, the second camera is controlled to perform tracking shooting on the target object in a slow-lens video recording mode.
Steps 201 to 203 in the embodiment of the present invention are similar to steps 101 to 103 in the embodiment shown in fig. 1, and are not described herein again, for details, please refer to relevant contents in the embodiment shown in fig. 1.
In step 204, detecting whether a target object is in a current framing picture of the first camera; if so, go to step 205, otherwise go to step 206.
In the embodiment of the invention, whether the target object is in the current framing picture of the first camera can be determined by comparing the current framing picture of the first camera with the current framing picture of the second camera.
It can be understood that if the target object is not in the current framing picture of the first camera, the video shot by the second camera is meaningless; the video shot by the second camera is meaningful only while the target object remains in the current framing picture of the first camera.
Based on the situation, when the target object is detected to be in the current framing picture of the first camera, the slow-lens tracking shooting of the target object is continued; and when the target object is detected not to be in the current framing picture of the first camera, stopping performing slow-lens tracking shooting on the target object, and storing the shot slow-lens video for subsequent video combination.
In the embodiment of the invention, the validity of the video shot by the second camera can be ensured by detecting whether the target object is still in the current framing picture of the first camera.
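One possible form of this check, sketched with tracked bounding-box coordinates rather than the direct frame comparison the text mentions (all coordinates are illustrative assumptions):

```python
def target_in_viewfinder(target_box, viewfinder_size):
    """Check whether the tracked target's bounding box still overlaps the
    first camera's framing picture. Boxes are (x, y, w, h); the viewfinder
    spans (0, 0) to (vw, vh)."""
    x, y, w, h = target_box
    vw, vh = viewfinder_size
    return x + w > 0 and y + h > 0 and x < vw and y < vh

view = (1920, 1080)                                          # hypothetical viewfinder size
inside = target_in_viewfinder((1700, 900, 300, 200), view)   # still partly in frame
outside = target_in_viewfinder((2000, 500, 100, 100), view)  # has left the frame
```

When the check fails, step 206 applies: tracking stops and the slow-lens footage shot so far is saved for the later merge.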
In step 205, the second camera is controlled to continue to track and shoot the target object in a slow-lens video mode.
In step 206, the second camera is controlled to stop tracking shooting of the target object, and the shot video is saved.
In step 207, it is detected whether an image-capture end instruction triggered for the first camera is received; if so, step 208 is executed, otherwise detection continues.
In the embodiment of the present invention, an interface for controlling the first camera to end recording may be provided, and in practical application, the form of the interface may include: a button, or an icon. At this time, the user may trigger the image capturing end instruction by clicking the button or icon.
In the embodiment of the present invention, the shooting end instruction may also be automatically triggered, for example, a shooting duration is preset, and when the duration for the first camera to shoot the video reaches the preset shooting duration, the shooting end instruction is automatically triggered.
In step 208, the first video captured by the first camera and the second video captured by the second camera are merged to obtain a target video, and the second video in the target video is marked.
In the embodiment of the invention, after the first camera finishes video shooting, the first video shot by the first camera and the second video shot by the second camera are merged to obtain the target video, namely, the shot videos are merged only after the shooting of the videos is stopped.
As can be seen from the above embodiment, when a video is shot, the two cameras of the terminal device may be used within the same viewing range to shoot videos in different modes: one camera shoots in the normal video recording mode to obtain a normal video, and the other camera, after a target object is confirmed, tracks the target and shoots in the slow-motion video recording mode to obtain a slow-motion video. After shooting is finished, the slow-motion video is merged into the normal video by a video merging technique and is marked, so that during playback of the shot video, when it is detected that the user chooses to play the slow-motion video containing the target object picture, the current video playing is paused, the slow-motion video is played, and the normal video playing interface is returned to after the slow-motion video finishes playing. Compared with the prior art, in the embodiment of the invention, when a video of a high-speed performance is shot, the object of interest in the performance can be shot in slow motion while the other objects are shot normally; when watching the shot video, the user can view the complete video in the normal playing mode and play back only the object of interest in slow motion whenever more detail of the performance is desired. The manners of video shooting and video playing are thereby enriched, and users' diversified requirements for watching videos are met.
In addition to the manner of automatically ending the video capturing of the second camera disclosed in the embodiment shown in fig. 2, in another embodiment provided by the present invention, the video capturing of the second camera may be ended manually by the user. In this case, the video shooting method provided in the embodiment of the present invention may further add the following steps on the basis of the embodiment shown in fig. 1 or fig. 2:
receiving a shooting end instruction triggered by a second camera;
and controlling the second camera to stop tracking shooting of the target object in a slow-lens video recording mode according to the shooting end instruction, and storing the shot video.
In the embodiment of the present invention, an interface for controlling the second camera to end recording may be provided, and in practical application, the form of the interface may include: a button, or an icon. At this time, the user may trigger the image capturing end instruction by clicking the button or icon.
As can be seen from the above embodiments, the embodiments can provide various manners for ending the capturing of the video by the second camera, so as to meet the user's requirements for different manners for ending the capturing of the video by the second camera.
Next, a video playing method provided by an embodiment of the present invention is described.
It should be noted that the video playing method provided by the embodiment of the present invention is applicable to a terminal device. In practical application, the terminal device may include a smart phone, a tablet computer, a camera, a notebook computer, a desktop computer, or another mobile terminal, which is not limited in the embodiment of the present invention.
Fig. 3 is a flowchart of a video playing method according to an embodiment of the present invention, the method is used for playing a target video in the above-mentioned video capturing method embodiment, as shown in fig. 3, the method may include the following steps:
in step 301, during the playing of the target video, it is detected whether a video playing instruction triggered for the second video in the target video is received; if so, step 302 is executed; otherwise, no processing is performed. The target video is a video obtained by merging a first video shot by a first camera and a second video shot by a second camera, where the first camera shoots video in the normal video recording mode, and the second camera tracks and shoots a target object in the framing picture of the first camera in the slow-motion video recording mode.
In this embodiment of the present invention, an interface for controlling a terminal device to play a second video may be provided, and in practical application, the interface may include: a designated area in the video playback screen, or a designated button outside the video playback screen. At this time, the user triggers the video playing instruction by clicking a specified area in the video playing picture or a specified button outside the video playing picture.
In the embodiment of the invention, whether a video playing instruction triggered by a second video in the target video is received or not can be detected in real time in the target video playing process; alternatively, after playing to the position marked with the second video, it may also be started to detect whether a video playing instruction triggered for the second video in the target video is received. At this time, in an optional implementation, the step 301 may include:
in the playing process of the target video, acquiring the position of the starting frame of the second video in the video sequence of the target video; and after the position where the initial frame of the second video is played is reached, detecting whether a video playing instruction triggered by the second video is received.
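Assuming the mark is stored as a frame index, the optional variant of step 301 — only listening for the user's play instruction once playback has reached the marked position — reduces to a simple gate on the current playback position:

```python
def listen_for_slow_play(current_frame: int, slow_start_index: int) -> bool:
    """Return True once playback has reached the marked start frame of the
    second video; only from that point on does the player detect whether a
    video playing instruction is triggered for the second video."""
    return current_frame >= slow_start_index
```

Before the gate opens, taps on the playing picture are handled as ordinary playback interactions rather than slow-motion play instructions.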
In step 302, the currently played video picture is paused, and the second video is played.
In the embodiment of the invention, when the second video is played, the second video can be played on the paused video picture in a mode of overlapping windows; or, a new playing window can be opened to play the second video, and the playing modes are various, so that the personalized requirements of the user on the playing modes are met.
In step 303, when it is detected that the second video is completely played, the paused video picture is continuously played.
In this embodiment of the present invention, the user may manually end the playing of the second video, and at this time, the step 303 may include: and when a playing ending instruction triggered by the second video is received, continuing playing the paused video picture. In addition, the playing of the second video may also be automatically ended, in this case, the step 303 may include: and when the automatic playing of the second video is finished, continuing to play the paused video picture.
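Steps 302 and 303 together form a small playback state machine: pause the main picture, play the slow-motion clip, and resume from the paused position once the clip ends, whether it ends automatically or is ended by the user. The sketch below models only that behaviour; the class and state names are hypothetical:

```python
class TargetVideoPlayer:
    """Minimal model of the pause/overlay/resume behaviour of steps 302-303."""

    def __init__(self) -> None:
        self.position = 0              # current frame of the main (normal) video
        self.state = "playing_main"
        self._paused_position = None

    def on_slow_play_instruction(self) -> None:
        # Step 302: pause the currently played picture and play the second video.
        if self.state == "playing_main":
            self._paused_position = self.position
            self.state = "playing_slow"

    def on_slow_play_finished(self) -> None:
        # Step 303: continue playing the paused picture; this handler covers
        # both automatic completion and a user-triggered play-end instruction.
        if self.state == "playing_slow":
            self.position = self._paused_position
            self.state = "playing_main"
```

Whether the slow-motion clip is rendered in a superimposed window or a new window does not change this state machine; it only changes how the "playing_slow" state is drawn.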
For ease of understanding, the process is described with reference to the scene diagrams of figs. 4-1, 4-2, and 4-3, which show the process of switching from the video frame 41 currently played on the mobile phone screen to the slow video frame 42. Fig. 4-1 shows the video frame 41 currently played on the mobile phone screen; fig. 4-2 shows the user triggering the mobile phone to pause the video frame 41 by clicking the object "bird" in the video frame 41, whereupon a window is superimposed on the video frame 41 to play the slow video frame 42, as shown in fig. 4-3. After the slow video frame 42 finishes playing, the video frame 41 continues to be played on the mobile phone screen.
As can be seen from the above embodiment, when the target video is played, the normal video is played by default; when it is detected that the user chooses to play the slow-motion video containing the target object picture, the current video playing is paused, the slow-motion video is played, and the normal video playing interface is returned to after the slow-motion video finishes playing. Compared with the prior art, in the embodiment of the invention, when a video of a high-speed performance is shot, the object of interest in the performance can be shot in slow motion while the other objects are shot normally; when watching the shot video, the user can view the complete video in the normal playing mode and play back only the object of interest in slow motion whenever more detail of the performance is desired. The manners of video shooting and video playing are thereby enriched, and users' diversified requirements for watching videos are met.
Fig. 5 is a schematic structural diagram of a video camera according to an embodiment of the present invention, and as shown in fig. 5, the video camera 500 may include: a first control unit 501, a selection unit 502, a second control unit 503, a processing unit 504 and a marking unit 505, wherein,
the first control unit 501 is configured to, when a video shooting instruction is received, control a first camera to shoot a video in a normal video recording manner according to the video shooting instruction, so as to obtain a first video;
a selection unit 502 for selecting a target object from a finder screen of the first camera;
a second control unit 503, configured to control a second camera to perform tracking shooting on the target object in a slow-lens video recording manner, so as to obtain a second video;
a processing unit 504, configured to combine the first video captured by the first camera and the second video captured by the second camera to obtain a target video;
a marking unit 505, configured to mark the second video in the target video.
As can be seen from the above embodiment, when a video is shot, the two cameras of the terminal device may be used within the same viewing range to shoot videos in different modes: one camera shoots in the normal video recording mode to obtain a normal video, and the other camera, after a target object is confirmed, tracks the target and shoots in the slow-motion video recording mode to obtain a slow-motion video. The slow-motion video is then merged into the normal video by a video merging technique and is marked, so that during playback of the shot video, when it is detected that the user chooses to play the slow-motion video containing the target object picture, the current video playing is paused, the slow-motion video is played, and the normal video playing interface is returned to after the slow-motion video finishes playing. Compared with the prior art, in the embodiment of the invention, when a video of a high-speed performance is shot, the object of interest in the performance can be shot in slow motion while the other objects are shot normally; when watching the shot video, the user can view the complete video in the normal playing mode and play back only the object of interest in slow motion whenever more detail of the performance is desired. The manners of video shooting and video playing are thereby enriched, and users' diversified requirements for watching videos are met.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the marking unit 505 may include:
and the position marking subunit is used for recording the position of the starting frame of the second video in the video sequence of the target video.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the selecting unit 502 may include:
the instruction receiving subunit is configured to receive an object selection instruction triggered by a framing picture of the first camera, where the framing picture of the first camera includes at least one object;
and the first object determination subunit is used for determining the object pointed by the object selection instruction as the target object.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the selecting unit 502 may include:
a region-of-interest determining subunit, configured to determine a target region of interest in a framing picture of the first camera;
a second object determination subunit, configured to determine an object in the target attention region as a target object.
In another embodiment provided by the present invention, on the basis of the previous embodiment, the target attention area may include:
an eye attention area of the user, and/or a scene focus area in a video capture scene.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the selecting unit 502 may include:
the device comprises a selection condition acquisition subunit, a selection condition acquisition unit and a selection module, wherein the selection condition acquisition subunit is used for acquiring a preset object selection condition, and the object selection condition records the characteristics of a target object;
and a third object determining subunit, configured to determine, as a target object, an object having the feature in the finder screen of the first camera.
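The feature-matching variant of the selection unit can be sketched as picking the first object in the framing picture whose features satisfy the preset object selection condition. The feature representation below — a set of string labels — is an assumption made purely for illustration; the patent does not define how object features are encoded:

```python
from typing import Dict, List, Optional, Set

def select_target_by_features(objects: List[Dict], wanted: Set[str]) -> Optional[Dict]:
    """Return the first detected object whose feature set contains every
    feature recorded in the preset object selection condition."""
    for obj in objects:
        if wanted <= obj["features"]:   # all wanted features present
            return obj
    return None
```

In practice the detected objects and their features would come from an object-detection stage running on the first camera's viewfinder stream.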
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the video camera 500 may further include:
the first detection unit is used for detecting whether the target object is in a current framing picture of the first camera or not;
a third control unit, configured to control the second camera to continue to perform tracking shooting on the target object in a slow-shot video recording manner if the detection result of the first detection unit is yes;
and the fourth control unit is used for controlling the second camera to stop tracking shooting of the target object and storing the shot video under the condition that the detection result of the first detection unit is negative.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the processing unit 504 may include:
and the video merging subunit is configured to merge the first video shot by the first camera and the second video shot by the second camera to obtain a target video when a shooting end instruction triggered by the first camera is received.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the video camera 500 may further include:
a first instruction receiving unit, configured to receive a shooting end instruction triggered by the second camera;
and the fifth control unit is used for controlling the second camera to stop tracking and shooting the target object in a slow-lens video recording mode and storing the shot video.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 5, the video camera 500 may further include:
the first display unit is used for displaying the framing picture of the first camera in a full screen mode and displaying the framing picture of the second camera in a small window mode on the framing picture of the first camera in the process that the second camera carries out tracking shooting on the target object in a slow-lens video recording mode; alternatively,
and the second display unit is used for displaying the framing picture of the second camera in a full screen mode and displaying the framing picture of the first camera in a small window mode on the framing picture of the second camera in the process of tracking and shooting the target object by the second camera in a slow-lens video recording mode.
Fig. 6 is a schematic structural diagram of a video playback device according to an embodiment of the present invention, and as shown in fig. 6, the video playback device 600 may include: a second detection unit 601, a pause unit 602, a first play unit 603, and a second play unit 604, wherein,
a second detecting unit 601, configured to detect whether a video playing instruction triggered by a second video in the target video is received in a playing process of the target video; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording mode, and the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording mode;
a pausing unit 602, configured to pause a currently played video picture when a video playing instruction triggered for the second video is received;
a first playing unit 603 configured to play the second video;
a second playing unit 604, configured to continue playing the paused video frame when it is detected that the second video playing is completed.
As can be seen from the above embodiment, when the target video is played, the normal video is played by default; when it is detected that the user chooses to play the slow-motion video containing the target object picture, the current video playing is paused, the slow-motion video is played, and the normal video playing interface is returned to after the slow-motion video finishes playing. Compared with the prior art, in the embodiment of the invention, when a video of a high-speed performance is shot, the object of interest in the performance can be shot in slow motion while the other objects are shot normally; when watching the shot video, the user can view the complete video in the normal playing mode and play back only the object of interest in slow motion whenever more detail of the performance is desired. The manners of video shooting and video playing are thereby enriched, and users' diversified requirements for watching videos are met.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 6, the second detecting unit 601 may include:
the position acquisition subunit is configured to acquire, in the playing process of the target video, a position of a start frame of the second video in a video sequence of the target video;
and the detection subunit is used for detecting whether a video playing instruction triggered by the second video is received or not after the second video is played to the position.
In another embodiment provided by the present invention, which may be based on the embodiment shown in fig. 6, the first playing unit 603 may include:
and the playing subunit is used for playing the second video in a mode of overlapping a window on the paused video picture.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to 3, and is not described herein again in order to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 710 is configured to, when receiving a video shooting instruction, control the first camera to shoot a video in a common video recording manner according to the video shooting instruction, so as to obtain a first video;
selecting a target object from a framing picture of the first camera;
controlling a second camera to perform tracking shooting on the target object in a slow-lens video recording mode to obtain a second video;
and merging the first video shot by the first camera and the second video shot by the second camera to obtain a target video, and marking the second video in the target video.
The processor 710 is further configured to detect, during a playing process of a target video, whether a video playing instruction triggered by a second video in the target video is received; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording mode, and the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording mode;
when a video playing instruction triggered by the second video is received, pausing a currently played video picture and playing the second video;
and when the second video is detected to be played completely, continuing playing the paused video picture.
In the embodiment of the invention, when a video is shot, the two cameras of the terminal device are used within the same viewing range to shoot videos in different modes: one camera shoots in the normal video recording mode to obtain a normal video, and the other camera, after a target object is confirmed, tracks the target and shoots in the slow-motion video recording mode to obtain a slow-motion video. The slow-motion video is then merged into the normal video by a video merging technique. When the shot video is played, the normal video is played by default; when it is detected that the user chooses to play the slow-motion video containing the target object picture, the current video playing is paused, the slow-motion video is played, and the normal video playing interface is returned to after the slow-motion video finishes playing. Compared with the prior art, in the embodiment of the invention, when a video of a high-speed performance is shot, the object of interest in the performance can be shot in slow motion while the other objects are shot normally; when watching the shot video, the user can view the complete video in the normal playing mode and play back only the object of interest in slow motion whenever more detail of the performance is desired. The manners of video shooting and video playing are thereby enriched, and users' diversified requirements for watching videos are met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 710 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 702, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the mobile terminal 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 701.
The mobile terminal 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the mobile terminal 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, receives a command from the processor 710, and executes the command. In addition, the touch panel 7071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 708 is an interface through which an external device is connected to the mobile terminal 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 700 or may be used to transmit data between the mobile terminal 700 and external devices.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby integrally monitoring the mobile terminal. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The mobile terminal 700 may also include a power supply 711 (e.g., a battery) for powering the various components, and the power supply 711 may be logically coupled to the processor 710 via a power management system that may enable managing charging, discharging, and power consumption by the power management system.
In addition, the mobile terminal 700 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement each process of the above video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement each process of the above-mentioned video playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the video shooting method embodiment and can achieve the same technical effect; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the video playing method embodiment and can achieve the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (18)

1. A method of video capture, the method comprising:
when a video shooting instruction is received, controlling a first camera to shoot a video in a common video recording mode according to the video shooting instruction to obtain a first video;
selecting a target object from a framing picture of the first camera; wherein the selecting a target object from the framing picture of the first camera comprises:
determining a target attention area in a framing picture of the first camera, and determining an object in the target attention area as a target object, wherein the target attention area comprises an eyeball attention area of a user and/or a scene focus area in a video shooting scene;
or acquiring a preset object selection condition, and determining an object with the characteristics in a framing picture of the first camera as a target object, wherein the characteristics of the target object are recorded in the object selection condition;
controlling a second camera to perform tracking shooting on the target object in a slow-lens video recording mode to obtain a second video;
merging the first video shot by the first camera and the second video shot by the second camera to obtain a target video, and marking the second video in the target video;
after the step of controlling the second camera to track and shoot the target object in a slow-lens video mode, the method further includes:
detecting whether the target object is in a current framing picture of the first camera;
if the target object is in the current framing picture of the first camera, controlling the second camera to continue to perform tracking shooting on the target object in a slow-lens video recording mode;
and if the target object is not in the current framing picture of the first camera, controlling the second camera to stop tracking and shooting the target object, and storing the shot video.
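The capture flow of claim 1 can be illustrated with a minimal sketch. This is not the patent's implementation: the function name `record` and the modelling of each viewfinder frame as a set of visible object names are hypothetical simplifications, used only to show the control flow (record normally on the first camera, track in slow motion on the second until the target leaves the viewfinder).

```python
def record(frames, target):
    """Illustrative sketch of claim 1 (names and frame model are hypothetical).

    The first camera records every frame at normal speed; the second camera
    keeps the slow-motion clip only while `target` stays in the viewfinder,
    and stops permanently once the target leaves it.
    """
    first_video = []   # normal-speed video from the first camera
    second_video = []  # slow-motion tracking clip from the second camera
    tracking = True
    for frame in frames:
        first_video.append(frame)          # first camera always records
        if tracking:
            if target in frame:
                second_video.append(frame)  # target in view: keep tracking
            else:
                tracking = False            # target left: stop and keep the clip
    return first_video, second_video
```

Note that tracking does not resume in this sketch even if the target re-enters the frame, matching the "stop tracking and store the shot video" step of the claim.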
2. The method of claim 1, wherein the marking the second video in the target video comprises:
recording the position of the starting frame of the second video in the video sequence of the target video.
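The merge-and-mark step of claims 1 and 2 reduces to recording where the slow-motion clip begins inside the combined video sequence. A minimal sketch, with a hypothetical function name and videos modelled as frame lists:

```python
def merge_and_mark(first_video, second_video):
    """Illustrative sketch of claims 1-2 (names are hypothetical).

    Appends the slow-motion clip to the normal video and records the index
    of the clip's starting frame in the merged sequence as the mark.
    """
    start_frame = len(first_video)            # position of the clip's first frame
    target_video = first_video + second_video  # merged target video
    return target_video, start_frame
```

The recorded `start_frame` is exactly the position claim 7 later uses during playback to decide when to listen for a clip-triggered play instruction.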
3. The method of claim 1, wherein selecting a target object from the viewfinder frame of the first camera comprises:
receiving an object selection instruction triggered by a framing picture of the first camera, wherein the framing picture of the first camera comprises at least one object;
and determining the object pointed by the object selection instruction as a target object.
4. The method of claim 1, wherein merging the first video captured by the first camera with the second video captured by the second camera to obtain a target video comprises:
and when a shooting end instruction triggered by the first camera is received, combining the first video shot by the first camera and the second video shot by the second camera to obtain a target video.
5. The method of claim 1, further comprising:
receiving a shooting end instruction triggered by the second camera;
and controlling the second camera to stop tracking and shooting the target object in a slow-lens video recording mode, and storing the shot video.
6. The method of claim 1, further comprising:
in the process that the second camera carries out tracking shooting on the target object in a slow-lens video mode, displaying a framing picture of the first camera in a full screen mode, and displaying the framing picture of the second camera in a small window mode on the framing picture of the first camera; alternatively,
and in the process that the second camera carries out tracking shooting on the target object in a slow-lens video mode, displaying a framing picture of the second camera in a full screen mode, and displaying a framing picture of the first camera in a small window mode on the framing picture of the second camera.
7. A video playing method for playing a target video, the method comprising:
detecting whether a video playing instruction triggered by a second video in the target video is received or not in the playing process of the target video; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording mode, the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording mode, and after the target object is tracked and shot by the second camera in the slow-lens video recording mode, if the target object is in a current framing picture of the first camera, the second camera continues to track and shoot the target object in the slow-lens video recording mode; if the target object is not in the current framing picture of the first camera, the second camera stops tracking and shooting the target object, and stores the shot video;
when a video playing instruction triggered by the second video is received, pausing a currently played video picture and playing the second video;
and when it is detected that the second video has finished playing, continuing to play the paused video picture;
wherein the detecting whether a video playing instruction triggered by a second video in the target video is received includes:
acquiring the position of the starting frame of the second video in the video sequence of the target video;
and after the video is played to the position, detecting whether a video playing instruction triggered by the second video is received.
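The playback behaviour of claims 7 and 8 can also be sketched briefly. The function name `play_with_clip` and the tagged-tuple output are illustrative assumptions, not the patent's API; the sketch only shows the control flow: on reaching the marked position with a user-triggered play instruction, pause the main picture, play the slow-motion clip (claim 8 shows it in an overlay window), then resume from the paused frame.

```python
def play_with_clip(main_video, clip, clip_position, user_triggered):
    """Illustrative sketch of claims 7-8 (names are hypothetical).

    Plays the main video frame by frame; at the marked start position,
    if the user triggered the clip, the main picture is paused, the
    slow-motion clip plays, and playback then resumes where it paused.
    """
    output = []
    for i, frame in enumerate(main_video):
        if i == clip_position and user_triggered:
            # main picture paused here; clip plays (e.g. in an overlay window)
            output.extend(("clip", f) for f in clip)
        output.append(("main", frame))  # resume the paused main picture
    return output
```

With `user_triggered` false the clip is skipped entirely and the main video plays straight through, matching the claim's conditional trigger.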
8. The method of claim 7, wherein said playing said second video comprises:
and playing the second video in a mode of overlapping windows on the paused video picture.
9. A video capture apparatus, the apparatus comprising:
the first control unit is used for controlling the first camera to carry out video shooting in a common video recording mode according to a video shooting instruction when the video shooting instruction is received, so that a first video is obtained;
a selection unit configured to select a target object from a framing picture of the first camera; wherein the selection unit includes:
the second object determination subunit is used for determining a target attention area in a framing picture of the first camera and determining an object in the target attention area as a target object, wherein the target attention area comprises an eyeball attention area of a user and/or a scene focus area in a video shooting scene;
or, the third object determining subunit is configured to acquire a preset object selection condition, and determine, as the target object, an object having the feature in the finder screen of the first camera, where the feature of the target object is recorded in the object selection condition;
the second control unit is used for controlling the second camera to carry out tracking shooting on the target object in a slow-lens video recording mode to obtain a second video;
the processing unit is used for merging the first video shot by the first camera and the second video shot by the second camera to obtain a target video;
a marking unit, configured to mark the second video in the target video;
wherein the apparatus further comprises:
the first detection unit is used for detecting whether the target object is in a current framing picture of the first camera or not;
a third control unit, configured to control the second camera to continue to perform tracking shooting on the target object in a slow-shot video recording manner if the detection result of the first detection unit is yes;
and the fourth control unit is used for controlling the second camera to stop tracking shooting of the target object and storing the shot video under the condition that the detection result of the first detection unit is negative.
10. The apparatus of claim 9, wherein the marking unit comprises:
and the position marking subunit is used for recording the position of the starting frame of the second video in the video sequence of the target video.
11. The apparatus of claim 9, wherein the selection unit comprises:
the instruction receiving subunit is configured to receive an object selection instruction triggered by a framing picture of the first camera, where the framing picture of the first camera includes at least one object;
and the first object determination subunit is used for determining the object pointed by the object selection instruction as the target object.
12. The apparatus of claim 9, wherein the processing unit comprises:
and the video merging subunit is configured to merge the first video shot by the first camera and the second video shot by the second camera to obtain a target video when a shooting end instruction triggered by the first camera is received.
13. The apparatus of claim 9, further comprising:
a first instruction receiving unit, configured to receive a shooting end instruction triggered by the second camera;
and the fifth control unit is used for controlling the second camera to stop tracking and shooting the target object in a slow-lens video recording mode and storing the shot video.
14. The apparatus of claim 9, further comprising:
the first display unit is used for displaying a framing picture of the first camera in a full screen mode and displaying the framing picture of the second camera in a small window mode on the framing picture of the first camera in the process that the second camera carries out tracking shooting on the target object in a slow-lens video recording mode; alternatively,
and the second display unit is used for displaying the framing picture of the second camera in a full screen mode and displaying the framing picture of the first camera in a small window mode on the framing picture of the second camera in the process of tracking and shooting the target object by the second camera in a slow-lens video recording mode.
15. A video playback apparatus for playing back a target video, the apparatus comprising:
the second detection unit is used for detecting whether a video playing instruction triggered by a second video in the target video is received or not in the playing process of the target video; the target video is a video obtained by combining a first video shot by a first camera and a second video shot by a second camera, the first camera is used for shooting videos in a common video recording manner, the second camera is used for tracking and shooting a target object in a framing picture of the first camera in a slow-lens video recording manner, and after the second camera tracks and shoots the target object in the slow-lens video recording manner, the method further includes: detecting whether the target object is in a current framing picture of the first camera; if the target object is in the current framing picture of the first camera, controlling the second camera to continue to perform tracking shooting on the target object in a slow-lens video recording mode; if the target object is not in the current framing picture of the first camera, controlling the second camera to stop tracking and shooting the target object, and storing the shot video;
the pause unit is used for pausing the currently played video picture when receiving a video playing instruction triggered by the second video;
a first playing unit for playing the second video;
the second playing unit is used for continuing to play the paused video picture when it is detected that the second video has finished playing;
wherein the second detection unit includes:
the position acquisition subunit is configured to acquire, in the playing process of the target video, a position of a start frame of the second video in a video sequence of the target video;
and the detection subunit is used for detecting whether a video playing instruction triggered by the second video is received or not after the second video is played to the position.
16. The apparatus of claim 15, wherein the first playback unit comprises:
and the playing subunit is used for playing the second video in a mode of overlapping a window on the paused video picture.
17. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video capturing method according to any one of claims 1 to 6.
18. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the video playback method according to any one of claims 7 to 8.
CN201711083066.1A 2017-11-07 2017-11-07 Video shooting method, video playing method and device and mobile terminal Active CN107786827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711083066.1A CN107786827B (en) 2017-11-07 2017-11-07 Video shooting method, video playing method and device and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711083066.1A CN107786827B (en) 2017-11-07 2017-11-07 Video shooting method, video playing method and device and mobile terminal

Publications (2)

Publication Number Publication Date
CN107786827A CN107786827A (en) 2018-03-09
CN107786827B true CN107786827B (en) 2020-03-10

Family

ID=61432801

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711083066.1A Active CN107786827B (en) 2017-11-07 2017-11-07 Video shooting method, video playing method and device and mobile terminal

Country Status (1)

Country Link
CN (1) CN107786827B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180270445A1 (en) * 2017-03-20 2018-09-20 Samsung Electronics Co., Ltd. Methods and apparatus for generating video content
CN109525886B (en) 2018-11-08 2020-07-07 北京微播视界科技有限公司 Method, device and equipment for controlling video playing speed and storage medium
CN110557566A (en) * 2019-08-30 2019-12-10 维沃移动通信有限公司 Video shooting method and electronic equipment
CN111147779A (en) * 2019-12-31 2020-05-12 维沃移动通信有限公司 Video production method, electronic device, and medium
CN112399077A (en) * 2020-10-30 2021-02-23 维沃移动通信有限公司 Shooting method and device and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105657299A (en) * 2015-07-14 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Method and system for processing shot data based on double cameras
CN105847636A (en) * 2016-06-08 2016-08-10 维沃移动通信有限公司 Video recording method and mobile terminal
CN106412221A (en) * 2015-07-27 2017-02-15 Lg电子株式会社 Mobile terminal and method for controlling the same

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2016172619A1 (en) * 2015-04-23 2016-10-27 Apple Inc. Digital viewfinder user interface for multiple cameras

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN105657299A (en) * 2015-07-14 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Method and system for processing shot data based on double cameras
CN106412221A (en) * 2015-07-27 2017-02-15 Lg电子株式会社 Mobile terminal and method for controlling the same
CN105847636A (en) * 2016-06-08 2016-08-10 维沃移动通信有限公司 Video recording method and mobile terminal

Also Published As

Publication number Publication date
CN107786827A (en) 2018-03-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant