CN113747073A - Video shooting method and device and electronic equipment

Info

Publication number: CN113747073A
Application number: CN202111067953.6A
Authority: CN (China)
Prior art keywords: area, video, image, input, static
Legal status: Granted, Active
Other languages: Chinese (zh)
Other versions: CN113747073B (en)
Inventor: Wang Kai (王凯)
Current and original assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd; priority to CN202111067953.6A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N5/76 Television signal recording


Abstract

The application relates to the technical field of video shooting processing, and discloses a video shooting method, a video shooting apparatus, and an electronic device. The method includes: during recording of a video, receiving a first input from a user on a shooting preview interface of the video, the first input being used to select a first video area; and in response to the first input, updating the video image of a first dynamic area and displaying a first static image in a first static area. The first dynamic area and the first static area are respectively one of the first video area and a second video area, where the second video area is the area of the video picture display area other than the first video area. The first static image is the part of a first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received. According to the embodiments of the application, an image combining dynamic and static content can be generated directly, simply, and flexibly.

Description

Video shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of video, and particularly relates to a video shooting method, a video shooting apparatus, and an electronic device.
Background
With the continuing development of communication technology, electronic devices such as mobile phones and tablet computers have become indispensable in people's daily lives. With shooting software installed, a user can shoot pictures or videos and record and share life anytime and anywhere, which has made such devices increasingly popular. However, adding a specific visual effect to a video shot with an existing electronic device requires post-production, and post-production is difficult, cumbersome, and time-consuming.
Disclosure of Invention
Embodiments of the present application aim to provide a video shooting method, a video shooting apparatus, and an electronic device that solve the problem that post-production of a shot video is difficult, cumbersome, and time-consuming.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video shooting method, where the method includes: during recording of a video, receiving a first input from a user on a shooting preview interface of the video, where the first input is used to select a first video area;
in response to the first input, updating the video image of a first dynamic area and displaying a first static image in a first static area;
where the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is the area of the video picture display area of the video other than the first video area; the first static image is the part of a first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received.
In a second aspect, an embodiment of the present application provides a video shooting apparatus, including:
the area selection module is used for receiving a first input of a user on a shooting preview interface of a video in the process of recording the video, wherein the first input is used for selecting a first video area;
a first display module for updating a video image of a first dynamic region in response to the first input;
a second display module for displaying a first static image in a first static area in response to the first input;
where the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is the area of the video picture display area of the video other than the first video area; the first static image is the part of a first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of any one of the video shooting methods described above.
In the embodiments of the present application, during recording of a video, a first input from a user on the shooting preview interface is received; the first input is used to select a first video area, and a second video area is the area of the video picture display area other than the first video area. In response to the first input, the video image of the first dynamic area is updated and the first static image is displayed in the first static area. The first dynamic area and the first static area are respectively one of the first video area and the second video area, the first static image is the part of the first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received. A user can thus flexibly select any local area of the video shooting preview picture to be static or dynamic, so that one part of the video picture is static while another part remains dynamic. An image combining dynamic and static content can therefore be generated directly, simply, and flexibly, no later-stage video editing is needed, and video shooting becomes more interesting.
Drawings
Fig. 1 is a first schematic flowchart of a video shooting method provided in an embodiment of the present application;
Fig. 2 is a second schematic flowchart of a video shooting method provided in an embodiment of the present application;
Fig. 3 is a third schematic flowchart of a video shooting method provided in an embodiment of the present application;
Fig. 4 is a fourth schematic flowchart of a video shooting method provided in an embodiment of the present application;
Fig. 5 is a first schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 6 is a second schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 7 is a third schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 8 is a fourth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 9 is a fifth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 10 is a sixth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 11 is a seventh schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 12 is an eighth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 13 is a ninth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 14 is a tenth schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 15 is an eleventh schematic interface diagram of an electronic device provided in an embodiment of the present application;
Fig. 16 is a schematic structural diagram of a video shooting apparatus provided in an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 18 is a schematic hardware structure diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The video shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof. Referring to fig. 1, fig. 1 is a schematic flowchart of a video shooting method provided in an embodiment of the present application, including the following steps:
step 101, in the process of recording a video, receiving a first input of a user on a shooting preview interface of the video, wherein the first input is used for selecting a first video area.
In this embodiment, the process of recording the video includes a process from the start of recording to the end of recording.
The shooting preview interface may be an interface displayed while the camera of the electronic device is in the preview state during video shooting; at this time, a preview image is displayed in the shooting preview interface. The user can perform a first input on the shooting preview interface to select the first video area. The first input may specifically be a slide input, a click input, a long-press input, a gesture input, or a voice input performed on the shooting preview interface. The electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, a netbook, or a personal digital assistant.
Step 102, in response to the first input, updating the video image of the first dynamic area and displaying the first static image in the first static area.
In this embodiment, the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is the area of the video picture display area other than the first video area. The first static image is the part of the first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received. For example, if the first input of the user is received when the video has been recorded for 1 minute 23 seconds (i.e., the first moment), the first video image is the image captured by the camera at 1 minute 23 seconds.
In this embodiment, a user may flexibly select any local area of the video preview picture to be static or dynamic, that is, the first dynamic area and the first static area may be selected at will; the first dynamic area displays a dynamic video image, the first static area displays the static first static image, and an image combining dynamic and static content, containing both the video image and the first static image, can finally be generated directly. It should be noted that the number of first dynamic areas and first static areas is not limited, and the user can set one or any number of first dynamic areas and first static areas through input according to actual requirements.
Optionally, the step of updating the video image of the first dynamic region in step 102 includes:
and substep 1, acquiring a second video image acquired by the camera at a second moment.
For example, when the video is recorded to 2 minutes and 34 seconds (i.e., the second time) during the video recording process, the second video image is the image captured by the camera at 2 minutes and 34 seconds.
And a substep 2, cropping the image located in the first dynamic area from the second video image to obtain a first local video image.
And a substep 3 of updating the image of the first dynamic region to the first local video image.
The process of recording a video is in fact a process in which the camera continuously captures images. Updating the video image of the first dynamic area is therefore a process of cropping the image located in the first dynamic area from each second video image in real time and continuously refreshing the display, that is, replacing the image in the first dynamic area with the first local video image.
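Purely as an illustration (the patent names no library or language), the following Python sketch using OpenCV and NumPy mimics substeps 1 to 3: the pixels of the first static area stay frozen at the first video image, while the first dynamic area is refreshed from every newly captured frame. The rectangular `dynamic_mask` is a placeholder assumption for whatever first video area the first input actually selects.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # the device camera
ok, first_frame = cap.read()                 # first video image, captured at the first moment
assert ok, "camera not available"

h, w = first_frame.shape[:2]
dynamic_mask = np.zeros((h, w), dtype=bool)  # placeholder first dynamic area
dynamic_mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = True

frozen = first_frame.copy()                  # source of the first static image

while True:
    ok, frame = cap.read()                   # second video image at each later moment
    if not ok:
        break
    composite = frozen.copy()
    composite[dynamic_mask] = frame[dynamic_mask]  # first local video image refreshes the dynamic area
    cv2.imshow("preview", composite)
    if cv2.waitKey(1) & 0xFF == 27:          # Esc ends recording
        break
cap.release()
cv2.destroyAllWindows()
```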
Optionally, the first input is a first sliding input of the user on the shooting preview interface, and after the electronic device receives the first sliding input, the method includes: acquiring a first sliding track of the first sliding input; determining the area surrounded by the first sliding track as a dynamic area and the area of the video picture display area other than the dynamic area as a static area; or determining the area surrounded by the first sliding track as a static area and the area of the video picture display area other than the static area as a dynamic area. The first sliding track may be any predetermined shape, such as a circle, an ellipse, a polygon, a straight line, a curve, or an irregular shape. In this embodiment, the shooting preview interface can be divided into a dynamic area and a static area through the first sliding track, and the operation is simple and convenient.
As shown in fig. 6, the user can slide a closed trajectory surrounding a bird 605 with a finger on the shooting preview interface 601, the electronic device generates a first visible sliding trajectory 602 according to the closed trajectory, an area within the first sliding trajectory 602 is a dynamic area 603, and an image of the bird 605 in the dynamic area 603 remains dynamic. While the area outside the closed first sliding trajectory line 602 is a static area 604, the images of trees 606, kites 607, etc. contained in the static area 604 remain static.
As shown in fig. 13, the user can also slide a closed trajectory surrounding a mountain 1305 with a finger on the shooting preview interface 1301, and the electronic device generates a visible first sliding trajectory line 1302 according to the closed trajectory line, an area within the first sliding trajectory line 1302 is a static area 1304, and an image of the mountain 1305 in the static area 1304 remains static. And the area outside the first sliding trajectory line 1302 is a dynamic area 1303, and images of an airplane 1306, a cloud 1307 and the like contained in the dynamic area 1303 are kept dynamic.
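As a minimal sketch in the same vein, a closed first sliding track can be rasterised into such a partition, assuming the touch samples arrive as (x, y) pixel coordinates; `cv2.fillPoly` closes the polygon automatically, and the flag selects which of the two alternatives above applies:

```python
import cv2
import numpy as np

def mask_from_track(points, frame_shape, inside_is_dynamic=True):
    """Rasterise a closed sliding track into a boolean dynamic-area mask.

    points: list of (x, y) touch samples recorded during the first input.
    """
    h, w = frame_shape[:2]
    poly = np.array(points, dtype=np.int32)
    inside = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(inside, [poly], 255)        # area surrounded by the first sliding track
    inside = inside.astype(bool)
    return inside if inside_is_dynamic else ~inside
```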
Optionally, the first input is a second sliding input of the user on the shooting preview interface, and after the electronic device receives the second sliding input, the method includes: acquiring at least one second closed sliding track of the second sliding input; determining the area where a first object is located as a dynamic area and the area of the video picture display area other than the dynamic area as a static area; or determining the area where the first object is located as a static area and the area of the video picture display area other than the static area as a dynamic area. The first object is a video object within the area enclosed by the second closed sliding track. The second closed sliding track may be any predetermined closed shape, such as a circle, an ellipse, a polygon, or an irregular shape. In this embodiment, the shooting preview interface can be divided into a dynamic area and a static area through the second closed sliding track, and the operation is simple and convenient.
As shown in fig. 6, the user can slide a closed trajectory surrounding a bird 605 on the shooting preview interface 601 with a finger, the electronic device generates a second visible closed sliding trajectory 602 according to the closed trajectory, and analyzes, through image recognition, that the bird 605 is present in the second closed sliding trajectory 602 as a first object, an area where the bird 605 is present is a dynamic area 603, and an image of the bird 605 in the dynamic area 603 is kept dynamic. While the area outside the second closed sliding trajectory 602 is a static area 604, the images of trees 606, kites 607, etc. contained in the static area 604 remain static.
As shown in fig. 13, the user may also slide a closed trajectory surrounding a mountain 1305 with a finger on the shooting preview interface 1301, the electronic device generates a second visible closed sliding trajectory 1302 according to the closed trajectory, and analyzes, through image recognition, the mountain 1305 in the second closed sliding trajectory 1302 as a first object, the area where the mountain 1305 is located is a static area 1304, and the image of the mountain 1305 in the static area 1304 remains still. And the area outside the second closed sliding track 1302 is a dynamic area 1303, and images of the airplane 1306, the cloud 1307 and the like contained in the dynamic area 1303 are kept dynamic.
Optionally, the first input is a first touch input of the user on the shooting preview interface, where the first touch input includes, but is not limited to, a single-click input, a double-click input, or a long-press input. After receiving the first touch input, the method includes: acquiring a first input position of the first touch input; determining an area of a preset range containing the first input position as a dynamic area and the area of the video picture display area other than the dynamic area as a static area; or determining an area of a preset range containing the first input position as a static area and the area of the video picture display area of the shooting preview interface other than the static area as a dynamic area. In this embodiment, the shooting preview interface can be divided into a dynamic area and a static area through the first touch input, that is, the complete video picture captured by one camera is divided into a dynamic area and a static area, so that locally static and locally dynamic video effects are realized, and the user operation is simple and convenient.
As shown in fig. 6, the user may tap a bird 605 with a finger on the shooting preview interface 601, and the electronic device may set an area of a predetermined range containing the tap as the dynamic area 603, for example a predetermined radius centered on the tap position; the area of the predetermined range may be any predetermined shape such as a circle, a polygon, or an irregular shape, and the image of the bird 605 in the dynamic area 603 remains dynamic. The area outside the dynamic area 603 is a static area 604, and the images of the trees 606, the kite 607, and the like contained in the static area 604 remain static.
As shown in fig. 13, the user may instead tap a mountain 1305 with a finger on the shooting preview interface, and the electronic device may set an area of a predetermined range containing the tap as the static area 1304, for example a predetermined radius centered on the tap position; the area of the predetermined range may be any predetermined shape such as a circle, a polygon, or an irregular shape, and the image of the mountain 1305 in the static area 1304 remains static. The area outside the predetermined range is a dynamic area 1303, and the images of the airplane 1306, the clouds 1307, and the like contained in the dynamic area 1303 remain dynamic.
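The preset range around the first input position can be rasterised the same way; the circular shape and the 120-pixel radius in this sketch are assumptions, since the embodiment allows any predetermined shape:

```python
import cv2
import numpy as np

def mask_from_tap(tap_xy, frame_shape, radius=120, tap_is_dynamic=True):
    """Boolean mask for a preset range containing the first input position."""
    h, w = frame_shape[:2]
    region = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(region, tap_xy, radius, 255, thickness=-1)  # filled circle around the tap
    region = region.astype(bool)
    return region if tap_is_dynamic else ~region
```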
Optionally, the first input is a second touch input of the user on the shooting preview interface, where the second touch input includes, but is not limited to, a single-click operation, a double-click operation, or a long-press operation. After receiving the second touch input, the method includes: acquiring a second input position of the second touch input; determining the area where a second object is located as a dynamic area and the area of the video picture display area other than the dynamic area as a static area; or determining the area where the second object is located as a static area and the area of the video picture display area other than the static area as a dynamic area. The second object is the video object at the second input position. In this embodiment, the dynamic area or the static area is set by recognizing the second object at the position of the second touch input on the shooting preview interface; the operation is simple and convenient, and the dynamic or static area can be set more accurately.
As shown in fig. 6, the user can tap a bird 605 with a finger on the shooting preview interface 601; the bird 605 corresponding to the tapped position is identified as the second object through image recognition, the area where the bird 605 is located is determined as the dynamic area 603, and the image of the bird 605 in the dynamic area 603 remains dynamic. The area outside the bird 605 is a static area 604, and the images of the trees 606, the kite 607, and the like contained in the static area 604 remain static.
As shown in fig. 13, the user may instead tap a mountain 1305 with a finger on the shooting preview interface 1301; the mountain 1305 corresponding to the tapped position is identified as the second object through image recognition, the area where the mountain 1305 is located is the static area 1304, and the image of the mountain 1305 in the static area remains static. The area outside the mountain 1305 is a dynamic area 1303, and the images of the airplane 1306, the clouds 1307, and the like contained in the dynamic area 1303 remain dynamic.
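The embodiment leaves the image recognition step unspecified. As one hedged possibility, a GrabCut segmentation seeded with a box around the second input position could stand in for recognising the video object at the tap; the box size is an assumption:

```python
import cv2
import numpy as np

def object_mask_at_tap(frame, tap_xy, box=200):
    """Approximate the area where the tapped video object is located."""
    h, w = frame.shape[:2]
    x, y = tap_xy
    rect = (max(x - box // 2, 0), max(y - box // 2, 0), box, box)
    gc_mask = np.zeros((h, w), dtype=np.uint8)
    bgd = np.zeros((1, 65), dtype=np.float64)  # scratch models required by GrabCut
    fgd = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(frame, gc_mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```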
According to the video shooting method described above, a user can flexibly select any local area of the video shooting preview picture to be static or dynamic, that is, the first dynamic area and the first static area are selected at will; the first dynamic area displays a dynamic video image, the first static area displays the static first static image, and an image combining dynamic and static content, containing both the video image and the first static image, can finally be generated directly. One part of the video picture is thus static while another part remains dynamic, an image combining dynamic and static content is generated directly, simply, and flexibly, no later-stage video editing is needed, and video shooting becomes more interesting.
Referring to fig. 2, fig. 2 is a second schematic flowchart of a video shooting method according to an embodiment of the present application, including the following steps:
step 201, in the process of recording a video, receiving a first input of a user on a shooting preview interface of the video, wherein the first input is a sliding input and is used for selecting a first video area.
In this embodiment, the process of recording the video includes a process from the start of recording to the end of recording.
Step 202, displaying a region selection frame, wherein the region selection frame is generated by the electronic equipment according to the sliding track of the first input.
As shown in fig. 6, a user can slide on the shooting preview interface 601 with a finger, a stylus, or another touch device; the sliding track is a closed track surrounding a bird 605, and the electronic device generates a first sliding track line 602 matching the sliding track, where the first sliding track line 602 serves as the area selection box 602.
As shown in fig. 13, the user can also slide a closed track surrounding a mountain 1305 with a finger on the shooting preview interface 1301, and the electronic device generates a visible first sliding track line 1302 according to the closed track, where the first sliding track line 1302 is the area selection box.
Step 203, in response to the first input, updating the video image of the first dynamic area and displaying the first static image in the first static area.
In this embodiment, the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is the area of the video picture display area other than the first video area. The first static image is the part of the first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received. For example, if the first input of the user is received when the video has been recorded for 1 minute 23 seconds (i.e., the first moment), the first video image is the image captured by the camera at 1 minute 23 seconds.
And step 204, receiving a second input of the area selection box from the user.
Alternatively, the second input may be a drag operation of the area selection box by the user, or the like.
Step 205, updating the display parameters of the region selection box in response to the second input.
Optionally, the display parameters include at least one of: display area, display position.
Step 206, determining a second dynamic area and a second static area based on the area framed by the area selection box after the display parameters are updated.
For example, in response to a drag operation performed by the user on the area selection box, the display area parameter and/or the display position parameter of the area selection box are updated.
As shown in fig. 7, the user may press the border of the area selection box of the dynamic area 603 with a finger and slide outward to increase the size of the second dynamic area 603 framed by the enlarged area selection box. Alternatively, the user may slide two fingers outward simultaneously to enlarge the second dynamic area 603 proportionally. At this time, the electronic device updates the display area parameter of the area selection box, enlarges the second dynamic area 603 (either freely or proportionally) based on the area framed by the area selection box after the display parameters are updated, and updates the area other than the second dynamic area 603 to be the second static area.
As shown in fig. 8, the user may wish to adjust the position of the area selection box 602, for example to reselect a dynamic area; the user may then drag the area selection box 602 with a finger to another location. At this time, the electronic device updates the display position parameter of the area selection box and re-determines the dynamic area as the area shown in fig. 8 based on the area framed by the area selection box 602 after the display parameters are updated.
Step 207, the video image of the second dynamic area is updated, and the second static image is displayed in the second static area.
In this embodiment, the second still image is an image in a second still area in the second video image, and the second video image is an image acquired by the camera at a second time when the second input is received. For example, when the video is recorded to 2 minutes and 34 seconds (i.e., the second time) during the video recording process, the second video image is the image captured by the camera at 2 minutes and 34 seconds.
For specific implementations of step 201 and step 203, reference may be made to the descriptions of step 101 and step 102, which are not repeated here.
In the video shooting method of this embodiment, the second input may be a drag operation performed by the user on the area selection box of the dynamic area or the static area: the user may drag the area selection box leftward, rightward, upward, or downward, or scale the dynamic or static area proportionally. The electronic device adjusts the size or position of the box boundary according to the dragging direction and dragging distance. For example, when the user drags the area selection box 2 cm to the right, the box moves 2 cm to the right. In this way, the user can adjust the display area and/or the display position of the dynamic or static area by dragging the area selection box, and the operation is convenient and fast.
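Assuming the selection is held as a boolean mask as in the earlier sketches, the display-position update reduces to translating the mask by the drag vector, after which the second dynamic and second static areas are simply re-read from it; a minimal sketch:

```python
import numpy as np

def drag_mask(mask, dx, dy):
    """Shift the area framed by the area selection box by (dx, dy) pixels.

    Pixels dragged outside the video picture display area are discarded.
    """
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out[ys[keep], xs[keep]] = True
    return out
```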
Referring to fig. 3, fig. 3 is a third schematic flowchart of a video shooting method according to an embodiment of the present application, including the following steps:
step 301, in the process of recording a video, receiving a first input of a user on a shooting preview interface of the video, wherein the first input is used for selecting a first video area.
Step 302, a third input of the user to the first dynamic region is received.
In this embodiment, the first dynamic area is the first video area. The third input may specifically be a sliding input, a click input, a long-press input, a gesture operation, or a voice operation performed on the shooting preview interface, where the click input may be a single-click or a double-click input; the specific form may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
Step 303, in response to a third input, determining the video object in the first dynamic region as a target object, wherein the video object is a moving object.
In this embodiment, the user selects at least one video object in the first dynamic region through the third input, and determines that the video object is the target object to be locked.
And step 304, in the motion process of the video object, determining a third dynamic area and a third static area according to the change information of the display position of the video object. Wherein the change information comprises movement of the video object from a first position to a second position. The third dynamic region is a region where the video object in the third video image is located. The third static area is an area other than the third dynamic area in the video picture display area.
In this embodiment, since the video object is moving all the time, the position of the video object may change in real time, and the electronic device may monitor the change in the position of the video object in real time through image recognition. The video object moves from a first position to a second position, the first position being the position of the video object at a first time instant, the second position being the position of the video object at a second time instant. For example, when the video is recorded to 1 minute 23 seconds (i.e., the first time) in the process of recording the video, a third input of the user is received, and the video object in the first dynamic area is determined as the target object, where the position of the video object is the first position. Next, when the video is recorded for 2 minutes and 34 seconds (i.e., the second time) during the recording of the video, the video object moves to the second position at this time.
And 305, updating the video image of the third dynamic area and displaying the third static image in the third static area.
In this embodiment, the third still image is an image in a third still area in the third video image, and the third video image is an image acquired by the camera at a third time when the third input is received. For example, when the video is recorded to 1 minute 23 seconds (i.e., the first time) in the process of recording the video, a third input of the user is received, and the video object in the first dynamic area is determined as the target object, where the position of the video object is the first position. Then, when the video is recorded for 2 minutes and 34 seconds (i.e., the second time) during the recording of the video, the video object moves to the second position at this time. Then, when the video is recorded to 3 minutes and 45 seconds (i.e., the third time) during the video recording process, the third video image is an image captured by the camera at the time of 3 minutes and 45 seconds.
As shown in fig. 9, a user may first slide a closed track surrounding a bird 605 on the shooting preview interface 601 with a finger; the area within the closed track is the dynamic area 603, and the image of the bird 605 in the dynamic area 603 remains dynamic. The user can then slide inside the dynamic area 603 along a closed track containing the bird 605 and tap the locking mark 608; the area selection box 602 then keeps following the bird 605 (i.e., the target object). As the bird 605 moves upward, downward, leftward, or rightward in the shooting field of view, the dynamic area is updated in synchronization with the tracked object's position, that is, the area selection box 602 coincides with the real-time display position of the tracked object: on the shooting preview interface 601, the area selection box 602 automatically moves from the first position to the second position following the moving direction and moving distance of the bird 605. A mark in the present application is a word, symbol, image, or the like used to indicate information; a control or other container may serve as a carrier for displaying it, including but not limited to a word mark, a symbol mark, or an image mark.
Alternatively, as also shown in fig. 9, the user may click the flying bird 605 in the dynamic area 603 and then click the locking mark 608. The area selection box 602 then keeps following the flying bird 605: as the bird moves upward, downward, leftward, or rightward in the shooting field of view, the area selection box 602 automatically follows it, performing a synchronized position update on the shooting preview interface according to the moving direction and moving distance of the bird 605, so that the area selection box 602 coincides with the real-time display position of the tracked object.
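As a hedged illustration of this embodiment, an off-the-shelf tracker can play the role of the unspecified position monitoring: the third dynamic area is re-derived from the tracked bounding box on every frame, so the region follows the locked target object. The CSRT tracker below comes from the opencv-contrib-python build, and the initial bounding box is an assumed stand-in for the area chosen by the third input.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frozen = cap.read()                      # source of the pixels for the third static area
assert ok, "camera not available"

tracker = cv2.TrackerCSRT_create()           # requires opencv-contrib-python
bbox = (100, 100, 80, 60)                    # assumed box around the locked target object
tracker.init(frozen, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)      # change information of the display position
    mask = np.zeros(frame.shape[:2], dtype=bool)
    if found:
        x, y, bw, bh = (max(int(v), 0) for v in bbox)
        mask[y:y + bh, x:x + bw] = True      # third dynamic area follows the object
    composite = frozen.copy()
    composite[mask] = frame[mask]            # dynamic pixels refresh; the rest stays still
    cv2.imshow("preview", composite)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```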
According to this video shooting method, a user can select at least one video object in the dynamic area as the target object to be locked. During the motion of the video object, the electronic device automatically determines the dynamic area and the static area according to the change information of the video object's display position, and the area selection box 602 automatically follows the moving video object, always maintaining the following state (i.e., synchronized position updates). Video shooting thus becomes more flexible and changeable, and the operation is convenient and fast.
Referring to fig. 4, fig. 4 is a fourth schematic flowchart of a video shooting method provided in the embodiment of the present application, including the following steps:
step 401, in the process of recording a video, receiving a first input of a user on a shooting preview interface of the video, where the first input is used to select a first video area.
For example, when a video is recorded for 1 minute 23 seconds (i.e., a first time) during recording of the video, a first input from the user is received, and the first video area is selected.
Step 402, receiving a fourth input of the user to the video object of the first dynamic region, where the fourth input is used to input a fourth position.
In this embodiment, the first dynamic area is the first video area. The fourth input may specifically be a sliding operation, a clicking operation, a long-press operation, a gesture operation, or a voice operation performed on the shooting preview interface, where the clicking operation may be a single-click or a double-click. For example, the electronic device may determine the fourth position from the end point of a sliding operation, or from a single-click, double-click, long-press, gesture, or voice operation.
And step 403, in response to the fourth input, cropping the image of the area where the video object is located from the fourth video image to obtain a second local video image.
And step 404, updating the second local video image to a fourth position for displaying.
In this embodiment, the initial position of the video object in the video frame display area is a third position, and the user may move the video object to a fourth position through a fourth input to the video object.
For example, when the video has been recorded for 4 minutes 56 seconds (i.e., the fourth moment), a fourth input of the user is received; the fourth input is used to input a fourth position, and the second local video image is updated to the fourth position for display.
Step 405, determining the area where the video object at the fourth position is located as a fourth dynamic area, and determining the area other than the fourth dynamic area in the video picture display area as a fourth static area.
Step 406, the video image of the fourth dynamic area is updated, and the fourth static image is displayed in the fourth static area.
In this embodiment, the fourth still image is an image in a fourth still area in the fourth video image, and the fourth video image is an image acquired by a camera at a fourth time receiving a fourth input.
For example, if the fourth input of the user is received when the video has been recorded for 4 minutes 56 seconds (i.e., the fourth moment), the fourth video image is the image captured by the camera at 4 minutes 56 seconds.
As shown in fig. 8, when the user wishes to adjust the position of the dynamic area 603, with the bird 605 at the third position in the video display area, the user drags the area selection box 602 to the fourth position in fig. 8, where the kite is located.
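A hedged sketch of steps 403 to 405: the second local video image is cropped from the area where the video object is located and pasted so that it is centred on the fourth position. Bounds clipping is omitted for brevity, so the sketch assumes the relocated patch fits inside the frame.

```python
import numpy as np

def relocate_object(frame, obj_mask, target_xy):
    """Paste the object's pixels (second local video image) at the fourth position."""
    ys, xs = np.nonzero(obj_mask)            # area where the video object is located
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    patch = frame[y0:y1, x0:x1].copy()
    out = frame.copy()
    tx, ty = target_xy                       # fourth position chosen by the fourth input
    ph, pw = patch.shape[:2]
    oy, ox = ty - ph // 2, tx - pw // 2      # top-left corner at the new location
    out[oy:oy + ph, ox:ox + pw] = patch      # assumes the patch stays inside the frame
    return out
```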
According to the video shooting method described above, the user can select at least one video object in the dynamic area and flexibly control its display position in the picture, so that the video object can be moved from the third position to the fourth position during video recording. Video shooting thus becomes more flexible and changeable, and the operation is convenient and fast.
To facilitate understanding of the present solution, the following description illustrates specific embodiments of the present application in conjunction with an interface schematic diagram of an electronic device.
Fig. 5 to 10 are interface schematic diagrams of an electronic device according to an embodiment of the present application. In this embodiment, the user opens the camera application of the electronic device and enters video shooting. The user slides out a closed sliding track 602 on the shooting preview interface 601; the area within the closed sliding track 602 is a dynamic area 603 and the other areas are static areas 604. The images of the other areas then remain static and only the image of the dynamic area 603 remains dynamic. The user can adjust the size of the dynamic area 603 at will and move its position on the shooting preview interface 601. The user may also lock an object via the area selection box 602, and the area selection box 602 will automatically follow the object, that is, the dynamic area 603 performs synchronized position updates on the shooting preview interface 601 and coincides with the real-time display position of the tracked object. The process specifically includes the following steps:
step A1: the user opens the camera application of the electronic device into video capture, as shown in fig. 5.
Step A2: in the process of recording a video, receiving a first sliding input of a user on a shooting preview interface, wherein the first sliding input is used for acquiring a first sliding track of the first sliding input, determining an area surrounded by the first sliding track as a dynamic area, and determining an area except the dynamic area in a video picture display area as a static area.
As shown in fig. 6, the user can slide a closed sliding track 602 surrounding a bird 605 on the shooting preview interface 601 with a finger, the area surrounded by the closed sliding track 602 is determined as a dynamic area 603, and the area other than the dynamic area 603 in the video screen display area is determined as a static area 604.
Step A3: and receiving a second input of the user to the area selection frame, and adjusting the area selection frame.
As shown in fig. 7, the user may press a finger on the area selection box of the dynamic area 603 and slide outward to increase the size of the second dynamic area 603. Alternatively, the user may slide two fingers outward simultaneously to enlarge the second dynamic area 603 proportionally. At this time, the electronic device updates the display area parameter of the area selection box, enlarges the second dynamic area 603 (either freely or proportionally) based on the area framed by the area selection box after the display parameters are updated, and updates the area other than the second dynamic area 603 to be the second static area.
As shown in fig. 8, if the user wishes to adjust the position of the second dynamic area 603, the user presses the inner region of the dynamic area 603 to select the whole second dynamic area 603 and then slides the finger to drag the dynamic area 603 to another location. At this time, the electronic device updates the display position parameter of the area selection box and, based on the area framed by the area selection box after the display parameters are updated, moves the dynamic area 603 from the first position, where the bird 605 is located, to the second position, where the kite is located.
Step A4: receiving a third input of the user to the first dynamic region; in response to a third input, the video object in the first dynamic region is determined to be a target object.
As shown in fig. 9, a user may first slide a closed track surrounding a bird 605 on the shooting preview interface 601 with a finger; the area within the closed track is the dynamic area 603, and the image of the bird 605 in the dynamic area 603 remains dynamic. The user may then slide out another closed track containing the bird 605 within the dynamic area 603 and click the locking mark; the area selection box 602 then keeps following the bird 605 (i.e., the target object). As the bird 605 moves upward, downward, leftward, or rightward in the shooting field of view, the dynamic area is updated in synchronization with the tracked object's position, that is, the dynamic area 603 coincides with the real-time display position of the tracked object: on the shooting preview interface 601, the area selection box 602 automatically moves from the first position to the second position following the moving direction and moving distance of the bird 605.
Fig. 11 to 15 are interface schematic diagrams of an electronic device according to an embodiment of the present application. The user slides a closed sliding track 1302 on the shooting preview interface 1301; the image inside the closed sliding track 1302 remains static while the image outside it remains dynamic. The user can slide out a plurality of such areas at will and adjust their positions on the shooting preview interface 1301. The user may also cancel the static state of the static area 1304 at any time.
Step B1: entering video recording. As shown in fig. 11, the operation is the same as in the above embodiment.
Step B2: the user selects dynamic region 1303.
As shown in fig. 12, a user takes a video through a window on an airplane 1306 while a large mountain 1305 is in view.
As shown in fig. 13, the user may also slide a closed sliding track 1302 surrounding a mountain 1305 with a finger on the shooting preview interface 1301, where an area within the closed sliding track 1302 is a static area 1304, and an image of the mountain 1305 in the static area 1304 remains static. And the area outside the closed sliding track 1302 is a dynamic area 1303, and images of the airplane 1306, the cloud 1307 and the like contained in the dynamic area 1303 are kept dynamic.
Step B3: a plurality of areas are slid out, and their positions on the shooting preview interface 1301 are dragged and adjusted.
As shown in fig. 14, a flying bird 1308 appears on the shooting preview interface 1301, and the user wants to keep it in the picture longer. The user slides out a closed sliding track 1302 to frame the flying bird 1308, and the framed static area 1304 remains static; the user can also press the static area 1304 and drag it to the desired position before continuing the recording.
Step B4: the static area 1304 is updated to a dynamic area.
As shown in fig. 15, when the user is satisfied with the shooting of the mountain 1305, the user may press the static area 1304 and quickly slide down to cancel the static state of the mountain 1305 area. The area of the mountain 1305 is then released from the static state and updated to a dynamic area, and the mountain gradually disappears from the field of view.
According to the video shooting method described above, the user can flexibly select any local area of the video shooting preview picture to be static or dynamic, that is, the first dynamic area and the first static area 1304 are selected at will; the first dynamic area displays a dynamic video image, the first static area 1304 displays the static first static image, and an image combining dynamic and static content, containing both the video image and the first static image, can finally be generated directly. One part of the video picture is thus static while another part remains dynamic, an image combining dynamic and static content can be generated directly, simply, and flexibly, no later-stage video editing is needed, and video shooting becomes more interesting.
It should be noted that, in the video shooting method provided in the embodiments of the present application, the execution subject may be an electronic device, a video shooting apparatus, or a control module in the video shooting apparatus for executing the video shooting method. In the embodiments of the present application, a video shooting apparatus executing the video shooting method is taken as an example to describe the video shooting apparatus provided in the embodiments of the present application.
Referring to fig. 16, fig. 16 is a schematic structural diagram of a video shooting apparatus according to an embodiment of the present application, where the video shooting apparatus 1600 includes:
the area selection module 1601 is configured to receive a first input of a user on a shooting preview interface of a video during a video recording process, where the first input is used to select a first video area;
a first display module 1602, configured to update the video image of the first dynamic region in response to a first input;
a second display module 1603 for displaying the first static image in the first static area in response to the first input;
where the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is the area of the video picture display area of the video other than the first video area; the first static image is the part of the first video image located in the first static area, and the first video image is the image captured by the camera at the first moment at which the first input is received.
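Purely as an illustration of how these three modules could cooperate, the Python sketch below wires an area selection module and the two display modules around a shared camera. All names and internals are assumptions for exposition, not the patented implementation.

```python
class VideoShootingApparatus:
    """Assumed wiring of the modules described above (frames are NumPy arrays,
    e.g. from cv2.VideoCapture); not the patented implementation."""

    def __init__(self, camera):
        self.camera = camera
        self.dynamic_mask = None             # boolean mask set by the area selection module
        self.frozen = None                   # first video image, source of the static pixels

    def on_first_input(self, dynamic_mask):
        """Area selection module: record the first video area and freeze the
        first video image at the first moment."""
        self.dynamic_mask = dynamic_mask
        ok, self.frozen = self.camera.read()
        assert ok, "camera not available"

    def render(self):
        """First display module refreshes the dynamic area; second display
        module keeps the first static image elsewhere."""
        ok, frame = self.camera.read()
        if not ok:
            return None
        if self.dynamic_mask is None:
            return frame                     # no selection yet: the whole picture stays dynamic
        out = self.frozen.copy()
        out[self.dynamic_mask] = frame[self.dynamic_mask]
        return out
```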
Optionally, the first display module further comprises:
the image acquisition submodule is used for responding to the first input and acquiring a second video image acquired by the camera at a second moment;
the image cropping submodule is used for cropping the image located in the first dynamic area from the second video image to obtain a first local video image;
and the image updating submodule is used for updating the image of the first dynamic area into the first local video image.
Optionally, the first input is a slide input;
the apparatus further includes:
and the selection frame display module is used for displaying the area selection frame after receiving the first input, and the area selection frame is generated according to the sliding track of the first input.
Optionally, the apparatus further comprises:
the first receiving module is used for receiving a second input of the area selection frame from the user after the area selection frame is displayed;
a parameter updating module for updating the display parameters of the region selection box in response to a second input;
the area adjusting module is used for determining a second dynamic area and a second static area based on the area framed by the area selection frame after the display parameters are updated;
the first display module is used for updating the video image of the second dynamic area;
a second display module for displaying a second static image in a second static area;
the second static image is an image in a second static area in the second video image, and the second video image is an image collected by the camera at a second moment for receiving a second input.
Optionally, the display parameters include at least one of: display area, display position.
Optionally, the first dynamic region is a first video region;
the device further comprises:
the second receiving module is used for receiving a third input of the user to the first dynamic area after receiving the first input;
an object determination module, configured to determine, in response to a third input, a video object in the first dynamic region as a target object, the video object being a moving object;
the object tracking module is used for determining a third dynamic area and a third static area according to the change information of the display position of the video object in the motion process of the video object; wherein the change information comprises movement of the video object from a first position to a second position; the third dynamic area is an area where the video object in the third video image is located; the third static area is an area except the third dynamic area in the video picture display area;
the first display module is used for updating the video image of the third dynamic area;
a second display module for displaying a third static image in a third static area;
the third static image is an image in a third static area in the third video image, and the third video image is an image collected by the camera at a third moment for receiving a third input.
Optionally, the first dynamic area is the first video area, and the video object is initially located at a third position in the video picture display area;
the device further comprises:
the third receiving module is used for receiving a fourth input of the video object of the first dynamic area from the user after receiving the first input, and the fourth input is used for inputting a fourth position;
the image acquisition module is used for, in response to the fourth input, cropping the image of the area where the video object is located from the fourth video image to obtain a second local video image;
the image updating module is used for updating the second local video image to a fourth position for displaying;
the area changing module is used for determining the area where the video object at the fourth position is located as a fourth dynamic area and determining the area except the fourth dynamic area in the video picture display area as a fourth static area;
the first display module is used for updating the video image of the fourth dynamic area;
the second display module is used for displaying a fourth static image in a fourth static area;
the fourth still image is an image in a fourth video image, which is located in a fourth still area, and the fourth video image is an image collected by a camera at a fourth moment for receiving a fourth input.
Optionally, the first input comprises a first sliding input of the user on the shooting preview interface;
the region selection module further comprises:
the first obtaining submodule is used for obtaining a first sliding track of the first sliding input;
the first determining submodule is used for determining the area surrounded by the first sliding track as a dynamic area and determining the area except the dynamic area in the video picture display area as a static area;
or, the area surrounded by the first sliding track is determined as a static area, and the area other than the static area in the video picture display area is determined as a dynamic area.
Optionally, the first input comprises a second sliding input of the user on the shooting preview interface;
the region selection module further comprises:
a second obtaining submodule for obtaining at least one second closed sliding trajectory of a second sliding input;
the second determining submodule is used for determining the area where the first object is located as a dynamic area and determining the area except the dynamic area in the video picture display area as a static area;
or determining the area where the first object is located as a static area, and determining the area except the static area in the video picture display area as a dynamic area;
and the first object is a video object in the area enclosed by the second closed sliding track.
Optionally, the first input includes a first touch input of a user on a shooting preview interface;
the region selection module further comprises:
the third obtaining submodule is used for obtaining a first input position of the first touch input;
a third determining submodule, configured to determine an area of a preset range including the first input position as a dynamic area, and determine an area other than the dynamic area in the video picture display area as a static area;
or, determining an area including a preset range of the first input position as a static area, and determining an area other than the static area in a video picture display area of the shooting preview interface as a dynamic area.
Optionally, the first input is a second touch input of the user on the shooting preview interface;
the region selection module further comprises:
the fourth acquiring submodule is used for acquiring a second input position of the second touch input;
a fourth determining submodule, configured to determine the area where the second object is located as a dynamic area, and determine the area other than the dynamic area in the video picture display area as a static area;
or, determine the area where the second object is located as a static area, and determine the area other than the static area in the video picture display area as a dynamic area;
and the second object is a video object where the second input position is located.
The video shooting device 1600 in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA), and the like, and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine or a self-service machine, and the like, which is not specifically limited in the embodiments of the present application.
The video shooting device 1600 in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video shooting device 1600 provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 15, and is not described here again to avoid repetition.
With the video shooting device 1600 of the embodiment of the present application, a user can flexibly set any local area in the video shooting preview picture to be static or dynamic; that is, the first dynamic area and the first static area can be selected arbitrarily, the first dynamic area displays a dynamic video image, the first static area displays a static first static image, and a dynamic and static combined image including both the video image and the first static image can finally be generated directly. In this way, one part of the video picture is static while another part is dynamic, so a dynamic and static combined image can be generated directly, simply and flexibly without later video editing, which can improve the interest of video shooting.
Optionally, as shown in fig. 17, an electronic device 1700 is further provided in an embodiment of the present application, and includes a processor 1701, a memory 1702, and a program or an instruction stored in the memory 1702 and executable on the processor 1701, where the program or the instruction is executed by the processor 1701 to implement each process of the above-mentioned embodiment of the video shooting method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 18 is a hardware configuration diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1800 includes, but is not limited to: radio frequency unit 1801, network module 1802, audio output unit 1803, input unit 1804, sensors 1805, display unit 1806, user input unit 1807, interface unit 1808, memory 1809, and processor 1810.
It should be understood that, in the embodiment of the present application, the radio frequency unit 1801 may be configured to receive and transmit signals during a message sending and receiving process or a call process; specifically, after receiving downlink data from a base station, it delivers the downlink data to the processor 1810 for processing, and it also transmits uplink data to the base station. Generally, the radio frequency unit 1801 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1801 may also communicate with a network and other devices through a wireless communication system.
The user input unit 1807 is configured to:
receiving a first input of a user on a shooting preview interface of a video in the process of recording the video, wherein the first input is used for selecting a first video area;
the processor 1810 is configured to:
in response to the first input, updating the video image of the first dynamic region and displaying the first still image in the first still region through the display unit 1806;
wherein the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is an area of the video picture display area of the video except the first video area; the first static image is an image in a first video image, which is located in the first static area, and the first video image is an image collected by a camera at a first moment for receiving the first input.
Optionally, the processor 1810 is further configured to:
acquiring a second video image acquired by the camera at a second moment;
intercepting an image in the second video image, which is positioned in the first dynamic area, to obtain a first local video image;
and updating the image of the first dynamic region into the first local video image.
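For illustration only, the per-frame compositing described above can be sketched in Python with NumPy. This sketch is not part of the disclosed embodiments; the function and variable names are assumptions of this illustration, and the first dynamic area is represented as a boolean mask.

```python
import numpy as np

def composite_frame(live_frame: np.ndarray,
                    frozen_frame: np.ndarray,
                    dynamic_mask: np.ndarray) -> np.ndarray:
    """Refresh only the first dynamic area; keep the static area frozen.

    live_frame:   H x W x 3 frame just collected by the camera
                  (the "second video image" at the second moment).
    frozen_frame: H x W x 3 frame collected at the first moment; its
                  static-area pixels supply the "first static image".
    dynamic_mask: H x W boolean array, True inside the first dynamic area.
    """
    out = frozen_frame.copy()
    # The masked pixels of the live frame are the "first local video image".
    out[dynamic_mask] = live_frame[dynamic_mask]
    return out
```

In such a sketch, every preview frame would pass through composite_frame before display, so the static area never changes while the dynamic area keeps playing.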
Optionally, the first input is a slide input;
after receiving the first input of the user on the shooting preview interface of the video, the processor 1810 is further configured to:
an area selection box generated according to the sliding trajectory of the first input is displayed through the display unit 1806.
Optionally, after the video image of the first dynamic region is updated in response to the first input and the first static image is displayed in the first static region, the user input unit 1807 is further configured to: receiving a second input of the area selection box from the user;
the processor 1810, further configured to:
updating display parameters of the region selection box in response to the second input;
determining a second dynamic area and a second static area based on the area framed by the area selection frame after the display parameters are updated;
updating the video image of the second dynamic region, and displaying a second static image in the second static region through the display unit 1806;
the second static image is an image in a second video image, which is located in the second static area, and the second video image is an image collected by a camera at a second moment and used for receiving the second input.
Optionally, the display parameters include at least one of: display area, display position.
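As an informal sketch of how the second input could adjust those display parameters, assuming the area selection frame is an axis-aligned rectangle (a simplification made for this illustration, not a limitation of the application):

```python
def update_selection_box(box, new_pos=None, new_size=None):
    # box is (x, y, w, h); new_pos moves the frame, new_size resizes it.
    x, y, w, h = box
    if new_pos is not None:
        x, y = new_pos      # updated display position
    if new_size is not None:
        w, h = new_size     # updated display area
    return (x, y, w, h)
```

The second dynamic area would then be the region framed by the returned rectangle, and the second static area its complement.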
Optionally, the first dynamic region is the first video region;
after receiving the first input of the user on the shooting preview interface of the video, the user input unit 1807 is further configured to:
receiving a third input of the first dynamic region from a user;
the processor 1810, further configured to:
determining a video object in the first dynamic region as a target object in response to the third input, the video object being a moving object;
determining a third dynamic area and a third static area according to the change information of the display position of the video object in the motion process of the video object; wherein the change information comprises movement of the video object from a first location to a second location; the third dynamic area is an area where the video object is located in a third video image; the third static area is an area except the third dynamic area in the video picture display area;
updating the video image of the third dynamic region, and displaying a third static image in the third static region through the display unit 1806;
the third still image is an image in a third video image, which is located in the third still area, and the third video image is an image acquired by a camera at a third moment and receives the third input.
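The tracking step can likewise be sketched, assuming some object tracker (unspecified in this application) reports a bounding box for the video object in each frame; rebuilding the dynamic-area mask from that box is one plausible reading, not the only possible embodiment.

```python
import numpy as np

def mask_from_bbox(shape, bbox):
    """Rebuild the third dynamic area from the tracked object's box.

    shape: (H, W) of the video picture display area.
    bbox:  (x, y, w, h) of the video object in the current frame, as
           reported by a tracker (a hypothetical input to this sketch).
    """
    h, w = shape
    x, y, bw, bh = bbox
    mask = np.zeros((h, w), dtype=bool)
    mask[max(0, y):min(h, y + bh), max(0, x):min(w, x + bw)] = True
    return mask  # its complement is the third static area
```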
Optionally, the first dynamic region is the first video region, and the video object is located at a third position in the video picture display area;
after receiving the first input of the user on the shooting preview interface of the video, the user input unit 1807 is further configured to:
receiving a fourth input of the video object of the first dynamic region by a user, wherein the fourth input is used for inputting a fourth position;
the processor 1810, further configured to:
in response to the fourth input, intercepting an image of an area where the video object is located in a fourth video image to obtain a second local video image;
updating the second local video image to the fourth position for display through the display unit 1806;
determining the area where the video object at the fourth position is located as a fourth dynamic area, and determining the area except the fourth dynamic area in the video picture display area as a fourth static area;
updating the video image of the fourth dynamic region, and displaying a fourth static image in the fourth static region through the display unit 1806;
the fourth still image is an image in a fourth video image, which is located in the fourth still area, and the fourth video image is an image collected by the camera at a fourth moment of receiving the fourth input.
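One way to read this relocation step is a cut-and-paste composite: the object's patch is cut out of the live frame and pasted at the user-chosen fourth position over the frozen background. The sketch below assumes NumPy arrays and in-bounds coordinates; the names are illustrative.

```python
import numpy as np

def relocate_object(live_frame, frozen_frame, bbox, dest_xy):
    x, y, w, h = bbox                      # area where the video object is
    patch = live_frame[y:y + h, x:x + w]   # the "second local video image"
    out = frozen_frame.copy()
    dx, dy = dest_xy                       # the fourth position
    out[dy:dy + h, dx:dx + w] = patch      # becomes the fourth dynamic area
    return out
```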
Optionally, the first input comprises a first sliding input of a user on the shooting preview interface;
before the video image of the first dynamic area is updated and the first static image is displayed in the first static area, the user input unit 1807 is further configured to:
acquiring a first sliding track of the first sliding input;
the processor 1810, further configured to:
determining an area surrounded by the first sliding track as a dynamic area, and determining an area except the dynamic area in the video picture display area as a static area;
or, determining an area surrounded by the first sliding track as a static area, and determining an area other than the static area in the video picture display area as a dynamic area.
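Determining the enclosed area from the first sliding track amounts to rasterizing a closed polygon into a mask. A minimal sketch using Pillow's polygon fill, which is one possible implementation rather than the one mandated by the application:

```python
import numpy as np
from PIL import Image, ImageDraw

def mask_from_track(shape, points, enclosed_is_dynamic=True):
    """Rasterize the closed sliding track into a region mask.

    shape:  (H, W) of the video picture display area.
    points: [(x0, y0), (x1, y1), ...] touch samples along the track,
            treated as the vertices of a closed polygon.
    """
    h, w = shape
    canvas = Image.new("1", (w, h), 0)
    ImageDraw.Draw(canvas).polygon(points, fill=1)
    enclosed = np.array(canvas, dtype=bool)
    # The enclosed area may serve as either the dynamic or the static area.
    return enclosed if enclosed_is_dynamic else ~enclosed
```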
Optionally, the first input comprises a second sliding input of the user on the shooting preview interface;
before the video image of the first dynamic area is updated and the first static image is displayed in the first static area, the processor 1810 is further configured to:
obtaining at least one second closed sliding trajectory of the second sliding input;
determining the area where the first object is located as a dynamic area, and determining the area except the dynamic area in the video picture display area as a static area;
or determining the area where the first object is located as a static area, and determining the area except the static area in the video picture display area as a dynamic area;
wherein the first object is a video object in an area enclosed by the second closed sliding track.
Optionally, the first input comprises a first touch input of a user on the shooting preview interface;
before the video image of the first dynamic area is updated and the first static image is displayed in the first static area, the processor 1810 is further configured to:
acquiring a first input position of the first touch input;
determining an area including a preset range of the first input position as a dynamic area, and determining an area except the dynamic area in the video picture display area as a static area;
or determining an area including a preset range of the first input position as a static area, and determining an area except the static area in a video picture display area of the shooting preview interface as a dynamic area.
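The text leaves the preset range open; a fixed-radius disc centered on the first input position is one plausible reading. In the sketch below the radius default is an assumed value, not one taken from the application.

```python
import numpy as np

def mask_around_touch(shape, touch_xy, radius=80):
    # shape is (H, W); touch_xy is the first input position (x, y).
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    tx, ty = touch_xy
    # True inside a disc of the assumed preset radius (in pixels).
    return (xx - tx) ** 2 + (yy - ty) ** 2 <= radius ** 2
```

As described above, either the disc or its complement can play the role of the dynamic area.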
Optionally, the first input is a second touch input of the user on the shooting preview interface;
before the video image of the first dynamic area is updated and the first static image is displayed in the first static area, the processor 1810 is further configured to:
acquiring a second input position of the second touch input;
determining the area where the second object is located as a dynamic area, and determining the area except the dynamic area in the video picture display area as a static area;
or determining the area where the second object is located as a static area, and determining the area except the static area in the video picture display area as a dynamic area;
and the second object is a video object where the second input position is located.
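Selecting the second object from a touch presupposes some segmentation of video objects, which the application does not specify. As a sketch, given candidate object masks from any segmenter (a hypothetical input), the touched object is simply the mask that contains the second input position:

```python
import numpy as np

def object_at_point(object_masks, touch_xy):
    # object_masks: list of H x W boolean masks, one per detected video
    # object (how they are obtained is outside this sketch).
    tx, ty = touch_xy
    for mask in object_masks:
        if mask[ty, tx]:
            return mask  # area where the second object is located
    return None
```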
With the above electronic device, a user can flexibly set any local area in the video shooting preview picture to be static or dynamic; that is, the first dynamic area and the first static area can be selected arbitrarily, the first dynamic area displays a dynamic video image, the first static area displays a static first static image, and a dynamic and static combined image including both the video image and the first static image can finally be generated directly. In this way, one part of the video picture is static while another part is dynamic, so a dynamic and static combined image can be generated directly, simply and flexibly without later video editing, which can improve the interest of video shooting.
The electronic device provides wireless broadband internet access to the user through the network module 1802, such as assisting the user in sending and receiving e-mail, browsing web pages, and accessing streaming media.
The audio output unit 1803 may convert audio data received by the radio frequency unit 1801 or the network module 1802 or stored in the memory 1809 into an audio signal and output as sound. Also, the audio output unit 1803 may provide audio output related to a particular function performed by the electronic device 1800 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1803 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1804 is used to receive audio or video signals. It should be understood that, in the embodiment of the present application, the input unit 1804 may include a graphics processing unit (GPU) 6041 and a microphone 6042, and the graphics processing unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode.
The electronic device 1800 also includes at least one sensor 1805, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 6061 and/or the backlight when the electronic device 1800 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to recognize the attitude of the electronic device (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration recognition related functions (such as pedometer and tapping); the sensors 1805 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 1806 is used to display information input by the user or information provided to the user. The display unit 1806 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1807 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1807 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The interface unit 1808 is an interface for connecting an external device to the electronic device 1800. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1808 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 1800, or may be used to transmit data between the electronic device 1800 and the external device.
The memory 1809 may be used to store software programs as well as various data. The memory 1809 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 1809 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1810 is a control center of the electronic device, and is connected to various parts of the whole electronic device through various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 1809 and calling data stored in the memory 1809, thereby performing overall monitoring of the electronic device. Processor 1810 may include one or more processing units; preferably, the processor 1810 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1810.
Those skilled in the art will appreciate that the electronic device 1800 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically connected to the processor 1810 via a power management system, so as to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 18 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, or combine some components, or have a different arrangement of components, which is not described here again. In the embodiment of the present application, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device (e.g., a bracelet, glasses), a pedometer, and the like.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. A video shooting method, comprising:
receiving a first input of a user on a shooting preview interface of a video in the process of recording the video, wherein the first input is used for selecting a first video area;
in response to the first input, updating the video image of the first dynamic region and displaying a first static image in the first static region;
wherein the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is an area of the video picture display area of the video except the first video area; the first static image is an image in a first video image, which is located in the first static area, and the first video image is an image collected by a camera at a first moment for receiving the first input.
2. The method of claim 1, wherein updating the video image of the first dynamic region comprises:
acquiring a second video image acquired by the camera at a second moment;
intercepting an image in the second video image, which is positioned in the first dynamic area, to obtain a first local video image;
and updating the image of the first dynamic region into the first local video image.
3. The method of claim 1, wherein the first input is a slide input;
after the receiving of the first input of the user on the shooting preview interface of the video, the method further includes:
displaying a region selection box, wherein the region selection box is generated according to the sliding track of the first input.
4. The method of claim 3, wherein after the video image of the first dynamic region is updated in response to the first input and the first static image is displayed in the first static region, the method further comprises:
receiving a second input of the area selection box from the user;
updating display parameters of the region selection box in response to the second input;
determining a second dynamic area and a second static area based on the area framed by the area selection frame after the display parameters are updated;
updating the video image of the second dynamic area and displaying a second static image in the second static area;
the second static image is an image in a second video image, which is located in the second static area, and the second video image is an image collected by a camera at a second moment and used for receiving the second input.
5. The method of claim 4, wherein the display parameters include at least one of: display area, display position.
6. The method according to claim 1, wherein said first dynamic region is said first video region;
after the receiving of the first input of the user on the shooting preview interface of the video, the method further includes:
receiving a third input of the first dynamic region from a user;
determining a video object in the first dynamic region as a target object in response to the third input, the video object being a moving object;
determining a third dynamic area and a third static area according to the change information of the display position of the video object in the motion process of the video object; wherein the change information comprises movement of the video object from a first location to a second location; the third dynamic area is an area where the video object is located in a third video image; the third static area is an area except the third dynamic area in the video picture display area;
updating the video image of the third dynamic area and displaying a third static image in the third static area;
the third still image is an image in a third video image, which is located in the third still area, and the third video image is an image acquired by a camera at a third moment and receives the third input.
7. The method according to claim 1, wherein the first dynamic region is the first video region, and the video object is located at a third position in the video picture display area;
after the receiving of the first input of the user on the shooting preview interface of the video, the method further includes:
receiving a fourth input of the video object of the first dynamic region by a user, wherein the fourth input is used for inputting a fourth position;
in response to the fourth input, intercepting an image of an area where the video object is located in a fourth video image to obtain a second local video image;
updating the second local video image to the fourth position for displaying;
determining the area where the video object at the fourth position is located as a fourth dynamic area, and determining the area except the fourth dynamic area in the video picture display area as a fourth static area;
updating the video image of the fourth dynamic area and displaying a fourth static image in the fourth static area;
the fourth still image is an image in a fourth video image, which is located in the fourth still area, and the fourth video image is an image collected by the camera at a fourth moment of receiving the fourth input.
8. The method of claim 1, wherein the first input comprises a first sliding input by a user on the shooting preview interface;
and before the video image of the first dynamic region is updated and the first static image is displayed in the first static region, the method further comprises:
acquiring a first sliding track of the first sliding input;
determining an area surrounded by the first sliding track as a dynamic area, and determining an area except the dynamic area in the video picture display area as a static area;
or, determining an area surrounded by the first sliding track as a static area, and determining an area other than the static area in the video picture display area as a dynamic area.
9. The method of claim 1, wherein the first input comprises a second sliding input by a user on the shooting preview interface;
and before the video image of the first dynamic region is updated and the first static image is displayed in the first static region, the method further comprises:
obtaining at least one second closed sliding trajectory of the second sliding input;
determining the area where the first object is located as a dynamic area, and determining the area except the dynamic area in the video picture display area as a static area;
or determining the area where the first object is located as a static area, and determining the area except the static area in the video picture display area as a dynamic area;
wherein the first object is a video object in an area enclosed by the second closed sliding track.
10. The method of claim 1, wherein the first input comprises a first touch input by a user on the shooting preview interface;
and before the video image of the first dynamic region is updated and the first static image is displayed in the first static region, the method further comprises:
acquiring a first input position of the first touch input;
determining an area including a preset range of the first input position as a dynamic area, and determining an area except the dynamic area in the video picture display area as a static area;
or determining an area including a preset range of the first input position as a static area, and determining an area except the static area in a video picture display area of the shooting preview interface as a dynamic area.
11. The method of claim 1, wherein the first input is a second touch input by a user on the shooting preview interface;
and before the video image of the first dynamic region is updated and the first static image is displayed in the first static region, the method further comprises:
acquiring a second input position of the second touch input;
determining the area where the second object is located as a dynamic area, and determining the area except the dynamic area in the video picture display area as a static area;
or determining the area where the second object is located as a static area, and determining the area except the static area in the video picture display area as a dynamic area;
and the second object is a video object where the second input position is located.
12. A video shooting apparatus, comprising:
the area selection module is used for receiving a first input of a user on a shooting preview interface of a video in the process of recording the video, wherein the first input is used for selecting a first video area;
a first display module for updating a video image of a first dynamic region in response to the first input;
a second display module for displaying a first static image in a first static area in response to the first input;
wherein the first dynamic area and the first static area are respectively one of the first video area and a second video area, and the second video area is an area of the video picture display area of the video except the first video area; the first static image is an image in a first video image, which is located in the first static area, and the first video image is an image collected by a camera at a first moment for receiving the first input.
13. The apparatus of claim 12, wherein the first display module further comprises:
the image acquisition submodule is used for responding to the first input and acquiring a second video image acquired by the camera at a second moment;
the image intercepting submodule is used for intercepting an image in the second video image, which is positioned in the first dynamic area, so as to obtain a first local video image;
and the image updating submodule is used for updating the image of the first dynamic area into the first local video image.
14. The apparatus of claim 12, wherein the first input is a slide input;
the device further comprises:
a selection frame display module, configured to display a region selection frame after receiving the first input, where the region selection frame is generated according to a sliding track of the first input;
the first receiving module is used for receiving a second input of the region selection frame from a user after the region selection frame is displayed;
a parameter updating module for updating the display parameters of the region selection box in response to the second input;
the area adjusting module is used for determining a second dynamic area and a second static area based on the area framed by the area selection frame after the display parameters are updated;
the first display module is used for updating the video image of the second dynamic area;
the second display module is used for displaying a second static image in the second static area;
the second static image is an image in a second video image, which is located in the second static area, and the second video image is an image collected by a camera at a second moment and used for receiving the second input.
15. The apparatus of claim 14, wherein the display parameters comprise at least one of: display area, display position.
16. The apparatus according to claim 12, wherein said first dynamic region is said first video region;
the device further comprises:
the second receiving module is used for receiving a third input of the first dynamic area from a user after receiving the first input;
an object determination module, configured to determine, in response to the third input, a video object in the first dynamic region as a target object, the video object being a moving object;
the object tracking module is used for determining a third dynamic area and a third static area according to the change information of the display position of the video object in the motion process of the video object; wherein the change information comprises movement of the video object from a first location to a second location; the third dynamic area is an area where the video object is located in a third video image; the third static area is an area except the third dynamic area in the video picture display area;
the first display module is used for updating the video image of the third dynamic area;
the second display module is used for displaying a third static image in the third static area;
the third still image is an image in a third video image, which is located in the third still area, and the third video image is an image acquired by a camera at a third moment and receives the third input.
17. The apparatus according to claim 12, wherein the first dynamic region is the first video region, and the video object is located at a third position in the video picture display area;
the device further comprises:
a third receiving module, configured to receive, after receiving the first input, a fourth input of the video object of the first dynamic region by the user, where the fourth input is used to input a fourth position;
the image acquisition module is used for responding to the fourth input, intercepting an image of an area where the video object is located in a fourth video image to obtain a second local video image;
the image updating module is used for updating the second local video image to the fourth position for displaying;
the area changing module is used for determining the area where the video object at the fourth position is located as a fourth dynamic area and determining the area except the fourth dynamic area in the video picture display area as a fourth static area;
the first display module is used for updating the video image of the fourth dynamic area;
the second display module is used for displaying a fourth static image in the fourth static area;
the fourth still image is an image in a fourth video image, which is located in the fourth still area, and the fourth video image is an image collected by the camera at a fourth moment of receiving the fourth input.
18. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video shooting method according to any one of claims 1 to 11.
CN202111067953.6A 2021-09-13 2021-09-13 Video shooting method and device and electronic equipment Active CN113747073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111067953.6A CN113747073B (en) 2021-09-13 2021-09-13 Video shooting method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111067953.6A CN113747073B (en) 2021-09-13 2021-09-13 Video shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113747073A true CN113747073A (en) 2021-12-03
CN113747073B CN113747073B (en) 2024-02-02

Family

ID=78738270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111067953.6A Active CN113747073B (en) 2021-09-13 2021-09-13 Video shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113747073B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334242A (en) * 2022-08-19 2022-11-11 维沃移动通信有限公司 Video recording method, video recording device, electronic equipment and medium
WO2023151609A1 (en) * 2022-02-10 2023-08-17 维沃移动通信有限公司 Time-lapse photography video recording method and apparatus, and electronic device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08149356A (en) * 1994-11-17 1996-06-07 Canon Inc Moving picture display device
JP2005229370A (en) * 2004-02-13 2005-08-25 Casio Comput Co Ltd Imaging apparatus, image processing method, and program
JP2013115692A (en) * 2011-11-30 2013-06-10 Jvc Kenwood Corp Imaging apparatus and control program for use in imaging apparatus
CN104618572A (en) * 2014-12-19 2015-05-13 广东欧珀移动通信有限公司 Photographing method and device for terminal
CN106657767A (en) * 2016-10-31 2017-05-10 维沃移动通信有限公司 Photographing method and mobile terminal
US20170163929A1 (en) * 2015-12-04 2017-06-08 Livestream LLC Video stream encoding system with live crop editing and recording
CN106851128A (en) * 2017-03-20 2017-06-13 努比亚技术有限公司 A kind of video data handling procedure and device based on dual camera
CN110149479A (en) * 2019-05-20 2019-08-20 上海闻泰信息技术有限公司 Dual camera imaging method, device, terminal and medium
CN111917980A (en) * 2020-07-29 2020-11-10 Oppo(重庆)智能科技有限公司 Photographing control method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN113747073B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN108762954B (en) Object sharing method and mobile terminal
CN108668083B (en) Photographing method and terminal
CN108737904B (en) Video data processing method and mobile terminal
CN110740259B (en) Video processing method and electronic equipment
CN111010510B (en) Shooting control method and device and electronic equipment
US11451706B2 (en) Photographing method and mobile terminal
CN107943390B (en) Character copying method and mobile terminal
CN113360238A (en) Message processing method and device, electronic equipment and storage medium
CN111031398A (en) Video control method and electronic equipment
CN111274777B (en) Thinking guide display method and electronic equipment
US20210096739A1 (en) Method For Editing Text And Mobile Terminal
CN109213407B (en) Screenshot method and terminal equipment
CN107748741B (en) Text editing method and mobile terminal
CN110196668B (en) Information processing method and terminal equipment
CN110933511A (en) Video sharing method, electronic device and medium
CN108132749B (en) Image editing method and mobile terminal
CN110413363B (en) Screenshot method and terminal equipment
CN113747073B (en) Video shooting method and device and electronic equipment
CN108696642B (en) Method for arranging icons and mobile terminal
CN109408072A (en) A kind of application program delet method and terminal device
CN111770374B (en) Video playing method and device
CN112911147A (en) Display control method, display control device and electronic equipment
CN110007821B (en) Operation method and terminal equipment
CN109669710B (en) Note processing method and terminal
CN111061530A (en) Image processing method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant