CN112714253B - Video recording method and device, electronic equipment and readable storage medium - Google Patents

Video recording method and device, electronic equipment and readable storage medium Download PDF

Info

Publication number
CN112714253B
CN112714253B (application CN202011580693.8A)
Authority
CN
China
Prior art keywords
focus
frames
video
recording
focus tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011580693.8A
Other languages
Chinese (zh)
Other versions
CN112714253A (en)
Inventor
张睿 (Zhang Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011580693.8A priority Critical patent/CN112714253B/en
Publication of CN112714253A publication Critical patent/CN112714253A/en
Application granted granted Critical
Publication of CN112714253B publication Critical patent/CN112714253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/675Focus control based on electronic image sensor signals comprising setting of focusing regions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Abstract

The embodiment of the application discloses a video recording method and device, electronic equipment and a readable storage medium, and belongs to the technical field of computers. The specific implementation scheme comprises the following steps: acquiring a plurality of focus tracking objects of a video to be recorded; displaying a plurality of focus tracking frames corresponding to the plurality of focus tracking objects on a video preview interface; and recording the video to be recorded according to the positions of the plurality of focus tracking frames. According to the scheme in the application, the plurality of shot objects can be tracked when the video is recorded, so that the recording effect is improved.

Description

Video recording method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of computers, and particularly relates to a video recording method and device, electronic equipment and a readable storage medium.
Background
With the development of photographing technology, photographing functions are widely applied in various electronic devices. Currently, electronic devices may use a "movie shooting mode" when recording video. However, in this mode, the video focuses on a single subject in the viewfinder and supports tracking of only that single subject, resulting in a poor recording effect.
Disclosure of Invention
An embodiment of the present application provides a video recording method, an apparatus, an electronic device, and a readable storage medium, so as to solve the problem that an existing video recording method is poor in recording effect.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a video recording method, including:
acquiring a plurality of focus tracking objects of a video to be recorded;
displaying a plurality of focus tracking frames corresponding to the plurality of focus tracking objects on a video preview interface;
and recording the video to be recorded according to the positions of the plurality of focus tracking frames.
In a second aspect, an embodiment of the present application provides a video recording apparatus, including:
an acquisition module, used for acquiring a plurality of focus tracking objects of a video to be recorded;
the display module is used for displaying a plurality of focus tracking frames corresponding to the plurality of focus tracking objects on a video preview interface;
and the recording module is used for recording the video to be recorded according to the positions of the plurality of focus tracking frames.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, when a video is recorded, a plurality of focus tracking objects of the video to be recorded can be obtained, a plurality of focus tracking frames corresponding to the plurality of focus tracking objects are displayed on a video preview interface, and the video to be recorded is recorded based on the positions of the plurality of focus tracking frames. Therefore, tracking of a plurality of shot objects can be achieved when videos are recorded, and recording effects are improved.
Drawings
Fig. 1 is a flowchart of a video recording method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a cell phone interface in an example of the application;
FIG. 3 is a second schematic diagram of a handset interface in an example of the present application;
FIG. 4 is a third schematic diagram of a cell phone interface in an example of the present application;
FIG. 5 is a fourth schematic illustration of a cell phone interface in an example of the present application;
FIG. 6 is a fifth schematic view of a cell phone interface in an example of the present application;
FIG. 7 is a sixth schematic representation of a cell phone interface in an example of the present application;
FIG. 8A is a seventh schematic view of a cell phone interface in an example of the present application;
FIG. 8B is an eighth schematic view of a cell phone interface in an example of the present application;
FIG. 9A is a ninth illustration of a cell phone interface in an example of the present application;
FIG. 9B is a schematic diagram of a mobile phone interface of an example of the application;
fig. 10 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of another electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, a first object can be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding related objects are in an "or" relationship.
The video recording method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a flowchart of a video recording method applied to an electronic device according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the steps of:
step 11: and acquiring a plurality of focus tracking objects of the video to be recorded.
Optionally, the plurality of focus tracking objects includes at least two focus tracking objects. In this step, the electronic device may obtain the focus tracking objects of the video to be recorded according to pre-input selection conditions, such as the name, action, and/or location of a focus tracking object; the focus tracking objects may also be obtained according to user input, such as a click operation or a slide operation performed by the user on a focus tracking object on the video preview interface, or an object identifier selected by the user. In this step, the electronic device acquires information of the plurality of focus tracking objects of the video to be recorded.
It is understood that prior to this step 11, the electronic device may receive a video recording instruction from the user to initiate video recording. For example, after the video recording function in the electronic device is started, the user may click the recording button to input a video recording command, and accordingly, the electronic device receives the video recording command.
In a specific example, the video recording process in the present application may be performed in the "movie shooting mode". After the video recording function in the electronic device is started, the user can tap the "movie shooting" button, so that the electronic device enters the movie shooting mode.
Step 12: displaying a corresponding plurality of focus chasing frames of a plurality of focus chasing objects on a video preview interface.
In this embodiment, the focus tracking frame may have a fixed size, or its size may be determined based on the size of the corresponding focus tracking object. For example, taking a person as the focus tracking object, the corresponding focus tracking frame may be sized so that it completely covers the person, as shown in fig. 5.
Optionally, the focus tracking frame in the present application may be a square frame, as shown in fig. 2 or fig. 3, or a frame of another shape, such as a circular frame. The focus tracking frame may be a solid frame, that is, a frame that is visibly displayed in the video preview interface, as shown in fig. 2 or fig. 3, or a virtual frame, for example, a focus tracking frame that is implied by covering the other areas of the interface.
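As an illustrative sketch only (not part of the patented embodiments), the two sizing options described above can be expressed as follows in Kotlin; the type and function names (FocusFrame, fixedSizeFrame, fromBoundingBox) and the default sizes are assumptions introduced here for illustration.

```kotlin
// Hypothetical sketch of the focus tracking frame geometry described above.
// A frame is an axis-aligned rectangle in video preview coordinates.
data class FocusFrame(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val width get() = right - left
    val height get() = bottom - top
    val area get() = width * height
}

// Fixed-size frame centered on the tracked object's center point.
fun fixedSizeFrame(centerX: Float, centerY: Float, size: Float = 200f): FocusFrame =
    FocusFrame(centerX - size / 2, centerY - size / 2, centerX + size / 2, centerY + size / 2)

// Frame derived from the object's detected bounding box, padded so the object
// (e.g. a person, as in fig. 5) is completely covered.
fun fromBoundingBox(box: FocusFrame, padding: Float = 20f): FocusFrame =
    FocusFrame(box.left - padding, box.top - padding, box.right + padding, box.bottom + padding)
```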
Step 13: and recording the video to be recorded according to the positions of the plurality of focus tracking frames.
In this embodiment, when the video is recorded, simultaneous focus tracking may be performed on the plurality of corresponding focus tracking objects according to the positions of the plurality of focus tracking frames; that is, the video is recorded using the picture framed by the plurality of focus tracking frames as the recording content, so that the plurality of focus tracking objects are highlighted.
According to the video recording method, when the video is recorded, the multiple focus following objects of the video to be recorded can be obtained, the multiple focus following frames corresponding to the multiple focus following objects are displayed on the video preview interface, and the video to be recorded is recorded based on the positions of the multiple focus following frames. Therefore, tracking of a plurality of shot objects can be achieved when videos are recorded, and recording effects are improved.
In the embodiment of the application, in order to help the user understand the recording effect, during video recording a mask may be used to cover the areas of the video preview interface other than the multiple focus tracking frames, so that the video preview interface shows the recording content obtained when the multiple focus tracking objects are focused simultaneously. Optionally, the mask is, for example, a translucent black mask.
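A minimal sketch of the masking rule just described, reusing the hypothetical FocusFrame type from the earlier sketch: a preview point is covered by the mask only if it lies outside every focus tracking frame.

```kotlin
// Hypothetical sketch: the semi-transparent mask covers every preview point
// that lies outside all of the focus tracking frames.
fun isMasked(x: Float, y: Float, frames: List<FocusFrame>): Boolean =
    frames.none { x in it.left..it.right && y in it.top..it.bottom }
```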
In the embodiment of the application, the electronic device can acquire a plurality of focus-following objects of the video to be recorded according to the input of the user. Optionally, step 11 may include:
receiving a first input of a user on a video preview interface;
in response to the first input, a plurality of focus following objects of the video to be recorded are acquired.
For example, the first input may be a click operation, a slide operation, or the like of the user on the focus tracking object on the video preview interface, or may also be a selection operation of the user on an object identifier displayed on the video preview interface. For example, if a portrait identifier and a pet identifier are displayed on the video preview interface, after the user selects the portrait identifier, it may be determined that an object corresponding to the portrait identifier is a focus-following object; or after the user selects the pet identifier, the object corresponding to the pet identifier can be determined to be the focus tracking object. Therefore, the required focus tracking object can be accurately determined by means of user input, and the video recording effect is guaranteed.
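Purely as an illustration of how a tap-type first input might be resolved to a focus tracking object, assuming the preview already exposes detected candidate objects with bounding boxes (the detection source, the identifiers, and the function name are assumptions, not part of the disclosure):

```kotlin
// Hypothetical sketch: map a tap on the video preview to a detected object.
// `detections` pairs a candidate object id with its bounding box (e.g. from a
// subject detector); returns null when the tap hits none of them.
fun objectAtTap(tapX: Float, tapY: Float, detections: Map<String, FocusFrame>): String? =
    detections.entries.firstOrNull { (_, box) ->
        tapX in box.left..box.right && tapY in box.top..box.bottom
    }?.key
```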
It should be noted that, in the embodiment of the present application, besides having the user select all the required focus tracking objects, the electronic device may, after a certain focus tracking object is determined, automatically identify other focus tracking objects according to characteristic information of that focus tracking object, such as motion, sound, or scene, so as to perform synchronous focus tracking.
Optionally, the acquiring the multiple focus-following objects in step 11 may include:
acquiring a first focus tracking object; for example, the first focus tracking object may be determined by an input operation of a user;
determining a second focus tracking object according to the characteristic information of the first focus tracking object; wherein the plurality of focus-following objects include a first focus-following object and a second focus-following object, that is, the second focus-following object is a focus-following object other than the first focus-following object in the plurality of focus-following objects.
For example, taking a case in which the plurality of focus tracking objects include focus tracking person 1 and focus tracking person 2, after focus tracking person 1 is determined, focus tracking person 2 can be determined based on the direction in which person 1 faces and/or a name of a person spoken by person 1.
Therefore, other focus tracking objects are further determined by means of the determined focus tracking objects, the relevance between the focus tracking objects which are simultaneously focused can be ensured, and the video recording effect is improved.
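One plausible, purely illustrative reading of determining the second focus tracking object from the direction the first one faces is to pick the nearest other detected object inside a cone around that direction; the angle threshold, the Detected type, and the source of the facing angle are all assumptions, not the patented method itself.

```kotlin
import kotlin.math.PI
import kotlin.math.abs
import kotlin.math.atan2
import kotlin.math.hypot

// Hypothetical sketch: choose a second focus tracking object as the nearest other
// detected object lying within +/- 45 degrees of the direction the first object faces.
// `facingAngle` (radians) is assumed to come from a face/pose orientation detector.
data class Detected(val id: String, val cx: Float, val cy: Float)

fun inferSecondObject(first: Detected, facingAngle: Double, candidates: List<Detected>): Detected? =
    candidates.filter { it.id != first.id }
        .filter {
            val angleTo = atan2((it.cy - first.cy).toDouble(), (it.cx - first.cx).toDouble())
            var diff = abs(angleTo - facingAngle)
            if (diff > PI) diff = 2 * PI - diff
            diff <= PI / 4
        }
        .minByOrNull { hypot((it.cx - first.cx).toDouble(), (it.cy - first.cy).toDouble()) }
```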
In the embodiment of the application, in order to switch shots more accurately, the focus tracking frames can be merged and separated according to actual requirements. Optionally, the recording of the video to be recorded according to the multiple focus tracking frames may include:
combining the multiple focus tracking frames into at least one target focus tracking frame;
and recording the video to be recorded according to the position of the at least one target focus tracking frame.
Further, after combining the multiple focus tracking frames into at least one target focus tracking frame, the electronic device may further separate the at least one target focus tracking frame into multiple focus tracking frames, and record the video to be recorded according to the positions of the separated multiple focus tracking frames. And for the separated focus tracking frames, there can be one focus tracking object in each focus tracking frame.
In this way, by merging and separating the focus tracking frames, the effect of the corresponding focus tracking objects being shown together can be obtained after the frames are merged, and a close-up of a single focus tracking object can be obtained after the frames are separated, so that a better recording effect is achieved.
Optionally, in the embodiments of the present application, the focus tracking frames may be merged and separated in, but not limited to, the following ways, which are detailed below.
Mode one
In this manner, the electronic device may automatically merge the corresponding focus tracking frames when multiple focus tracking objects are close, and automatically separate the frames when the objects move apart. For example, taking focus tracking object A and focus tracking object B as an example, when object A and object B come close to each other, the focus tracking frame of object A and the focus tracking frame of object B may be merged into one frame, for example focus tracking frame m. Further, when object A and object B move apart, focus tracking frame m may be separated into two frames, that is, the focus tracking frame of object A and the focus tracking frame of object B.
As an example, focus tracking object A and focus tracking object B being close means that the image area occupied by object A and object B, that is, the sum of the image areas occupied by the focus tracking frame of object A and the focus tracking frame of object B when not merged, reaches a certain proportion of the image area that the merged focus tracking frame would occupy. The proportion can be set according to the screen content or a user operation.
As an example, focus tracking object A and focus tracking object B being far apart means that the image area occupied by object A and object B, that is, the sum of the image areas occupied by the focus tracking frame of object A and the focus tracking frame of object B when not merged, is smaller than a certain proportion of the image area occupied by the merged focus tracking frame. The proportion can be set according to the screen content or a user operation.
Optionally, the merging the plurality of focus chase frames into the at least one target focus chase frame may include: when the ratio of the sum of the image areas occupied by the plurality of focus-following frames to the first image area is larger than a first threshold, the electronic equipment merges the plurality of focus-following frames into one target focus-following frame. The first image area is an image area occupied by the plurality of focus tracking frames after being combined. The first threshold value may be set according to screen content or user operation.
Further, after combining the multiple focus tracking frames into one target focus tracking frame, the electronic device may further separate the one target focus tracking frame into multiple focus tracking frames when a ratio of a sum of image areas occupied by the multiple focus tracking frames to the first image area is smaller than a second threshold, and record the video to be recorded according to the separated multiple focus tracking frames. The second threshold may be set according to screen content or user operation. In particular, to avoid the occurrence of flicker, the second threshold is lower than the first threshold.
Therefore, in the video recording process, the combination and separation of the focus tracking frames can be automatically realized by means of preset conditions, so that the recording effect is improved.
For example, suppose the plurality of focus tracking frames include the focus tracking frame of focus tracking object A and the focus tracking frame of focus tracking object B, the sum of the image areas occupied by the two focus tracking frames is 52 cm², the merged focus tracking frame would occupy an image area of 86 cm², and the first threshold is 60%. Since the ratio of 52 to 86 is greater than 60%, the condition for merging the focus tracking frames is satisfied, and the focus tracking frame of object A and the focus tracking frame of object B can be merged into one target focus tracking frame.
Further, after the frames are merged into one target focus tracking frame, suppose that, as focus tracking object A and focus tracking object B move apart, the sum of the image areas occupied by the focus tracking frame of object A and the focus tracking frame of object B drops to 44 cm², the merged focus tracking frame still occupies an image area of 86 cm², and the second threshold is 55%. Since the ratio of 44 to 86 is less than 55%, the condition for separating the focus tracking frames is satisfied, the target focus tracking frame can be separated into the focus tracking frame of object A and the focus tracking frame of object B, and video recording can continue according to the separated focus tracking frames.
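A minimal sketch of the area-ratio rule with the two thresholds described above (the 60% and 55% values follow the worked example; the function names and the choice of the merged frame as the bounding rectangle of the individual frames are assumptions):

```kotlin
// Hypothetical sketch of mode one: merge/separate decision with hysteresis.
// The merged frame is taken here as the bounding rectangle of the individual frames.
fun union(frames: List<FocusFrame>): FocusFrame = FocusFrame(
    frames.minOf { it.left }, frames.minOf { it.top },
    frames.maxOf { it.right }, frames.maxOf { it.bottom }
)

// Returns true when the frames should be shown merged. `currentlyMerged` carries the
// previous state; keeping the second threshold below the first gives the decision
// hysteresis, so it does not flicker when the ratio hovers near one boundary.
fun shouldMerge(
    frames: List<FocusFrame>,
    currentlyMerged: Boolean,
    mergeThreshold: Double = 0.60,    // first threshold, as in the example above
    separateThreshold: Double = 0.55  // second threshold, lower than the first
): Boolean {
    val ratio = frames.sumOf { it.area.toDouble() } / union(frames).area
    return if (currentlyMerged) ratio >= separateThreshold else ratio > mergeThreshold
}
```

With the figures from the example above, 52/86 ≈ 0.60 exceeds the 60% merge threshold, while 44/86 ≈ 0.51 falls below the 55% separation threshold.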
Mode two
In the second mode, the electronic device can merge and separate the focus tracking frames according to user input. The user input for merging the focus tracking frames may include, but is not limited to, a two-finger pinch operation, a multi-finger (three or more fingers) pinch operation, and the like. The user input for separating the focus tracking frames may include, but is not limited to, a two-finger spread operation, a multi-finger (three or more fingers) spread operation, and the like.
Optionally, the merging of the plurality of focus tracking frames into the at least one target focus tracking frame may include: the electronic device receives a second input of the user on the video preview interface and, in response to the second input, merges the plurality of focus tracking frames into at least one target focus tracking frame. The second input is, for example, a two-finger pinch operation, a multi-finger pinch operation, or the like performed by the user on the plurality of focus tracking frames on the video preview interface.
Further, after combining the multiple focus tracking frames into one target focus tracking frame, the electronic device may further receive a third input of the user on the video preview interface, separate the at least one target focus tracking frame into the multiple focus tracking frames in response to the third input, and record the video to be recorded according to the positions of the separated focus tracking frames. The third input is, for example, a two-finger spread operation, a multi-finger spread operation, or the like performed by the user on the target focus tracking frame on the video preview interface.
Therefore, in the process of recording the video, the combination and separation of the specific focus tracking frames can be realized by means of user input, so that the recording effect is improved.
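As a sketch of mode two only, a two-pointer gesture can be classified by comparing the distance between the pointers at touch-down and at lift-off; the slop value and all names here are hypothetical, and a real implementation on a mobile platform would more likely build on the platform's own gesture detection facilities.

```kotlin
import kotlin.math.hypot

enum class FrameGesture { MERGE, SEPARATE, NONE }

// Hypothetical sketch: classify a two-finger gesture from the first two pointer
// positions at touch-down (`start`) and at lift-off (`end`). A shrinking span is
// read as a pinch (merge the frames), a growing span as a spread (separate them);
// changes smaller than `slop` are ignored.
fun classifyGesture(
    start: List<Pair<Float, Float>>,
    end: List<Pair<Float, Float>>,
    slop: Double = 40.0
): FrameGesture {
    if (start.size < 2 || end.size < 2) return FrameGesture.NONE
    fun span(p: List<Pair<Float, Float>>) =
        hypot((p[0].first - p[1].first).toDouble(), (p[0].second - p[1].second).toDouble())
    val delta = span(end) - span(start)
    return when {
        delta < -slop -> FrameGesture.MERGE
        delta > slop -> FrameGesture.SEPARATE
        else -> FrameGesture.NONE
    }
}
```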
The present application will now be described in detail with reference to the following examples and accompanying drawings.
Example one
In this first example, taking the electronic device as a mobile phone and performing simultaneous focus tracking on focus tracking object A and focus tracking object B as an example, the corresponding video recording process may include:
s1: the mobile phone enters a movie shooting mode based on the operation of clicking a 'movie shooting' button by a user, and receives a video recording instruction based on the operation of clicking a recording button by the user.
S2: in the process of recording a video, as shown in fig. 2 and 3, by clicking the focus tracking object a on the preview interface by the user, the mobile phone can display a focus tracking frame corresponding to the focus tracking object a on the preview interface, and the preview interface outside the focus tracking frame is covered by the translucent black mask.
S3: as shown in fig. 4 and 5, by clicking the focus tracking object B on the preview interface by the user, the mobile phone may further display a focus tracking frame corresponding to the focus tracking object B on the preview interface, and the preview interface outside the focus tracking frame is covered by the translucent black mask.
S4: when the close proximity of the tracking object a and the tracking object B satisfies the condition for tracking frame merging, as shown in fig. 6, the tracking frame of the tracking object a and the tracking frame of the tracking object B may be merged into one tracking frame, and video recording may be performed based on the position of the merged tracking frame.
S5: when the tracking object a and the tracking object B are far from each other and the condition for tracking frame separation is satisfied, as shown in fig. 7, the combined tracking frame may be separated into the tracking frame of the tracking object a and the tracking frame of the tracking object B, and video recording may be performed based on the separated tracking frames.
Example two
In this second example, taking the electronic device as a mobile phone and performing simultaneous focus tracking on focus tracking object A and focus tracking object B as an example, the corresponding video recording process may include:
s1: the mobile phone enters a movie shooting mode based on the operation of clicking a 'movie shooting' button by a user, and receives a video recording instruction based on the operation of clicking a recording button by the user.
S2: in the process of recording a video, as shown in fig. 2 and 3, by clicking the focus tracking object a on the preview interface by the user, the mobile phone can display a focus tracking frame corresponding to the focus tracking object a on the preview interface, and the preview interface outside the focus tracking frame is covered by the translucent black mask.
S3: as shown in fig. 4 and 5, by clicking the focus tracking object B on the preview interface by the user, the mobile phone may further display a focus tracking frame corresponding to the focus tracking object B on the preview interface, and the preview interface outside the focus tracking frame is covered by the translucent black mask.
S4: by the user performing a double-finger pinch operation on the focus chase frames of the focus object a and the focus object B as shown in fig. 8A or performing a multi-finger pinch operation as shown in fig. 8B, it is possible to merge the chase frame of the focus object a and the chase frame of the focus object B into one chase frame as shown in fig. 6 and perform video recording based on the merged chase frames.
S5: by the user performing the double-finger expansion operation on the merged focus chase frame as shown in fig. 9A or performing the multi-finger expansion operation as shown in fig. 9B, the merged focus chase frame can be separated into a focus chase frame corresponding to the focus chase object a and a focus chase frame corresponding to the focus chase object B as shown in fig. 7, and video recording can be performed based on the separated focus chase frames.
It should be noted that, in the video recording method provided in the embodiment of the present application, the execution subject may be a video recording apparatus, or a control module in the video recording apparatus for executing the video recording method. In the embodiment of the present application, a video recording apparatus executing the video recording method is taken as an example to describe the video recording apparatus provided in the embodiment of the present application.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present disclosure, and is applied to an electronic device. As shown in fig. 10, the video recording apparatus 100 includes:
the acquisition module 101 is configured to acquire multiple focus tracking objects of a video to be recorded;
the display module 102 is configured to display, on a video preview interface, a plurality of focus tracking frames corresponding to the plurality of focus tracking objects;
and the recording module 103 is configured to record the video to be recorded according to the positions of the multiple focus tracking frames.
Optionally, the obtaining module 101 includes:
the first receiving unit is used for receiving a first input of a user on the video preview interface;
a first determination unit to acquire the plurality of focus-following objects in response to the first input.
Optionally, the obtaining module 101 includes:
an acquisition unit configured to acquire a first focus following object;
a second determination unit configured to determine a second focus tracking object according to the feature information of the first focus tracking object; wherein the plurality of focus-following objects includes the first focus-following object and the second focus-following object.
Optionally, the recording module 103 includes:
a merging unit configured to merge the plurality of focus tracking frames into at least one target focus tracking frame;
and the first recording unit is used for recording the video to be recorded according to the position of the at least one target focus tracking frame.
Optionally, the merging unit is specifically configured to: when the ratio of the sum of the image areas occupied by the plurality of focus-following frames to the first image area is larger than a first threshold value, combining the plurality of focus-following frames into a target focus-following frame; the first image area is an image area occupied by the plurality of focus-following frames after being combined.
Optionally, the recording module 103 further includes:
a first separating unit, configured to separate the target focus tracking frame into multiple focus tracking frames when a ratio of a sum of image areas occupied by the multiple focus tracking frames to the first image area is smaller than a second threshold;
and the second recording unit is used for recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
Optionally, the merging unit includes:
the receiving subunit is used for receiving a second input of the user on the video preview interface;
a merging subunit, configured to merge the plurality of focus tracking frames into at least one target focus tracking frame in response to the second input.
Optionally, the recording module 103 further includes:
the second receiving unit is used for receiving a third input of the user on the video preview interface;
a second separating unit for separating the at least one target focus tracking frame into a plurality of focus tracking frames in response to the third input;
and the third recording unit is used for recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
The video recording apparatus 100 provided in this embodiment of the application can implement each process implemented in the method embodiment shown in fig. 1, and achieve the same technical effect, and for avoiding repetition, details are not repeated here.
The video recording device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The video recording apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
Optionally, as shown in fig. 11, an electronic device 110 is further provided in this embodiment of the present application, and includes a processor 111, a memory 112, and a program or an instruction stored in the memory 112 and executable on the processor 111, where the program or the instruction is executed by the processor 111 to implement each process of the above-mentioned video recording method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 12 is a schematic hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 1200 includes, but is not limited to: radio frequency unit 1201, network module 1202, audio output unit 1203, input unit 1204, sensors 1205, display unit 1206, user input unit 1207, interface unit 1208, memory 1209, and processor 1210.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1210 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 12 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1210 is configured to acquire a plurality of focus tracking objects of a video to be recorded;
a display unit 1206, configured to display, on a video preview interface, a plurality of focus tracking frames corresponding to a plurality of focus tracking objects;
the processor 1210 is further configured to record the video to be recorded according to the positions of the multiple focus tracking frames.
According to the scheme in the embodiment, the plurality of shot objects can be tracked when the video is recorded, so that the recording effect is improved.
Optionally, the user input unit 1207 is further configured to receive a first input of the user on the video preview interface;
a processor 1210 further configured to acquire the plurality of focus-following objects in response to the first input.
Optionally, the processor 1210 is further configured to acquire a first focus tracking object; determining a second focus tracking object according to the characteristic information of the first focus tracking object; wherein the plurality of focus tracking objects includes the first focus tracking object and the second focus tracking object.
Optionally, the processor 1210 is further configured to merge the plurality of focus chasing frames into at least one target focus chasing frame; and recording the video to be recorded according to the position of the at least one target focus tracking frame.
Optionally, the processor 1210 is further configured to merge the multiple focus-following frames into one target focus-following frame when a ratio of a sum of image areas occupied by the multiple focus-following frames to the first image area is greater than a first threshold; the first image area is an image area occupied by the plurality of focus-following frames after being combined.
Optionally, the processor 1210 is further configured to separate the target focus tracking frame into the multiple focus tracking frames when a ratio of a sum of image areas occupied by the multiple focus tracking frames to the first image area is smaller than a second threshold; and recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
Optionally, the user input unit 1207 is further configured to receive a second input from the user on the video preview interface;
a processor 1210 further configured to merge the plurality of chase boxes into at least one target chase box in response to the second input.
Optionally, the user input unit 1207 is further configured to receive a third input from the user on the video preview interface;
a processor 1210 further configured to separate the at least one target focus tracking frame into a plurality of focus tracking frames in response to the third input; and recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
The electronic device 1200 provided in the embodiment of the present application may implement each process implemented in the method embodiment shown in fig. 1, and achieve the same technical effect, and for avoiding repetition, the details are not repeated here.
It should be understood that, in the embodiment of the present application, the input Unit 1204 may include a Graphics Processing Unit (GPU) 12041 and a microphone 12042, and the Graphics Processing Unit 12041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1207 includes a touch panel 12071 and other input devices 12072. A touch panel 12071, also referred to as a touch screen. The touch panel 12071 may include two parts of a touch detection device and a touch controller. Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1209 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 1210 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video recording method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned video recording method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

1. A method for video recording, comprising:
acquiring a plurality of focus tracking objects of a video to be recorded;
displaying a plurality of focus tracking frames corresponding to the plurality of focus tracking objects on a video preview interface;
recording the video to be recorded according to the positions of the plurality of focus tracking frames;
the acquiring of the multiple focus-following objects of the video to be recorded comprises:
acquiring a first focus tracking object;
determining a second focus tracking object based on a direction in which the first focus tracking object faces or a name of a person spoken by the first focus tracking object; wherein the plurality of focus tracking objects include the first focus tracking object and the second focus tracking object.
2. The method according to claim 1, wherein the recording the video to be recorded according to the positions of the plurality of focus tracking frames comprises:
merging the plurality of focus tracking frames into at least one target focus tracking frame;
and recording the video to be recorded according to the position of the at least one target focus tracking frame.
3. The method of claim 2, wherein said merging the plurality of focus tracking frames into at least one target focus tracking frame comprises:
when the ratio of the sum of the image areas occupied by the plurality of focus-following frames to the first image area is larger than a first threshold value, combining the plurality of focus-following frames into a target focus-following frame; the first image area is an image area occupied by the plurality of focus-following frames after being combined.
4. The method according to claim 3, wherein after merging the plurality of focus tracking frames into one target focus tracking frame, the recording of the video to be recorded according to the positions of the plurality of focus tracking frames further comprises:
when the ratio of the sum of the image areas occupied by the plurality of focus-following frames to the first image area is smaller than a second threshold value, separating the target focus-following frame into a plurality of focus-following frames;
and recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
5. The method of claim 2, wherein said merging the plurality of focus tracking frames into at least one target focus tracking frame comprises:
receiving a second input of the user on the video preview interface;
in response to the second input, merging the plurality of focus tracking frames into at least one target focus tracking frame.
6. The method according to claim 2, wherein after merging the plurality of focus tracking frames into at least one target focus tracking frame, the recording of the video to be recorded according to the plurality of focus tracking frames further comprises:
receiving a third input of a user on the video preview interface;
separating the at least one target focus tracking frame into a plurality of focus tracking frames in response to the third input;
and recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
7. A video recording apparatus, comprising:
an acquisition module, used for acquiring a plurality of focus tracking objects of a video to be recorded;
the display module is used for displaying a plurality of focus tracking frames corresponding to the plurality of focus tracking objects on a video preview interface;
the recording module is used for recording the video to be recorded according to the positions of the plurality of focus tracking frames;
the acquisition module includes:
an acquisition unit configured to acquire a first focus following object;
a second determination unit, configured to determine a second focus tracking object based on a direction in which the first focus tracking object faces or a name of a person spoken by the first focus tracking object; wherein the plurality of focus tracking objects include the first focus tracking object and the second focus tracking object.
8. The apparatus of claim 7, wherein the recording module comprises:
a merging unit configured to merge the plurality of focus tracking frames into at least one target focus tracking frame;
and the first recording unit is used for recording the video to be recorded according to the position of the at least one target focus tracking frame.
9. The apparatus of claim 8,
the merging unit is specifically configured to: when the ratio of the sum of the image areas occupied by the plurality of focus-following frames to the first image area is larger than a first threshold value, combining the plurality of focus-following frames into a target focus-following frame; the first image area is an image area occupied by the plurality of focus-following frames after being combined.
10. The apparatus of claim 9, wherein the recording module further comprises:
a first separating unit, configured to separate the target focus tracking frame into multiple focus tracking frames when a ratio of a sum of image areas occupied by the multiple focus tracking frames to the first image area is smaller than a second threshold;
and the second recording unit is used for recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
11. The apparatus of claim 8, wherein the merging unit comprises:
the receiving subunit is used for receiving a second input of the user on the video preview interface;
a merging subunit, configured to merge the plurality of focus tracking frames into at least one target focus tracking frame in response to the second input.
12. The apparatus of claim 8, wherein the recording module further comprises:
the second receiving unit is used for receiving a third input of the user on the video preview interface;
a second separating unit for separating the at least one target focus tracking frame into a plurality of focus tracking frames in response to the third input;
and the third recording unit is used for recording the video to be recorded according to the positions of the separated multiple focus tracking frames.
13. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the video recording method as claimed in any one of claims 1-6.
14. A readable storage medium, characterized in that a program or instructions are stored thereon which, when executed by a processor, carry out the steps of the video recording method according to any one of claims 1-6.
CN202011580693.8A 2020-12-28 2020-12-28 Video recording method and device, electronic equipment and readable storage medium Active CN112714253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011580693.8A CN112714253B (en) 2020-12-28 2020-12-28 Video recording method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011580693.8A CN112714253B (en) 2020-12-28 2020-12-28 Video recording method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN112714253A (en) 2021-04-27
CN112714253B (en) 2022-08-26

Family

ID=75546975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011580693.8A Active CN112714253B (en) 2020-12-28 2020-12-28 Video recording method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112714253B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113573157B (en) * 2021-07-23 2023-09-12 维沃移动通信(杭州)有限公司 Video recording method, video recording device, electronic apparatus, and readable storage medium
CN114500851A (en) * 2022-02-23 2022-05-13 广州博冠信息科技有限公司 Video recording method and device, storage medium and electronic equipment
CN116095460B (en) * 2022-05-25 2023-11-21 荣耀终端有限公司 Video recording method, device and storage medium
CN116112781B (en) * 2022-05-25 2023-12-01 荣耀终端有限公司 Video recording method, device and storage medium
CN116055866B (en) * 2022-05-30 2023-09-12 荣耀终端有限公司 Shooting method and related electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010230871A (en) * 2009-03-26 2010-10-14 Fujifilm Corp Auto focus system
CN103813098A (en) * 2012-11-12 2014-05-21 三星电子株式会社 Method and apparatus for shooting and storing multi-focused image in electronic device
WO2015120673A1 (en) * 2014-02-11 2015-08-20 惠州Tcl移动通信有限公司 Method, system and photographing equipment for controlling focusing in photographing by means of eyeball tracking technology
CN105704389A (en) * 2016-04-12 2016-06-22 上海斐讯数据通信技术有限公司 Intelligent photo taking method and device
CN106095748A (en) * 2016-06-06 2016-11-09 东软集团股份有限公司 A kind of method and device generating event relation collection of illustrative plates
CN107360387A (en) * 2017-07-13 2017-11-17 广东小天才科技有限公司 The method, apparatus and terminal device of a kind of video record
CN107734149A (en) * 2017-09-25 2018-02-23 努比亚技术有限公司 A kind of image pickup method, terminal and computer-readable recording medium
CN111292773A (en) * 2020-01-13 2020-06-16 北京大米未来科技有限公司 Audio and video synthesis method and device, electronic equipment and medium
CN111669503A (en) * 2020-06-29 2020-09-15 维沃移动通信有限公司 Photographing method and device, electronic equipment and medium
WO2020248900A1 (en) * 2019-06-10 2020-12-17 北京字节跳动网络技术有限公司 Panoramic video processing method and apparatus, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791535B (en) * 2016-11-28 2020-07-14 阿里巴巴(中国)有限公司 Video recording method and device
CN107633235B (en) * 2017-09-27 2020-12-25 Oppo广东移动通信有限公司 Unlocking control method and related product
CN109918969B (en) * 2017-12-12 2021-03-05 深圳云天励飞技术有限公司 Face detection method and device, computer device and computer readable storage medium
CN109151312A (en) * 2018-09-04 2019-01-04 广州视源电子科技股份有限公司 Focusing method, device and video presenter
CN109597431B (en) * 2018-11-05 2020-08-04 视联动力信息技术股份有限公司 Target tracking method and device
CN109640020A (en) * 2018-12-19 2019-04-16 努比亚技术有限公司 A kind of video record control method, terminal and computer readable storage medium
CN110532984B (en) * 2019-09-02 2022-10-11 北京旷视科技有限公司 Key point detection method, gesture recognition method, device and system
CN110677592B (en) * 2019-10-31 2022-06-10 Oppo广东移动通信有限公司 Subject focusing method and device, computer equipment and storage medium
CN111654622B (en) * 2020-05-28 2022-10-14 维沃移动通信有限公司 Shooting focusing method and device, electronic equipment and storage medium
CN111787259B (en) * 2020-07-17 2021-11-23 北京字节跳动网络技术有限公司 Video recording method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010230871A (en) * 2009-03-26 2010-10-14 Fujifilm Corp Auto focus system
CN103813098A (en) * 2012-11-12 2014-05-21 三星电子株式会社 Method and apparatus for shooting and storing multi-focused image in electronic device
WO2015120673A1 (en) * 2014-02-11 2015-08-20 惠州Tcl移动通信有限公司 Method, system and photographing equipment for controlling focusing in photographing by means of eyeball tracking technology
CN105704389A (en) * 2016-04-12 2016-06-22 上海斐讯数据通信技术有限公司 Intelligent photo taking method and device
CN106095748A (en) * 2016-06-06 2016-11-09 东软集团股份有限公司 A kind of method and device generating event relation collection of illustrative plates
CN107360387A (en) * 2017-07-13 2017-11-17 广东小天才科技有限公司 The method, apparatus and terminal device of a kind of video record
CN107734149A (en) * 2017-09-25 2018-02-23 努比亚技术有限公司 A kind of image pickup method, terminal and computer-readable recording medium
WO2020248900A1 (en) * 2019-06-10 2020-12-17 北京字节跳动网络技术有限公司 Panoramic video processing method and apparatus, and storage medium
CN111292773A (en) * 2020-01-13 2020-06-16 北京大米未来科技有限公司 Audio and video synthesis method and device, electronic equipment and medium
CN111669503A (en) * 2020-06-29 2020-09-15 维沃移动通信有限公司 Photographing method and device, electronic equipment and medium

Also Published As

Publication number Publication date
CN112714253A (en) 2021-04-27

Similar Documents

Publication Publication Date Title
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
CN112954210B (en) Photographing method and device, electronic equipment and medium
KR20230160397A (en) Shooting interface display methods, devices, electronic devices and media
CN112954214B (en) Shooting method, shooting device, electronic equipment and storage medium
CN112135046A (en) Video shooting method, video shooting device and electronic equipment
CN113873151A (en) Video recording method and device and electronic equipment
CN111669495B (en) Photographing method, photographing device and electronic equipment
CN112911147A (en) Display control method, display control device and electronic equipment
CN113794829A (en) Shooting method and device and electronic equipment
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN112449110B (en) Image processing method and device and electronic equipment
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114125297B (en) Video shooting method, device, electronic equipment and storage medium
CN113407144B (en) Display control method and device
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112367467B (en) Display control method, display control device, electronic apparatus, and medium
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN114245017A (en) Shooting method and device and electronic equipment
CN112653841B (en) Shooting method and device and electronic equipment
CN113891018A (en) Shooting method and device and electronic equipment
CN113596329A (en) Photographing method and photographing apparatus
CN112165584A (en) Video recording method, video recording device, electronic equipment and readable storage medium
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN114157810B (en) Shooting method, shooting device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant