CN116132617A - Video recording method, device, electronic equipment and storage medium - Google Patents
- Publication number: CN116132617A
- Application number: CN202310153417.0A
- Authority
- CN
- China
- Prior art keywords
- video
- intensity adjustment
- jitter
- offset
- preview
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
Abstract
The application discloses a video recording method, a video recording apparatus, an electronic device and a storage medium, and belongs to the technical field of image processing. The method comprises the following steps: receiving a first input for a first video, the first video comprising N preview frames and M buffer frames; and in response to the first input, displaying P jitter intensity adjustment identifiers on a video recording preview interface. Each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients, which are used to compensate video jitter generated during video recording. The jitter intensity adjustment coefficients are generated from video frame data sets, each of which comprises the N preview frames and a buffer frame set; the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
Description
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a video recording method, a video recording device, electronic equipment and a storage medium.
Background
With the continuous development of terminal device technology, more and more users record video with terminal devices. To ensure the quality of the recorded video, video anti-shake processing is very important.
In the related art, anti-shake parameters for video anti-shake processing are often written into a terminal device in advance. When different users record video while holding the terminal device, the anti-shake parameters fit each user to a different degree; if the pre-written anti-shake parameters do not suit a user well, the finally recorded video picture may shake severely.
Disclosure of Invention
The embodiments of the application aim to provide a video recording method, a video recording apparatus, an electronic device and a storage medium, which can solve the problem that recorded video pictures shake severely when a user is poorly matched by the anti-shake parameters written into the terminal device in advance.
In a first aspect, an embodiment of the present application provides a video recording method, including:
receiving a first input for a first video, the first video comprising N preview frames and M buffer frames;
in response to the first input, displaying P jitter intensity adjustment identifiers on a video recording preview interface, wherein each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients used to compensate video jitter generated during video recording; the jitter intensity adjustment coefficients are generated from video frame data sets, each of which comprises the N preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
In a second aspect, an embodiment of the present application provides a video recording apparatus, including:
a first receiving module, configured to receive a first input for a first video, the first video comprising N preview frames and M buffer frames;
a first display module, configured to display, in response to the first input, P jitter intensity adjustment identifiers on a video recording preview interface, wherein each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients used to compensate video jitter generated during video recording; the jitter intensity adjustment coefficients are generated from video frame data sets, each of which comprises the N preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a first video shot by the user in advance is imported, multiple groups of video frame data sets are constructed from the N preview frames and different numbers of buffer frames in the first video, and multiple groups of jitter intensity adjustment coefficients that better fit the user's shooting habits are generated from these video frame data sets; the corresponding jitter intensity adjustment identifiers are then displayed on the video recording interface, so that the user can select a suitable jitter intensity adjustment coefficient as needed when recording video, effectively ensuring the stability of the recorded video picture.
Drawings
Fig. 1 is a schematic flow chart of a video recording method according to an embodiment of the present application;
FIG. 2 is a first schematic diagram of a display provided in an embodiment of the present application;
FIG. 3 is a second schematic diagram of a display provided in an embodiment of the present application;
FIG. 4 is a third schematic diagram of a display provided in an embodiment of the present application;
FIG. 5 is a fourth schematic diagram of a display provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application are within the scope of the protection of the present application.
The terms "first", "second" and the like in the description and claims are used to distinguish similar objects and do not necessarily describe a particular sequential or chronological order. It is to be understood that terms so used are interchangeable where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. The objects distinguished by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video recording method, the device, the electronic equipment and the storage medium provided by the embodiment of the application are described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a video recording method according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps.
the first video described in the embodiments of the present application may specifically be a video recorded by a user in advance, where the user is recording a video, and in an embodiment, the first video may also be a part of video content in a video that is continuously shot.
The first video may be a video stored locally at the terminal device or may be a video stored at the cloud server.
During video recording, a video preview helps the user follow the content being recorded. Because the preview has a high real-time requirement, not all video frames collected during recording can be previewed; only some of them are displayed. The video frames used for the video preview are the preview frames, and the buffer frames are the video frames cached during the preview process, that is, the recorded video frames that are not shown in the video preview.
In the embodiments of the application, the N preview frames are the video frames of the recorded first video that are used for the video preview, and the M buffer frames are the video frames of the recorded first video that are not used for the video preview; it can be understood that the buffer frames may be the video frames recorded between the preview frames.
The terminal device receives the first input for the first video. The terminal device may be any device with a video recording function, such as a mobile phone, a camera or a tablet computer; the type of terminal device is not specifically limited in the embodiments of the application.
In an alternative embodiment, the first input may be an operation of taking, as the first video, part of the video content that has already been recorded during the current video recording, and generating multiple groups of jitter intensity adjustment coefficients based on it.
In another alternative embodiment, the first input may be an operation of importing a first video and generating multiple groups of jitter intensity adjustment coefficients based on it; this operation is referred to below as the first operation.
The video recording preview interface may include a video import identifier. The first operation includes, but is not limited to, the user tapping the video import identifier with a finger, a stylus or another touch device, then selecting the video file corresponding to the first video and confirming the import. Alternatively, the first input may be a voice command input by the user, a specific gesture input by the user, or another feasible input, which may be determined according to actual use requirements and is not limited in the embodiments of the invention. The specific gesture in the embodiments of the application may be any one of a single-tap gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture and a double-tap gesture; the click input in the embodiments of the application may be a single-click input, a double-click input, or an input with any number of clicks.
The terminal device may further generate a plurality of groups of video frame data sets according to the N preview frames and the different number of buffered frames in the first video after importing the first video in response to the first input.
In some embodiments, the number of buffer frames in each video frame data set may be set by the user, or may be generated according to a preset scheme; for example, each video frame data set may contain 3 more buffer frames than the previous one. In theory, the more buffer frames a video frame data set uses, the smaller the generated jitter intensity adjustment coefficient and the higher the anti-shake intensity.
For example, if the first video includes 5 preview frames and 50 buffer frames and three video frame data sets are generated, the first video frame data set may include the 5 preview frames and the 3 buffer frames adjacent to each preview frame, i.e., 15 buffer frames; the second video frame data set includes the 5 preview frames and the 5 buffer frames adjacent to each preview frame, i.e., 25 buffer frames; and the third video frame data set includes the 5 preview frames and the 7 buffer frames adjacent to each preview frame, i.e., 35 buffer frames.
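The grouping of preview frames with different numbers of adjacent buffer frames can be sketched in Python. This is a minimal illustration only, not part of the patent; the function name, frame labels and counts are all hypothetical:

```python
# Minimal sketch of building video frame data sets: each set pairs the same
# N preview frames with a different number of adjacent buffer frames.
# All names and counts here are illustrative, not from the patent text.

def build_data_sets(preview_frames, buffers_per_preview, neighbor_counts):
    """For each count k, pair every preview frame with its k adjacent
    buffer frames, yielding one video frame data set per count."""
    data_sets = []
    for k in neighbor_counts:
        data_sets.append({
            "preview_frames": list(preview_frames),
            "buffer_frames": [buf[:k] for buf in buffers_per_preview],
        })
    return data_sets

previews = [f"preview_{i}" for i in range(5)]
# 10 buffer frames recorded around each preview frame (5 x 10 = 50 in total)
buffers = [[f"buf_{i}_{j}" for j in range(10)] for i in range(5)]

sets_ = build_data_sets(previews, buffers, neighbor_counts=[3, 5, 7])
totals = [(len(s["preview_frames"]), sum(len(b) for b in s["buffer_frames"]))
          for s in sets_]
# totals -> [(5, 15), (5, 25), (5, 35)], matching the example above
```

With 3, 5 and 7 neighbors per preview frame this reproduces the 15/25/35 buffer-frame split described in the example.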
In the embodiments of the application, after multiple groups of video frame data sets are generated from the N preview frames and different numbers of buffer frames of the first video, a corresponding jitter intensity adjustment coefficient may be generated for each video frame data set; for example, if 5 video frame data sets are generated from the first video, 5 groups of jitter intensity adjustment coefficients are generated.
In an alternative embodiment, the jitter intensity adjustment coefficient is a coefficient for compensating video jitter occurring during video recording. The jitter intensity adjustment coefficient may be an anti-shake coefficient matrix containing anti-shake adjustment coefficients for the X, Y, pitch, yaw and roll directions, so that jitter in these five directions can be compensated during video recording, effectively ensuring the stability of video shooting. In an alternative embodiment, after multiple groups of jitter intensity adjustment coefficients are generated, each may be assigned an identifier according to the number of buffer frames it was generated from; for example, the jitter intensity adjustment coefficient obtained without buffer frames is numbered 0, the one obtained with 15 buffer frames is numbered 1, and the one obtained with 30 buffer frames is numbered 2, thereby obtaining the corresponding jitter position information.
In an optional embodiment, after generating a plurality of sets of jitter intensity adjustment coefficients, sorting is performed according to the adjustment values of the adjustment coefficients, so as to generate corresponding sorting information.
In an alternative embodiment, the P jitter intensity adjustment identifiers in the video recording preview interface may be displayed in a bar-shaped slidable page, and the user may select different jitter intensity adjustment identifiers by sliding the page.
In an alternative embodiment, P jitter intensity adjustment identifiers in the video recording preview interface may be displayed in a scattered manner, and the user may click on or long press the jitter intensity adjustment identifier that he wishes to select to set the corresponding jitter intensity adjustment coefficient.
In the embodiments of the application, a video recording preview interface can be displayed on the display screen of the terminal device during video recording. Through it the user can view the preview frames of the video being recorded and thus follow the shot content, and the jitter intensity adjustment identifiers for adjusting the video recording parameters can be displayed in the same interface, making adjustment convenient during recording.
Because users perceive jitter intensity differently, a user can adjust the jitter intensity through the jitter intensity adjustment identifiers during video recording and select the jitter intensity adjustment coefficient that looks best to record the video, obtaining the final video recording output.
In an alternative embodiment, after the user sets a jitter intensity adjustment coefficient, the mobile terminal uses that coefficient by default for anti-shake processing when the user records video again.
In the embodiments of the application, a first video shot by the user in advance is imported, multiple groups of video frame data sets are constructed from the N preview frames and different numbers of buffer frames in the first video, multiple groups of jitter intensity adjustment coefficients that better fit the user's shooting habits are generated from these data sets, and the corresponding jitter intensity adjustment identifiers are displayed on the video recording interface, so that the user can select a suitable jitter intensity adjustment coefficient as needed when recording video, effectively ensuring the stability of the recorded video picture.
Optionally, the method for generating the jitter intensity adjustment coefficient specifically includes:
Acquiring first motion offsets of the buffer frames and the preview frames in the video frame data set, and determining first difference values of the first motion offsets and target motion offsets, wherein the target motion offsets are minimum values in the first motion offsets;
and determining the jitter intensity adjustment coefficient according to the total frame number of the buffer frame and the preview frame in the video frame data group and the absolute value of each first difference value.
In the embodiments of the application, the first motion offset is the motion-estimation offset of a recorded buffer frame or preview frame relative to the first video frame of the recording. For each video frame data set, the minimum among the first motion offsets of its buffer frames and preview frames is selected and taken as the target motion offset. It can be understood that, because the frames included in each video frame data set differ, the minimum offset of each video frame data set may be different.
In one embodiment, a motion estimation curve is constructed for each video frame data set from the first motion offsets of its buffer frames and preview frames, and the minimum value of the motion estimation curve is taken as the target motion offset of that video frame data set.
In this embodiment, the difference between the first motion offset P_i of each buffer frame and preview frame and the target motion offset P_min may be computed, yielding a plurality of first differences (P_i - P_min).
In one embodiment, the total frame number of the buffer frame and the preview frame in the video frame data set refers to a sum of the buffer frame number and the preview frame number in one video frame data set.
In one embodiment, the absolute values of the first differences in a video frame data set may be summed and the sum divided by the total frame number, giving the jitter intensity adjustment coefficient of that video frame data set, which may specifically be:

K = ( Σ_i |P_i - P_min| ) / frame

wherein frame is the total frame number of the video frame data set, which may specifically be the sum of the numbers of preview frames and buffer frames in the video frame data set, P_i is the first motion offset of the i-th frame of the video recording, and P_min is the target motion offset.
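The coefficient calculation described above can be written out as a short Python sketch. The function name and the offset values are invented for illustration; only the formula itself comes from the text:

```python
def jitter_intensity_coefficient(offsets):
    """K = sum_i |P_i - P_min| / frame for one video frame data set,
    where offsets holds the first motion offset of every buffer and
    preview frame and P_min is the target (minimum) motion offset."""
    p_min = min(offsets)   # target motion offset
    frame = len(offsets)   # total frame number of the data set
    return sum(abs(p - p_min) for p in offsets) / frame

# illustrative per-frame motion offsets (e.g. in pixels) for 6 frames
offsets = [2.0, 1.0, 3.0, 1.5, 2.5, 1.0]
k = jitter_intensity_coefficient(offsets)  # (1.0+0+2.0+0.5+1.5+0)/6
```

A data set whose frames barely move relative to the first frame yields a small K, consistent with the claim that more buffer frames (and hence smoother estimates) give a smaller coefficient.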
In the embodiment of the present application, different preview frames are included in different video frame data sets, so different jitter intensity adjustment coefficients can be obtained in the above manner.
In one embodiment, since the terminal device needs anti-shake in the X, Y, pitch, yaw and roll directions, the motion offsets in these five directions are acquired when the first motion offset is acquired, and the resulting jitter intensity adjustment coefficient is a matrix K containing the jitter intensity adjustment values for the X, Y, pitch, yaw and roll directions, specifically:

K = (k_x, k_y, k_pitch, k_yaw, k_roll)^T
In the embodiments of the application, the jitter intensity adjustment coefficient is computed from the total frame number of buffer frames and preview frames in a video frame data set and the first differences between the first motion offsets of those frames and the target motion offset. Multiple jitter intensity coefficients can be computed from the different numbers of buffer frames in the video frame data sets, giving the user more choices during video recording.
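The five-direction matrix K can be sketched by applying the same scalar formula independently per direction. This is a hypothetical illustration; the patent does not prescribe this data layout or these offset values:

```python
AXES = ("x", "y", "pitch", "yaw", "roll")

def jitter_matrix(per_axis_offsets):
    """Apply the scalar coefficient formula per direction, producing
    K = (k_x, k_y, k_pitch, k_yaw, k_roll)^T, here held as a dict."""
    matrix = {}
    for axis in AXES:
        p = per_axis_offsets[axis]
        p_min = min(p)  # target motion offset for this direction
        matrix[axis] = sum(abs(v - p_min) for v in p) / len(p)
    return matrix

# invented three-frame offsets per direction, just to exercise the function
per_axis = {
    "x": [2.0, 1.0, 3.0],
    "y": [0.5, 0.5, 0.5],
    "pitch": [1.0, 2.0, 1.0],
    "yaw": [0.0, 1.0, 2.0],
    "roll": [3.0, 3.0, 4.0],
}
K = jitter_matrix(per_axis)  # K["y"] is 0.0 since y never deviates
```

Treating each direction independently keeps the per-direction coefficient comparable to the scalar formula above.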
Optionally, in response to the first input, after displaying the P jitter intensity adjustment identifiers on the video recording preview interface, the method further includes:
receiving a second input of a target jitter intensity adjustment identifier in the P jitter intensity adjustment identifiers;
and responding to the second input, and recording the video according to a first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier, wherein the first jitter intensity adjustment coefficient is used for performing jitter compensation on the recorded video in the video recording process.
In one embodiment, the terminal device receives a second input of the target jitter intensity adjustment identifier from the P jitter intensity adjustment identifiers, where the second input is used to determine the target jitter intensity adjustment identifier for selection, and the second input may be a click input, a long press input, a sliding input, a voice command input or other input capable of implementing a response function, which is not limited in this embodiment.
In response to the user's second input, the terminal device records video according to the first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier. It can be understood that, because the preview frames have a strong real-time requirement in the embodiments of the application, the first jitter intensity adjustment coefficient is not a coefficient for adjusting the preview frames alone; rather, it applies jitter compensation to all recorded and saved video frames.
After the user finishes the setting, the first jitter intensity adjustment coefficient is used as the default video recording parameter of the terminal device, so that the user does not have to set it again each time a video is recorded, effectively reducing the user's operation steps.
In an alternative embodiment, fig. 2 is one of display schematic diagrams provided in the embodiment of the present application, as shown in fig. 2, including: the anti-shake intensity setting identifier 211 may be displayed in the video recording preview interface 21, after the user clicks the anti-shake intensity setting identifier 211, the terminal device responds to the input to display the video import interface 22, and after the user imports the first video in the video import interface 22, P shake intensity adjustment identifiers 212 are displayed in the video recording preview interface 21.
In the embodiments of the application, the user can switch the target jitter intensity adjustment identifier among the P jitter intensity adjustment identifiers at any time, effectively selecting the jitter intensity adjustment coefficient that suits them and thereby ensuring the stability and quality of the final video recording.
Optionally, after the step of recording the video according to the first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier, the method further includes:
receiving a third input, wherein the third input is used for amplifying a target shooting area in the video recording preview interface according to a target amplification factor;
and responding to the third input, and controlling the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video picture to be close to each other in the process of amplifying the target shooting area.
In the embodiments of the application, the preview frames have a high real-time requirement, so the jitter intensity adjustment coefficient of the preview frames displayed in the video recording preview interface cannot be computed using buffer frames, whereas the jitter intensity adjustment coefficient of the video recording is computed with buffer frames involved. The anti-shake intensity of the preview is therefore always far lower than that of the video recording, and the lower the anti-shake intensity, the larger the overall offset of the picture while the recorded video is being magnified.
The terminal device receives a third input, which is an operation of magnifying the target shooting area in the video recording preview interface according to a target magnification; the target magnification may specifically be a magnification factor for the target shooting area, e.g., a magnification of 2x, meaning the target shooting area is magnified to twice its size.
In one embodiment, the target capture area is an area that appears in the video capture preview interface, and the target capture area may also be a focus area during video recording.
In one embodiment, the third input may specifically be an operation of sliding and amplifying the target shooting area by two fingers, an operation of dragging the magnification mark, a voice command input capable of realizing a response function, and the like.
In response to the third input, the terminal device magnifies the target shooting area. In one embodiment, because the jitter intensity adjustment coefficient of the video preview is determined from the preview frames alone while the jitter intensity adjustment coefficient for video recording is computed jointly from the buffer frames and preview frames, the different coefficients may cause the preview picture and the recorded picture to drift apart during magnification. In this case, the mismatch between the previewed picture and the recorded picture can be adjusted according to the first jitter intensity adjustment coefficient set by the user.
In one embodiment, the first center point may refer to an interface center point of the video recording preview interface, and in an alternative embodiment, the first center point may also be a center point set by a user in the video preview interface.
The second center point may be a focal point of the terminal device in the target shooting area during video recording in one embodiment, or may be a point manually selected by the user in an alternative embodiment.
In the embodiments of the application, the mismatch between the previewed picture and the recorded picture may stem from an offset of the preview picture or an offset of the recorded picture. Therefore, the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video frame can be controlled to approach each other, so that the video preview picture and the recorded picture are both shifted toward the corresponding direction, which resolves the mismatch between the previewed and recorded pictures to a certain extent.
Optionally, controlling the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video frame to be close to each other includes:
Determining a first offset of the first center point and a second offset of the second center point based on the first jitter intensity adjustment coefficient and a preview jitter intensity adjustment coefficient, the preview jitter intensity adjustment coefficient being a jitter intensity adjustment coefficient generated based on a video frame data set including only N frames of the preview frames;
and controlling the direction of the first center point towards the second center point, performing offset adjustment according to the first offset, controlling the direction of the second center point towards the first center point, and performing offset adjustment according to the second offset.
In the embodiments of the application, the preview jitter intensity adjustment coefficient is the jitter intensity adjustment coefficient of the video preview, computed from the preview frames of the first video; specifically, it is computed by the above jitter intensity adjustment coefficient calculation method over a video frame data set containing only all the preview frames of the first video.
In the embodiment of the present application, since there may be an additional buffered frame in the video frame data set that generates the first jitter intensity adjustment coefficient, there may be a difference between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient.
In the embodiments of the application, the degree of offset of the video preview picture in the video recording preview interface relative to the video recording picture is T, where T = K1 - K2, K1 is the first jitter intensity adjustment coefficient and K2 is the preview jitter intensity adjustment coefficient.
In one embodiment, the magnification also affects, during magnification, the offset of the video preview picture in the video recording preview interface relative to the video recording picture: the larger the magnification, the larger the offset, which may specifically be magnification x T; for example, at a magnification of 2x the corresponding offset is 2T.
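The magnification-scaled offset T can be illustrated with scalar values (a rough sketch; per the text above the real coefficients are five-direction matrices, and the numbers here are invented):

```python
def preview_record_offset(k1, k2, magnification):
    """Offset of the preview picture relative to the recorded picture:
    T = K1 - K2, scaled by the current magnification."""
    return magnification * (k1 - k2)

# e.g. K1 = 0.9 (recording, with buffer frames), K2 = 0.4 (preview only)
t = preview_record_offset(0.9, 0.4, magnification=2.0)  # roughly 2 x 0.5
```

Doubling the magnification doubles the offset, matching the "magnification x T" relation in the text.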
Fig. 3 is a second schematic diagram of the display provided in the embodiment of the present application, as shown in fig. 3, including a first center point 31 and a second center point 32, where an offset is generated between the first center point and the second center point during the magnification process.
In this embodiment of the present application, the first offset and the second offset may take different values; when their respective degrees of offset cannot be determined separately, they may take the same value.
After the first offset and the second offset are determined, the first center point of the video recording preview interface can be controlled to shift towards the second center point by the first offset during magnification; correspondingly, the second center point of the target shooting area in the video recording picture shifts towards the first center point by the second offset. That is, the first center point and the second center point approach each other during magnification, and the video preview image in the video preview interface and the picture of the recorded video frame are adjusted accordingly.
In one embodiment, when the magnification adjustment completes, the first center point and the second center point have also completed their offset adjustment and may coincide on the final display interface. That is, once magnification is complete, the content displayed on the video preview interface is identical to the content of the recorded video frame, which avoids the situation where, because the jitter intensity adjustment coefficients differ, the previewed content is inconsistent with the recorded content after magnification.
Fig. 4 is a third schematic diagram of the display provided in the embodiment of the present application. As shown in fig. 4, the first center point 31 and the second center point 32 may be displayed in the original video recording preview interface 21. If the offset between the first center point 31 and the second center point 32 is T before zooming in, then, without offset adjustment, the offset between them becomes 2T in the video recording preview interface 41 magnified twofold.
Fig. 5 is a fourth schematic diagram of the display provided in the embodiment of the present application. As shown in fig. 5, during magnification of the original video recording preview interface 21, if the second center point 32 of the target shooting area lies towards the lower left corner of the video recording preview interface 21, the first center point 31 is shifted correspondingly in the lower-left direction, yielding the shifted first center point 33 in the twofold-magnified video recording preview interface 41. Likewise, during magnification of the target shooting area 51, if the first center point 31 lies towards the upper right corner of the target shooting area 51, the second center point 32 is shifted towards that upper right corner, yielding the shifted second center point 34 in the twofold-magnified target shooting area 52. Finally, in the twofold-magnified video recording preview interface 41, the shifted second center point 34 and the shifted first center point 33 may coincide. The scenery the user wants to enlarge is then located at the same position on the preview interface and in the recorded video, which solves the problem of preview and recording being inconsistent under high zoom.
In the embodiment of the present application, the offset adjustment starts when the user begins magnification and stops when the magnification ends, so the picture offset produced by the offset compensation is synchronized with the magnification process and causes no visual discomfort to the user.
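As a rough sketch of this synchronized convergence (the helper name and the linear interpolation scheme are assumptions, not taken from the embodiment), each center point can travel half of the gap as the zoom progresses, so the two points coincide exactly when the magnification adjustment completes:

```python
def converge_centers(preview_center, record_center, progress):
    """Move each center point toward the other as the zoom progresses.

    progress: 0.0 when the user starts magnification, 1.0 when it ends.
    Each point covers half of the gap scaled by progress, so at
    progress == 1.0 both points land on the midpoint and coincide.
    """
    (px, py), (rx, ry) = preview_center, record_center
    step = 0.5 * progress
    new_preview = (px + (rx - px) * step, py + (ry - py) * step)
    new_record = (rx + (px - rx) * step, ry + (py - ry) * step)
    return new_preview, new_record

# At the end of the zoom the two center points coincide at the midpoint.
p_end, r_end = converge_centers((0.0, 0.0), (4.0, 2.0), 1.0)
```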
Optionally, determining the first offset of the first center point and the second offset of the second center point based on the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient includes:
obtaining offset degree information based on a difference value between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient;
and determining a first offset of the first center point and a second offset of the second center point based on the amplification factor for amplifying the target shooting area and the offset degree information, wherein the first offset and the second offset are equal.
In this embodiment of the present application, since there may be a certain difference between the preview video frame and the actually magnified recorded picture during video capture, the adjustment may be performed using the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient.
In this embodiment of the present application, the degree of offset of the recorded video frame relative to the preview video frame may be obtained from the difference between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient, yielding the offset degree information.
In the embodiment of the present application, because different magnification factors further affect the offset of the recorded video frame relative to the preview video frame, the overall offset may be determined as the product of the magnification factor applied to the target shooting area and the offset degree information.
In an alternative embodiment, since both the first center point and the second center point need to be adjusted, the first offset and the second offset may be made equal to keep the adjustment balanced; that is, each of the first offset of the first center point and the second offset of the second center point is half of the overall offset.
In the embodiment of the application, the overall offset is determined from the difference between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient combined with the magnification factor of the target shooting area, and the first offset of the first center point and the second offset of the second center point are then obtained from the overall offset. This effectively avoids the situation where, because the jitter intensity adjustment coefficients differ, the content of the video preview is inconsistent with the content of the recorded video after magnification.
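The offset computation can be summarized as follows. The function name is hypothetical, and the sketch assumes the equal-split variant described above: offset degree information is the difference between the two coefficients, the overall offset is magnification × offset degree, and each center point takes half of it.

```python
def split_center_offsets(k1: float, k2: float, magnification: float):
    """Return (first_offset, second_offset) for the two center points.

    Offset degree information is the difference between the first jitter
    intensity adjustment coefficient k1 and the preview jitter intensity
    adjustment coefficient k2; the overall offset scales with the
    magnification factor and is split equally between the two points.
    """
    offset_degree = abs(k1 - k2)
    overall = magnification * offset_degree
    return overall / 2.0, overall / 2.0

first, second = split_center_offsets(1.5, 1.2, magnification=2.0)
# first == second: each center point moves half of the overall offset
```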
For the video recording method provided by the embodiment of the application, the execution subject may be a video recording apparatus. In the embodiments of the present application, the video recording apparatus provided herein is described by taking as an example a video recording apparatus performing the video recording method.
Fig. 6 is a schematic structural diagram of a video recording apparatus according to an embodiment of the present application, as shown in fig. 6, including:
the first receiving module 610 is configured to receive a first input of a first video, where the first video includes N frame preview frames and M frame buffer frames;
the first display module 620 is configured to display P jitter intensity adjustment identifiers on a video recording preview interface in response to the first input; each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients, the jitter intensity adjustment coefficients are used for compensating video jitter generated in the video recording process, the jitter intensity adjustment coefficients are generated according to a video frame data set, each video frame data set comprises N frames of preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
Optionally, the method for generating the jitter intensity adjustment coefficient specifically includes:
acquiring first motion offsets of the buffer frames and the preview frames in the video frame data set, and determining first difference values of the first motion offsets and target motion offsets, wherein the target motion offsets are minimum values in the first motion offsets;
and determining the jitter intensity adjustment coefficient according to the total number of buffer frames and preview frames in the video frame data set and the absolute value of each first difference value.
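A minimal sketch of this generation method follows, under an assumed aggregation: the embodiment specifies only that the coefficient is determined from the total frame count and the absolute differences, so taking their mean is one plausible reading, and the function name is hypothetical.

```python
def jitter_intensity_coefficient(motion_offsets: list) -> float:
    """Generate a jitter intensity adjustment coefficient for one data set.

    motion_offsets: first motion offset of every buffer frame and preview
    frame in the video frame data set. The target motion offset is the
    minimum of these; the coefficient aggregates the absolute deviations
    from that minimum over the total number of frames.
    """
    target = min(motion_offsets)                       # target motion offset
    diffs = [abs(o - target) for o in motion_offsets]  # |first differences|
    return sum(diffs) / len(motion_offsets)            # mean over total frames

# A steadier data set (smaller deviations) yields a smaller coefficient.
steady = jitter_intensity_coefficient([2.0, 2.5, 2.0, 2.5])
shaky = jitter_intensity_coefficient([2.0, 6.0, 1.0, 7.0])
```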
Optionally, the apparatus further comprises:
the second receiving module is used for receiving a second input of a target jitter intensity adjustment identifier in the P jitter intensity adjustment identifiers;
and the recording module is used for responding to the second input and recording video according to a first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier, wherein the first jitter intensity adjustment coefficient is used for performing jitter compensation on the recorded video in the video recording process.
Optionally, the apparatus further comprises:
the third receiving module is used for receiving a third input, and the third input is used for amplifying a target shooting area in the video recording preview interface according to a target amplification factor;
And the control module is used for responding to the third input and controlling the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video frame to be close to each other in the process of amplifying the target shooting area.
Optionally, the control module is specifically configured to:
determining a first offset of the first center point and a second offset of the second center point based on the first jitter intensity adjustment coefficient and a preview jitter intensity adjustment coefficient, the preview jitter intensity adjustment coefficient being a jitter intensity adjustment coefficient generated based on a video frame data set including only N frames of the preview frames;
and controlling the first center point to shift towards the second center point by the first offset, and controlling the second center point to shift towards the first center point by the second offset.
Optionally, the control module is specifically configured to:
obtaining offset degree information based on a difference value between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient;
and determining a first offset of the first center point and a second offset of the second center point based on the amplification factor for amplifying the target shooting area and the offset degree information, wherein the first offset and the second offset are equal.
In the embodiment of the application, the first video shot by the user in advance is imported; multiple video frame data sets are then constructed from the N preview frames and different numbers of buffer frames in the first video; multiple groups of jitter intensity adjustment coefficients that better fit the user's shooting habits are generated from these video frame data sets; and the corresponding jitter intensity adjustment identifiers are displayed on the video recording interface. The user can thus select a suitable jitter intensity adjustment coefficient as needed when recording video, which effectively ensures the stability of the recorded video picture.
The video recording apparatus in the embodiment of the present application may be an electronic device, or a component in an electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) device, robot, wearable device, ultra-mobile personal computer (Ultra-Mobile Personal Computer, UMPC), netbook or personal digital assistant (Personal Digital Assistant, PDA), and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (Personal Computer, PC), television (TV), teller machine or self-service machine, etc.; the embodiments of the present application are not specifically limited thereto.
The video recording apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video recording apparatus provided in this embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 5; to avoid repetition, details are not repeated here.
Optionally, fig. 7 is a schematic structural diagram of an electronic device provided in the embodiment of the present application. As shown in fig. 7, an electronic device 700 is further provided, including a processor 701 and a memory 702, where the memory 702 stores a program or an instruction capable of running on the processor 701. When executed by the processor 701, the program or instruction implements each step of the video recording method embodiments described above and achieves the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: radio frequency unit 801, network module 802, audio output unit 803, input unit 804, sensor 805, display unit 806, user input unit 807, interface unit 808, memory 809, and processor 810.
Those skilled in the art will appreciate that the electronic device 800 may also include a power source (such as a battery) for powering the various components, and the power source may be logically connected to the processor 810 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently, which is not described in detail here.
The user input unit 807 is configured to receive a first input of a first video, which includes N-frame preview frames and M-frame buffer frames;
the display unit 806 is configured to display P jitter intensity adjustment identifiers on a video recording preview interface in response to the first input; each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients, the jitter intensity adjustment coefficients are used for compensating video jitter generated in the video recording process, the jitter intensity adjustment coefficients are generated according to a video frame data set, each video frame data set comprises N frames of preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
The processor 810 is configured to obtain first motion offsets of the buffer frames and the preview frames in the video frame data set, determine first differences between the first motion offsets and target motion offsets, and the target motion offset is a minimum value in the first motion offsets;
and determining the jitter intensity adjustment coefficient according to the total frame number of the buffer frame and the preview frame in the video frame data group and the absolute value of each first difference value.
The user input unit 807 is configured to receive a second input of a target jitter intensity adjustment identifier of the P jitter intensity adjustment identifiers;
the processor 810 is configured to record video according to a first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier in response to the second input, where the first jitter intensity adjustment coefficient is used for performing jitter compensation on the recorded video during the video recording process.
The user input unit 807 is configured to receive a third input for enlarging a target shooting area in the video recording preview interface according to a target magnification;
the processor 810 is configured to control, in response to the third input, a first center point of the video recording preview interface and a second center point of the target shooting area in the recorded video frame to be close to each other in a process of zooming in the target shooting area.
The processor 810 is configured to determine a first offset of the first center point and a second offset of the second center point based on the first jitter intensity adjustment coefficient and a preview jitter intensity adjustment coefficient, the preview jitter intensity adjustment coefficient being a jitter intensity adjustment coefficient generated based on a video frame data set including only N frames of the preview frames;
and controlling the first center point to shift towards the second center point by the first offset, and controlling the second center point to shift towards the first center point by the second offset.
The processor 810 is configured to obtain offset degree information based on a difference between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient;
and determining a first offset of the first center point and a second offset of the second center point based on the amplification factor for amplifying the target shooting area and the offset degree information, wherein the first offset and the second offset are equal.
In the embodiment of the application, the first video shot by the user in advance is imported; multiple video frame data sets are then constructed from the N preview frames and different numbers of buffer frames in the first video; multiple groups of jitter intensity adjustment coefficients that better fit the user's shooting habits are generated from these video frame data sets; and the corresponding jitter intensity adjustment identifiers are displayed on the video recording interface. The user can thus select a suitable jitter intensity adjustment coefficient as needed when recording video, which effectively ensures the stability of the recorded video picture.
It should be appreciated that, in embodiments of the present application, the input unit 804 may include a graphics processing unit (Graphics Processing Unit, GPU) 8041 and a microphone 8042, the graphics processor 8041 processing image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 809 may be used to store software programs as well as various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 809 may include volatile or nonvolatile memory, or both. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (Programmable ROM, PROM), an erasable PROM (Erasable PROM, EPROM), an electrically erasable PROM (Electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), a static RAM (Static RAM, SRAM), a dynamic RAM (Dynamic RAM, DRAM), a synchronous DRAM (Synchronous DRAM, SDRAM), a double data rate SDRAM (Double Data Rate SDRAM, DDR SDRAM), an enhanced SDRAM (Enhanced SDRAM, ESDRAM), a synchlink DRAM (Synchlink DRAM, SLDRAM), or a direct Rambus RAM (Direct Rambus RAM, DRRAM). The memory 809 in embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 810 may include one or more processing units; optionally, the processor 810 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, etc., and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium. When executed by a processor, the program or instruction implements each process of the video recording method embodiments and achieves the same technical effects; to avoid repetition, details are not repeated here.
Wherein the processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the video recording method embodiments described above, with the same technical effects; to avoid repetition, details are not repeated here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video recording method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present application, those of ordinary skill in the art may make many further forms without departing from the spirit of the present application and the scope of the claims, all of which fall within the protection of the present application.
Claims (14)
1. A video recording method, comprising:
receiving a first input of a first video, the first video comprising N frame preview frames and M frame buffer frames;
responding to the first input, and displaying P jitter intensity adjustment identifications on a video recording preview interface; each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients, the jitter intensity adjustment coefficients are used for compensating video jitter generated in the video recording process, the jitter intensity adjustment coefficients are generated according to a video frame data set, each video frame data set comprises N frames of preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
2. The video recording method according to claim 1, wherein the method for generating the jitter intensity adjustment coefficient specifically comprises:
acquiring first motion offsets of the buffer frames and the preview frames in the video frame data set, and determining first difference values of the first motion offsets and target motion offsets, wherein the target motion offsets are minimum values in the first motion offsets;
and determining the jitter intensity adjustment coefficient according to the total number of buffer frames and preview frames in the video frame data set and the absolute value of each first difference value.
3. The video recording method of claim 1, wherein responsive to the first input, after displaying P jitter intensity adjustment identifications at a video recording preview interface, further comprising:
receiving a second input of a target jitter intensity adjustment identifier in the P jitter intensity adjustment identifiers;
and responding to the second input, and recording the video according to a first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier, wherein the first jitter intensity adjustment coefficient is used for performing jitter compensation on the recorded video in the video recording process.
4. The video recording method according to claim 3, further comprising, after the step of recording video according to the first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier:
receiving a third input, wherein the third input is used for amplifying a target shooting area in the video recording preview interface according to a target amplification factor;
and responding to the third input, and controlling the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video frame to be close to each other in the process of amplifying the target shooting area.
5. The video recording method according to claim 4, wherein controlling the first center point of the video recording preview interface and the second center point of the target shooting area in the recorded video frame to be close to each other comprises:
determining a first offset of the first center point and a second offset of the second center point based on the first jitter intensity adjustment coefficient and a preview jitter intensity adjustment coefficient, the preview jitter intensity adjustment coefficient being a jitter intensity adjustment coefficient generated based on a video frame data set including only N frames of the preview frames;
and controlling the first center point to shift towards the second center point by the first offset, and controlling the second center point to shift towards the first center point by the second offset.
6. The video recording method of claim 5, wherein determining a first offset for the first center point and a second offset for the second center point based on the first jitter intensity adjustment factor and a preview jitter intensity adjustment factor comprises:
obtaining offset degree information based on a difference value between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient;
and determining a first offset of the first center point and a second offset of the second center point based on the amplification factor for amplifying the target shooting area and the offset degree information, wherein the first offset and the second offset are equal.
7. A video recording apparatus, comprising:
the first receiving module is used for receiving a first input of a first video, and the first video comprises N frames of preview frames and M frames of buffer frames;
the first display module is used for responding to the first input and displaying P jitter intensity adjustment identifications on a video recording preview interface; each jitter intensity adjustment identifier points to a group of jitter intensity adjustment coefficients, the jitter intensity adjustment coefficients are used for compensating video jitter generated in the video recording process, the jitter intensity adjustment coefficients are generated according to a video frame data set, each video frame data set comprises N frames of preview frames and a buffer frame set, the number of buffer frames in each buffer frame set is different, and M, N and P are positive integers.
8. The video recording apparatus according to claim 7, wherein the jitter intensity adjustment coefficients are generated by:
acquiring a first motion offset of each buffer frame and each preview frame in the video frame data set, and determining a first difference between each first motion offset and a target motion offset, the target motion offset being the minimum of the first motion offsets;
and determining the jitter intensity adjustment coefficient according to the total number of buffer frames and preview frames in the video frame data set and the absolute value of each first difference.
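Claim 8 derives each coefficient from the total frame count and the absolute first differences from the minimum motion offset, but does not spell out how they are combined; the mean absolute deviation below is only one plausible reading, and the function and variable names are hypothetical:

```python
def jitter_intensity_coefficient(motion_offsets):
    """Coefficient for one video frame data set, computed from the first
    motion offsets of all its buffer and preview frames."""
    target = min(motion_offsets)                         # target motion offset
    first_diffs = [abs(o - target) for o in motion_offsets]
    # Assumed combination: sum of absolute differences over total frame count.
    return sum(first_diffs) / len(motion_offsets)

print(jitter_intensity_coefficient([2.0, 5.0, 3.0, 2.0]))  # → 1.0
```

Under this reading, a data set whose frames all moved by the same amount yields a coefficient of zero (no extra compensation needed), and larger spreads of motion yield larger coefficients.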
9. The video recording device of claim 7, wherein the device further comprises:
a second receiving module, configured to receive a second input on a target jitter intensity adjustment identifier among the P jitter intensity adjustment identifiers;
and a recording module, configured to record, in response to the second input, a video according to a first jitter intensity adjustment coefficient corresponding to the target jitter intensity adjustment identifier, the first jitter intensity adjustment coefficient being used to apply jitter compensation to the recorded video during video recording.
10. The video recording apparatus of claim 9, wherein the apparatus further comprises:
a third receiving module, configured to receive a third input for enlarging a target shooting area in the video recording preview interface according to a target amplification factor;
and a control module, configured to, in response to the third input, control a first center point of the video recording preview interface and a second center point of the target shooting area in the recorded video frame to approach each other while the target shooting area is being enlarged.
11. The video recording device of claim 10, wherein the control module is specifically configured to:
determining a first offset of the first center point and a second offset of the second center point based on the first jitter intensity adjustment coefficient and a preview jitter intensity adjustment coefficient, the preview jitter intensity adjustment coefficient being generated from a video frame data set that includes only the N preview frames;
and controlling the first center point to shift toward the second center point by the first offset, and controlling the second center point to shift toward the first center point by the second offset.
12. The video recording device of claim 11, wherein the control module is specifically configured to:
obtaining offset degree information based on the difference between the first jitter intensity adjustment coefficient and the preview jitter intensity adjustment coefficient;
and determining the first offset of the first center point and the second offset of the second center point based on the offset degree information and the amplification factor used to enlarge the target shooting area, wherein the first offset and the second offset are equal.
13. An electronic device, comprising a processor and a memory storing a program or instructions executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the video recording method according to any one of claims 1 to 6.
14. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the video recording method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310153417.0A CN116132617A (en) | 2023-02-21 | 2023-02-21 | Video recording method, device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116132617A true CN116132617A (en) | 2023-05-16 |
Family
ID=86300929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310153417.0A Pending CN116132617A (en) | 2023-02-21 | 2023-02-21 | Video recording method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116132617A (en) |
- 2023-02-21: CN application CN202310153417.0A (publication CN116132617A), status Pending
Similar Documents
Publication | Title
---|---
CN110636365B (en) | Video character adding method and device, electronic equipment and storage medium
CN112954199B (en) | Video recording method and device
CN114422692B (en) | Video recording method and device and electronic equipment
CN112954193B (en) | Shooting method, shooting device, electronic equipment and medium
WO2021243788A1 (en) | Screenshot method and apparatus
CN113259743A (en) | Video playing method and device and electronic equipment
CN114520876A (en) | Time-delay shooting video recording method and device and electronic equipment
CN114466232B (en) | Video processing method, device, electronic equipment and medium
CN113852756B (en) | Image acquisition method, device, equipment and storage medium
CN115242981B (en) | Video playing method, video playing device and electronic equipment
CN115631109A (en) | Image processing method, image processing device and electronic equipment
CN114125297B (en) | Video shooting method, device, electronic equipment and storage medium
CN114500852B (en) | Shooting method, shooting device, electronic equipment and readable storage medium
CN114390205B (en) | Shooting method and device and electronic equipment
CN116132617A (en) | Video recording method, device, electronic equipment and storage medium
CN115037874A (en) | Photographing method and device and electronic equipment
CN113923392A (en) | Video recording method, video recording device and electronic equipment
CN114245017A (en) | Shooting method and device and electronic equipment
CN114500844A (en) | Shooting method and device and electronic equipment
CN114025237A (en) | Video generation method and device and electronic equipment
CN114173178B (en) | Video playing method, video playing device, electronic equipment and readable storage medium
CN116847187A (en) | Shooting method, shooting device, electronic equipment and storage medium
CN115967863A (en) | Video processing method and device
CN114710624A (en) | Photographing method and photographing apparatus
CN117271090A (en) | Image processing method, device, electronic equipment and medium
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||