CN117692582A - Video processing method and electronic device - Google Patents

Video processing method and electronic device

Info

Publication number
CN117692582A
CN117692582A (application CN202311400931.6A)
Authority
CN
China
Prior art keywords
vertex, video, frame, video picture, electronic device
Prior art date
Legal status
Pending
Application number
CN202311400931.6A
Other languages
Chinese (zh)
Inventor
吴孟函
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Publication of CN117692582A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628: Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0486: Drag-and-drop
    • G06F 2203/00: Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048: Indexing scheme relating to G06F 3/048
    • G06F 2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Abstract

Embodiments of this application provide a video processing method and an electronic device, applied in the field of electronic technology. When a video picture displayed in a video cropping interface undergoes a move operation such as rotation, zooming, or dragging, the method determines whether the crop frame lies entirely within the video picture. If part of the crop frame falls outside the video picture, the video picture is adjusted according to adjustment parameters corresponding to the move operation, so that the crop frame lies entirely within the adjusted picture. As a result, when the crop frame is used to crop a video picture that has been rotated, zoomed, or dragged, the crop frame no longer encloses any non-picture region, no content outside the video picture appears in the cropped video, and the cropping result is improved.

Description

Video processing method and electronic device
This application is a divisional of the invention patent application No. 202210975830.0, entitled "Video processing method and electronic device", filed on day 15 of 2022.
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a video processing method and an electronic device.
Background
With the continuous development of electronic technology, electronic devices such as smartphones and tablet computers are widely used in everyday life and work. A user may use such a device to shoot video, and may then crop the recorded video picture with a crop frame to obtain a cropped video.
However, when a crop frame is used to crop a video picture that has been rotated, zoomed, or dragged, part of the crop frame may lie outside the video picture, that is, the crop frame may enclose a non-picture region, which degrades the cropped video.
Disclosure of Invention
Embodiments of this application provide a video processing method and an electronic device that adjust the video picture when it is rotated, zoomed, or dragged, so that the crop frame lies entirely within the adjusted picture and the cropped video is improved.
In a first aspect, an embodiment of this application provides a video processing method, including: the electronic device receives a move operation on a video picture displayed in a video cropping interface, the move operation including at least one of a rotation operation, a zoom operation, and a drag operation; in response to the move operation, the electronic device determines whether the crop frame in the video cropping interface lies entirely within the video picture; when part of the crop frame lies outside the video picture, the electronic device adjusts the video picture according to an adjustment parameter corresponding to the move operation, so that the crop frame lies entirely within the adjusted video picture. The adjustment parameter corresponding to the rotation operation includes a first critical scaling ratio, the adjustment parameter corresponding to the zoom operation includes a second critical scaling ratio, and the adjustment parameter corresponding to the drag operation includes a rebound distance.
In this way, when the crop frame is used to crop a video picture that has been rotated, zoomed, or dragged, the video picture can be adjusted by the adjustment parameter so that the crop frame lies entirely within the adjusted picture; the crop frame then no longer encloses any non-picture region, no content outside the video picture appears in the cropped video, and the cropping result is improved.
In one possible implementation, receiving the move operation on the video picture includes: the electronic device receives a touch operation on a first rotation control in the video cropping interface, where the first rotation control, when triggered, rotates the video picture by the angle scale value corresponding to the touch operation. Determining whether the crop frame lies entirely within the video picture includes: in response to the touch operation, the electronic device obtains first vertex coordinates of the vertices of the rotating video picture and second vertex coordinates of the vertices of the crop frame; computes a first critical scaling ratio of the video picture from the first and second vertex coordinates; and determines, from the first critical scaling ratio and the first scaling ratio of the rotating picture relative to the original picture, whether the crop frame lies entirely within the rotating picture. Adjusting the video picture includes: when part of the crop frame lies outside the rotating picture, the electronic device enlarges the rotating picture by the first critical scaling ratio. In this way, while the video picture rotates, it is enlarged based on the first critical scaling ratio so that the crop frame lies entirely within the enlarged picture.
In one possible implementation, the determination includes: when the first critical scaling ratio is greater than the first scaling ratio, the electronic device determines that part of the crop frame lies outside the rotating video picture; when the first critical scaling ratio is less than or equal to the first scaling ratio, the electronic device determines that the crop frame lies entirely within the rotating video picture.
In one possible implementation, computing the first critical scaling ratio includes: from the first vertex coordinates, the electronic device computes the first connecting lines formed between adjacent vertices of the video picture; from the second vertex coordinates, it computes the second connecting lines formed between the centre point of the crop frame and each crop-frame vertex; for a first target vertex, which is any vertex of the crop frame, it intersects the corresponding first connecting line, namely the first connecting line closest to that vertex, with the corresponding second connecting line, namely the line between that vertex and the crop-frame centre, to obtain a target intersection point; it then computes a first spacing between the first target vertex and the crop-frame centre and a second spacing between the target intersection point and the crop-frame centre, and derives the first critical scaling ratio from the two spacings. The first critical scaling ratio can thus be computed straightforwardly from the vertex coordinates of the video picture and of the crop frame.
In one possible implementation, deriving the first critical scaling ratio from the two spacings includes: the electronic device takes the ratio of the first spacing to the second spacing as the target ratio of the first target vertex, and takes the maximum of the target ratios of all crop-frame vertices as the first critical scaling ratio. Using the maximum of the four vertices' target ratios ensures that the crop frame lies within the picture enlarged by the first critical scaling ratio.
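The spacing-ratio computation above can be sketched in Python. The function names (`ray_segment_intersection`, `first_critical_scale`) are illustrative, and the sketch assumes the crop-frame centre lies inside the picture quadrilateral, so each centre-to-vertex ray crosses exactly one picture edge; the patent itself specifies only the two spacings and their maximum ratio.

```python
import math

def ray_segment_intersection(origin, direction, a, b):
    """Intersect the ray origin + t*direction (t >= 0) with segment a-b.
    Returns the intersection point, or None if they miss."""
    ox, oy = origin
    dx, dy = direction
    ax, ay = a
    bx, by = b
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray parallel to the edge
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom
    s = ((ax - ox) * dy - (ay - oy) * dx) / denom
    if t >= 0 and 0 <= s <= 1:
        return (ox + t * dx, oy + t * dy)
    return None

def first_critical_scale(picture_vertices, crop_vertices):
    """Largest ratio (first spacing / second spacing) over the crop-frame
    vertices: the factor by which the rotating picture must be enlarged so
    that every crop vertex falls inside it."""
    cx = sum(x for x, _ in crop_vertices) / len(crop_vertices)
    cy = sum(y for _, y in crop_vertices) / len(crop_vertices)
    n = len(picture_vertices)
    best = 0.0
    for vx, vy in crop_vertices:
        direction = (vx - cx, vy - cy)
        first_spacing = math.hypot(*direction)   # crop vertex -> crop centre
        for i in range(n):
            # Cast the centre-to-vertex ray against each picture edge; the
            # single hit is the target intersection point of this vertex.
            hit = ray_segment_intersection((cx, cy), direction,
                                           picture_vertices[i],
                                           picture_vertices[(i + 1) % n])
            if hit is not None:
                second_spacing = math.hypot(hit[0] - cx, hit[1] - cy)
                best = max(best, first_spacing / second_spacing)
                break
    return best
```

A result greater than 1 means some crop vertex already lies outside the picture and the picture must be enlarged by that factor; a result of at most 1 means the crop frame is contained.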
In one possible implementation, determining whether the crop frame lies entirely within the video picture includes: in response to the drag operation, the electronic device obtains third vertex coordinates of the vertices of the dragged video picture and fourth vertex coordinates of the vertices of the crop frame, and determines from the third and fourth vertex coordinates whether the crop frame lies entirely within the dragged picture. Adjusting the video picture includes: when part of the crop frame lies outside the dragged picture, the electronic device rebounds the dragged picture by the rebound distance. In this way, after the picture is dragged, it is rebounded based on the rebound distance so that the crop frame lies entirely within the rebounded picture.
In one possible implementation, the determination from the third and fourth vertex coordinates includes: when the rotation angle of the video picture is neither 0° nor an integer multiple of 90°, the electronic device computes, from the third vertex coordinates, the third connecting lines formed between adjacent vertices of the picture; computes the foot-of-perpendicular coordinates from each crop-frame vertex to its corresponding third connecting line; and determines from the foot coordinates and the fourth vertex coordinates whether the crop frame lies entirely within the dragged picture.
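A minimal sketch of such a containment check. Instead of computing explicit feet of perpendicular, it uses the equivalent cross-product inside/outside criterion for a convex picture quadrilateral; the function name is illustrative.

```python
def crop_inside_picture(picture_vertices, crop_vertices):
    """True if every crop-frame vertex lies inside (or on the boundary of)
    the convex quadrilateral formed by the picture vertices, which are
    assumed to be listed in counter-clockwise order."""
    n = len(picture_vertices)
    for px, py in crop_vertices:
        for i in range(n):
            ax, ay = picture_vertices[i]
            bx, by = picture_vertices[(i + 1) % n]
            # Sign of the cross product (B-A) x (P-A): negative means the
            # point lies on the outside of this edge.
            cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
            if cross < 0:
                return False
    return True
```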
In one possible implementation, before rebounding the dragged video picture by the rebound distance, the method further includes: when the rotation angle of the video picture is neither 0° nor an integer multiple of 90°, the electronic device determines a second target vertex, the vertex of the dragged video picture closest to the centre point of the crop frame; obtains the critical coordinates of the critical point corresponding to the second target vertex, the critical point having the same azimuth as the second target vertex; and determines whether more than one crop-frame vertex lies outside the dragged picture, based on the third vertex coordinates of the second target vertex, the critical coordinates of the critical point, the fourth vertex coordinates of the third target vertex (the crop-frame vertex whose azimuth matches the second target vertex), and the foot coordinates of the two target feet of perpendicular (the feet dropped from the two corresponding crop-frame vertices onto the two picture boundary lines adjacent to the second target vertex). When at least two crop-frame vertices lie outside the dragged picture, the electronic device takes the offset between the critical point and the second target vertex as the rebound distance. The rebound distance is thus computed simply from the second target vertex, whose critical point is closest to the crop-frame centre.
In one possible implementation, when exactly one crop-frame vertex lies outside the dragged video picture, the electronic device obtains the fourth vertex coordinates of the fourth target vertex, the crop-frame vertex that lies outside the picture, and takes the offset between those coordinates and the foot coordinates of the corresponding target foot of perpendicular as the rebound distance. Rebounding by the offset between the fourth target vertex and the target foot of perpendicular improves the rebound effect when only one vertex is outside.
In one possible implementation, before rebounding the dragged video picture by the rebound distance, the method further includes: when the rotation angle of the video picture is 0° or an integer multiple of 90°, the electronic device determines a fifth target vertex, the vertex of the dragged picture closest to the centre point of the crop frame, and computes the rebound distance from the fourth vertex coordinates of the crop-frame vertex whose azimuth matches the fifth target vertex and the third vertex coordinates of the fifth target vertex.
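For this axis-aligned case (rotation angle 0° or a multiple of 90°), the rebound distance reduces to per-axis overlap arithmetic. The sketch below uses illustrative names and represents both rectangles as (left, top, width, height); the patent states only that the distance is derived from the matching vertex pair.

```python
def rebound_offset(picture, crop):
    """Translation (dx, dy) to apply to the picture so the axis-aligned
    crop rectangle fits back inside it. Rectangles are (left, top, w, h)."""
    pl, pt, pw, ph = picture
    cl, ct, cw, ch = crop
    dx = dy = 0.0
    if pl > cl:                  # picture's left edge intrudes into the crop frame
        dx = cl - pl
    elif pl + pw < cl + cw:      # picture's right edge intrudes
        dx = (cl + cw) - (pl + pw)
    if pt > ct:                  # picture's top edge intrudes
        dy = ct - pt
    elif pt + ph < ct + ch:      # picture's bottom edge intrudes
        dy = (ct + ch) - (pt + ph)
    return dx, dy
```

The sketch assumes the picture is at least as large as the crop frame on each axis, which the critical-scaling checks elsewhere in the method are meant to guarantee.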
In one possible implementation, determining whether the crop frame lies entirely within the video picture includes: in response to the zoom operation, the electronic device obtains a second scaling ratio of the zoomed picture relative to the original picture; computes a second critical scaling ratio of the picture; and determines from the second critical scaling ratio and the second scaling ratio whether the crop frame lies entirely within the zoomed picture. Adjusting the video picture includes: when part of the crop frame lies outside the zoomed picture, the electronic device enlarges the zoomed picture by the second critical scaling ratio. In this way, after the picture is zoomed, it is enlarged and rebounded based on the second critical scaling ratio so that the crop frame lies entirely within the result.
In one possible implementation, the determination includes: when the second critical scaling ratio is greater than the second scaling ratio, the electronic device determines that part of the crop frame lies outside the zoomed picture; when the second critical scaling ratio is less than or equal to the second scaling ratio, the electronic device determines that the crop frame lies entirely within the zoomed picture.
In one possible implementation, computing the second critical scaling ratio includes: when the rotation angle of the video picture is neither 0° nor an integer multiple of 90°, the electronic device computes a critical distance of the picture from the width of the crop frame, the height of the crop frame, and the rotation angle set by the first rotation control, and takes the ratio of the critical distance to the original size of the picture, that is, its original width or original height, as the second critical scaling ratio. The second critical scaling ratio can thus be computed straightforwardly from the crop-frame dimensions and the rotation angle.
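The patent names a critical distance without giving its formula. Under the standard geometric assumption that an axis-aligned w x h crop frame spans w·|cos θ| + h·|sin θ| along the rotated picture's width axis and w·|sin θ| + h·|cos θ| along its height axis, the second critical scaling ratio can be sketched as follows (function name and argument layout are illustrative):

```python
import math

def second_critical_scale(crop_w, crop_h, angle_deg, pic_w, pic_h):
    """Smallest scale factor for a pic_w x pic_h picture, rotated by
    angle_deg, such that it still fully contains an axis-aligned
    crop_w x crop_h crop frame."""
    c = abs(math.cos(math.radians(angle_deg)))
    s = abs(math.sin(math.radians(angle_deg)))
    need_w = crop_w * c + crop_h * s   # extent of the crop frame along the
    need_h = crop_w * s + crop_h * c   # picture's rotated width/height axes
    return max(need_w / pic_w, need_h / pic_h)
```

At 0° this degenerates to the plain width/height ratio described for the axis-aligned case below.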
In one possible implementation, computing the second critical scaling ratio includes: when the rotation angle of the video picture is 0° or an integer multiple of 90°, the electronic device takes the ratio of a target size of the crop frame to the corresponding original size of the picture as the second critical scaling ratio, where the target size is either the height of the crop frame (against the original height of the picture) or the width of the crop frame (against the original width of the picture). This gives the second critical scaling ratio when the rotation angle set by the first rotation control is 0°.
In one possible implementation, the video processing method further includes: when the rotation angle of the video picture is neither 0° nor an integer multiple of 90°, the electronic device receives a drag operation on the crop frame; in response, it obtains fifth vertex coordinates of the vertices of the picture; computes, from them, the fourth connecting lines formed between adjacent picture vertices; and computes the drag boundary of the crop frame from the coordinates of the intersection points formed between the extensions of the four boundary lines of the crop frame and the fourth connecting lines. The drag boundary is the boundary of the maximum range over which the crop frame may be dragged, and can thus be computed conveniently.
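Each of the intersection points above is an intersection of two infinite lines (an extended crop boundary line and a picture boundary line). A generic helper for that step might look like the following; the name is illustrative, and each line is given by two points on it.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line through p1, p2 with the infinite
    line through p3, p4; returns None if the lines are parallel."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        return None  # parallel (or coincident) lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
```

Running this helper for each crop boundary line against each picture boundary line yields the candidate corner points from which the maximum drag range can be assembled.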
In one possible implementation, the video processing method further includes: after the zoomed video picture is dragged, the electronic device obtains sixth vertex coordinates of the vertices of the zoomed picture and the actual scaling ratio of the picture; determines a first axis point of the picture from the sixth vertex coordinates and the centre point of the crop frame; determines, from the actual position of the first axis point in the dragged picture, the corresponding second axis point in the picture reduced by the actual scaling ratio about its original axis point; computes the deviation between the sixth vertex coordinates and the actual vertex coordinates of the picture enlarged by the actual scaling ratio about the second axis point; and updates the coordinates of the picture by this deviation. Updating the axis point after a drag of the zoomed picture ensures that, when the picture is later rotated or zoomed again, the portion the user actually wants stays within the crop frame.
In one possible implementation, determining the first axis point from the sixth vertex coordinates and the crop-frame centre includes: the electronic device computes, from the sixth vertex coordinates, a fifth connecting line and a sixth connecting line, two adjacent boundary lines of the dragged picture; computes a first axis-point distance from the crop-frame centre to the fifth connecting line and a second axis-point distance from the crop-frame centre to the sixth connecting line; divides the first axis-point distance by the actual width of the picture and multiplies by the original width to obtain the abscissa of the first axis point; and divides the second axis-point distance by the actual height of the picture and multiplies by the original height to obtain the ordinate of the first axis point.
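This axis-point computation can be sketched directly from the description. The helper names and the argument layout (each boundary line given as two points on it) are assumptions for illustration.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def first_axis_point(crop_center, fifth_line, sixth_line,
                     actual_w, actual_h, orig_w, orig_h):
    """Axis point in original-picture coordinates: the normalised distances
    from the crop-frame centre to two adjacent boundary lines, mapped back
    to the original width and height."""
    d_fifth = point_line_distance(crop_center, *fifth_line)
    d_sixth = point_line_distance(crop_center, *sixth_line)
    x = d_fifth / actual_w * orig_w   # abscissa of the first axis point
    y = d_sixth / actual_h * orig_h   # ordinate of the first axis point
    return x, y
```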
In a second aspect, an embodiment of this application provides an electronic device including a memory and a processor, where the memory is configured to store a computer program and the processor is configured to invoke the computer program to perform the video processing method described above.
In a third aspect, embodiments of this application provide a computer-readable storage medium storing a computer program or instructions that, when executed, implement the video processing method described above.
In a fourth aspect, embodiments of this application provide a computer program product including a computer program that, when executed, causes a computer to perform the video processing method described above.
The effects of the possible implementations of the second to fourth aspects are similar to those of the first aspect and its possible designs, and are not repeated here.
Drawings
Fig. 1 is a schematic diagram of the hardware system structure of an electronic device according to an embodiment of this application;
Fig. 2 is a schematic diagram of the software system structure of an electronic device according to an embodiment of this application;
Fig. 3 is a schematic diagram of an interface in the video cropping process according to an embodiment of this application;
Fig. 4 is a schematic diagram of a video picture being mirrored and rotated in the video cropping interface according to an embodiment of this application;
Fig. 5 is a schematic diagram of the rebound after a video picture is dragged in the video cropping interface according to an embodiment of this application;
Fig. 6 is a schematic diagram of the rebound enlargement of a video picture after it is reduced in the video cropping interface according to an embodiment of this application;
Fig. 7 is a flowchart of a video processing method according to an embodiment of this application;
Fig. 8 is a flowchart of a video processing method during rotation of a video picture according to an embodiment of this application;
Fig. 9 is a schematic diagram of computing the first critical scaling ratio needed to enlarge a video picture during its rotation according to an embodiment of this application;
Fig. 10 is a schematic diagram of a crop frame being dragged in different ways according to an embodiment of this application;
Fig. 11 is a schematic diagram of computing the drag boundary of the crop frame when the video picture is not rotated according to an embodiment of this application;
Fig. 12 is a schematic diagram of computing the drag boundary of the crop frame after the video picture is rotated according to an embodiment of this application;
Fig. 13 is a flowchart of a video processing method after a video picture is dragged according to an embodiment of this application;
Fig. 14 is a schematic diagram of determining whether the crop frame lies entirely within the dragged video picture according to an embodiment of this application;
Fig. 15 is a schematic diagram of computing the rebound distance of a video picture after it is dragged according to an embodiment of this application;
Fig. 16 is a schematic diagram of another way of computing the rebound distance of a video picture after it is dragged according to an embodiment of this application;
Fig. 17 is a flowchart of a video processing method after a video picture is zoomed according to an embodiment of this application;
Fig. 18 is a schematic diagram of computing the second critical scaling ratio needed to enlarge a video picture after it is zoomed according to an embodiment of this application;
Fig. 19 is a schematic diagram of updating the axis point of a video picture after the zoomed picture is dragged according to an embodiment of this application;
Fig. 20 is a schematic structural diagram of an electronic device according to an embodiment of this application.
Detailed Description
In order to clearly describe the technical solutions of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish the same item or similar items having substantially the same function and effect. For example, the first chip and the second chip are merely for distinguishing different chips, and the order of the different chips is not limited. It will be appreciated by those of skill in the art that the words "first," "second," and the like do not limit the amount and order of execution, and that the words "first," "second," and the like do not necessarily differ.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or similar expressions means any combination of the listed items, including any combination of single or plural items. For example, at least one of a, b, or c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be singular or plural.
While using the electronic device, the user may capture a video with it. After capturing the video, the user may edit it so that the edited video meets the user's personalized requirements. The electronic device may then save the edited video to a storage device. When the user later needs to browse or use the edited video, the electronic device can read it from the storage device and display it for the user to view.
For example, after a video is captured, in response to a user's cropping operation on the video picture with a crop frame, the portion of the video picture inside the crop frame is cropped out. The crop frame refers to a view operation frame used to crop the picture in the video cropping function of a video editing application.
In some scenarios, a user may rotate, zoom, or pan (which may also be referred to as drag) the video picture. In the related art, when a crop frame is used to crop a video picture that has been rotated, scaled, or dragged, part of the crop frame may lie outside the video picture; that is, the crop frame includes a non-video-picture portion, so that regions other than the video picture appear in the cropped video, which degrades the cropping result.
Based on this, an embodiment of the present application provides a video processing method. When a moving operation such as rotation, scaling, or dragging is performed on a video picture displayed in a video cropping interface, it is determined whether the crop frame is located entirely within the video picture. If any area of the crop frame lies outside the video picture, the video picture can be adjusted according to an adjustment parameter corresponding to the moving operation, so that the crop frame is located entirely within the adjusted video picture. In this way, when the crop frame is used to crop a video picture that has been rotated, scaled, or dragged, the video picture is first adjusted by the adjustment parameter so that the crop frame contains no non-video-picture region; therefore, the cropped video contains no content other than the video picture, and the cropping result is improved.
The electronic device provided by the embodiments of the present application may be a mobile phone, a tablet computer (Pad), a wearable device, a vehicle-mounted device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), or the like that has video processing capability. The embodiments of the present application do not limit the specific technology and specific device form adopted by the electronic device.
In order to better understand the embodiments of the present application, the structure of the electronic device of the embodiments of the present application is described below.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it may call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used for displaying images, displaying videos, receiving sliding operations, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light-emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The touch sensor is also referred to as a "touch panel". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also referred to as a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor may also be disposed on a surface of the electronic device 100 at a location different from that of the display 194.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195 to enable contact and separation with the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture, among others. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2 is a software configuration block diagram of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, namely an application layer, an application framework layer, a system library, and a hardware layer from top to bottom.
The application layer may include a series of application packages, such as telephone, mailbox, calendar, music, and the like.
In the embodiment of the present application, as shown in fig. 2, the application layer further includes a video editing application. The video editing application has video data processing capability and can provide video editing functions for users, including video data processing such as cropping and rendering. In this embodiment, the video editing application can determine, when a moving operation such as dragging, scaling, or rotating is performed on the video picture, whether the crop frame in the video cropping interface is located entirely within the video picture. When an area of the crop frame lies outside the video picture, the video picture is adjusted according to an adjustment parameter such as a first critical scaling corresponding to the rotation operation, a second critical scaling corresponding to the scaling operation, or a rebound distance corresponding to the drag operation, so that the crop frame is located entirely within the adjusted video picture. In addition, the video editing application may also calculate a drag boundary within which the crop box allows dragging, and update the axis point of the video picture after the scaled video picture is dragged.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application layer applications. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may include a media framework and an application framework.
A plurality of tools for editing video and audio are provided in the media frame. Wherein the tool comprises MediaCodec. MediaCodec is an Android-supplied module for encoding and decoding audio and video. It includes an encoder, a decoder, and a surface type cache.
The encoder provided by MediaCodec may convert video or audio input in one format into another format by compression, while the decoder performs the inverse operation of encoding, converting video or audio input in one format into another format by decompression.
MediaCodec may also apply for a shared memory, i.e., a surface memory in the hardware layer (hereinafter referred to as surface). Surface may be used to buffer video data. For example, after the electronic device performs an editing operation to obtain a rendered video image frame, the electronic device may input the image frame into a surface buffer. The application may then obtain the rendered video image frames from the surface for storage or display, etc.
The media framework also includes a shared memory management module (SurfaceFlinger), which is used to superimpose and mix the surfaces of the application windows of each app into one surface according to the hierarchical order of the screen display and place that surface into a buffer area. When the screen refreshes the picture at a certain frequency, the surface in the buffer area can be displayed on the display screen.
An application framework (FrameWork) can manage the current window processing events and actions of each application app and exchange data with SurfaceFlinger; it can also process working events or method calls of an application app and respond according to the app's requirements.
The system library may include a plurality of functional modules. As shown in fig. 2, the system library may include a rendering module, which may be used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. By way of example and not limitation, the rendering module includes, but is not limited to, at least one of: open graphics library (open graphics library, OpenGL), open source computer vision library (open source computer vision library, OpenCV), open computing language library (open computing language library, OpenCL).
The rendering module is provided with a plurality of image rendering functions that can be used to draw a scene from simple graphics to complex three dimensions. The rendering module provided by the system library in the embodiment of the application can be used for supporting the video editing application to execute image editing operations, such as video cropping operations, adding filters and the like.
As an example, other functional modules may also be included in the system library, such as, for example: status monitoring services, surface manager (surface manager), media library (Media Libraries), etc.
The hardware layer comprises a memory (memory), a Graphics Processing Unit (GPU) and a display memory. The memory is used for temporarily storing operation data in a central processing unit (central processing unit, CPU) and data exchanged with an external memory such as a hard disk. The memory also includes the shared memory (surface) described above. The storage space required by the electronic device to run the video editing application may be provided by memory, for example, surface.
A GPU is a processor that performs image and graphics-related operations. In the embodiment of the application, the process of video clipping by the electronic device through the rendering module can be completed through the GPU. The video memory is used for storing the rendering picture data after each GPU calculation.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be implemented independently or combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
In an actual usage scenario, a user may open a video through a video editing application, causing the electronic device to display the video editing interface 300 shown in fig. 3 (a). The video editing interface 300 displays a video frame 301 corresponding to a video, and the video editing interface 300 further includes various functional controls for performing video editing, such as a clipping control 302, a filter control, a text control, and a special effect control.
In the event that the user desires to crop the video, the user may operate the cropping control 302, and in response to the user's operation of the cropping control 302, the electronic device displays a video cropping interface 400 as shown in fig. 3 (b).
The video clipping interface 400 also displays a video frame 301 corresponding to the video, and a clipping frame 401 is further disposed in the video clipping interface, where the clipping frame 401 refers to a view operation frame for clipping the video frame in the video clipping function of the video editing application, and is used for clipping the video frame.
The video cropping interface 400 also includes a frame control for changing the width and/or height of the crop box 401, and thus the width and/or height of the video frames within the crop box 401, i.e., for changing the width and/or height of the cropped video frames. For example, the frame controls may include, but are not limited to, the following: a "free" frame control, an "original ratio" frame control, a "9:16" frame control, a "16:9" frame control, a "3:4" frame control, and the like.
The "free" frame control is used to adjust the width and/or height of the crop frame 401; the aspect ratio of the crop frame 401 may be changed arbitrarily according to actual requirements, i.e., the aspect ratio of the crop frame 401 is not fixed. The "original ratio" frame control is used to adjust the width and height of the crop frame 401 such that the aspect ratio of the crop frame 401 remains fixed at a preset aspect ratio, which may be the aspect ratio of the video frame before cropping or another preset ratio. The "9:16" frame control is used to adjust the width and height of crop box 401 at a fixed aspect ratio of 9:16. The "16:9" frame control is used to adjust the width and height of crop box 401 at a fixed aspect ratio of 16:9. The "3:4" frame control is used to adjust the width and height of crop box 401 at a fixed aspect ratio of 3:4.
In addition, the video cropping interface 400 also includes a first rotation control 402, a second rotation control 403, and a mirror control 404.
The first rotation control 402 may also be referred to as a ruler view control. The first rotation control 402 is configured to rotate the video frame, when a user operation is received, according to the angle scale value selected by the user operation. Illustratively, the first rotation control 402 rotates through an angle ranging from -45° to +45°; the video frame rotates clockwise when the first rotation control 402 is slid rightward, and rotates counterclockwise when it is slid leftward. Therefore, by sliding the first rotation control 402, the user may adjust the rotation angle of the video picture according to actual demand, for example, rotating the video picture clockwise by 10° or 30°.
The second rotation control 403 is used to rotate the video screen by a fixed angle when receiving a user operation. For example, each time the user clicks the second rotation control 403, the video frame may be rotated 90 ° counterclockwise or 90 ° clockwise, etc. The mirror control 404 is used to mirror the video screen when receiving the user operation.
Illustratively, when the user clicks the "free" frame control and adjusts the width and height of the crop frame 401 to the required size by dragging the boundary or a vertex of the crop frame 401, the user may click the confirmation control 405 in the video cropping interface 400. In response to the user's operation of the confirmation control 405, the electronic device displays the video editing interface 300 shown in fig. 3 (c) and displays the cropped video frame 303 in that interface. After the electronic device jumps to the video editing interface 300 shown in fig. 3 (c), the user may continue to click other editing function buttons to edit, or may end the video editing operation.
In actual use, after the electronic device displays the video clip interface 400, the user may perform one or more of the following operations on the video frames.
Illustratively, as shown in (a) of fig. 4, the user may mirror the video frame 301 by performing a touch operation on the mirror control 404 in the video cropping interface 400, and the electronic device mirrors the video frame 301 in response to the user operation on the mirror control 404.
As shown in fig. 4 (b), the user may also perform a touch operation on the first rotation control 402 in the video cropping interface 400 to rotate the video frame 301. During the rotation, it is determined whether the crop box 401 is located entirely within the rotating video frame 301. When the crop box 401 is located entirely within the rotating video frame 301, the video frame 301 remains unchanged. When an area of the crop box 401 lies outside the rotating video frame 301, the electronic device may enlarge the rotating video frame 301 according to the first critical scaling, so that the crop box 401 is located entirely within the enlarged video frame 301.
Of course, the user may perform a touch operation on the second rotation control 403 in the video cropping interface 400, so as to perform a rotation operation on the video frame by an integer multiple of 90 °.
As shown in fig. 5 (a), the user can also drag the video frame 301 displayed in the crop box 401 by performing a one-finger drag operation on it. After the drag operation ends, the user's finger leaves the screen of the electronic device, so the electronic device needs to determine whether the crop box 401 is located entirely within the dragged video frame 301. When it is, the video frame 301 remains unchanged. When an area of the crop box 401 lies outside the dragged video frame 301, the electronic device may rebound the dragged video frame 301 by the rebound distance, that is, the video frame 301 rebounds from the state shown in fig. 5 (a) to the state shown in fig. 5 (b), so that the crop box 401 is located entirely within the rebounded video frame 301.
The user may also zoom the video frame 301 displayed in the crop box 401 in or out by performing a two-finger zoom operation on it; as shown in fig. 6 (a), the user zooms out the video frame 301 displayed in the crop box 401. After the zoom operation ends, the user's finger leaves the screen of the electronic device, so the electronic device needs to determine whether the crop box 401 is located entirely within the zoomed video frame 301. When it is, the video frame 301 remains unchanged. When an area of the crop box 401 lies outside the zoomed video frame 301, the electronic device may enlarge the zoomed video frame according to the second critical scaling, that is, the video frame 301 rebounds from the state shown in fig. 6 (a) to the state shown in fig. 6 (b), so that the crop box 401 is finally located entirely within the rebounded video frame 301.
Fig. 7 is a flowchart of a video processing method according to an embodiment of the present application, which may be specifically performed by a video editing application in an electronic device. Referring to fig. 7, the video processing method may specifically include the steps of:
Step 701: the electronic device receives a moving operation on a video picture displayed in a video cropping interface; the moving operation includes at least one of a rotation operation, a zoom operation, and a drag operation.
In some embodiments, the user may slide the first rotation control 402 within the video cropping interface 400, or click the second rotation control 403 within the video cropping interface 400, such that the electronic device receives a rotation operation on the video frame 301 displayed within the video cropping interface 400.
The user may also perform a single-finger drag operation on the video frame 301 within the crop box 401, such that the electronic device receives a drag operation on the video frame 301 displayed within the video crop interface 400.
The user may also perform a double-finger zoom operation on the video frame 301 within the crop box 401 such that the electronic device receives a zoom operation on the video frame 301 displayed within the video crop interface 400.
In step 702, in response to the moving operation, the electronic device determines whether the crop frame in the video cropping interface is located entirely within the video frame.
Take the rotation operation for the video frame 301 as an example. During the rotation, the electronic device may determine whether the crop box 401 is located in the video frame 301 during the rotation. When the crop box 401 is located entirely within the video frame 301 during rotation, the video frame 301 remains unchanged, and the electronic device does not change the size of the video frame 301.
Taking a drag operation for the video frame 301 as an example. After the drag operation of the user is finished, the electronic device may determine whether the crop box 401 is located in the dragged video frame 301, and when the crop box 401 is located in the dragged video frame 301, the video frame 301 remains unchanged, and the electronic device does not rebound the video frame 301.
Take the zoom operation on the video frame 301 as an example. After the user's zoom operation is finished, the electronic device may determine whether the crop box 401 is located within the zoomed video frame 301; when the crop box 401 is located entirely within the zoomed video frame 301, the video frame 301 remains unchanged, and the electronic device does not further adjust the video frame 301.
In step 703, when there is an area outside the video frame in the crop frame, the electronic device adjusts the video frame according to the adjustment parameter corresponding to the movement operation, so that the crop frame is all located in the adjusted video frame.
Taking a rotation operation for the video frame 301 as an example, the adjustment parameters corresponding to the rotation operation include a first critical scaling. When there is an area in the crop box 401 outside the video frame 301 during rotation, the electronic device may zoom in on the video frame 301 during rotation according to the first critical zoom scale, so that the crop box 401 is located in the zoomed in video frame 301.
Taking a drag operation for the video screen 301 as an example, the adjustment parameters corresponding to the drag operation include a rebound distance. When there is an area outside the dragged video frame 301 in the crop frame 401, the electronic device may perform a springback operation on the dragged video frame 301 according to the springback distance, so that the crop frame 401 is all located in the springback video frame 301.
Taking a zoom operation on the video frame 301 as an example, the adjustment parameters corresponding to the zoom operation include a second critical scaling. When there is an area in the crop box 401 that is located outside the zoomed video frame 301, the electronic device may enlarge the zoomed video frame according to the second critical scaling, so that the crop box 401 is located entirely within the enlarged video frame 301.
Therefore, when the crop box 401 is used to crop a video frame 301 that has undergone operations such as rotation, zooming, or dragging, the adjustment parameters can be used to adjust the video frame 301 so that the crop box 401 lies entirely within the adjusted video frame 301. In other words, the crop box 401 no longer contains any non-video region, so the cropped video does not include content outside the video frame 301, which improves the cropping result.
In addition to the rotation, zoom, and drag operations on the video frame 301, the embodiments of the present application may also calculate the drag boundary within which the crop box 401 is allowed to be dragged, and update the axis point of the video frame 301 after the zoomed video frame 301 is dragged.
The following describes the specific implementation of the embodiment of the present application in detail from the five aspects of rotation of the video frame 301, dragging of the crop frame 401, dragging of the video frame 301, scaling of the video frame 301, and updating of the axis point of the video frame 301, respectively.
Fig. 8 is a flowchart of a video processing method in a video frame rotation process according to an embodiment of the present application, which is mainly directed to a rotation scenario when the first rotation control 402 is triggered. Referring to fig. 8, the video processing method in the video frame rotation process may specifically include the following steps:
Step 801, the electronic device receives a touch operation on a first rotation control in a video clipping interface; and the first rotation control is used for rotating the video picture according to the angle scale value corresponding to the touch operation when triggered.
In some embodiments, when the user wants to rotate the video frame 301 displayed in the video clipping interface 400, the user may perform a sliding operation on the first rotation control 402, so that the electronic device receives a touch operation on the first rotation control 402 in the video clipping interface 400.
The first rotation control 402 may also be referred to as a rulerView control, and its rotation angle may range from -45° to +45°. The first rotation control 402 is configured to, when triggered, rotate the video frame according to the angle scale value corresponding to the touch operation.
After the mirror control 404 is clicked, the degree of the first rotation control 402 changes according to the rule newRulerAngle = -1 × oldRulerAngle, where oldRulerAngle refers to the degree of the first rotation control 402 before the mirror control 404 is clicked, and newRulerAngle refers to the degree after the mirror control 404 is clicked.
The rotation operation refers to rotating the video view (i.e., the video frame 301) about the Z axis, and the video editing application may call the setRotation interface of the mVideoView control to perform it. The mirror operation refers to rotating the video frame 301 by 0° or 180° about the Y axis, and the video editing application may call the corresponding mirror interface of the mVideoView control.
The width direction of the cutting frame 401 may be defined as the X axis, the height direction of the cutting frame 401 may be defined as the Y axis, and the Z axis may be defined as a coordinate axis perpendicular to both the X axis and the Y axis.
Step 802, the electronic device obtains, in response to the touch operation, first vertex coordinates of each vertex in the video frame in the rotation process and second vertex coordinates of each vertex in the crop frame.
In step 803, the electronic device calculates a first critical scaling of the video frame according to the first vertex coordinate and the second vertex coordinate.
After receiving a touch operation on a first rotation control 402 in the video clipping interface 400, the electronic device responds to the touch operation to obtain first vertex coordinates of each vertex in the video frame 301 in the rotation process and second vertex coordinates of each vertex in the clipping frame 401; then, a first critical scaling of the video frame 301 is calculated according to the first vertex coordinates and the second vertex coordinates.
The first critical scaling refers to the scaling multiple of the video frame 301 at which, under the current rotation angle, the video frame 301 can just fully cover the crop box 401. "Just fully cover" can be understood as meaning that the crop box 401 does not include any non-video region outside the video frame 301, and at least one vertex of the crop box 401 lies on a boundary line of the video frame 301.
In an alternative implementation, the electronic device calculates, according to the first vertex coordinates of each vertex in the video frame 301, a first connection line formed between every two adjacent vertices in the video frame 301; the electronic device calculates, according to the second vertex coordinates of each vertex in the crop box 401, a second connection line formed between the center point of the crop box 401 and each vertex in the crop box 401; the electronic device intersects the first connection line corresponding to a first target vertex with the corresponding second connection line to obtain a target intersection point corresponding to the first target vertex, where the first target vertex is any vertex in the crop box 401, the first connection line corresponding to the first target vertex is the first connection line closest to the first target vertex in the video frame 301, and the second connection line corresponding to the first target vertex is the second connection line formed between the first target vertex and the center point of the crop box 401; the electronic device calculates a first spacing between the first target vertex and the center point of the crop box 401, and a second spacing between the target intersection point and the center point of the crop box 401; and the electronic device calculates the first critical scaling of the video frame according to the first spacing and the second spacing.
Specifically, the electronic device determines the ratio of the first spacing to the second spacing as the target ratio value corresponding to the first target vertex, and determines the maximum of the target ratio values corresponding to the vertices in the crop box 401 as the first critical scaling of the video frame 301.
For example, as shown in fig. 9, the four vertices of the video frame 301 during rotation are P1, P2, P3, and P4, respectively, and the four vertices of the crop box 401 are C1, C2, C3, and C4, respectively.
Using the two-point form of a line, the first connection line formed between every two adjacent vertices in the video frame 301 can be obtained from the first vertex coordinates of the P1, P2, P3, and P4 vertices. The first connection lines of the video frame 301 can be understood as its four boundary lines, namely: the boundary line between the P1 and P2 vertices, the boundary line between the P2 and P3 vertices, the boundary line between the P3 and P4 vertices, and the boundary line between the P1 and P4 vertices.
center is the center point of the crop box 401. With the upper-left corner of the screen as the coordinate origin, the center point of the crop box 401 may be computed as: abscissa = abscissa of the C1 vertex + width of the crop box / 2, ordinate = ordinate of the C1 vertex + height of the crop box / 2. Alternatively: abscissa = (abscissa of the C2 vertex - abscissa of the C1 vertex) / 2 + abscissa of the C1 vertex, ordinate = (ordinate of the C4 vertex - ordinate of the C1 vertex) / 2 + ordinate of the C1 vertex.
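The two equivalent center-point formulas above can be sketched as follows (a minimal Python sketch with assumed names; for an axis-aligned crop box both forms give the same result):

```python
def crop_center(c1, c2, c4, width, height):
    # Form 1: offset the C1 vertex by half the crop box size.
    cx = c1[0] + width / 2
    cy = c1[1] + height / 2
    # Form 2: halfway between opposite edges (equivalent for an
    # axis-aligned rectangle whose size matches width/height).
    cx_alt = (c2[0] - c1[0]) / 2 + c1[0]
    cy_alt = (c4[1] - c1[1]) / 2 + c1[1]
    assert (cx, cy) == (cx_alt, cy_alt)
    return cx, cy
```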
According to the calculation formula of determining a straight line by two points, a second connecting line formed between the center point center of the cutting frame 401 and each vertex in the cutting frame 401 can be obtained based on the coordinates of the second vertex corresponding to each of the C1 vertex, the C2 vertex, the C3 vertex and the C4 vertex in the cutting frame 401 and the coordinates of the center point center of the cutting frame 401. Wherein, the second connecting line formed between the center point of the clipping frame 401 and each vertex in the clipping frame 401 includes: a line formed between the center point center of the clip frame 401 and the C1 vertex, a line formed between the center point center of the clip frame 401 and the C2 vertex, a line formed between the center point center of the clip frame 401 and the C3 vertex, and a line formed between the center point center of the clip frame 401 and the C4 vertex.
In this way, the line equations of the eight connection lines can be obtained. Intersecting the corresponding pairs of lines and applying the formula for the intersection point of two lines yields the intersection points of the first and second connection lines, namely the S1, S2, S3, and S4 intersection points.
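The two-point line construction and pairwise intersection can be sketched in Python (hypothetical helper names; points are screen coordinates with the origin at the upper-left corner):

```python
def line_through(p, q):
    # Line a*x + b*y = c determined by the two points p and q.
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersect(l1, l2):
    # Cramer's rule for the system {a1*x + b1*y = c1, a2*x + b2*y = c2}.
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines: no single intersection point
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```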
As shown in (a) in fig. 9, consider the scene in which the video frame 301 is not mirrored and the first rotation control 402 slides rightward, that is, the scene in which rulerAngle (the change in rotation angle caused by the sliding operation of the first rotation control 402) is greater than 0. Taking the C1 vertex as the first target vertex, its first connection line is the boundary line of the P1-P4 vertices, its second connection line is the center-C1 line, and the intersection between the center-C1 line and the P1-P4 boundary line is the S1 intersection point. Taking the C2 vertex as the first target vertex, its first connection line is the boundary line of the P1-P2 vertices, its second connection line is the center-C2 line, and the intersection between the center-C2 line and the P1-P2 boundary line is the S2 intersection point. Taking the C3 vertex as the first target vertex, its first connection line is the boundary line of the P2-P3 vertices, its second connection line is the center-C3 line, and the intersection between the center-C3 line and the P2-P3 boundary line is the S3 intersection point.
Taking the C4 vertex as the first target vertex, its first connection line is the boundary line of the P3-P4 vertices, its second connection line is the center-C4 line, and the intersection between the center-C4 line and the P3-P4 boundary line is the S4 intersection point.
As shown in (b) in fig. 9, consider the scene in which the video frame 301 is not mirrored and the first rotation control 402 slides leftward, that is, the scene in which rulerAngle is less than 0. The intersection between the center-C1 line and the P1-P2 boundary line is the S1 intersection point, the intersection between the center-C2 line and the P2-P3 boundary line is the S2 intersection point, the intersection between the center-C3 line and the P3-P4 boundary line is the S3 intersection point, and the intersection between the center-C4 line and the P1-P4 boundary line is the S4 intersection point.
Then, the electronic device calculates the first spacings between the four vertices of the crop box 401 and the center point center of the crop box 401, that is, the first spacing between the C1 vertex and center, the first spacing between the C2 vertex and center, the first spacing between the C3 vertex and center, and the first spacing between the C4 vertex and center.
The electronic device also calculates the second spacings between the four intersection points and the center point center of the crop box 401, that is, the second spacing between the S1 intersection point and center, the second spacing between the S2 intersection point and center, the second spacing between the S3 intersection point and center, and the second spacing between the S4 intersection point and center.
Then, the ratio of each first spacing to the corresponding second spacing is calculated to obtain the target ratio values corresponding to the four vertices of the crop box 401. The four target ratio values are, respectively: the first spacing between the C1 vertex and center divided by the second spacing between the S1 intersection point and center; the first spacing between the C2 vertex and center divided by the second spacing between the S2 intersection point and center; the first spacing between the C3 vertex and center divided by the second spacing between the S3 intersection point and center; and the first spacing between the C4 vertex and center divided by the second spacing between the S4 intersection point and center.
Finally, the maximum of the calculated target ratio values is determined as the first critical scaling of the video frame 301.
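The ratio-and-maximum rule can be sketched as follows (a minimal Python sketch with assumed names; crop_vertices[i] and intersections[i] are the C_i vertex and its corresponding S_i intersection point):

```python
import math

def first_critical_scale(crop_vertices, intersections, center):
    # For each crop-box vertex C_i and its intersection S_i on the frame
    # boundary, the target ratio is |C_i - center| / |S_i - center|;
    # the first critical scaling is the largest of the four ratios.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return max(dist(c, center) / dist(s, center)
               for c, s in zip(crop_vertices, intersections))
```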
In other embodiments, after the S1, S2, S3, and S4 intersection points between the first and second connection lines are calculated, a third spacing between the S1 and S3 intersection points, a fourth spacing between the S2 and S4 intersection points, a fifth spacing between the C1 and C3 vertices, and a sixth spacing between the C2 and C4 vertices may also be calculated; then the ratio of the fifth spacing to the third spacing and the ratio of the sixth spacing to the fourth spacing are calculated; finally, the larger of the two ratios is determined as the first critical scaling of the video frame 301.
In step 804, the electronic device determines whether the crop frame is located in the video frame in the rotation process according to the first critical scaling and the first scaling of the video frame in the rotation process relative to the original video frame.
In some embodiments, the electronic device may further need to obtain a first scaling of the current video picture relative to the original video picture during the rotation.
The original video frame refers to the video frame displayed when the video clipping interface 400 first starts to be displayed. Since the video frame is scaled equally in both directions, the first scaling may be the ratio of the width of the current video frame to the original width of the original video frame, or the ratio of the height of the current video frame to the original height of the original video frame.
Specifically, when the first critical scaling ratio is greater than the first scaling ratio, the electronic device determines that an area outside the video frame 301 in the rotation process exists in the crop box 401; when the first critical scale is less than or equal to the first scale, the electronic device determines that the crop box 401 is entirely within the video frame 301 during rotation.
When the crop box 401 is located entirely within the video frame 301 during rotation, the video frame 301 remains unchanged, and the electronic device does not change the size of the video frame 301. And when there is an area outside the video frame 301 during rotation in the crop box 401, the electronic device performs the following step 805.
In step 805, when there is an area in the crop frame outside the video frame in the rotation process, the electronic device performs an enlarging operation on the video frame in the rotation process according to the first critical scaling ratio, so that the crop frame is all located in the enlarged video frame.
In some embodiments, when there is an area in the crop box 401 that is outside the video frame 301 during rotation, in order for the crop box 401 to no longer include non-video frames other than the video frame 301, the electronic device may invoke a first critical scaling ratio to zoom in on the video frame 301 during rotation such that the crop box 401 is entirely within the zoomed in video frame 301.
When the video frame 301 is enlarged during rotation by the first critical scaling, the video frame 301 is also enlarged in equal proportion, i.e. the width and height of the video frame 301 are both enlarged by the first critical scaling.
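Steps 804 and 805 amount to a simple comparison between the first critical scaling and the current first scaling; a minimal sketch (assumed function name):

```python
def adjust_scale(first_critical_scale, first_scale):
    # Step 804/805: if the critical scale exceeds the current scale, part of
    # the crop box lies outside the rotating frame, so the frame is enlarged
    # (width and height equally) up to the critical scale; otherwise the
    # frame size is kept unchanged.
    if first_critical_scale > first_scale:
        return first_critical_scale
    return first_scale
```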
In some scenarios, the user may also perform a click operation on the second rotation control 403 in the video clipping interface 400, so that the electronic device receives a touch operation on the second rotation control 403. The second rotation control 403 is used to rotate the video frame 301 by an integer multiple of 90° according to the number of clicks when triggered.
Each time the second rotation control 403 is clicked, the video frame 301 rotates 90° counterclockwise, i.e., -90° is superimposed on the original angle. When the video frame 301 is not mirrored, that is, when mirror = 0, the video frame 301 rotates counterclockwise as the second rotation control 403 is clicked repeatedly, and the rotation angles are, in order, 0°, -90°, -180°, and -270°; after the video frame 301 is mirrored, that is, when mirror = 180°, the video frame 301 rotates clockwise as the second rotation control 403 is clicked repeatedly, and the rotation angles are, in order, 0°, 90°, 180°, and 270°.
Therefore, when performing a rotation operation on the video frame 301, setRotation(degree) needs to be called to set the rotation angle. The rotation angle includes the angle corresponding to the sliding operation on the first rotation control 402 and the angle corresponding to the click operation on the second rotation control 403, that is, rotation = rotation + rulerAngle, where rotation refers to the change in rotation angle caused by the click operation on the second rotation control 403.
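The angle bookkeeping described above can be sketched as follows (assumed names; mirror_deg is 0 or 180 as in the text, and clicks is the number of times the second rotation control has been clicked):

```python
def total_rotation(clicks, mirror_deg, ruler_angle):
    # Each click of the second rotation control adds -90° (or +90° after a
    # mirror, when mirror_deg == 180); the slider contributes ruler_angle,
    # and the value passed to setRotation is the sum:
    # rotation = click_rotation + rulerAngle.
    step = 90 if mirror_deg == 180 else -90
    click_rotation = step * (clicks % 4)
    return click_rotation + ruler_angle
```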
For a scene with a frame size ratio of 1:1 of the crop box 401, when the video frame 301 rotates by an integer multiple of 90 °, the electronic device does not enlarge the size of the video frame 301.
Consider a scene where the frame ratio of the crop box 401 is not 1:1, that is, where the width and height of the crop box 401 are not equal. If the video frame 301 rotates by 90°, the width and height of the crop box 401 are swapped: the width of the crop box 401 after the change is its height before the change, and the height after the change is its width before the change. Accordingly, the width and height of the video frame 301 also need to be changed according to the ratio of the crop box 401.
If the aspect ratio of the crop box 401 is greater than the aspect ratio of the operable area before the video frame 301 is not rotated, the scaled ratio of the video frame 301 after rotation is: the ratio of the height of the operable area to the width of the operable area. If the aspect ratio of the crop box 401 is smaller than the aspect ratio of the operable area before the video frame 301 is not rotated, the scaled ratio after the video frame 301 is rotated is: the ratio of the width of the operable area to the height of the operable area. The operable area refers to the largest area in which the crop box 401 can be operated in response to a gesture of the user.
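The rescaling rule for a 90° rotation can be sketched as follows (assumed names; the equal-ratio case is not specified in the text and is treated here as the "smaller" branch):

```python
def rotated_scale_ratio(crop_w, crop_h, area_w, area_h):
    # After a 90° rotation the crop box swaps width and height, and the
    # frame is rescaled by a ratio taken from the operable area:
    # height/width when the crop box is wider (larger aspect ratio) than
    # the operable area, width/height otherwise.
    if crop_w / crop_h > area_w / area_h:
        return area_h / area_w
    return area_w / area_h
```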
In some alternative embodiments, when using the video cropping function, the user may also adjust the size and position of the cropping frame 401 according to the position in the video frame 301 where cropping is actually required, which may be achieved by dragging the cropping frame 401.
Dragging the crop box is an important component function of video cropping. The crop box 401 has four boundary lines and four vertices, and when a boundary line or a vertex of the crop box 401 is dragged with a single finger, the corresponding processing differs according to the touch position.
Because the crop box 401 may or may not be constrained to its current frame ratio, dragging a boundary line or a vertex of the crop box 401 falls into two scenarios: a free-frame scenario and a fixed-frame scenario.
A free frame (e.g., the scene when the "free" frame control is clicked) means that the crop box 401 does not need to be dragged according to a fixed ratio: the aspect ratio of the crop box 401 can be changed arbitrarily while dragging, the drag position is limited to the range that allows dragging, and when the drag position exceeds that range, the previous position is maintained. A fixed frame means that the crop box 401 is scaled down or up at an equal frame ratio (e.g., 9:16, 3:4, etc.).
Taking a fixed frame as an example, in one scene the user may drag any one of the four boundary lines of the crop box 401 to reduce or enlarge the width and height of the crop box 401 in equal proportion; the position of the boundary line opposite the dragged one remains unchanged. As shown in fig. 10 (a), 401a is the crop box before dragging; when the lower boundary line of the crop box is dragged upward, the position of the upper boundary line remains unchanged, and 401b is the crop box after dragging. It can be seen that the frame ratio of the crop box is consistent before and after dragging.
Still taking a fixed frame as an example, in another scene the user may drag any one of the four vertices of the crop box 401 to reduce or enlarge the width and height of the crop box 401 in equal proportion; the position of the vertex diagonal to the dragged one remains unchanged. As shown in fig. 10 (b), 401a is the crop box before dragging; the lower-left vertex of the crop box can be dragged toward the upper-right vertex, so that the position of the upper-right vertex remains unchanged, and 401c is the crop box after dragging. It can be seen that the frame ratio of the crop box is consistent before and after dragging.
It should be noted that the crop box 401 is limited by certain drag boundaries during dragging, that is, the drag boundaries need to satisfy the following conditions: 1. when the crop box 401 is reduced, its width cannot be smaller than its minimum width and its height cannot be smaller than its minimum height, where the minimum width and height are determined by the view control corresponding to the crop box 401; 2. the drag boundary cannot exceed the maximum range (i.e., the operable area) in which the crop box 401 can be displayed. The maximum range may be defined as follows: the upper boundary is below the title bar, the lower boundary is above the "reset" control in the video clipping interface 400, the left boundary is a preset first distance from the left edge of the screen, and the right boundary is a preset second distance from the right edge of the screen, where the first and second distances may be equal. That is, the maximum width that the crop box 401 can display = parent layout width - left/right spacing, and the maximum height that the crop box 401 can display = parent layout height - up/down spacing - height of the title bar - height of the control bar (e.g., the "reset" control).
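The maximum displayable range of the crop box can be sketched as follows (assumed parameter names; all values in pixels):

```python
def max_crop_display(parent_w, parent_h, left_gap, right_gap,
                     top_gap, bottom_gap, title_h, control_h):
    # Maximum range the crop box can occupy (the operable area):
    # width  = parent layout width  - left/right spacing
    # height = parent layout height - up/down spacing
    #          - title bar height - control bar height
    max_w = parent_w - left_gap - right_gap
    max_h = parent_h - top_gap - bottom_gap - title_h - control_h
    return max_w, max_h
```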
As shown in fig. 11, the first rectangular frame 63 represents the region where the enlarged video screen is located, the second rectangular frame 64 (i.e., parent layout) represents the maximum range that the crop frame 401 can display, the third rectangular frame 65 represents the position of the crop frame 401 before dragging, and the fourth rectangular frame 66 represents the drag boundary where the crop frame 401 allows dragging.
Therefore, the left boundary of the drag boundary may take the maximum of the coordinate values of the left boundaries of the first rectangular frame 63 and the second rectangular frame 64, that is, the left boundary of the first rectangular frame 63. The right boundary of the drag boundary may take the minimum of the coordinate values of the right boundaries of the two frames, that is, the right boundary of the second rectangular frame 64. The upper boundary of the drag boundary may take the maximum of the coordinate values of the upper boundaries of the two frames, that is, the upper boundary of the first rectangular frame 63. The lower boundary of the drag boundary may take the minimum of the coordinate values of the lower boundaries of the two frames, that is, the lower boundary of the second rectangular frame 64.
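The boundary selection for the unrotated case can be sketched as follows (assumed names; rectangles are (left, top, right, bottom) tuples with the origin at the upper-left corner and y growing downward):

```python
def axis_aligned_drag_bounds(frame, parent):
    # The allowed drag boundary is the intersection of the enlarged
    # video-frame rectangle and the maximum displayable rectangle of the
    # crop box: max of the lefts/tops, min of the rights/bottoms.
    fl, ft, fr, fb = frame
    pl, pt, pr, pb = parent
    return (max(fl, pl), max(ft, pt), min(fr, pr), min(fb, pb))
```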
The above describes how to determine the drag boundary in a scene where the rotation angle of the video frame 301 is 0° or an integer multiple of 90°: the region boundary of the video frame 301 is compared with the boundary of the maximum range that the crop box 401 can display, and the upper, lower, left, and right boundaries are selected from them.
Whereas, in the case where the rotation angle for the video screen 301 is not 0 ° and is not an integer multiple of 90 °, the electronic device may determine the drag boundary in the following manner.
In the case where the rotation angle of the video screen 301 is not 0 ° and is not an integer multiple of 90 °, the electronic device receives a drag operation on the crop frame 401; the electronic device responds to the drag operation to acquire fifth vertex coordinates of each vertex in the video picture 301; the electronic device calculates a fourth connecting line formed between every two adjacent vertexes in the video picture 301 according to the fifth vertex coordinates of each vertex in the video picture 301; the electronic equipment calculates the dragging boundary of the cutting frame 401 according to the intersection point coordinates corresponding to the intersection points formed between the extension lines of the four boundary lines in the cutting frame 401 and the fourth connecting line; the drag boundary is a boundary corresponding to the maximum range in which the crop box 401 allows drag.
For example, as shown in fig. 12, four vertices in the video frame 301 are P1, P2, P3, and P4, respectively, and each vertex in the crop box 401 is a C1, C2, C3, and C4 vertex, respectively.
According to the calculation formula of determining a straight line from two points, a fourth connecting line formed between every two adjacent vertexes in the video frame 301 can be obtained based on the fifth vertex coordinates corresponding to the P1 vertex, the P2 vertex, the P3 vertex and the P4 vertex in the video frame 301. The fourth lines included in the video frame 301 are respectively: a boundary line formed between the P1 vertex and the P2 vertex, a boundary line formed between the P2 vertex and the P3 vertex, a boundary line formed between the P3 vertex and the P4 vertex, and a boundary line formed between the P1 vertex and the P4 vertex.
Accordingly, based on the C1, C2, C3, and C4 vertices of the crop box 401, the four boundary lines of the crop box 401 can be obtained, which are, respectively, the boundary line between the C1 and C2 vertices, the boundary line between the C2 and C3 vertices, the boundary line between the C3 and C4 vertices, and the boundary line between the C1 and C4 vertices.
According to the mode, a linear formula corresponding to 8 boundary lines can be obtained, the two boundary lines are intersected according to the corresponding relation, and 8 intersection points can be obtained through calculation according to a calculation formula of intersection points of the two linear lines.
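The two geometric primitives used here can be sketched as follows; this is an illustrative Python sketch (function names are ours, not from the patent), representing a line through two points in the form ax + by = c and intersecting two such lines by Cramer's rule.

```python
def line_through(p, q):
    """Return (a, b, c) with a*x + b*y = c for the line through points p and q."""
    (x1, y1), (x2, y2) = p, q
    a = y2 - y1
    b = x1 - x2
    c = a * x1 + b * y1
    return a, b, c

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c); None if parallel."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel or coincident lines: no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

Applying `line_through` to each adjacent vertex pair of the picture and the crop box yields the 8 boundary-line equations, and `intersect` yields the Q intersection points.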
As shown in (a) in fig. 12, for the scenario in which rulerAngle is greater than 0: the intersection point between the extension line of the boundary line of the C1 vertex-C4 vertex and the boundary line of the P1 vertex-P4 vertex is the Q1 intersection point, and the intersection point between the extension line of the boundary line of the C1 vertex-C4 vertex and the boundary line of the P3 vertex-P4 vertex is the Q2 intersection point; the intersection point between the extension line of the boundary line of the C2 vertex-C3 vertex and the boundary line of the P1 vertex-P2 vertex is the Q3 intersection point, and the intersection point between the extension line of the boundary line of the C2 vertex-C3 vertex and the boundary line of the P2 vertex-P3 vertex is the Q4 intersection point; the intersection point between the extension line of the boundary line of the C1 vertex-C2 vertex and the boundary line of the P1 vertex-P4 vertex is the Q5 intersection point, and the intersection point between the extension line of the boundary line of the C1 vertex-C2 vertex and the boundary line of the P1 vertex-P2 vertex is the Q6 intersection point; the intersection point between the extension line of the boundary line of the C3 vertex-C4 vertex and the boundary line of the P3 vertex-P4 vertex is the Q7 intersection point, and the intersection point between the extension line of the boundary line of the C3 vertex-C4 vertex and the boundary line of the P2 vertex-P3 vertex is the Q8 intersection point.
The drag boundary on the left side of the crop box 401 is the maximum value of the abscissa of the Q5 intersection point and the abscissa of the Q7 intersection point, that is, the abscissa of the Q5 intersection point shown in (a) in fig. 12. The drag boundary on the right side of the crop box 401 is the minimum value of the abscissa of the Q6 intersection and the abscissa of the Q8 intersection, that is, the abscissa of the Q8 intersection shown in (a) in fig. 12. The drag boundary on the upper side of the crop box 401 is the maximum value of the ordinate of the Q1 intersection point and the ordinate of the Q3 intersection point, that is, the ordinate of the Q1 intersection point shown in (a) in fig. 12. The drag boundary on the lower side of the crop box 401 is the minimum value of the ordinate of the Q2 intersection point and the ordinate of the Q4 intersection point, that is, the ordinate of the Q4 intersection point shown in (a) of fig. 12. Therefore, four black bold lines shown in (a) in fig. 12 represent the drag boundary of the crop box 401.
As shown in (b) in fig. 12, for the scenario in which rulerAngle is less than 0: the intersection point between the extension line of the boundary line of the C1 vertex-C4 vertex and the boundary line of the P1 vertex-P2 vertex is the Q1 intersection point, and the intersection point between the extension line of the boundary line of the C1 vertex-C4 vertex and the boundary line of the P1 vertex-P4 vertex is the Q2 intersection point; the intersection point between the extension line of the boundary line of the C2 vertex-C3 vertex and the boundary line of the P2 vertex-P3 vertex is the Q3 intersection point, and the intersection point between the extension line of the boundary line of the C2 vertex-C3 vertex and the boundary line of the P3 vertex-P4 vertex is the Q4 intersection point; the intersection point between the extension line of the boundary line of the C1 vertex-C2 vertex and the boundary line of the P1 vertex-P2 vertex is the Q5 intersection point, and the intersection point between the extension line of the boundary line of the C1 vertex-C2 vertex and the boundary line of the P2 vertex-P3 vertex is the Q6 intersection point; the intersection point between the extension line of the boundary line of the C3 vertex-C4 vertex and the boundary line of the P1 vertex-P4 vertex is the Q7 intersection point, and the intersection point between the extension line of the boundary line of the C3 vertex-C4 vertex and the boundary line of the P3 vertex-P4 vertex is the Q8 intersection point.
The drag boundary on the left side of the crop box 401 is the maximum value of the abscissa of the Q5 intersection and the abscissa of the Q7 intersection, that is, the abscissa of the Q7 intersection shown in (b) in fig. 12. The drag boundary on the right side of the crop box 401 is the minimum value of the abscissa of the Q6 intersection and the abscissa of the Q8 intersection, that is, the abscissa of the Q6 intersection shown in (b) in fig. 12. The drag boundary on the upper side of the crop box 401 is the maximum value of the ordinate of the Q1 intersection point and the ordinate of the Q3 intersection point, that is, the ordinate of the Q3 intersection point shown in (b) in fig. 12. The drag boundary on the lower side of the crop box 401 is the minimum value of the ordinate of the Q2 intersection point and the ordinate of the Q4 intersection point, that is, the ordinate of the Q2 intersection point shown in (b) in fig. 12. Therefore, four black bold lines shown in (b) in fig. 12 represent the drag boundary of the crop box 401.
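The min/max rules for both scenarios can be sketched as follows (names are hypothetical; coordinates are assumed to be in the usual y-down screen coordinate system, with the Q intersections indexed as in fig. 12):

```python
def drag_bounds(q):
    """q maps 1..8 to (x, y) intersection points; returns (left, right, top, bottom)."""
    left   = max(q[5][0], q[7][0])  # larger of the two left-side abscissas
    right  = min(q[6][0], q[8][0])  # smaller of the two right-side abscissas
    top    = max(q[1][1], q[3][1])  # larger of the two upper-side ordinates
    bottom = min(q[2][1], q[4][1])  # smaller of the two lower-side ordinates
    return left, right, top, bottom
```

In each direction the more constraining of the two candidate intersections is taken, so the crop box can never be dragged past either intersecting picture edge.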
In this embodiment of the application, the electronic device may determine the operation type from the position touched by the user's finger. When the touch position falls on a boundary line or vertex of the crop box 401, the operation type is determined to be a drag of the crop box 401; when a single-finger touch falls inside the crop box 401, the operation type is determined to be a drag operation on the video picture 301; when a two-finger touch falls inside the crop box 401, the operation type is determined to be a zoom operation on the video picture 301. Accordingly, when the operation type is determined to be a drag of the crop box 401, the drag boundary of the crop box 401 can be determined in the manner described above.
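The touch-classification rule just described can be sketched as a small dispatch function; the flag names and return labels are illustrative assumptions, not the patent's API:

```python
def classify(touch_count, on_crop_edge, inside_crop):
    """Map a touch (finger count, hit-test flags) to an operation type."""
    if on_crop_edge:                        # boundary line or vertex of the crop box
        return "drag_crop_box"
    if inside_crop and touch_count == 1:    # one finger inside the crop box
        return "drag_video_picture"
    if inside_crop and touch_count == 2:    # two fingers inside the crop box
        return "zoom_video_picture"
    return "none"
```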
Fig. 13 is a flowchart of a video processing method applied after the video picture is dragged, according to an embodiment of the present application. Referring to fig. 13, the video processing method applied after the video picture is dragged may specifically include the following steps:
In step 1301, the electronic device receives a drag operation on the video picture displayed in the video clip interface.

In some embodiments, when the user wants to pan the video picture 301 displayed in the video clip interface 400, the user may touch the video picture 301 inside the crop box 401 with a single finger and drag it in a certain direction, and the electronic device receives the drag operation on the video picture 301 displayed in the video clip interface 400.

Two points should be noted when dragging the video picture 301: first, the finger must not leave the video picture 301 displayed on the screen during the drag; second, a rebound mechanism needs to be applied after the drag ends and the finger leaves the screen, so that the inside of the crop box 401 remains fully covered by the video picture 301, i.e., no content outside the video picture 301 may appear inside the crop box 401.
In step 1302, in response to the drag operation, the electronic device obtains the third vertex coordinates of each vertex of the dragged video picture and the fourth vertex coordinates of each vertex of the crop box.

In step 1303, the electronic device determines, from the third vertex coordinates and the fourth vertex coordinates, whether the crop box is located entirely within the dragged video picture.

In some embodiments, after receiving the drag operation on the video picture 301, the electronic device may, in response to the drag operation, determine whether the crop box 401 is located entirely within the dragged video picture 301 from the third vertex coordinates of each vertex of the dragged video picture 301 and the fourth vertex coordinates of each vertex of the crop box 401.
If the crop box 401 is located entirely within the dragged video picture 301, the electronic device does not rebound the video picture 301, and the video picture 301 remains unchanged.

After the drag operation on the video picture 301 ends and the user's finger leaves the screen, the electronic device may distinguish the following two cases when determining whether the crop box 401 is located entirely within the dragged video picture 301.
In the first case, rulerAngle is equal to 0, i.e., the rotation angle of the video picture 301 is 0° or an integer multiple of 90°. In this case, the electronic device compares the upper boundary value of the dragged video picture 301 with the upper boundary value of the crop box 401, the lower boundary value of the dragged video picture 301 with the lower boundary value of the crop box 401, the left boundary value of the dragged video picture 301 with the left boundary value of the crop box 401, and the right boundary value of the dragged video picture 301 with the right boundary value of the crop box 401.

When the left boundary value of the crop box 401 is greater than or equal to the left boundary value of the dragged video picture 301, the right boundary value of the crop box 401 is less than or equal to the right boundary value of the dragged video picture 301, the upper boundary value of the crop box 401 is greater than or equal to the upper boundary value of the dragged video picture 301, and the lower boundary value of the crop box 401 is less than or equal to the lower boundary value of the dragged video picture 301, it is determined that the crop box 401 is located entirely within the dragged video picture 301.

When at least one of the four conditions is not satisfied, it is determined that part of the crop box 401 lies outside the dragged video picture 301.

The left boundary value of the crop box 401 refers to the abscissa of its upper-left and lower-left vertices, and the left boundary value of the video picture 301 refers to the abscissa of its upper-left and lower-left vertices. The right boundary value of the crop box 401 refers to the abscissa of its upper-right and lower-right vertices, and the right boundary value of the video picture 301 refers to the abscissa of its upper-right and lower-right vertices. The upper boundary value of the crop box 401 refers to the ordinate of its upper-left and upper-right vertices, and the upper boundary value of the video picture 301 refers to the ordinate of its upper-left and upper-right vertices. The lower boundary value of the crop box 401 refers to the ordinate of its lower-left and lower-right vertices, and the lower boundary value of the video picture 301 refers to the ordinate of its lower-left and lower-right vertices.
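A minimal sketch of this first-case check, assuming each rectangle is given as (left, top, right, bottom) in y-down screen coordinates (a representation assumed for illustration):

```python
def crop_inside_video(crop, video):
    """True when all four boundary-value comparisons hold (rotation multiple of 90 degrees)."""
    cl, ct, cr, cb = crop
    vl, vt, vr, vb = video
    # crop must not extend past the video picture on any side
    return cl >= vl and cr <= vr and ct >= vt and cb <= vb
```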
After the video picture 301 is rotated 90° counterclockwise, the visually upper-left vertex of the video picture 301 is actually its upper-right vertex, the visually lower-left vertex is actually its upper-left vertex, the visually lower-right vertex is actually its lower-left vertex, and the visually upper-right vertex is actually its lower-right vertex. For this case, where the video picture 301 is rotated but the rotation angle is an integer multiple of 90°, the present solution compares the visually-ordered vertex coordinates with the vertex coordinates of the crop box 401 to determine whether a rebound is needed, and calculates the rebound distance when a rebound is required.
In the second case, rulerAngle is not equal to 0, i.e., the rotation angle of the video picture 301 is not 0° and is not an integer multiple of 90°. In this case, the electronic device calculates, from the third vertex coordinates of each vertex of the video picture 301, the third connecting lines formed between every two adjacent vertices of the video picture 301; the electronic device calculates the coordinates of the perpendicular foot (drop point) from each vertex of the crop box 401 to the corresponding third connecting line; and the electronic device determines whether the crop box 401 is located within the dragged video picture 301 from the drop point coordinates and the fourth vertex coordinates.
As shown in fig. 14, after the drag operation is finished, four vertices in the video frame 301 are P1 vertex, P2 vertex, P3 vertex, and P4 vertex, respectively, and each vertex in the crop box 401 is C1 vertex (i.e., upper left vertex), C2 vertex (i.e., upper right vertex), C3 vertex (i.e., lower right vertex), and C4 vertex (i.e., lower left vertex), respectively.
According to the formula for determining a straight line from two points, the third connecting lines formed between every two adjacent vertices of the video picture 301 can be obtained from the third vertex coordinates corresponding to the P1, P2, P3, and P4 vertices. The third connecting lines corresponding to the video picture 301 include: the boundary line formed between the P1 vertex and the P2 vertex, the boundary line formed between the P2 vertex and the P3 vertex, the boundary line formed between the P3 vertex and the P4 vertex, and the boundary line formed between the P1 vertex and the P4 vertex.
The electronic device calculates the coordinates of the drop point from each vertex of the crop box 401 to the corresponding third connecting line, that is, from each of the C1, C2, C3, and C4 vertices. The drop points corresponding to the four vertices of the crop box 401 are the drop14, drop12, drop23, and drop34 points, respectively, and the electronic device determines whether the crop box 401 is located within the dragged video picture 301 from the fourth vertex coordinates of the four vertices and the coordinates of their corresponding drop points.
As shown in (a) of fig. 14, for the scenario in which rulerAngle is greater than 0: the drop14 point is the drop point from the C1 vertex onto the P1 vertex-P4 vertex boundary line, the drop12 point is the drop point from the C2 vertex onto the P1 vertex-P2 vertex boundary line, the drop23 point is the drop point from the C3 vertex onto the P2 vertex-P3 vertex boundary line, and the drop34 point is the drop point from the C4 vertex onto the P3 vertex-P4 vertex boundary line.
As shown in (b) of fig. 14, for the scenario in which rulerAngle is less than 0: the drop14 point is the drop point from the C4 vertex onto the P1 vertex-P4 vertex boundary line, the drop12 point is the drop point from the C1 vertex onto the P1 vertex-P2 vertex boundary line, the drop23 point is the drop point from the C2 vertex onto the P2 vertex-P3 vertex boundary line, and the drop34 point is the drop point from the C3 vertex onto the P3 vertex-P4 vertex boundary line.
When the abscissas of the C1 and C4 vertices are greater than or equal to the abscissa of the drop14 point, the ordinates of the C1 and C2 vertices are greater than or equal to the ordinate of the drop12 point, the abscissas of the C2 and C3 vertices are less than or equal to the abscissa of the drop23 point, and the ordinates of the C3 and C4 vertices are less than or equal to the ordinate of the drop34 point, it is determined that the crop box 401 is located entirely within the dragged video picture 301.
When at least one of the four conditions is not satisfied, it is determined that part of the crop box 401 lies outside the dragged video picture 301.
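The drop-point computation underlying this second-case check can be sketched as the standard foot-of-perpendicular projection; the function name and point representation are assumptions for illustration:

```python
def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular (drop point) from point p onto the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # scalar projection of (p - a) onto the edge direction (a -> b)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return ax + t * dx, ay + t * dy
```

Computing the four drop points this way and comparing their coordinates with the crop-box vertex coordinates reproduces the containment test described above.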
In step 1304, when part of the crop box lies outside the dragged video picture, the electronic device performs a rebound operation on the dragged video picture according to the rebound distance, so that the crop box is located entirely within the rebounded video picture.

In some embodiments, when part of the crop box 401 lies outside the dragged video picture 301, in order that the crop box 401 no longer contains any content outside the video picture 301, the electronic device may use the rebound distance to perform a rebound operation on the dragged video picture 301, so that the crop box 401 is located entirely within the rebounded video picture 301.

The electronic device therefore needs to calculate the rebound distance of the video picture 301 before performing the rebound operation on the dragged video picture 301. The calculation of the rebound distance can be divided into the following two cases.
In the first case, rulerAngle is equal to 0, i.e., the rotation angle of the video picture 301 is 0° or an integer multiple of 90°. In this case, the electronic device determines, among the vertices of the dragged video picture 301, the fifth target vertex closest to the center point of the crop box 401; the electronic device then calculates the rebound distance from the fourth vertex coordinate of the crop-box vertex with the same orientation as the fifth target vertex and the third vertex coordinate of the fifth target vertex.
If the fifth target vertex, i.e., the vertex of the dragged video picture 301 closest to the center point of the crop box 401, is the upper-left vertex of the video picture 301, the vertex of the crop box 401 with the same orientation is the upper-left vertex of the crop box 401. Accordingly, when the abscissa of the upper-left vertex of the video picture 301 is greater than the abscissa of the upper-left vertex of the crop box 401, the rebound distance in the X-axis direction equals the abscissa of the upper-left vertex of the crop box 401 minus the abscissa of the upper-left vertex of the video picture 301; otherwise the rebound distance in the X-axis direction equals 0. When the ordinate of the upper-left vertex of the video picture 301 is greater than the ordinate of the upper-left vertex of the crop box 401, the rebound distance in the Y-axis direction equals the ordinate of the upper-left vertex of the crop box 401 minus the ordinate of the upper-left vertex of the video picture 301; otherwise the rebound distance in the Y-axis direction equals 0.

If the fifth target vertex is the upper-right vertex of the video picture 301, the vertex of the crop box 401 with the same orientation is the upper-right vertex of the crop box 401. Accordingly, when the abscissa of the upper-right vertex of the video picture 301 is less than the abscissa of the upper-right vertex of the crop box 401, the rebound distance in the X-axis direction equals the abscissa of the upper-right vertex of the crop box 401 minus the abscissa of the upper-right vertex of the video picture 301; otherwise it equals 0. When the ordinate of the upper-right vertex of the video picture 301 is greater than the ordinate of the upper-right vertex of the crop box 401, the rebound distance in the Y-axis direction equals the ordinate of the upper-right vertex of the crop box 401 minus the ordinate of the upper-right vertex of the video picture 301; otherwise it equals 0.

If the fifth target vertex is the lower-right vertex of the video picture 301, the vertex of the crop box 401 with the same orientation is the lower-right vertex of the crop box 401. Accordingly, when the abscissa of the lower-right vertex of the video picture 301 is less than the abscissa of the lower-right vertex of the crop box 401, the rebound distance in the X-axis direction equals the abscissa of the lower-right vertex of the crop box 401 minus the abscissa of the lower-right vertex of the video picture 301; otherwise it equals 0. When the ordinate of the lower-right vertex of the video picture 301 is less than the ordinate of the lower-right vertex of the crop box 401, the rebound distance in the Y-axis direction equals the ordinate of the lower-right vertex of the crop box 401 minus the ordinate of the lower-right vertex of the video picture 301; otherwise it equals 0.

If the fifth target vertex is the lower-left vertex of the video picture 301, the vertex of the crop box 401 with the same orientation is the lower-left vertex of the crop box 401. Accordingly, when the abscissa of the lower-left vertex of the video picture 301 is greater than the abscissa of the lower-left vertex of the crop box 401, the rebound distance in the X-axis direction equals the abscissa of the lower-left vertex of the crop box 401 minus the abscissa of the lower-left vertex of the video picture 301; otherwise it equals 0. When the ordinate of the lower-left vertex of the video picture 301 is less than the ordinate of the lower-left vertex of the crop box 401, the rebound distance in the Y-axis direction equals the ordinate of the lower-left vertex of the crop box 401 minus the ordinate of the lower-left vertex of the video picture 301; otherwise it equals 0.
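As a sketch of this first-case rebound, the upper-left-vertex branch can be written as follows (y-down coordinates assumed; names are illustrative, and the other three branches differ only in the inequality directions):

```python
def rebound_top_left(video_tl, crop_tl):
    """Rebound distance (dx, dy) when the fifth target vertex is the upper-left vertex."""
    vx, vy = video_tl
    cx, cy = crop_tl
    dx = cx - vx if vx > cx else 0  # X rebound only when the picture edge passed the crop edge
    dy = cy - vy if vy > cy else 0  # Y rebound likewise
    return dx, dy
```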
In the second case, rulerAngle is not equal to 0, i.e., the rotation angle of the video picture 301 is not 0° and is not an integer multiple of 90°. In this case, the electronic device determines, among the vertices of the dragged video picture 301, the second target vertex closest to the center point of the crop box 401; the electronic device obtains the critical coordinates of the critical point corresponding to the second target vertex, the critical point having the same orientation as the second target vertex; the electronic device then determines whether more than one vertex of the crop box 401 lies outside the dragged video picture 301, from the third vertex coordinate of the second target vertex, the critical coordinates of the critical point, the fourth vertex coordinate of the third target vertex, and the drop point coordinates of the two target drop points. The third target vertex is the vertex of the crop box 401 with the same orientation as the second target vertex, and the two target drop points are the drop points, on the two boundary lines of the video picture 301 adjacent to the second target vertex, of the two crop-box vertices associated with the second target vertex.
When at least two vertices of the crop box 401 lie outside the dragged video picture 301, the electronic device uses the offset distance between the critical point and the second target vertex as the rebound distance. When one vertex of the crop box 401 lies outside the dragged video picture 301, the electronic device obtains the fourth vertex coordinate of the fourth target vertex, i.e., the crop-box vertex located outside the video picture 301, and uses the offset distance between the fourth vertex coordinate of the fourth target vertex and the drop point coordinates of its corresponding target drop point as the rebound distance.
Consider first the scenario in which the second target vertex, i.e., the vertex of the dragged video picture 301 closest to the center point of the crop box 401, is the upper-left vertex (i.e., the P1 vertex) of the video picture 301. The third target vertex of the crop box 401 is then the upper-left vertex (i.e., the C1 vertex) of the crop box 401, and the two target drop points are the drop14 point and the drop12 point.
Therefore, the electronic device may determine whether more than one vertex of the crop box 401 lies outside the dragged video picture 301 from the third vertex coordinate of the P1 vertex, the critical coordinates of the critical point corresponding to the P1 vertex, the fourth vertex coordinate of the C1 vertex, the coordinates of the drop14 point, and the coordinates of the drop12 point.
The critical point corresponding to the P1 vertex may be referred to as the upper-left critical point leftTop, which may be calculated from the fourth vertex coordinate of the C1 vertex of the crop box 401, the width or height of the crop box 401, and the rotation angle (i.e., rulerAngle) of the video picture 301 set via the first rotation control 402. Specifically, when rulerAngle is greater than 0, the abscissa of the upper-left critical point leftTop equals the abscissa of the C1 vertex + wSin, and the ordinate of leftTop equals the ordinate of the C1 vertex - wCos; when rulerAngle is less than 0, the abscissa of leftTop equals the abscissa of the C1 vertex - hCos, and the ordinate of leftTop equals the ordinate of the C1 vertex + hSin. Here, wSin = width of the crop box 401 × sin(rulerAngle) × sin(rulerAngle), wCos = width of the crop box 401 × sin(rulerAngle) × cos(rulerAngle), hSin = height of the crop box 401 × sin(rulerAngle) × sin(rulerAngle), and hCos = height of the crop box 401 × sin(rulerAngle) × cos(rulerAngle).
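One possible reading of the leftTop formulas above, sketched in Python; the handling of the rulerAngle sign and the use of radians are our assumptions, so this is illustrative only:

```python
import math

def left_top_critical(c1, w, h, ruler_angle):
    """leftTop critical point for the P1 vertex; angle in radians (assumed)."""
    x, y = c1
    s, c = math.sin(ruler_angle), math.cos(ruler_angle)
    w_sin, w_cos = w * s * s, w * s * c   # wSin, wCos as defined in the text
    h_sin, h_cos = h * s * s, h * s * c   # hSin, hCos as defined in the text
    if ruler_angle > 0:
        return x + w_sin, y - w_cos
    return x - h_cos, y + h_sin
```

Geometrically, for a positive angle this offsets C1 by a vector of length w·sin(rulerAngle) along the normal of the rotated top edge, which is consistent with the edge of length w just fitting inside the rotated picture.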
When the third vertex coordinate of the P1 vertex and the critical coordinates of the upper-left critical point leftTop satisfy the first condition, it is determined that at least two vertices of the crop box 401 lie outside the dragged video picture 301. The first condition is satisfied when the abscissa of the P1 vertex is greater than the abscissa of the upper-left critical point leftTop and the ordinate of the P1 vertex is greater than the ordinate of leftTop; the first condition is not satisfied when the abscissa of the P1 vertex is less than or equal to the abscissa of leftTop, and/or the ordinate of the P1 vertex is less than or equal to the ordinate of leftTop.
When the third vertex coordinate of the P1 vertex and the critical coordinates of the upper-left critical point leftTop do not satisfy the first condition, the electronic device further determines whether the second condition is satisfied, from the fourth vertex coordinate of the C1 vertex, the coordinates of the drop14 point, and the coordinates of the drop12 point. When the second condition is satisfied, it is determined that at least two vertices of the crop box 401 lie outside the dragged video picture 301; when the second condition is also not satisfied, it is determined that one vertex of the crop box 401 lies outside the dragged video picture 301.
The second condition is satisfied when the abscissa of the C1 vertex is less than the abscissa of the drop14 point and the ordinate of the C1 vertex is less than the ordinate of the drop12 point; the second condition is not satisfied when the abscissa of the C1 vertex is greater than or equal to the abscissa of the drop14 point, and/or the ordinate of the C1 vertex is greater than or equal to the ordinate of the drop12 point.
When the first condition or the second condition is satisfied, the electronic device uses the offset distance between the upper-left critical point leftTop and the P1 vertex as the rebound distance. That is, the rebound distance in the X-axis direction equals the abscissa of the upper-left critical point leftTop minus the abscissa of the P1 vertex, and the rebound distance in the Y-axis direction equals the ordinate of leftTop minus the ordinate of the P1 vertex.
As shown in fig. 15, 301a represents the video picture before rebound after the drag operation. When both the C1 vertex and the C2 vertex of the crop box 401 lie outside the video picture, the upper-left vertex P1 of the video picture springs back to the upper-left critical point leftTop after the drag operation ends; 301b represents the rebounded video picture.
In the case where neither the first condition nor the second condition is satisfied: when the abscissa of the C1 vertex is less than the abscissa of the drop14 point, the ordinate of the C1 vertex is greater than or equal to the ordinate of the drop12 point, and rulerAngle is greater than 0, the fourth target vertex of the crop box 401 is the C1 vertex; the rebound distance in the X-axis direction therefore equals the abscissa of the C1 vertex minus the abscissa of the drop14 point, and the rebound distance in the Y-axis direction equals the ordinate of the C1 vertex minus the ordinate of the drop14 point. When the abscissa of the C1 vertex is less than the abscissa of the drop14 point, the ordinate of the C1 vertex is greater than or equal to the ordinate of the drop12 point, and rulerAngle is less than 0, the fourth target vertex of the crop box 401 is the C4 vertex; the rebound distance in the X-axis direction therefore equals the abscissa of the C4 vertex minus the abscissa of the drop14 point, and the rebound distance in the Y-axis direction equals the ordinate of the C4 vertex minus the ordinate of the drop14 point.
In the case where neither the first condition nor the second condition is satisfied: when the abscissa of the C1 vertex is greater than or equal to the abscissa of the drop14 point, the ordinate of the C1 vertex is less than the ordinate of the drop12 point, and rulerAngle is greater than 0, the fourth target vertex of the crop box 401 is the C2 vertex; the rebound distance in the X-axis direction therefore equals the abscissa of the C2 vertex minus the abscissa of the drop12 point, and the rebound distance in the Y-axis direction equals the ordinate of the C2 vertex minus the ordinate of the drop12 point. When the abscissa of the C1 vertex is greater than or equal to the abscissa of the drop14 point, the ordinate of the C1 vertex is less than the ordinate of the drop12 point, and rulerAngle is less than 0, the fourth target vertex of the crop box 401 is the C1 vertex; the rebound distance in the X-axis direction therefore equals the abscissa of the C1 vertex minus the abscissa of the drop12 point, and the rebound distance in the Y-axis direction equals the ordinate of the C1 vertex minus the ordinate of the drop12 point.
As shown in fig. 16, 301a represents the video picture before the rebound. When the C1 vertex of the crop box 401 is located outside the video picture, after the drag operation ends, the drop14 drop point of the video picture rebounds to the C1 vertex.
In the second case, the second target vertex, i.e., the vertex of the dragged video picture 301 closest to the center point of the crop box 401, is the upper right vertex (i.e., the P2 vertex) of the video picture 301. The third target vertex in the crop box 401 is the upper right vertex (i.e., the C2 vertex) of the crop box 401, and the two drop points are the drop23 drop point and the drop12 drop point.
The critical point corresponding to the P2 vertex may be referred to as the upper right critical point righttop. When rulerAngle is greater than 0, the abscissa of the upper right critical point righttop is equal to the abscissa of the C2 vertex + hCos, and the ordinate of the upper right critical point righttop is equal to the ordinate of the C2 vertex + hSin; when rulerAngle is less than 0, the abscissa of the upper right critical point righttop is equal to the abscissa of the C2 vertex - wSin, and the ordinate of the upper right critical point righttop is equal to the ordinate of the C2 vertex - wCos.
When the abscissa of the P2 vertex is less than the abscissa of the upper right critical point righttop and the ordinate of the P2 vertex is greater than the ordinate of the upper right critical point righttop, the first condition is satisfied; otherwise, the first condition is not satisfied. Further, the second condition is satisfied when the ordinate of the C2 vertex is less than the ordinate of the drop12 drop point and the abscissa of the C2 vertex is greater than the abscissa of the drop23 drop point.
In the case where the first condition or the second condition described above is satisfied, the electronic device uses the offset distance between the upper right critical point righttop and the P2 vertex as the rebound distance. That is, the rebound distance in the X-axis direction is equal to the abscissa of the upper right critical point righttop minus the abscissa of the P2 vertex, and the rebound distance in the Y-axis direction is equal to the ordinate of the upper right critical point righttop minus the ordinate of the P2 vertex.
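By way of illustration, the critical-point and rebound-distance calculation for the P2 vertex may be sketched as follows. This is a hypothetical Python sketch, not part of the embodiment; the function name, the tuple-based point representation, and the degree-based angle convention are assumptions.

```python
import math

def rebound_for_p2(c2, p2, w, h, ruler_angle):
    """Rebound distance when the P2 vertex (upper right vertex of the
    video picture) is the vertex closest to the crop-box center.

    c2, p2: (x, y) coordinates of the C2 and P2 vertices;
    w, h: width and height of the video picture;
    ruler_angle: rotation angle in degrees (its sign follows the text).
    """
    rad = math.radians(abs(ruler_angle))
    h_cos, h_sin = h * math.cos(rad), h * math.sin(rad)
    w_sin, w_cos = w * math.sin(rad), w * math.cos(rad)
    if ruler_angle > 0:
        # righttop = C2 + (hCos, hSin)
        righttop = (c2[0] + h_cos, c2[1] + h_sin)
    else:
        # righttop = C2 - (wSin, wCos)
        righttop = (c2[0] - w_sin, c2[1] - w_cos)
    # rebound distance = critical point minus the P2 vertex, per axis
    return (righttop[0] - p2[0], righttop[1] - p2[1])
```

The same pattern (compute the critical point from the matching crop-box vertex, then subtract the picture vertex) applies to the P1, P3, and P4 cases with their respective formulas.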
In the case where neither the first condition nor the second condition described above is satisfied, when the abscissa of the C2 vertex is less than or equal to the abscissa of the drop23 drop point and the ordinate of the C2 vertex is less than the ordinate of the drop12 drop point, the rebound distance in the X-axis direction is equal to x12 minus the abscissa of the drop12 drop point, and the rebound distance in the Y-axis direction is equal to the ordinate of the C2 vertex minus the ordinate of the drop12 drop point. Wherein, when rulerAngle is greater than 0, x12 represents the abscissa of the C2 vertex; when rulerAngle is less than 0, x12 represents the abscissa of the C1 vertex.
When neither the first condition nor the second condition is satisfied, and the abscissa of the C2 vertex is greater than the abscissa of the drop23 drop point and the ordinate of the C2 vertex is greater than or equal to the ordinate of the drop12 drop point, the rebound distance in the X-axis direction is equal to the abscissa of the C2 vertex minus the abscissa of the drop23 drop point, and the rebound distance in the Y-axis direction is equal to y23 minus the ordinate of the drop23 drop point. Wherein, when rulerAngle is greater than 0, y23 represents the ordinate of the C3 vertex; when rulerAngle is less than 0, y23 represents the ordinate of the C2 vertex.
In the third case, the second target vertex, i.e., the vertex of the dragged video picture 301 closest to the center point of the crop box 401, is the lower right vertex (i.e., the P3 vertex) of the video picture 301. The third target vertex in the crop box 401 is the lower right vertex (i.e., the C3 vertex) of the crop box 401, and the two drop points are the drop34 drop point and the drop23 drop point.
The critical point corresponding to the P3 vertex may be referred to as the lower right critical point rightbottom. When rulerAngle is greater than 0, the abscissa of the lower right critical point rightbottom = the abscissa of the C3 vertex - wSin, and the ordinate of the lower right critical point rightbottom = the ordinate of the C3 vertex + wCos; when rulerAngle is less than 0, the abscissa of the lower right critical point rightbottom = the abscissa of the C3 vertex + hCos, and the ordinate of the lower right critical point rightbottom = the ordinate of the C3 vertex - hSin.
When the abscissa of the P3 vertex is less than the abscissa of the lower right critical point rightbottom and the ordinate of the P3 vertex is less than the ordinate of the lower right critical point rightbottom, the first condition is satisfied; otherwise, the first condition is not satisfied. Further, the second condition is satisfied when the ordinate of the C3 vertex is greater than the ordinate of the drop34 drop point and the abscissa of the C3 vertex is greater than the abscissa of the drop23 drop point.
In the case where the first condition or the second condition described above is satisfied, the electronic device uses the offset distance between the lower right critical point rightbottom and the P3 vertex as the rebound distance. That is, the rebound distance in the X-axis direction is equal to the abscissa of the lower right critical point rightbottom minus the abscissa of the P3 vertex, and the rebound distance in the Y-axis direction is equal to the ordinate of the lower right critical point rightbottom minus the ordinate of the P3 vertex.
In the case where neither the first condition nor the second condition described above is satisfied, when the ordinate of the C3 vertex is greater than the ordinate of the drop34 drop point and the abscissa of the C3 vertex is less than or equal to the abscissa of the drop23 drop point, the rebound distance in the X-axis direction is equal to x34 minus the abscissa of the drop34 drop point, and the rebound distance in the Y-axis direction is equal to the ordinate of the C3 vertex minus the ordinate of the drop34 drop point. Wherein, when rulerAngle is greater than 0, x34 represents the abscissa of the C4 vertex; when rulerAngle is less than 0, x34 represents the abscissa of the C3 vertex.
When neither the first condition nor the second condition is satisfied, and the abscissa of the C3 vertex is greater than the abscissa of the drop23 drop point and the ordinate of the C3 vertex is less than or equal to the ordinate of the drop34 drop point, the rebound distance in the X-axis direction is equal to the abscissa of the C3 vertex minus the abscissa of the drop23 drop point, and the rebound distance in the Y-axis direction is equal to y23 minus the ordinate of the drop23 drop point. Wherein, when rulerAngle is greater than 0, y23 represents the ordinate of the C3 vertex; when rulerAngle is less than 0, y23 represents the ordinate of the C2 vertex.
In the fourth case, the second target vertex, i.e., the vertex of the dragged video picture 301 closest to the center point of the crop box 401, is the lower left vertex (i.e., the P4 vertex) of the video picture 301. The third target vertex in the crop box 401 is the lower left vertex (i.e., the C4 vertex) of the crop box 401, and the two drop points are the drop34 drop point and the drop14 drop point.
The critical point corresponding to the P4 vertex may be referred to as the lower left critical point leftbottom. When rulerAngle is greater than 0, the abscissa of the lower left critical point leftbottom = the abscissa of the C4 vertex - hCos, and the ordinate of the lower left critical point leftbottom = the ordinate of the C4 vertex - hSin; when rulerAngle is less than 0, the abscissa of the lower left critical point leftbottom = the abscissa of the C4 vertex + wSin, and the ordinate of the lower left critical point leftbottom = the ordinate of the C4 vertex + wCos.
When the abscissa of the P4 vertex is greater than the abscissa of the lower left critical point leftbottom and the ordinate of the P4 vertex is less than the ordinate of the lower left critical point leftbottom, the first condition is satisfied; otherwise, the first condition is not satisfied. Further, the second condition is satisfied when the abscissa of the C4 vertex is less than the abscissa of the drop14 drop point and the ordinate of the C4 vertex is greater than the ordinate of the drop34 drop point.
When the first condition or the second condition is satisfied, the electronic device uses the offset distance between the lower left critical point leftbottom and the P4 vertex as the rebound distance. That is, the rebound distance in the X-axis direction is equal to the abscissa of the lower left critical point leftbottom minus the abscissa of the P4 vertex, and the rebound distance in the Y-axis direction is equal to the ordinate of the lower left critical point leftbottom minus the ordinate of the P4 vertex.
In the case where neither the first condition nor the second condition described above is satisfied, when the abscissa of the C4 vertex is less than the abscissa of the drop14 drop point and the ordinate of the C4 vertex is less than or equal to the ordinate of the drop34 drop point, the rebound distance in the X-axis direction is equal to the abscissa of the C4 vertex minus the abscissa of the drop14 drop point, and the rebound distance in the Y-axis direction is equal to y14 minus the ordinate of the drop14 drop point. Wherein, when rulerAngle is greater than 0, y14 represents the ordinate of the C1 vertex; when rulerAngle is less than 0, y14 represents the ordinate of the C4 vertex.
When neither the first condition nor the second condition is satisfied, and the ordinate of the C4 vertex is greater than the ordinate of the drop34 drop point and the abscissa of the C4 vertex is greater than or equal to the abscissa of the drop14 drop point, the rebound distance in the X-axis direction is equal to x34 minus the abscissa of the drop34 drop point, and the rebound distance in the Y-axis direction is equal to the ordinate of the C4 vertex minus the ordinate of the drop34 drop point. Wherein, when rulerAngle is greater than 0, x34 represents the abscissa of the C4 vertex; when rulerAngle is less than 0, x34 represents the abscissa of the C3 vertex.
Fig. 17 is a flowchart of a video processing method after video picture scaling according to an embodiment of the present application. Referring to fig. 17, the method for processing a scaled video picture may specifically include the following steps:
In step 1701, the electronic device receives a zoom operation on a video frame displayed within the video cropping interface.
In some embodiments, when the user wants to zoom in on the video frame 301 displayed in the video clip interface 400, the user may touch the video frame 301 in the clip frame 401 with two fingers, and the two fingers slide in directions gradually away from or toward each other, and the electronic device may receive a zoom operation on the video frame 301 displayed in the video clip interface 400.
After the user touches the video frame 301 in the crop frame 401 with two fingers, when the distance between the two fingers is gradually increased, the video frame 301 is enlarged, and when the distance between the two fingers is gradually decreased, the video frame 301 is reduced.
In step 1702, the electronic device obtains a second scaling of the scaled video picture relative to the original video picture in response to the scaling operation.
In step 1703, the electronic device calculates a second critical scaling of the video frame.
In practical use, the enlarging operation on the video picture 301 has no upper limit; in principle, the video picture 301 can be enlarged infinitely. The zoom-out operation, however, needs to follow a rule: after the zoom-out operation on the video picture 301 ends and the user's finger leaves the screen, the crop box 401 needs to be controlled to be located entirely within the zoomed video picture 301, that is, the video picture 301 must fill the crop box 401.
After the zoom-out operation on the video picture 301 ends, the zoom-out ratio at which the video picture 301 just fills the crop box 401 is called the second critical scaling.
In some embodiments, upon receiving a scaling operation on the video picture 301, the electronic device may, in response to the scaling operation, calculate the second scaling of the scaled video picture 301 relative to the original video picture and the second critical scaling of the video picture, respectively. The electronic device then needs to determine the magnitude relationship between the second scaling of the current video picture relative to the original video picture and the second critical scaling, so as to determine whether to perform the enlarging rebound on the video picture 301.
When two fingers touch the video picture 301, the distance between the two fingers may be calculated as the initial distance; then the latest distance between the two fingers during the zooming process is calculated, and the ratio of the latest distance to the initial distance is used as the second scaling. During the actual scaling, the second scaling is continuously recalculated.
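The second-scaling calculation above can be sketched as follows. This is a hypothetical Python sketch for illustration only; the function and parameter names are assumptions.

```python
import math

def pinch_scale(finger1, finger2, initial_distance):
    """Second scaling: ratio of the current two-finger distance to the
    initial distance measured when the fingers first touched the screen.

    finger1, finger2: current (x, y) touch positions of the two fingers.
    """
    current = math.hypot(finger2[0] - finger1[0], finger2[1] - finger1[1])
    return current / initial_distance
```

In practice this would be re-evaluated on every touch-move event, so that the second scaling tracks the pinch gesture continuously.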
The second critical scaling may be calculated separately in the following two cases.
In the first case, rulerAngle is equal to 0, that is, the rotation angle of the video picture 301 is 0° or an integer multiple of 90°. In this case, the electronic device uses the ratio of the target size of the crop box 401 to the original size of the video picture 301 as the second critical scaling. The target size of the crop box 401 is the height of the crop box 401, and the original size of the video picture 301 is the original height of the video picture 301; alternatively, the target size of the crop box 401 is the width of the crop box 401, and the original size of the video picture 301 is the original width of the video picture 301.
The original height of the video frame 301 refers to the height of the video frame at the beginning of the display of the video clip interface 400; the original width of the video frame 301 refers to the width of the video frame at the beginning of the display of the video clip interface 400.
Specifically, the electronic device may first determine whether the crop box 401 is height-full or width-full. When the aspect ratio of the crop box 401 is smaller than the aspect ratio of the operable area, it is determined that the crop box 401 is height-full; when the aspect ratio of the crop box 401 is larger than the aspect ratio of the operable area, it is determined that the crop box 401 is width-full. The operable area refers to the largest area in which the crop box 401 can be operated in response to a gesture of the user.
For the case where the crop box 401 is height-full, the electronic device calculates a first ratio of the height of the crop box 401 to the original height of the video picture 301, and then determines whether the product of the original width of the video picture 301 and the first ratio is smaller than the width of the crop box 401. When the product of the original width of the video picture 301 and the first ratio is greater than or equal to the width of the crop box 401, the second critical scaling is equal to the first ratio, i.e., the ratio of the height of the crop box 401 to the original height of the video picture 301; when the product of the original width of the video picture 301 and the first ratio is smaller than the width of the crop box 401, the second critical scaling is equal to the ratio of the width of the crop box 401 to the original width of the video picture 301.
For the case where the crop box 401 is width-full, the electronic device calculates a second ratio of the width of the crop box 401 to the original width of the video picture 301, and then determines whether the product of the original height of the video picture 301 and the second ratio is smaller than the height of the crop box 401. When the product of the original height of the video picture 301 and the second ratio is greater than or equal to the height of the crop box 401, the second critical scaling is equal to the second ratio, i.e., the ratio of the width of the crop box 401 to the original width of the video picture 301; when the product of the original height of the video picture 301 and the second ratio is smaller than the height of the crop box 401, the second critical scaling is equal to the ratio of the height of the crop box 401 to the original height of the video picture 301.
In the second case, rulerAngle is not equal to 0, that is, the rotation angle of the video picture 301 is neither 0° nor an integer multiple of 90°. In this case, the electronic device calculates the critical distance of the video picture 301 according to the width of the crop box 401, the height of the crop box 401, and the rotation angle (i.e., rulerAngle) of the video picture 301 under the triggering of the first rotation control; the electronic device determines the ratio of the critical distance to the original size of the video picture 301 as the second critical scaling; the original size of the video picture 301 is the original width of the video picture 301 or the original height of the video picture 301.
Specifically, the calculation of the second critical scaling when rulerAngle is not equal to 0 may differ according to the landscape/portrait state of the video picture 301, the number of clicks of the second rotation control 403, and other factors. Illustratively, when the width of the video picture 301 is greater than or equal to the height of the video picture 301, the initial display state of the video picture 301 is landscape display; when the width of the video picture 301 is smaller than the height of the video picture 301, the initial display state of the video picture 301 is portrait display.
First, when the initial display state of the video frame 301 is a landscape display and the number of clicks of the second rotary control 403 is 0 or even, the critical distance is equal to the width of the crop box 401× sin (rulerAngle) +the height of the crop box 401× cos (rulerAngle), and the second critical scaling is equal to the ratio of the critical distance to the original height of the video frame 301.
Second, when the initial display state of the video frame 301 is a landscape display and the number of clicks of the second rotary control 403 is odd, the critical distance is equal to the height of the crop box 401× sin (rulerAngle) +the width of the crop box 401× cos (rulerAngle), and the second critical scaling is equal to the ratio of the critical distance to the original height of the video frame 301.
Third, when the initial display state of the video frame 301 is a portrait display and the number of clicks of the second rotary control 403 is 0 or even, the critical distance is equal to the height of the crop box 401× sin (rulerAngle) +the width of the crop box 401× cos (rulerAngle), and the second critical scaling is equal to the ratio of the critical distance to the original width of the video frame 301.
Fourth, when the initial display state of the video frame 301 is vertical screen display and the number of clicks of the second rotary control 403 is odd, the critical distance is equal to the width of the crop box 401× sin (rulerAngle) +the height of the crop box 401× cos (rulerAngle), and the second critical scaling is equal to the ratio of the critical distance to the original width of the video frame 301.
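The four branches above can be condensed into one sketch, since the first and fourth branches share one critical-distance formula and the second and third share the other. This is a hypothetical Python sketch; the function and parameter names are assumptions.

```python
import math

def critical_scale_rotated(crop_w, crop_h, video_w, video_h,
                           ruler_angle, rotate_clicks):
    """Second critical scaling when rulerAngle != 0.

    rotate_clicks: number of clicks of the second rotation control 403.
    The landscape/portrait state is inferred from the original video size.
    """
    rad = math.radians(abs(ruler_angle))
    landscape = video_w >= video_h
    even_clicks = rotate_clicks % 2 == 0
    if landscape == even_clicks:
        # landscape + even clicks, or portrait + odd clicks
        critical = crop_w * math.sin(rad) + crop_h * math.cos(rad)
    else:
        # landscape + odd clicks, or portrait + even clicks
        critical = crop_h * math.sin(rad) + crop_w * math.cos(rad)
    # landscape divides by the original height, portrait by the original width
    return critical / (video_h if landscape else video_w)
```

Each branch reproduces the corresponding formula in the text: for example, landscape display with zero clicks gives width of the crop box × sin(rulerAngle) + height of the crop box × cos(rulerAngle), divided by the original height.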
As shown in fig. 18, 301d represents the video picture when the crop box 401 is just filled. The C6 point is the foot of the perpendicular from the C4 vertex of the crop box 401 to the P1 vertex-P4 vertex boundary line, and the C5 point is the foot of the perpendicular from the C4 vertex to the P2 vertex-P3 vertex boundary line. The distance between the C4 vertex and the C6 point is equal to the height of the crop box 401 × sin (rulerAngle), the distance between the C4 vertex and the C5 point is equal to the width of the crop box 401 × cos (rulerAngle), and the distance between the C5 point and the C6 point is the critical distance. The second critical scaling at this time is equal to the distance between the C5 point and the C6 point divided by the original width of the video picture 301.
In step 1704, the electronic device determines whether the crop box is located in the scaled video frame according to the second critical scaling and the second scaling.
In some embodiments, when the second critical scale is greater than the second scale, the electronic device determines that there is an area in the crop box 401 that is outside of the scaled video frame 301; when the second critical scaling ratio is less than or equal to the second scaling ratio, the electronic device determines that the crop box 401 is entirely within the scaled video frame 301.
When the crop box 401 is located entirely within the scaled video frame 301, then the electronic device does not zoom in on the video frame 301, and the video frame 301 remains unchanged. And when there is an area outside the zoomed video frame 301 in the crop box 401, the electronic device performs the following step 1705.
In step 1705, when there is an area outside the scaled video frame in the crop frame, the electronic device performs an enlarging operation on the scaled video frame according to the second critical scaling ratio, so that the crop frame is all located in the enlarged video frame.
In some embodiments, when there is an area in the crop box 401 that is outside the scaled video picture 301, in order that the crop box 401 no longer contains content other than the video picture 301, the electronic device may apply the second critical scaling to enlarge the scaled video picture 301, such that the crop box 401 is entirely within the enlarged video picture 301.
In other scenarios, after the user performs the zooming process on the video frame 301 and drags the zoomed video frame 301, the electronic device needs to update the axis point of the video frame 301. The axis point refers to the center point of the video frame 301 as it rotates or zooms.
In the case where the zoomed video picture 301 is not dragged, the axis point of the video picture 301 is by default the center point of the video picture 301. In the coordinate system of the axis point, the origin of the X axis and the Y axis is the upper left corner of the original video picture. For example, when the width and height of the original video picture are 100×100, the coordinate value of the original axis point of the original video picture is (50, 50). The coordinate values of the axis point cannot exceed the width and height of the video picture.
When the video frame 301 is rotated or scaled, the video frame 301 is changed with the center point of the video frame 301 as an axis point. In use, when the user rotates or zooms the video frame again after dragging the zoomed video frame 301, if the center point of the video frame 301 is still used as the axis point, a part of the video frame 301 may appear outside the cropping frame 401.
Based on this, the embodiment of the present application needs to always use the center point of the crop box 401 as the axis point of the video frame 301. After the zoomed video frame 301 is dragged, a deviation occurs between the center point of the video frame 301 and the center point of the crop frame 401, so after the zoomed video frame 301 is dragged, the electronic device needs to update the axis point of the video frame 301, so that the axis point of the updated video frame 301 is the center point of the crop frame 401, and the subsequent electronic device rotates or zooms the video frame 301 with the updated axis point.
The calculation of the axis point coordinates always depends on the original width and original height of the video frame 301. For example, an original video frame of 100×100 has a default axis point (50, 50), and at this time, after the original video frame is enlarged twice, the width and height of the video frame become 200×200. After the axis points of the video frame 301 are updated, the coordinates of the updated axis points remain (50, 50).
In some embodiments, the axis points of the video frame 301 may be updated as follows: after the drag operation is performed on the zoomed video frame 301, the electronic device obtains the sixth vertex coordinates of each vertex of the zoomed video frame 301 and the actual zoom scale of the video frame 301; the electronic device determines a first axis point of the video picture 301 according to the sixth vertex coordinates and the center point of the clipping frame 401; the electronic device determines a corresponding second axis point of the video picture 301 in the video picture 301 reduced by the actual scaling according to the actual position of the first axis point in the dragged video picture 301, and the video picture 301 is reduced by taking the original axis point as the scaling center point; the electronic device calculates a deviation value between the sixth vertex coordinate and the actual vertex coordinate of the video frame 301 after being scaled up by the actual scaling, and at this time, the video frame 301 is scaled up by taking the second axis point as the scaling center point; the electronic device updates the coordinates of the video frame 301 according to the deviation value.
According to the sixth vertex coordinates, the electronic device calculates a fifth connecting line and a sixth connecting line of the dragged video picture 301, where the fifth connecting line and the sixth connecting line are two adjacent boundary lines of the dragged video picture 301. The electronic device calculates a first axis point distance from the center point of the crop box 401 to the fifth connecting line and a second axis point distance from the center point of the crop box 401 to the sixth connecting line. The electronic device divides the first axis point distance by the actual width of the video picture 301 and then multiplies the result by the original width of the video picture 301 to obtain the abscissa of the first axis point; the electronic device divides the second axis point distance by the actual height of the video picture 301 and then multiplies the result by the original height of the video picture 301 to obtain the ordinate of the first axis point.
The actual scaling of the video picture 301 refers to a ratio of the width of the current video picture 301 to the original width of the video picture or a ratio of the height of the current video picture 301 to the original height of the video picture after the drag operation is performed on the scaled video picture 301.
As shown in fig. 19 (a), the vertices of the dragged video screen 301 are the P1 vertex, the P2 vertex, the P3 vertex, and the P4 vertex, respectively. A fifth connecting line can be calculated based on the sixth vertex coordinates corresponding to the P1 vertex and the P4 vertex, namely the fifth connecting line is a boundary line between the P1 vertex and the P4 vertex; and, a sixth connecting line can be calculated based on the sixth vertex coordinates corresponding to the P1 vertex and the P2 vertex, i.e. the sixth connecting line is the boundary line of the P1 vertex-P2 vertex.
The electronic device may calculate the first axis point according to a point-to-line distance formula. Specifically, a first axial distance from the center of the frame 401 to the boundary line between the P1 vertex and the P4 vertex and a second axial distance from the center of the frame 401 to the boundary line between the P1 vertex and the P2 vertex are calculated. The abscissa of the first axis point is equal to the first axis point distance divided by the actual width of the video frame 301 and then multiplied by the original width of the video frame 301, and the ordinate of the first axis point is equal to the second axis point distance divided by the actual height of the video frame 301 and then multiplied by the original height of the video frame 301.
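The point-to-line distance formula and the first-axis-point calculation may be sketched as follows. This is a hypothetical Python sketch; the function names and the tuple-based point representation are assumptions, not part of the embodiment.

```python
import math

def point_line_distance(p, a, b):
    """Distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    numerator = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return numerator / math.hypot(bx - ax, by - ay)

def first_axis_point(crop_center, p1, p2, p4,
                     actual_w, actual_h, orig_w, orig_h):
    """First axis point, expressed in original-size coordinates.

    p1-p4 span the fifth connecting line (P1 vertex-P4 vertex boundary),
    p1-p2 span the sixth connecting line (P1 vertex-P2 vertex boundary).
    """
    d1 = point_line_distance(crop_center, p1, p4)  # first axis point distance
    d2 = point_line_distance(crop_center, p1, p2)  # second axis point distance
    # normalize by the actual size, then rescale to the original size
    return (d1 / actual_w * orig_w, d2 / actual_h * orig_h)
```

Dividing by the actual size and multiplying by the original size keeps the axis-point coordinates in the original width/height range, as required above.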
As shown in fig. 19 (a), the original axis point of the video picture 301 is pivot1, and the first axis point calculated in the above manner is pivot2. Fig. 19 illustrates the scene where the aspect ratio of the crop box is 1:1 and the video picture is dragged toward the lower right side, so that the upper left vertex of the video picture 301 coincides with the upper left vertex of the crop box 401.
Then, the electronic device performs a zoom-out operation on the video picture 301 with the original axis point of the video picture 301 as the zoom center point according to the actual zoom scale. For example, as shown in (b) of fig. 19, when the actual scaling is 2, the electronic device uses the original axis point pivot1 as the zoom center point and reduces the width and height of the video picture 301 to 1/2.
Next, the electronic device determines a second axis point in the zoomed-out video frame 301 according to the actual position of the first axis point in the video frame 301 before zoomed-out. That is, the electronic device may invoke the setPivotX/Y method of the video frame 301 to update the calculated first axis point into the scaled-down video frame 301.
Illustratively, as shown in fig. 19 (c), since the calculated first axis point pivot2 is located at 1/4 of the video frame 301 before reduction (1/4 in both the X-axis direction and the Y-axis direction), the second axis point pivot3 is updated at 1/4 of the video frame 301 after reduction.
After the second axis is updated, the electronic device performs an enlarging operation on the video frame 301 with the second axis as a zoom center point according to the actual zoom scale. For example, as shown in (d) of fig. 19, if the actual scaling ratio is 2, the electronic device enlarges both the width and the height of the video frame 301 by 2 times with the second axis point pivot3 as the scaling center point.
Finally, the electronic device calculates a deviation value between the sixth vertex coordinate and the actual vertex coordinate of the enlarged video frame 301, and updates the coordinates of the video frame 301 according to the deviation value.
For example, as shown in (d) of fig. 19, the video frame 301 is enlarged with the second axis point pivot3 as the zoom center point and with the actual zoom scale, and there is a deviation between the vertices of the enlarged video frame 301 with respect to the vertices of the video frame 301 shown in (a) of fig. 19, so that it is necessary to perform displacement compensation for the deviation of the vertices of the enlarged video frame 301 to accurately adjust the axis point of the video frame 301.
Since the upper left vertex of the video picture 301 coincides with the upper left vertex of the crop box 401 after the scaled video picture was dragged, the sixth vertex coordinate of the upper left vertex of the dragged video picture 301 is the coordinate of the upper left vertex (i.e., the C1 vertex) of the crop box 401. At this time, displacement compensation is performed on the coordinates of the video picture 301 according to the deviation value between the coordinate of the C1 vertex and the actual vertex coordinate of the upper left vertex (i.e., the P1 vertex) of the enlarged video picture 301 shown in (d) of fig. 19. The second axis point pivot3 of the video picture after the displacement compensation is still located at 1/4 of the video picture 301.
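The shrink-retarget-enlarge-compensate sequence of fig. 19 may be sketched as follows for an unrotated picture. This is a hypothetical Python sketch; the function names, the vertex ordering P1-P4, and the fractional pivot representation are assumptions, not part of the embodiment.

```python
def scale_about(point, pivot, s):
    """Scale a point about a pivot by factor s."""
    return (pivot[0] + (point[0] - pivot[0]) * s,
            pivot[1] + (point[1] - pivot[1]) * s)

def update_axis_point(dragged_vertices, old_pivot, pivot_fraction, s):
    """Axis-point update after dragging a scaled, unrotated picture.

    dragged_vertices: sixth vertex coordinates [P1, P2, P3, P4];
    old_pivot: original axis point position on screen;
    pivot_fraction: relative (fx, fy) position of the first axis point
    inside the picture; s: actual zoom scale.
    """
    # (b) shrink by 1/s about the original axis point
    shrunk = [scale_about(v, old_pivot, 1 / s) for v in dragged_vertices]
    # (c) second axis point: same relative position inside the shrunk picture
    p1, p2, _, p4 = shrunk
    pivot3 = (p1[0] + (p2[0] - p1[0]) * pivot_fraction[0],
              p1[1] + (p4[1] - p1[1]) * pivot_fraction[1])
    # (d) enlarge by s about the second axis point
    enlarged = [scale_about(v, pivot3, s) for v in shrunk]
    # displacement compensation: shift so P1 matches its sixth vertex coordinate
    dx = dragged_vertices[0][0] - enlarged[0][0]
    dy = dragged_vertices[0][1] - enlarged[0][1]
    return [(x + dx, y + dy) for (x, y) in enlarged], pivot3
```

After the compensation the vertices coincide with the sixth vertex coordinates again, which matches the intent of the procedure: the picture is visually unchanged, and only its axis point has moved to the center point of the crop box.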
The above-described procedure describes a calculation procedure of the first critical scaling ratio in the rotation of the video frame 301, a calculation procedure of the drag boundary at the time of drag of the crop frame 401, a calculation procedure of the rebound distance after drag of the video frame 301, a calculation procedure of the second critical scaling ratio after scaling of the video frame 301, and an axis point update procedure of the video frame 301. The above-described process may be implemented by a video editing application in an electronic device.
After the video editing application has calculated the adjustment parameters of the current video picture, such as the first critical scaling ratio, the rebound distance and the second critical scaling ratio, it calls the application framework to record these adjustment parameters. The application framework in turn calls the GPU, which draws the video picture based on the adjustment parameters, and the adjusted video picture is finally displayed on the screen of the electronic device.
Fig. 20 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 100 shown in fig. 20 includes: memory 101, processor 110, and communication interface 102, wherein memory 101, processor 110, and communication interface 102 may communicate; illustratively, the memory 101, the processor 110, and the communication interface 102 may communicate over a communication bus.
The memory 101 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 101 may store a computer program whose execution is controlled by the processor 110 and which communicates through the communication interface 102, thereby implementing the video processing method provided in the above embodiments of the present application.
In a chip implementation, the communication interface 102 may be an input/output interface, a pin, a circuit, or the like.
The electronic device 100 of this embodiment may be configured to perform the steps of the foregoing method embodiments; the implementation principles and technical effects are similar and are not repeated here.
Embodiments of the present application also provide a computer-readable storage medium. The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media can include computer storage media and communication media, and can include any medium that can transfer a computer program from one place to another. The storage media may be any available media that can be accessed by a computer.
In one possible implementation, the computer-readable medium may include RAM, ROM, compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
In the above embodiments, the methods may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state disks (SSDs)), among others.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing detailed description of the embodiments has further described the objects, technical solutions and advantageous effects of the present application, and it should be understood that the foregoing is only a detailed description of the present application and is not intended to limit the scope of the present application, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the present application should be included in the scope of protection of the present application.

Claims (23)

1. A video processing method, comprising:
the electronic equipment receives a moving operation of a video picture displayed in a video clipping interface; the movement operation includes at least one of a rotation operation, a zoom operation, and a drag operation;
and the electronic equipment, in response to the moving operation and when an area located outside the video picture exists in the cutting frame, adjusts the video picture according to an adjustment parameter corresponding to the moving operation, so that the cutting frame is located entirely within the adjusted video picture.
2. The method of claim 1, wherein the electronic device, in response to the movement operation, when there is an area in the crop box that is located outside the video frame, adjusts the video frame according to an adjustment parameter corresponding to the movement operation, comprising:
the electronic equipment, in response to the moving operation, determines whether the cutting frame in the video clipping interface is located entirely within the video picture;
when the region outside the video picture exists in the cutting frame, the electronic equipment adjusts the video picture according to the adjusting parameters corresponding to the moving operation.
3. The method of claim 1 or 2, wherein the adjustment parameter for the rotation operation comprises a first critical scaling, the adjustment parameter for the scaling operation comprises a second critical scaling, and the adjustment parameter for the drag operation comprises a rebound distance.
4. The method of claim 3, wherein the electronic device adjusting the video frame according to the adjustment parameter corresponding to the movement operation comprises:
when the moving operation comprises the rotating operation, the electronic equipment performs an amplifying operation on the video picture in the rotating process according to the first critical scaling; and
when the moving operation comprises the dragging operation, the electronic equipment performs a rebound operation on the dragged video picture according to the rebound distance; and
when the moving operation comprises the scaling operation, the electronic equipment performs the amplifying operation on the scaled video picture according to the second critical scaling proportion.
5. The method of claim 4, wherein, when the move operation comprises the rotate operation, the electronic device receiving a move operation on a video frame displayed within a video cropping interface comprises:
The electronic equipment receives touch operation of a first rotation control in the video clipping interface; the first rotation control is used for rotating the video picture according to the angle scale value corresponding to the touch operation when triggered;
and the electronic equipment, in response to the moving operation, determining whether the cutting frame in the video clipping interface is located entirely within the video picture comprises:
the electronic equipment responds to the touch operation to obtain first vertex coordinates of all vertexes in the video picture in the rotating process and second vertex coordinates of all vertexes in the cutting frame;
the electronic equipment calculates a first critical scaling of the video picture according to the first vertex coordinates and the second vertex coordinates;
the electronic device determines whether the cropping frame is entirely located in the video picture in the rotating process according to the first critical scaling and the first scaling of the video picture in the rotating process relative to the original video picture.
6. The method of claim 5, wherein the electronic device determining whether the crop box is entirely within the video frame during rotation based on the first critical scaling and a first scaling of the video frame during rotation relative to an original video frame comprises:
When the first critical scaling ratio is larger than the first scaling ratio, the electronic equipment determines that an area outside the video picture in the rotating process exists in the cutting frame;
when the first critical scaling ratio is smaller than or equal to the first scaling ratio, the electronic device determines that the cutting frame is all located in the video picture in the rotating process.
7. The method of claim 5, wherein the computing, by the electronic device, a first critical scaling of the video frame based on the first vertex coordinate and the second vertex coordinate comprises:
the electronic equipment calculates and obtains a first connecting line formed between every two adjacent vertexes in the video picture according to the first vertex coordinates of each vertex in the video picture;
the electronic equipment calculates a second connecting line formed between the central point of the cutting frame and each vertex in the cutting frame according to the second vertex coordinates of each vertex in the cutting frame;
the electronic equipment calculates an intersection point formed between the first connecting line corresponding to a first target vertex and the corresponding second connecting line to obtain a target intersection point corresponding to the first target vertex; the first target vertex is any vertex in the cutting frame, a first connecting line corresponding to the first target vertex is a first connecting line closest to the first target vertex in the video picture, and a second connecting line corresponding to the first target vertex is a second connecting line formed between the first target vertex and the central point of the cutting frame;
The electronic equipment calculates a first interval between the first target vertex and the central point of the cutting frame and a second interval between the target intersection point and the central point of the cutting frame;
and the electronic equipment calculates a first critical scaling of the video picture according to the first interval and the second interval.
8. The method of claim 7, wherein the computing, by the electronic device, a first critical scaling of the video frame based on the first pitch and the second pitch comprises:
the electronic equipment determines the ratio of the first interval to the second interval as a target proportion value corresponding to the first target vertex;
and the electronic equipment determines the maximum value in the target scale values corresponding to the vertexes in the cutting frame as a first critical scaling of the video picture.
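The geometry recited in claims 7 and 8 can be sketched as follows (illustrative Python, not part of the claims; function names and tolerances are assumptions): for each vertex of the cutting frame, intersect the line from the cutting-frame centre through that vertex with the nearest boundary line of the rotated video picture, and take the largest ratio of the vertex distance to the intersection distance as the first critical scaling.

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines p1-p2 and p3-p4, or None if parallel."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def first_critical_scale(frame_vertices, crop_vertices):
    """Max over crop-frame vertices of |centre->vertex| / |centre->exit point|,
    where the exit point is where the ray from the crop-frame centre through
    the vertex crosses the (rotated) video picture's boundary."""
    cx = sum(x for x, _ in crop_vertices) / 4.0
    cy = sum(y for _, y in crop_vertices) / 4.0
    edges = [(frame_vertices[i], frame_vertices[(i + 1) % 4]) for i in range(4)]
    critical = 0.0
    for vx, vy in crop_vertices:
        d1 = math.hypot(vx - cx, vy - cy)          # first pitch
        d2 = None                                   # second pitch
        for a, b in edges:
            p = line_intersection((cx, cy), (vx, vy), a, b)
            if p is None:
                continue
            # keep only intersections on the vertex's side of the centre
            if (p[0] - cx) * (vx - cx) + (p[1] - cy) * (vy - cy) <= 0:
                continue
            dist = math.hypot(p[0] - cx, p[1] - cy)
            if d2 is None or dist < d2:
                d2 = dist   # the nearest edge is the one the ray actually crosses
        if d2:
            critical = max(critical, d1 / d2)
    return critical
```

A result greater than 1 means the cutting frame sticks out of the rotated picture, so the picture must be enlarged by at least that factor.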
9. The method of claim 4, wherein, when the move operation comprises the drag operation, the electronic device determining whether the crop box within the video cropping interface is located entirely within the video frame in response to the move operation comprises:
the electronic equipment responds to the dragging operation to obtain third vertex coordinates of all vertexes in the dragged video picture and fourth vertex coordinates of all vertexes in the cutting frame;
And the electronic equipment determines whether the cutting frame is completely positioned in the dragged video picture according to the third vertex coordinate and the fourth vertex coordinate.
10. The method of claim 9, wherein the determining, by the electronic device, whether the crop box is entirely within the dragged video frame based on the third vertex coordinate and the fourth vertex coordinate comprises:
under the condition that the rotation angle of the video picture is not 0 degree and is not an integer multiple of 90 degrees, the electronic equipment calculates a third connecting line formed between every two adjacent vertexes in the video picture according to the third vertex coordinates of each vertex in the video picture;
the electronic equipment calculates the vertical point coordinates from each vertex in the cutting frame to the corresponding third connecting line;
and the electronic equipment determines whether the cutting frame is completely positioned in the dragged video picture according to the vertical point coordinates and the fourth vertex coordinates.
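The vertical point in claim 10 is the foot of the perpendicular from a cutting-frame vertex onto a boundary line of the rotated video picture; comparing it with the vertex itself tells on which side of the boundary the vertex lies. A short sketch (illustrative Python; the function name is an assumption):

```python
def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular from point p onto the infinite line
    through a and b (a boundary line of the rotated video picture)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

# Vertex (1, 1) projected onto the horizontal line through (0, 0)-(2, 0):
foot = foot_of_perpendicular((1.0, 1.0), (0.0, 0.0), (2.0, 0.0))
```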
11. The method of claim 9, further comprising, prior to the electronic device performing a springback operation on the dragged video frame by the springback distance:
In the case that the rotation angle of the video picture is not 0 ° and is not an integer multiple of 90 °, the electronic device determines a second target vertex closest to the center point of the crop frame among the vertexes in the video picture after being dragged;
the electronic equipment acquires critical coordinates of a critical point corresponding to the second target vertex; the critical point is consistent with the azimuth of the second target vertex;
the electronic equipment determines whether the number of vertexes out of the dragged video picture in the cutting frame is larger than 1 according to a third vertex coordinate corresponding to the second target vertex, a critical coordinate corresponding to the critical point, a fourth vertex coordinate corresponding to the third target vertex and vertical point coordinates corresponding to the two target vertical points; the third target vertex is a vertex in the cutting frame, the direction of the vertex is consistent with that of the second target vertex, and the target vertical point is a vertical point from two vertexes corresponding to the second target vertex in the cutting frame to two boundary lines adjacent to the second target vertex in the video picture;
and when at least two vertexes in the cutting frame are positioned outside the dragged video picture, the electronic equipment takes the deviation distance between the critical point and the second target vertex as the rebound distance.
12. The method of claim 11, wherein after the electronic device determines whether the number of vertices located outside the dragged video frame in the crop box is greater than 1 according to the third vertex coordinate corresponding to the second target vertex, the critical coordinate corresponding to the critical point, the fourth vertex coordinate corresponding to the third target vertex, and the vertical point coordinates corresponding to the two target vertical points, further comprising:
when one vertex in the cutting frame is positioned outside the dragged video picture, the electronic equipment acquires a fourth vertex coordinate corresponding to a fourth target vertex positioned outside the video picture in the cutting frame;
and the electronic equipment takes the deviation distance between the fourth vertex coordinates corresponding to the fourth target vertex and the vertical point coordinates of the corresponding target vertical point as the rebound distance.
13. The method of claim 9, further comprising, prior to the electronic device performing a springback operation on the dragged video frame by the springback distance:
in the case that the rotation angle of the video picture is 0° or an integer multiple of 90°, the electronic device determines a fifth target vertex closest to the center point of the crop frame among the vertices of the dragged video picture;
And the electronic equipment calculates the rebound distance according to fourth vertex coordinates corresponding to the vertex with the same azimuth as the fifth target vertex in the cutting frame and third vertex coordinates corresponding to the fifth target vertex.
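For the axis-aligned case of claim 13 (rotation of 0° or a multiple of 90°), the rebound distance reduces to a per-axis offset between the crop-frame boundary and the corresponding frame boundary. A sketch (illustrative Python; the rectangle layout and names are assumptions):

```python
def rebound_axis_aligned(frame_rect, crop_rect):
    """frame_rect and crop_rect as (left, top, right, bottom) in screen
    coordinates. Returns the (dx, dy) translation that snaps the dragged
    frame back just far enough to cover the crop box again (assumes the
    frame is at least as large as the crop box)."""
    fl, ft, fr, fb = frame_rect
    cl, ct, cr, cb = crop_rect
    if fl > cl:        # frame dragged too far right: pull left
        dx = cl - fl
    elif fr < cr:      # frame dragged too far left: pull right
        dx = cr - fr
    else:
        dx = 0.0
    if ft > ct:        # frame dragged too far down: pull up
        dy = ct - ft
    elif fb < cb:      # frame dragged too far up: pull down
        dy = cb - fb
    else:
        dy = 0.0
    return dx, dy
```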
14. The method of claim 4, wherein when the move operation comprises the zoom operation, the electronic device determining whether a crop box within the video cropping interface is entirely within the video frame in response to the move operation comprises:
the electronic equipment responds to the scaling operation to obtain a second scaling ratio of the scaled video picture relative to the original video picture;
the electronic equipment calculates a second critical scaling of the video picture;
and the electronic equipment determines whether the cutting frame is completely positioned in the scaled video picture according to the second critical scaling and the second scaling.
15. The method of claim 14, wherein the electronic device determining whether the crop box is entirely within the scaled video frame based on the second critical scaling level and the second scaling level comprises:
When the second critical scaling ratio is larger than the second scaling ratio, the electronic equipment determines that an area outside the scaled video picture exists in the cutting frame;
and when the second critical scaling ratio is smaller than or equal to the second scaling ratio, the electronic equipment determines that the cutting frame is all positioned in the scaled video picture.
16. The method of claim 14, wherein the electronic device calculating a second critical scaling of the video picture comprises:
under the condition that the rotation angle of the video picture is not 0 degree and is not an integral multiple of 90 degrees, the electronic equipment calculates and obtains the critical distance of the video picture according to the width of the cutting frame, the height of the cutting frame and the rotation angle of the video picture under the triggering of the first rotation control;
the electronic equipment determines the ratio of the critical distance to the original size of the video picture as the second critical scaling; the original size of the video picture is the original width of the video picture or the original height of the video picture.
17. The method of claim 14, wherein the electronic device calculating a second critical scaling of the video picture comprises:
In the case that the rotation angle of the video picture is 0° or an integer multiple of 90°, the electronic device uses a ratio of the target size of the crop frame to the original size of the video picture as the second critical scaling;
the target size of the cutting frame is the height of the cutting frame, and the original size of the video picture is the original height of the video picture; or the target size of the cutting frame is the width of the cutting frame, and the original size of the video picture is the original width of the video picture.
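Claims 16 and 17 can be combined in one sketch. At 0° the second critical scaling is simply the ratio of the crop-frame size to the original picture size (claim 17); for a rotated picture, a plausible form of the critical distance in claim 16 is the rotated bounding-box span of the crop frame — this formula is an assumption, since the claim only says the distance is computed from the crop-frame width, height and rotation angle (illustrative Python):

```python
import math

def second_critical_scale(crop_w, crop_h, frame_w, frame_h, theta_deg):
    """Smallest uniform scale at which a picture of original size
    frame_w x frame_h, rotated by theta_deg, still covers an axis-aligned
    crop frame of crop_w x crop_h."""
    t = math.radians(theta_deg)
    # spans the rotated picture must cover to contain the crop frame
    need_w = crop_w * abs(math.cos(t)) + crop_h * abs(math.sin(t))
    need_h = crop_w * abs(math.sin(t)) + crop_h * abs(math.cos(t))
    return max(need_w / frame_w, need_h / frame_h)
```

At theta_deg = 0 this degenerates to max(crop_w / frame_w, crop_h / frame_h), matching claim 17's size ratio.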
18. The method according to any one of claims 1-4, further comprising:
in the case that the rotation angle of the video picture is not 0 ° and is not an integer multiple of 90 °, the electronic device receives a drag operation on the crop frame;
the electronic equipment responds to the drag operation to obtain fifth vertex coordinates of each vertex in the video picture;
the electronic equipment calculates a fourth connecting line formed between every two adjacent vertexes in the video picture according to the fifth vertex coordinates of all vertexes in the video picture;
the electronic equipment calculates a dragging boundary of the cutting frame according to intersection point coordinates corresponding to an intersection point formed between extension lines of four boundary lines in the cutting frame and the fourth connecting line; and the dragging boundary is a boundary corresponding to the maximum range of the allowable dragging of the cutting frame.
19. The method according to any one of claims 1-4, further comprising:
after drag operation is carried out on the zoomed video picture, the electronic equipment obtains sixth vertex coordinates of all vertexes of the zoomed video picture and actual zoom scale of the video picture;
the electronic equipment determines a first axis point of the video picture according to the sixth vertex coordinates and the center point of the cutting frame;
the electronic equipment determines a corresponding second axis point of the video picture in the video picture reduced by the actual scaling according to the actual position of the first axis point in the dragged video picture; the video picture is reduced by taking the original axis point as a scaling center point;
the electronic equipment calculates a deviation value between the sixth vertex coordinate and the actual vertex coordinate of the video picture amplified by the actual scaling; the video picture is amplified by taking the second axis point as a scaling center point;
and the electronic equipment updates the coordinates of the video picture according to the deviation value.
20. The method of claim 19, wherein the electronic device determining the first axis point of the video frame based on the sixth vertex coordinates and the center point of the crop box comprises:
The electronic equipment calculates a fifth connecting line and a sixth connecting line of the dragged video picture according to the sixth vertex coordinates; the fifth connecting line and the sixth connecting line are two adjacent boundary lines in the dragged video picture;
the electronic equipment calculates a first axis point distance from the center point of the cutting frame to the fifth connecting line and a second axis point distance from the center point of the cutting frame to the sixth connecting line;
the electronic equipment divides the first axis point distance by the actual width of the video picture and multiplies the result by the original width of the video picture to obtain the abscissa of the first axis point;
the electronic equipment divides the second axis point distance by the actual height of the video picture and multiplies the result by the original height of the video picture to obtain the ordinate of the first axis point.
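The axis-point computation of claim 20 can be sketched as follows (illustrative Python; vertex names and argument order are assumptions): the perpendicular distances from the cutting-frame centre to two adjacent boundary lines of the dragged picture are normalised by the picture's actual size and rescaled to its original size.

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def first_axis_point(crop_center, v_tl, v_tr, v_bl,
                     actual_w, actual_h, orig_w, orig_h):
    """First axis point in original-picture coordinates: distances from the
    crop-frame centre to the picture's left edge (v_tl-v_bl) and top edge
    (v_tl-v_tr), normalised and rescaled."""
    d_left = point_line_distance(crop_center, v_tl, v_bl)  # -> abscissa
    d_top = point_line_distance(crop_center, v_tl, v_tr)   # -> ordinate
    return (d_left / actual_w * orig_w, d_top / actual_h * orig_h)

# Unrotated 4x4 picture (original size 8x8), crop centre at (1, 1):
pivot = first_axis_point((1.0, 1.0), (0.0, 0.0), (4.0, 0.0), (0.0, 4.0),
                         4.0, 4.0, 8.0, 8.0)
```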
21. An electronic device comprising a memory for storing a computer program and a processor for invoking the computer program to perform the video processing method of any of claims 1 to 20.
22. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program or instructions which, when executed, implement the video processing method of any of claims 1 to 20.
23. A computer program product comprising a computer program which, when run, causes a computer to perform the video processing method of any one of claims 1 to 20.
CN202311400931.6A 2022-05-30 2022-08-15 Video processing method and electronic equipment Pending CN117692582A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2022106014356 2022-05-30
CN202210601435 2022-05-30
CN202210975830.0A CN116095249B (en) 2022-05-30 2022-08-15 Video processing method and electronic equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202210975830.0A Division CN116095249B (en) 2022-05-30 2022-08-15 Video processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117692582A true CN117692582A (en) 2024-03-12

Family

ID=86205170

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210975830.0A Active CN116095249B (en) 2022-05-30 2022-08-15 Video processing method and electronic equipment
CN202311400931.6A Pending CN117692582A (en) 2022-05-30 2022-08-15 Video processing method and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210975830.0A Active CN116095249B (en) 2022-05-30 2022-08-15 Video processing method and electronic equipment

Country Status (1)

Country Link
CN (2) CN116095249B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000181915A (en) * 1998-12-16 2000-06-30 Dainippon Printing Co Ltd Image allocation device
JP4013138B2 (en) * 2002-12-13 2007-11-28 富士フイルム株式会社 Trimming processing apparatus and trimming processing program
JP4946738B2 (en) * 2007-09-03 2012-06-06 セイコーエプソン株式会社 Image processing apparatus, image processing method, and image processing program
CN104461439B (en) * 2014-12-29 2019-05-31 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN105205780B (en) * 2015-10-19 2018-09-07 大唐网络有限公司 Picture method of cutting out and device
CN109544453A (en) * 2018-11-16 2019-03-29 北京中竞鸽体育文化发展有限公司 Image adjusting method and device, electronic equipment, storage medium
CN111667487A (en) * 2020-04-29 2020-09-15 平安科技(深圳)有限公司 Picture clipping method and device and computer equipment
JP2022069931A (en) * 2020-10-26 2022-05-12 株式会社Jvis Automated trimming program, automated trimming apparatus, and automated trimming method
CN114302226B (en) * 2021-12-28 2022-10-25 北京中科大洋信息技术有限公司 Intelligent cutting method for video picture

Also Published As

Publication number Publication date
CN116095249B (en) 2023-10-20
CN116095249A (en) 2023-05-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination