CN117641114A - Video processing method and device and electronic equipment - Google Patents

Video processing method and device and electronic equipment

Info

Publication number
CN117641114A
Authority
CN
China
Prior art keywords
sub
video
videos
shooting
video file
Prior art date
Legal status
Pending
Application number
CN202311541942.6A
Other languages
Chinese (zh)
Inventor
张波
Current Assignee
Aiku Software Technology Shanghai Co ltd
Original Assignee
Aiku Software Technology Shanghai Co ltd
Priority date
2023-11-17
Filing date
2023-11-17
Publication date
2024-03-01
Application filed by Aiku Software Technology Shanghai Co ltd filed Critical Aiku Software Technology Shanghai Co ltd
Priority to CN202311541942.6A
Publication of CN117641114A
Legal status: Pending


Abstract

The application discloses a video processing method, a video processing device, and electronic equipment, and belongs to the technical field of video processing. The method includes the following steps: receiving a segmentation input on a shooting interface; in response to the segmentation input, segmenting the shooting interface into N shooting areas, where N is an integer greater than 1; and carrying out division shooting of pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, where one shooting area correspondingly shoots one sub-video.

Description

Video processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of video processing, and particularly relates to a video processing method, a video processing device and electronic equipment.
Background
At present, users use mobile terminals more and more frequently, and accordingly use the camera to record daily life more and more often. However, in the current video shooting mode, the whole shooting interface is used to shoot the picture, so that a shot video with a complete picture is obtained. If a user wants to process only a part of the frame images in the shot video, the user can still only process the whole picture of those frame images, which results in poor pertinence and low efficiency.
Disclosure of Invention
The embodiments of the application aim to provide a video processing method, a video processing device, and electronic equipment, which can solve the problem of low efficiency caused by the fact that the existing video processing mode can only process the whole picture of a frame image.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides a video processing method, including:
receiving a segmentation input of a shooting interface;
responding to the segmentation input, segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1;
and carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the first input module is used for receiving segmentation input of a shooting interface;
the first response module is used for responding to the segmentation input and segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1;
and the shooting module is used for carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a segmentation input to a shooting interface is received; in response to the segmentation input, the shooting interface is segmented into N shooting areas; and split shooting of pictures is performed through the N shooting areas to obtain a first video file formed by combining N sub-videos, where one shooting area correspondingly shoots one sub-video.
According to this scheme, because the shooting interface is divided into N shooting areas for split shooting of the picture, each frame image of the video is in effect divided into N sub-images during shooting, which form N sub-videos. A user can therefore process the frame images in one or more sub-videos in a targeted manner as needed, which improves video processing efficiency.
Drawings
Fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of segmentation of a shooting interface provided in an embodiment of the present application;
fig. 3 is a first schematic diagram of displaying N sub-videos provided in an embodiment of the present application;
fig. 4 is a second schematic diagram of displaying N sub-videos provided in an embodiment of the present application;
fig. 5 is a specific flowchart of a video processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a block diagram of another electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be clearly described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description and in the claims are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the terms so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. In addition, the objects distinguished by "first," "second," and the like are usually of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The video processing method provided by the embodiment of the application is described in detail below by means of specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a video processing method, which may specifically include the following steps:
step 101, receiving a segmentation input to a shooting interface.
Specifically, when a user needs to shoot, the user enters a shooting interface and performs a segmentation input on the shooting interface as needed, that is, adjusts the number of dividing lines and their spacing positions. The electronic device receives the user's segmentation input on the shooting interface and thereby determines information such as the number and spacing positions of the dividing lines of the shooting interface.
For example, as shown in fig. 2, the user performs a segmentation input of two dividing lines in the horizontal direction and two dividing lines in the vertical direction on the shooting interface of the electronic device. That is, the dividing lines divide the shooting interface into three rows in the horizontal direction and three columns in the vertical direction, nine parts in total.
It will be appreciated that the manner of entering the shooting interface includes, but is not limited to, the following: the camera is opened on the electronic device, that is, the shooting interface 21 is entered; the video split recording mode is then turned on in the camera, and the user can make a segmentation input on the shooting interface 21. For example, the bottom of the shooting interface 21 is provided with a split video text control 22, and the user clicks the split video text control 22 to enter the video split recording mode. Alternatively, a split video recording function control may be provided in the shooting interface 21, and the user clicks this control to enter the video split recording mode.
And 102, responding to the segmentation input, and segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1.
Specifically, after receiving the segmentation input, the electronic device, in response to the segmentation input, divides the shooting interface into N shooting areas according to the positions of the dividing lines of the segmentation input. The user can adjust the number of shooting areas (i.e., the value of N) by adjusting the number of dividing lines in the segmentation input, and can adjust the position and size of each shooting area by adjusting the positions of those dividing lines.
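As an illustration of how N shooting areas could be computed from the dividing-line positions, the following sketch partitions the interface into rectangles; the function name, the normalized line coordinates, and the pixel sizes are assumptions introduced here for illustration and are not part of the disclosed method.

```python
# Minimal sketch (assumed representation, not the patented implementation):
# dividing-line positions are fractions of the interface width/height in (0, 1).

def split_into_areas(width, height, v_lines, h_lines):
    """Return the (len(v_lines)+1) * (len(h_lines)+1) shooting areas as (x, y, w, h)."""
    xs = [0.0] + sorted(v_lines) + [1.0]  # vertical dividing lines -> column edges
    ys = [0.0] + sorted(h_lines) + [1.0]  # horizontal dividing lines -> row edges
    areas = []
    for row in range(len(ys) - 1):
        for col in range(len(xs) - 1):
            x, y = int(xs[col] * width), int(ys[row] * height)
            w = int(xs[col + 1] * width) - x
            h = int(ys[row + 1] * height) - y
            areas.append((x, y, w, h))
    return areas

# The fig. 2 example: two horizontal and two vertical dividing lines give nine areas.
areas = split_into_areas(1080, 1920, v_lines=[1 / 3, 2 / 3], h_lines=[1 / 3, 2 / 3])
assert len(areas) == 9
```

Moving a dividing line simply changes the corresponding entry in `v_lines` or `h_lines`, which changes the position and size of the affected areas.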
And 103, carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
Specifically, after the user clicks the shooting button, split shooting of the picture is performed through the N shooting areas: each shooting area shoots its sub-video through its own encoder, and each encoder records the picture of one area, so that N sub-videos are finally formed. The N sub-videos are combined to form a first video file, that is, the first video file includes N sub-videos.
It should be noted that the N sub-videos in the first video file may be stored by compression or by writing to a file.
It can be understood that the sub-images of the same frame in the N sub-videos can be stitched to obtain a complete image, and that stitching all frame sub-images of the N sub-videos in this way yields the complete video.
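The relationship between frames, sub-images, and the stitched complete image can be illustrated with the sketch below; it treats frames as NumPy arrays and is only a pixel-level analogy, not the per-area encoder pipeline described above.

```python
import numpy as np

# A 3x3 grid of a 1080x1920 interface, each area given as (x, y, w, h) (assumed sizes).
areas = [(x, y, 360, 640) for y in (0, 640, 1280) for x in (0, 360, 720)]

def crop_frame(frame, areas):
    """Split one full frame into N sub-images, one per shooting area."""
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in areas]

def stitch_frame(sub_images, areas, width, height):
    """Re-assemble the sub-images of the same frame into the complete image."""
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for sub, (x, y, w, h) in zip(sub_images, areas):
        canvas[y:y + h, x:x + w] = sub
    return canvas

frame = np.random.randint(0, 256, (1920, 1080, 3), dtype=np.uint8)
sub_images = crop_frame(frame, areas)
restored = stitch_frame(sub_images, areas, width=1080, height=1920)
assert np.array_equal(frame, restored)  # stitching the same-frame sub-images restores the image
```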
In the above embodiment of the present application, a segmentation input to a shooting interface is received; in response to the segmentation input, the shooting interface is segmented into N shooting areas; and split shooting of the picture is performed through the N shooting areas to obtain a first video file composed of N sub-videos, where one shooting area correspondingly shoots one sub-video. Because the shooting interface is divided into N shooting areas for split shooting, each frame image of the video is in effect divided into N sub-images during shooting, which form N sub-videos. A user can therefore process the frame images in one or more sub-videos in a targeted manner as needed, which improves video processing efficiency.
As an optional specific embodiment of step 103, the performing the split shooting of the picture through the N shooting areas to obtain the first video file formed by combining N sub-videos may specifically include:
dividing and shooting pictures through the N shooting areas to obtain N sub-videos;
and combining the N sub-videos according to the current positions of the N sub-videos to obtain a first video file.
Specifically, after the user clicks the shooting button, split shooting of the picture is performed through the N shooting areas. Each shooting area shoots its sub-video through its own encoder, and each encoder records the picture of one area, so that N sub-videos are finally formed. Each sub-video carries a position tag, from which it can be known at which position of the shooting interface the sub-video was located during shooting. The N sub-videos are then sorted and combined according to their current positions to obtain the first video file.
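One plausible way to carry the position tags and combine the sub-videos into a single file is sketched below; the dataclass fields and the in-memory container are assumptions for illustration and do not describe the actual file format.

```python
from dataclasses import dataclass, field

@dataclass
class SubVideo:
    data: bytes       # encoded stream produced by this area's own encoder
    position: int     # position tag: which slot of the shooting interface it came from
    area: tuple       # (x, y, w, h) of the shooting area at recording time

@dataclass
class FirstVideoFile:
    sub_videos: list = field(default_factory=list)

    def combine(self, subs):
        # Sort by the current position tag, then keep the sub-videos together in one file.
        self.sub_videos = sorted(subs, key=lambda s: s.position)
        return self

subs = [SubVideo(data=b"...", position=i, area=(0, 0, 360, 640)) for i in (2, 0, 1)]
first_file = FirstVideoFile().combine(subs)
assert [s.position for s in first_file.sub_videos] == [0, 1, 2]
```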
As an optional specific embodiment, the step of combining the N sub-videos according to the current positions of the N sub-videos to obtain the first video file may specifically include:
receiving position adjustment inputs for the N sub-videos;
responding to the position adjustment input, and adjusting the positions of the N sub-videos to obtain disordered N sub-videos;
and combining the disordered N sub-videos according to the current positions of the N sub-videos to form a first video file.
Specifically, the user may perform a position adjustment input on the N sub-videos. After receiving the position adjustment input, the electronic device adjusts the positions of the N sub-videos in response to the input. Since at least some of the N sub-videos have their positions adjusted, the result is N disordered sub-videos. The disordered N sub-videos are then combined in order according to their current positions to obtain a first video file in which the N sub-videos are out of order.
In one example, as shown in fig. 3, nine sub-videos are originally arranged in order from left to right and top to bottom. The user can perform a position adjustment input on the nine sub-videos, that is, rearrange them out of order, which can make video editing more interesting.
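A position adjustment input can be modeled as a permutation of the position tags, as in the short sketch below; the dictionary representation and the particular permutation are assumptions chosen for illustration.

```python
def adjust_positions(positions, permutation):
    """positions maps each sub-video id to its current position index;
    permutation[i] is the new position for whatever currently sits at position i."""
    return {vid: permutation[pos] for vid, pos in positions.items()}

# Nine sub-videos originally ordered left to right, top to bottom, then shuffled.
original = {f"sub_video_{i + 1}": i for i in range(9)}
shuffled = adjust_positions(original, permutation=[4, 0, 2, 8, 1, 3, 7, 5, 6])
```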
As an alternative embodiment, the method further comprises:
the positions of the N sub-videos in the first video file are adjusted to obtain disordered N sub-videos;
and encrypting the first video file to obtain an encrypted second video file, and determining the disordered N sub-videos as playing contents of the second video file.
Specifically, after the first video file is obtained, the user may adjust the positions of the N sub-videos in the first video file to obtain N disordered sub-videos. The first video file is then encrypted to obtain an encrypted second video file that contains the N disordered sub-videos. Because the second video file is an encrypted video file, the related video information cannot be viewed before decryption. Therefore, the disordered N sub-videos can be used as the playing content of the second video file, and they are played when the user needs to learn about the second video file. This ensures encryption security while allowing the user to learn the related information of the encrypted video in time, thereby improving video processing efficiency.
The encryption may be, for example, password encryption, face encryption, or fingerprint encryption.
As another alternative specific embodiment, the method further comprises:
and determining a first sub-video in the N sub-videos as the playing content of a second video file, wherein the second video file is the encrypted first video file.
Specifically, when the first video file is encrypted to obtain an encrypted second video file, the related video information cannot be viewed before decryption because the second video file is an encrypted video file. Therefore, a first sub-video among the N sub-videos can be used as the playable content of the second video file, and the first sub-video is played when the user needs to learn about the second video file. This ensures encryption security while allowing the user to learn the related information of the encrypted video in time, thereby improving video processing efficiency.
It should be noted that the first sub-video may be a preset one of the N sub-videos, such as the first, the last, or the middle sub-video. Alternatively, after encryption, the user may select at least one of the N sub-videos as the first sub-video.
In an example, after the encrypted second video file is obtained, the display may automatically jump to a selection interface for the first sub-video, and the user may select at least one sub-video as the first sub-video. As shown in fig. 4, N is 9; if the user does not select a first sub-video, the middle sub-video (i.e., sub-video 5) may be taken as the default first sub-video to be played. Alternatively, the user may select the sub-video of the middle area block (i.e., sub-video 5) as the first sub-video.
The first sub-video may also include multiple sub-videos. For example, four sub-videos (such as sub-video 1, sub-video 2, sub-video 4, and sub-video 5) may be selected as the first sub-video at the same time, and the other sub-videos among the N sub-videos (i.e., sub-video 3, sub-video 6, sub-video 7, sub-video 8, and sub-video 9) are stored in the second video file in the form of thumbnails and are not played.
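The default-selection behavior described above (fall back to the middle sub-video when the user selects nothing) could look like the sketch below; the function and the zero-based indexing are assumptions for illustration.

```python
def choose_playable_content(n, user_selection=None):
    """Return the indices of the sub-videos playable before decryption.

    With no user selection, default to the middle sub-video; for a 3x3 layout
    (n = 9) that is sub-video 5, i.e. index 4 when counting from zero.
    """
    if user_selection:
        return sorted(user_selection)
    return [n // 2]

assert choose_playable_content(9) == [4]                         # default: sub-video 5
assert choose_playable_content(9, {0, 1, 3, 4}) == [0, 1, 3, 4]  # sub-videos 1, 2, 4 and 5
```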
As an alternative specific embodiment, the method may further include:
acquiring the encryption priority of the second video file under the condition that the editing input of the second video file is received;
and under the condition that the encryption priority of the second video file is lower than a preset priority, responding to the editing input, and editing the second sub video in the second video file.
Specifically, after the first video file is encrypted to obtain the second video file, if an editing input of the user on the second video file is received, the encryption priority of the second video file is obtained, and it is judged whether this encryption priority is lower than a preset priority. If the encryption priority of the second video file is lower than the preset priority, the encryption level of the second video file is low, and at least a part of the sub-videos (namely the second sub-video) in the second video file can be edited directly without decryption, which improves video processing efficiency, increases the pertinence of the video processing position, and saves resources.
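The priority gate itself reduces to a comparison, as in this sketch; the numeric priority scale is an assumption, since the disclosure does not fix how priorities are encoded.

```python
def can_edit_without_decryption(file_priority, preset_priority):
    """Editing directly on the encrypted second video file is allowed only
    when its encryption priority is lower than the preset priority."""
    return file_priority < preset_priority

assert can_edit_without_decryption(file_priority=1, preset_priority=3)
assert not can_edit_without_decryption(file_priority=3, preset_priority=3)
```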
Further, the editing process includes, but is not limited to, at least one of:
performing replacement processing on the target sub-video;
performing position adjustment processing on the target sub-video;
wherein the target sub-video is the second sub-video.
Specifically, if the encryption priority of the second video file is lower than the preset priority, editing processes such as position adjustment and picture replacement can be performed directly on the second sub-video in the second video file without decryption, which improves video processing efficiency and saves resources.
For example, if the encryption priority of the second video file is lower than the preset priority, the second sub-video can be moved for position adjustment without decryption; since the position of each sub-video is recorded by its position tag, the complete video can still be stitched together. This process not only makes video editing more interesting but also improves video processing efficiency. Moreover, if the user finds that the second sub-video contains meaningless or redundant information, the second sub-video can be replaced with a solid color or with another video.
As another alternative specific embodiment, the method may further include:
acquiring decryption information under the condition that the encryption priority of the second video file is higher than or equal to the preset priority;
under the condition that the decryption information is successfully verified, performing decryption processing on the second video file to obtain a decrypted first video file;
and responding to the editing input, and editing the third sub video in the first video file.
Specifically, if the encryption priority of the second video file is higher than or equal to the preset priority, the encryption level of the second video file is high, and the second video file needs to be decrypted first. Only after the decrypted first video file is obtained can at least a part of the N sub-videos (i.e., the third sub-video) be edited. This improves the security of the encrypted information, increases the pertinence of the video processing position, improves video processing efficiency, and saves resources.
Further, the editing process includes, but is not limited to, at least one of:
performing replacement processing on the target sub-video;
and carrying out position adjustment processing on the target sub-video.
Wherein the target sub-video is the third sub-video.
Specifically, if the encryption priority of the second video file is higher than or equal to the preset priority, the second video file needs to be decrypted to obtain the first video file, then editing processing such as position adjustment and picture replacement is performed on a third sub-video in the first video file, and on the premise of guaranteeing the safety of the encrypted video, the efficiency of video processing is improved.
It should be noted that, after the decrypted first video file is obtained, the N sub-videos are displayed, and the dividing lines between different sub-videos are displayed at the same time, so as to facilitate the user's interactive operation on the user interface (UI).
As an alternative specific embodiment, the method may further include:
receiving a first display input of an icon for the first video file;
responsive to the first display input, displaying icons of N sub-videos in the first video file;
receiving playing input of an icon of a fourth sub-video in the icons of the N sub-videos;
in response to the play input, playing a fourth sub-video of the N sub-videos;
and when the fourth sub-video is at least two sub-videos among the N sub-videos, the ith frame sub-images of the at least two sub-videos are played simultaneously in the process of playing the at least two sub-videos, where M is the number of frame sub-images of a sub-video and is an integer greater than 1, and i is an integer greater than 0 and less than or equal to M.
Specifically, after the decrypted first video file is obtained, if a first display input of the user on the icon of the first video file (i.e., the thumbnails of the N sub-videos in the first video file) is received, the icons of the N sub-videos in the first video file are displayed in response to the first display input; that is, the thumbnails of the N sub-videos in the first video file are displayed. If a play input of the user on the thumbnail of a fourth sub-video among the thumbnails of the N sub-videos is received, the fourth sub-video corresponding to that thumbnail is played in response to the play input. If the fourth sub-video is at least two of the N sub-videos, the sub-images of the same frame of the at least two sub-videos are played simultaneously during playback; that is, the at least two sub-videos simultaneously play their first frame sub-images, their second frame sub-images, their third frame sub-images, and so on.
It should be noted that if the decrypted first video file is N out-of-order sub-videos, the out-of-order fourth sub-video is played.
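Synchronized playback of several selected sub-videos amounts to stepping through their frame indices in lockstep, as sketched below; the frame lists and the `show` callback are placeholders standing in for real decoding and rendering.

```python
def play_in_sync(sub_video_frames, show):
    """sub_video_frames holds one frame sequence per selected sub-video;
    for each frame index i, the ith sub-images are presented together."""
    m = min(len(frames) for frames in sub_video_frames)   # M: number of frame sub-images
    for i in range(m):
        show([frames[i] for frames in sub_video_frames])  # ith sub-images played simultaneously

# Two selected sub-videos with three frames each (placeholder frame labels).
play_in_sync(
    [["a1", "a2", "a3"], ["b1", "b2", "b3"]],
    show=lambda frames: print(" | ".join(frames)),
)
```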
As an alternative specific embodiment, the method may further include:
receiving a second display input of an icon for the first video file;
responsive to the second display input, displaying icons of N sub-videos in the first video file;
receiving an icon sharing input of a fifth sub-video in the icons of the N sub-videos;
and responding to the sharing input, and sharing the fifth sub video.
Specifically, after the decrypted first video file is obtained, if a second display input of the user on the icon of the first video file (i.e., the thumbnails of the N sub-videos in the first video file) is received, the icons of the N sub-videos in the first video file are displayed in response to the second display input; that is, the thumbnails of the N sub-videos in the first video file are displayed. If an icon sharing input of the user on the thumbnail of a fifth sub-video among the thumbnails of the N sub-videos is received, the fifth sub-video corresponding to that icon is shared in a targeted manner in response to the sharing input, which improves the accuracy and efficiency of video sharing.
It should be noted that, because the first video file or the second video file is saved in the thumbnail or file-writing mode, the first sub-video is played by default. Therefore, when viewing and sharing the sub-videos, the user can display the sub-videos other than the first sub-video by clicking or long-pressing their thumbnails, so that the user can conveniently view or share those other sub-videos, which improves video processing efficiency. In addition, after receiving at least two sub-videos, the recipient can also sort and play the at least two sub-videos, which makes video processing more interesting and also increases the pertinence of the video processing position.
As an optional specific embodiment, after the step 102 of dividing the shooting interface into N shooting areas in response to the dividing input, the method may further include:
receiving a dividing line adjustment input of the shooting interface;
responding to the dividing line adjustment input, and adjusting the number of the dividing lines and the interval distance between two adjacent dividing lines;
and adjusting the number of the shooting areas and the area of each shooting area according to the adjusted number of the dividing lines and the interval distance between the two adjacent dividing lines.
Specifically, the user can adjust the dividing lines in the shooting interface as needed, that is, perform a dividing line adjustment input on the shooting interface. The electronic device receives the dividing line adjustment input and, according to it, adjusts the number of dividing lines and the separation distance between two adjacent dividing lines. Because the number of dividing lines or the separation distance between two adjacent dividing lines changes, the number of shooting areas and the area of each shooting area change correspondingly. In this way, the dividing lines can be adjusted according to the user's needs and the picture layout to obtain targeted sub-videos, which facilitates the user's processing of the sub-videos and improves processing efficiency.
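Recomputing the layout after a dividing line adjustment could look like the snippet below; the equal-spacing assumption and the helper name are introduced here only to make the effect of the adjustment concrete.

```python
def grid_after_adjustment(num_v_lines, num_h_lines, width, height):
    """After a dividing line adjustment, recompute the number of shooting
    areas and the area (in pixels) of each one, assuming equal spacing."""
    cols, rows = num_v_lines + 1, num_h_lines + 1
    return cols * rows, (width // cols) * (height // rows)

# Going from a 3x3 layout to a 2x2 layout on a 1080x1920 shooting interface.
print(grid_after_adjustment(2, 2, 1080, 1920))  # (9, 230400)
print(grid_after_adjustment(1, 1, 1080, 1920))  # (4, 518400)
```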
The above scheme is described below by way of a specific example:
as shown in fig. 5, step 501: and receiving segmentation input of a user on a shooting interface.
Step 502: and responding to the segmentation input, and segmenting the shooting interface into N shooting areas.
Step 503: and carrying out division shooting on pictures through the N shooting areas to obtain N sub-videos, and packaging the N sub-videos to obtain a first video file.
Step 504: and encrypting the first video file to obtain an encrypted second video file, and determining the first sub-video in the N sub-videos as the playing content of the second video file.
Step 505: and acquiring the encryption priority of the second video file under the condition that the editing input of the second video file is received.
Step 506: judging whether the encryption priority of the second video file is lower than a preset priority; if yes, go to step 508; if not, the process proceeds to step 507.
Step 507: and carrying out decryption processing on the second video file to obtain a decrypted first video file.
Step 508: responding to the editing input, and editing a third sub video in the first video file; or in response to the editing input, editing the second sub-video in the second video file.
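The branch in steps 505 to 508 can be summarized as the dispatcher sketched below; `decrypt` and `edit_sub_video` are placeholder callbacks standing in for whatever concrete decryption and editing operations the device uses.

```python
def handle_edit_input(second_file, preset_priority, decrypt, edit_sub_video):
    """Steps 505-508 in fig. 5, sketched with placeholder callbacks."""
    if second_file["priority"] < preset_priority:
        # Step 508 on the low-priority branch: edit the second sub-video without decryption.
        edit_sub_video(second_file, target="second sub-video")
    else:
        # Step 507: decrypt first, then step 508 on the recovered first video file.
        first_file = decrypt(second_file)
        edit_sub_video(first_file, target="third sub-video")

handle_edit_input(
    {"priority": 1, "payload": b"..."},
    preset_priority=2,
    decrypt=lambda f: f,                                    # placeholder
    edit_sub_video=lambda f, target: print("editing", target),
)
```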
In summary, in the above embodiments of the present application, since the shooting interface is divided into N shooting areas to perform split shooting of pictures, a video may be split into N sub-videos, the first video file is encrypted to obtain a second video file, and a first sub-video of the N sub-videos is used as a play content of the second video file, that is, related information of the N sub-video contents may be known through the first sub-video before decryption, so that a user may more conveniently process the video and improve video processing efficiency. And the user can edit at least one sub-video in the N sub-videos, thereby increasing the pertinence of the video processing position and saving the resources.
In the video processing method provided by the embodiments of the application, the execution subject may be a video processing apparatus. In the embodiments of the present application, the video processing apparatus executing the video processing method is taken as an example to describe the video processing apparatus provided in the embodiments of the present application.
As shown in fig. 6, the embodiment of the present application further provides a video processing apparatus 600, including:
a first input module 601, configured to receive a segmentation input to a shooting interface;
the first response module 602 is configured to split the shooting interface into N shooting areas in response to the splitting input, where N is an integer greater than 1;
and the shooting module 603 is configured to perform split shooting of a picture through the N shooting areas, and obtain a first video file formed by combining N sub-videos, where one shooting area corresponds to shooting one sub-video.
In the above embodiment of the present application, a segmentation input to a shooting interface is received; in response to the segmentation input, the shooting interface is segmented into N shooting areas; and split shooting of the picture is performed through the N shooting areas to obtain a first video file composed of N sub-videos, where one shooting area correspondingly shoots one sub-video. Because the shooting interface is divided into N shooting areas for split shooting, each frame image of the video is in effect divided into N sub-images during shooting, which form N sub-videos. A user can therefore process the frame images in one or more sub-videos in a targeted manner as needed, which improves video processing efficiency.
Optionally, the shooting module 603 is specifically configured to:
dividing and shooting pictures through the N shooting areas to obtain N sub-videos;
and combining the N sub-videos according to the current positions of the N sub-videos to obtain a first video file.
Optionally, when the capturing module 603 combines the N sub-videos according to the current positions of the N sub-videos to obtain a first video file, the capturing module is specifically configured to:
receiving position adjustment inputs for the N sub-videos;
responding to the position adjustment input, and adjusting the positions of the N sub-videos to obtain disordered N sub-videos;
and combining the disordered N sub-videos according to the current positions of the N sub-videos to form a first video file.
Optionally, the apparatus further includes:
the first adjusting module is used for adjusting the positions of the N sub-videos in the first video file to obtain disordered N sub-videos;
the first processing module is used for carrying out encryption processing on the first video file to obtain an encrypted second video file, and determining the disordered N sub-videos as playing contents of the second video file.
Optionally, the apparatus further includes:
and the second determining module is used for determining a first sub-video in the N sub-videos as the playing content of a second video file, wherein the second video file is the encrypted first video file.
Optionally, the apparatus further includes:
the first acquisition module is used for acquiring the encryption priority of the second video file under the condition that the editing input of the second video file is received;
and the second processing module is used for responding to the editing input to edit the second sub-video in the second video file under the condition that the encryption priority of the second video file is lower than the preset priority.
Optionally, the apparatus further includes:
the second acquisition module is used for acquiring decryption information under the condition that the encryption priority of the second video file is higher than or equal to the preset priority;
the decryption module is used for carrying out decryption processing on the second video file under the condition that the decryption information is successfully verified to obtain a decrypted first video file;
and the third processing module is used for responding to the editing input and performing editing processing on a third sub video in the first video file.
Optionally, the editing process includes at least one of:
performing replacement processing on the target sub-video;
performing position adjustment processing on the target sub-video;
the target sub-video is the second sub-video or the third sub-video.
Optionally, the apparatus further includes:
a second input module for receiving a first display input of an icon for the first video file;
the second response module is used for responding to the first display input and displaying icons of N sub videos in the first video file;
the third input module is used for receiving the playing input of the icon of the fourth sub-video in the icons of the N sub-videos;
the third response module is used for responding to the playing input and playing a fourth sub-video in the N sub-videos;
and when the fourth sub video is at least two sub videos in the N sub videos, in the process of playing the at least two sub videos, the ith sub image of the at least two sub videos is played simultaneously, M is an integer greater than 1, and i is an integer greater than 0 and less than or equal to M.
Optionally, the apparatus further includes:
a fourth input module for receiving a second display input of an icon for the first video file;
a fourth response module, configured to display icons of N sub-videos in the first video file in response to the second display input;
the fifth input module is used for receiving icon sharing input of a fifth sub-video in the icons of the N sub-videos;
and the fifth response module is used for responding to the sharing input and sharing the fifth sub video.
Optionally, the apparatus further includes:
the sixth input module is used for receiving a dividing line adjustment input of the shooting interface;
a sixth response module, configured to adjust the number of dividing lines and a separation distance between two adjacent dividing lines in response to the dividing line adjustment input;
the second adjusting module is used for adjusting the number of the shooting areas and the area of each shooting area according to the adjusted number of the dividing lines and the interval distance between the two adjacent dividing lines.
In summary, in the above embodiments of the present application, since the shooting interface is divided into N shooting areas to perform split shooting of pictures, a video may be split into N sub-videos, the first video file is encrypted to obtain a second video file, and a first sub-video of the N sub-videos is used as a play content of the second video file, that is, related information of the N sub-video contents may be known through the first sub-video before decryption, so that a user may more conveniently process the video and improve video processing efficiency. And the user can edit at least one sub-video in the N sub-videos, thereby increasing the pertinence of the video processing position and saving the resources.
The video processing device in the embodiments of the application may be an electronic device, or may be a component in an electronic device, for example, an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (Mobile Internet Device, MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), and may also be a server, network attached storage (Network Attached Storage, NAS), personal computer (PC), television (TV), teller machine, self-service machine, or the like; the embodiments of the present application are not specifically limited in this regard.
The video processing device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The video processing device provided in the embodiment of the present application can implement each process implemented by the embodiments of the methods of fig. 1 to 5, and in order to avoid repetition, a detailed description is omitted here.
Optionally, as shown in fig. 7, the embodiment of the present application further provides an electronic device 700, including a processor 701 and a memory 702, where the memory 702 stores a program or instructions executable on the processor 701. When executed by the processor 701, the program or instructions implement each step of the above video processing method embodiments and achieve the same technical effects; to avoid repetition, details are not described here again.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: radio frequency unit 1001, network module 1002, audio output unit 1003, input unit 1004, sensor 1005, display unit 1006, user input unit 1007, interface unit 1008, memory 1009, and processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may also include a power source (e.g., a battery) for powering the various components, and the power source may be logically connected to the processor 1010 through a power management system to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail here.
Wherein the processor 1010 is configured to receive a segmentation input to a shooting interface;
responding to the segmentation input, segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1;
and carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
In the above embodiment of the present application, a segmentation input to a shooting interface is received; in response to the segmentation input, the shooting interface is segmented into N shooting areas; and split shooting of the picture is performed through the N shooting areas to obtain a first video file composed of N sub-videos, where one shooting area correspondingly shoots one sub-video. Because the shooting interface is divided into N shooting areas for split shooting, each frame image of the video is in effect divided into N sub-images during shooting, which form N sub-videos. A user can therefore process the frame images in one or more sub-videos in a targeted manner as needed, which improves video processing efficiency.
Optionally, when the processor 1010 performs split shooting of the picture through the N shooting areas to obtain a first video file composed of N sub-videos, the processor is specifically configured to:
dividing and shooting pictures through the N shooting areas to obtain N sub-videos;
and combining the N sub-videos according to the current positions of the N sub-videos to obtain a first video file.
Optionally, when the processor 1010 combines the N sub-videos according to the current positions of the N sub-videos to obtain a first video file, the method is specifically configured to:
receiving position adjustment inputs for the N sub-videos;
responding to the position adjustment input, and adjusting the positions of the N sub-videos to obtain disordered N sub-videos;
and combining the disordered N sub-videos according to the current positions of the N sub-videos to form a first video file.
Optionally, the processor 1010 is further configured to:
the positions of the N sub-videos in the first video file are adjusted to obtain disordered N sub-videos;
and encrypting the first video file to obtain an encrypted second video file, and determining the disordered N sub-videos as playing contents of the second video file.
Optionally, the processor 1010 is further configured to:
and determining a first sub-video in the N sub-videos as the playing content of a second video file, wherein the second video file is the encrypted first video file.
Optionally, the processor 1010 is further configured to:
acquiring the encryption priority of the second video file under the condition that the editing input of the second video file is received;
and under the condition that the encryption priority of the second video file is lower than a preset priority, responding to the editing input, and editing the second sub video in the second video file.
Optionally, the processor 1010 is further configured to:
acquiring decryption information under the condition that the encryption priority of the second video file is higher than or equal to the preset priority;
under the condition that the decryption information is successfully verified, performing decryption processing on the second video file to obtain a decrypted first video file;
and responding to the editing input, and editing the third sub video in the first video file.
Optionally, the editing process includes at least one of:
performing replacement processing on the target sub-video;
performing position adjustment processing on the target sub-video;
the target sub-video is the second sub-video or the third sub-video.
Optionally, the processor 1010 is further configured to:
receiving a first display input of an icon for the first video file;
responsive to the first display input, displaying icons of N sub-videos in the first video file;
receiving playing input of an icon of a fourth sub-video in the icons of the N sub-videos;
in response to the play input, playing a fourth sub-video of the N sub-videos;
and when the fourth sub video is at least two sub videos in the N sub videos, in the process of playing the at least two sub videos, the ith sub image of the at least two sub videos is played simultaneously, M is an integer greater than 1, and i is an integer greater than 0 and less than or equal to M.
Optionally, the processor 1010 is further configured to:
receiving a second display input of an icon for the first video file;
responsive to the second display input, displaying icons of N sub-videos in the first video file;
receiving an icon sharing input of a fifth sub-video in the icons of the N sub-videos;
and responding to the sharing input, and sharing the fifth sub video.
Optionally, the processor 1010 is further configured to, after dividing the shooting interface into N shooting areas in response to the division input:
receiving a dividing line adjustment input of the shooting interface;
responding to the dividing line adjustment input, and adjusting the number of the dividing lines and the interval distance between two adjacent dividing lines;
and adjusting the number of the shooting areas and the area of each shooting area according to the adjusted number of the dividing lines and the interval distance between the two adjacent dividing lines.
In summary, in the above embodiments of the present application, since the shooting interface is divided into N shooting areas to perform split shooting of pictures, a video may be split into N sub-videos, the first video file is encrypted to obtain a second video file, and a first sub-video of the N sub-videos is used as a play content of the second video file, that is, related information of the N sub-video contents may be known through the first sub-video before decryption, so that a user may more conveniently process the video and improve video processing efficiency. And the user can edit at least one sub-video in the N sub-videos, thereby increasing the pertinence of the video processing position and saving the resources.
It should be understood that in the embodiment of the present application, the input unit 1004 may include a graphics processor (Graphics Processing Unit, GPU) 10041 and a microphone 10042, and the graphics processor 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 can include two portions, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored, and when the program or the instruction is executed by a processor, the program or the instruction realizes each process of the embodiment of the video processing method, and the same technical effect can be achieved, so that repetition is avoided, and no redundant description is provided herein.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled with the processor, and the processor is used for running a program or an instruction, so as to implement each process of the embodiment of the video processing method, and achieve the same technical effect, so that repetition is avoided, and no redundant description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
The embodiments of the present application provide a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement the respective processes of the embodiments of the video processing method, and achieve the same technical effects, and are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in an opposite order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are also within the protection of the present application.

Claims (13)

1. A video processing method, comprising:
receiving a segmentation input of a shooting interface;
responding to the segmentation input, segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1;
and carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
2. The method according to claim 1, wherein the performing the split shooting of the picture through the N shooting areas, to obtain the first video file composed of N sub-videos, includes:
dividing and shooting pictures through the N shooting areas to obtain N sub-videos;
and combining the N sub-videos according to the current positions of the N sub-videos to obtain a first video file.
3. The method according to claim 2, wherein the combining the N sub-videos according to the respective current positions to obtain the first video file includes:
receiving position adjustment inputs for the N sub-videos;
responding to the position adjustment input, and adjusting the positions of the N sub-videos to obtain disordered N sub-videos;
and combining the disordered N sub-videos according to the current positions of the N sub-videos to form a first video file.
4. The method according to claim 2, wherein the method further comprises:
the positions of the N sub-videos in the first video file are adjusted to obtain disordered N sub-videos;
and encrypting the first video file to obtain an encrypted second video file, and determining the disordered N sub-videos as playing contents of the second video file.
5. The method according to claim 1, wherein the method further comprises:
and determining a first sub-video in the N sub-videos as the playing content of a second video file, wherein the second video file is the encrypted first video file.
6. The method according to claim 4 or 5, characterized in that the method further comprises:
acquiring the encryption priority of the second video file under the condition that the editing input of the second video file is received;
and under the condition that the encryption priority of the second video file is lower than a preset priority, responding to the editing input, and editing the second sub video in the second video file.
7. The method of claim 6, wherein the method further comprises:
acquiring decryption information under the condition that the encryption priority of the second video file is higher than or equal to the preset priority;
under the condition that the decryption information is successfully verified, performing decryption processing on the second video file to obtain a decrypted first video file;
and responding to the editing input, and editing the third sub video in the first video file.
8. The method of claim 7, wherein the editing process comprises at least one of:
performing replacement processing on the target sub-video;
performing position adjustment processing on the target sub-video;
the target sub-video is the second sub-video or the third sub-video.
9. The method according to claim 1, wherein the method further comprises:
receiving a first display input of an icon for the first video file;
responsive to the first display input, displaying icons of N sub-videos in the first video file;
receiving playing input of an icon of a fourth sub-video in the icons of the N sub-videos;
in response to the play input, playing a fourth sub-video of the N sub-videos;
and when the fourth sub video is at least two sub videos in the N sub videos, in the process of playing the at least two sub videos, the ith sub image of the at least two sub videos is played simultaneously, M is an integer greater than 1, and i is an integer greater than 0 and less than or equal to M.
10. The method according to claim 1, wherein the method further comprises:
receiving a second display input of an icon for the first video file;
responsive to the second display input, displaying icons of N sub-videos in the first video file;
receiving an icon sharing input of a fifth sub-video in the icons of the N sub-videos;
and responding to the sharing input, and sharing the fifth sub video.
11. The method of claim 1, wherein after dividing the capture interface into N capture areas in response to the division input, the method further comprises:
receiving a dividing line adjustment input of the shooting interface;
responding to the dividing line adjustment input, and adjusting the number of the dividing lines and the interval distance between two adjacent dividing lines;
and adjusting the number of the shooting areas and the area of each shooting area according to the adjusted number of the dividing lines and the interval distance between the two adjacent dividing lines.
12. A video processing apparatus, comprising:
the first input module is used for receiving segmentation input of a shooting interface;
the first response module is used for responding to the segmentation input and segmenting the shooting interface into N shooting areas, wherein N is an integer greater than 1;
and the shooting module is used for carrying out division shooting on pictures through the N shooting areas to obtain a first video file formed by combining N sub-videos, wherein one shooting area is used for shooting one sub-video correspondingly.
13. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the video processing method of any of claims 1-11.
CN202311541942.6A, filed 2023-11-17 (priority date 2023-11-17): Video processing method and device and electronic equipment. Status: Pending. Published as CN117641114A.

Priority Applications (1)

CN202311541942.6A, priority date 2023-11-17, filing date 2023-11-17: Video processing method and device and electronic equipment

Publications (1)

CN117641114A, published 2024-03-01

Family

Family ID: 90034861

Family Applications (1)

CN202311541942.6A, priority date 2023-11-17, filing date 2023-11-17: CN117641114A, Pending

Country Status (1)

China (CN): CN117641114A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination