CN114237800A - File processing method, file processing device, electronic device and medium - Google Patents

File processing method, file processing device, electronic device and medium

Info

Publication number
CN114237800A
CN114237800A (Application CN202111574206.1A)
Authority
CN
China
Prior art keywords
target
file
image frame
focusing
image
Prior art date
Legal status
Pending
Application number
CN202111574206.1A
Other languages
Chinese (zh)
Inventor
柳玙卿
张睿
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202111574206.1A
Publication of CN114237800A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — Interaction techniques using icons
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 — Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 — Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a file processing method, a file processing apparatus, an electronic device, and a medium, belonging to the field of image processing. The method comprises: receiving a first input of a user; and, in response to the first input, updating the sharpness of an image of a first region of a target image frame in a target file, wherein the target file comprises at least two image frames and the target image frame whose sharpness is to be updated comprises at least one image frame of the target file.

Description

File processing method, file processing device, electronic device and medium
Technical Field
The present application relates to the field of image processing, and in particular, to a file processing method, a file processing apparatus, an electronic device, and a medium.
Background
With the popularization of intelligent terminals, they have become part of many aspects of daily life, and their shooting function is widely used: people can record scenes they wish to keep in the form of multimedia files such as videos, which they can conveniently watch later. However, the image display effect of a multimedia file such as a video, once shot, is relatively fixed and single.
Disclosure of Invention
An object of the embodiments of the present application is to provide a file processing method, a file processing apparatus, an electronic device, and a medium, so that after a video file is recorded, any target object in a local image area of at least one image frame of the video file can be refocused in a targeted manner. This satisfies the user's need for personalized focusing on a target object in the video file, and for personalized display of the picture content at a given moment in the video file. In addition, re-editing a local image area in at least one image frame of the recorded target file requires no professional post-production video-editing software, which makes image processing within the target file more convenient for the user and improves file editing efficiency.
In a first aspect, an embodiment of the present application provides a file processing method, where the method includes:
receiving a first input of a user;
updating a sharpness of an image of a first region of a target image frame in a target file in response to the first input;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
In a second aspect, an embodiment of the present application provides a file processing apparatus, including:
a first receiving module, configured to receive a first input of a user; and
a first processing module, configured to update, in response to the first input, a sharpness of an image of a first region of a target image frame in a target file;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the file processing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the file processing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the file processing method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement the file processing method according to the first aspect.
In the embodiments of the present application, a first input of a user is received, and in response to the first input, the sharpness of an image of a first region of a target image frame in a target file is updated, where the target file comprises at least two image frames and the target image frame whose sharpness is to be updated comprises at least one image frame of the target file. After the target file is generated, the user can therefore perform a first input on the user interface of the electronic device according to actual needs; upon receiving the first input, the electronic device updates the sharpness of the image in any region of at least one target image frame in the target file, achieving the effect of refocusing a target object contained in that region. Thus, after a video file is recorded, any target object in a local image area of at least one of its image frames can be refocused in a targeted manner, satisfying the user's need for personalized focusing on a target object in the video file and for personalized display of the picture content at a given moment in the video file. In addition, re-editing a local image area in at least one image frame of the recorded target file requires no professional post-production video-editing software, which makes image processing within the target file more convenient for the user and improves file editing efficiency.
Drawings
Fig. 1 is a first schematic flowchart of a file processing method according to an embodiment of the present application;
Fig. 2 is a second schematic flowchart of the file processing method according to an embodiment of the present application;
Fig. 3 is a schematic view of a first display interface of the file processing method according to an embodiment of the present application;
Fig. 4 is a schematic view of a second display interface of the file processing method according to an embodiment of the present application;
Fig. 5 is a third schematic flowchart of the file processing method according to an embodiment of the present application;
Fig. 6a is a schematic view of a third display interface of the file processing method according to an embodiment of the present application;
Fig. 6b is a schematic view of a fourth display interface of the file processing method according to an embodiment of the present application;
Fig. 6c is a schematic view of a fifth display interface of the file processing method according to an embodiment of the present application;
Fig. 7 is a fourth schematic flowchart of the file processing method according to an embodiment of the present application;
Fig. 8 is a schematic view of a sixth display interface of the file processing method according to an embodiment of the present application;
Fig. 9 is a fifth schematic flowchart of the file processing method according to an embodiment of the present application;
Fig. 10a is a schematic view of a seventh display interface of the file processing method according to an embodiment of the present application;
Fig. 10b is a schematic view of an eighth display interface of the file processing method according to an embodiment of the present application;
Fig. 10c is a schematic view of a ninth display interface of the file processing method according to an embodiment of the present application;
Fig. 10d is a schematic view of a tenth display interface of the file processing method according to an embodiment of the present application;
Fig. 11a is a schematic view of an eleventh display interface of the file processing method according to an embodiment of the present application;
Fig. 11b is a schematic view of a twelfth display interface of the file processing method according to an embodiment of the present application;
Fig. 11c is a schematic view of a thirteenth display interface of the file processing method according to an embodiment of the present application;
Fig. 11d is a schematic view of a fourteenth display interface of the file processing method according to an embodiment of the present application;
Fig. 12 is a schematic view of a fifteenth display interface of the file processing method according to an embodiment of the present application;
Fig. 13 is a schematic view of a sixteenth display interface of the file processing method according to an embodiment of the present application;
Fig. 14 is a schematic view of a seventeenth display interface of the file processing method according to an embodiment of the present application;
Fig. 15 is a schematic view of an eighteenth display interface of the file processing method according to an embodiment of the present application;
Fig. 16a is a first schematic block diagram of a file processing apparatus according to an embodiment of the present application;
Fig. 16b is a second schematic block diagram of the file processing apparatus according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 18 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, objects distinguished by "first", "second", and the like are usually of one kind, and their number is not limited; for example, a first object may be one object or multiple objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The following describes the document processing method provided by the embodiment of the present application in detail through a specific embodiment and an application scenario thereof with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the file processing method provided in an embodiment of the present application. The method may be executed by an electronic device. As shown in fig. 1, the file processing method mainly includes:
Step 102, receiving a first input of a user;
The first input may include any one of: a click input of the user on a target file displayed on a file display interface, a click input of the user on a designated control displayed on the user interface of the electronic device or on the file display interface, a voice instruction input by the user, and a specific gesture input by the user. The specific form may be determined according to actual use requirements, which is not limited in the embodiments of the present application.
It should be noted that the specific gesture in the embodiments of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure-recognition gesture, a long-press gesture, an area-change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiments of the present application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Step 104, in response to the first input, updating the sharpness of the image of a first region of a target image frame in a target file;
the target file comprises at least two image frames, and the target image frame comprises at least one image frame of the target file; the target file may be an original file captured by a certain camera in the electronic device, or a composite file generated based on the original files captured by at least two cameras, where the composite file may be a composite video file or a moving picture file including a plurality of image frames, and the definition of at least one image frame in the composite file meets the requirements of most users on definition, and then, in order to meet the personalized requirements of different users on the definition of an image in any area of some image frames, after the target file is generated, a function of focusing the image in the first area of the target image frame may be provided for the users.
The file processing method of the embodiments of the present application can be applied to scenarios where a local image area in at least one image frame of a recorded target file needs to be re-edited. After the target file is generated, the user can perform a first input on the user interface of the electronic device according to actual needs; upon receiving the first input, the electronic device updates the sharpness of the image of any region in at least one target image frame of the target file. Therefore, after the video file is recorded, any target object in a local image area of at least one of its image frames can be refocused in a targeted manner, satisfying the user's need for personalized focusing on a target object in the video file and for personalized display of the picture content at a given moment. In addition, re-editing a local image area in at least one image frame of the recorded target file requires no professional post-production video-editing software, which makes image processing within the target file more convenient for the user and improves file editing efficiency.
Specifically, the target image frame is an image frame determined based on the first input of the user, and contains the target area (i.e., the first area) whose sharpness is to be updated. The target image frame may be some of the image frames in the target file that contain the target area, or all image frames in the target file that contain the target area; that is, the target image frame comprises at least one image frame of the target file.
Specifically, after the first input is received, a target replacement frame is determined among a plurality of candidate image frames. The target replacement frame may be determined based on a selection input of the user, i.e., selected from the plurality of candidate image frames according to the user's selection input; alternatively, it may be determined automatically according to a preset determination rule, e.g., the image frame with the highest image sharpness among the plurality of image frames is automatically taken as the target replacement frame. The sharpness of the image of the first region in the target image frame is then updated based on the sharpness of the image of the second region in the target replacement frame. Specifically, the image of the first region in the target image frame may be replaced directly with the image of the second region in the target replacement frame; alternatively, the image of the second region in the target replacement frame may first be preprocessed, and the image of the first region then replaced with the preprocessed image of the second region. The sharpness of the image of the second region in the target replacement frame may be higher than that of the image of the first region in the target image frame, so that the effect of refocusing the target object contained in the first region of the target image frame is achieved.
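The automatic selection described above (choosing the candidate frame whose corresponding region is sharpest) can be sketched with a simple focus measure. The snippet below is an illustrative sketch only, not part of the claimed method: it scores a region by the variance of its discrete Laplacian, a common proxy for sharpness, and all function names are hypothetical.

```python
import numpy as np

def region_sharpness(gray_frame: np.ndarray, region: tuple) -> float:
    """Score the sharpness of a rectangular region (y, x, h, w) of a
    grayscale frame as the variance of its discrete Laplacian: blurred
    (out-of-focus) patches have weak high-frequency content and score low."""
    y, x, h, w = region
    patch = gray_frame[y:y + h, x:x + w].astype(np.float64)
    # 4-neighbour discrete Laplacian of the interior pixels.
    lap = (patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:]
           - 4.0 * patch[1:-1, 1:-1])
    return float(lap.var())

def pick_replacement_frame(frames, region) -> int:
    """Return the index of the frame whose `region` scores sharpest."""
    scores = [region_sharpness(f, region) for f in frames]
    return int(np.argmax(scores))
```

A real implementation would typically compute the metric on the luminance channel of each decoded frame and use an optimized library kernel rather than explicit slicing.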
In the above step 104, the updating the sharpness of the image in the first region of the target image frame in the target file includes:
updating the image of the first region in the target image frame based on the image of the second region in the target replacement frame;
the second area is an area corresponding to the first area in the target replacement frame, and the definition of the image of the second area is higher than that of the image of the first area; specifically, a first object contained in the first area and a second object contained in the second area may be the same object, and since the image sharpness of the same object may be different on different image frames, based on the image with higher sharpness in the second area where the target object is located in the target replacement frame, the image with lower sharpness in the first area where the target object is located in the target image frame is updated, that is, the image with the lower sharpness in the first area in the target image frame is refocused; correspondingly, the first object contained in the first area and the second object contained in the second area may also be different photographic objects, for example, two photographic objects belonging to the same type, when a user needs to replace an image of the photographic object a with a definition smaller than a first preset value in a target image frame with a photographic object B with a definition larger than a second preset value in a target replacement frame, where the second preset value is larger than the first preset value, and the photographic object a and the photographic object B belong to the same type, for example, the photographic object a is a white bolster and the photographic object B is a red bolster, so that in a case where the definition of the image of the area required by the photographic object a in all the image frames is not ideal, the definition of the image of the first area in the target image frame can be updated based on the definition of the image of the second area in the target replacement frame where the photographic object B is located, the effect of refocusing the image of the first area in the target image frame can be achieved.
The file processing method can be applied to scenarios where a local image area in at least one image frame of a recorded video file needs to be refocused: the image of the first area of the target image frame is updated with the image of the second area of the target replacement frame, achieving the effect of refocusing the image of the first area. In the process of updating the sharpness of the image of the first area, considering that the sizes of the images of the first and second areas may differ, the image is not replaced directly; instead, the image of the second area of the target replacement frame is first preprocessed, and the image of the first area is then replaced with the preprocessed image of the second area. This not only refocuses the target object contained in the first area of the target image frame, but also ensures that the sharpness-updated target image frame shows no replacement traces, thereby improving its naturalness.
Specifically, in the case where the target file is an original file captured by one camera of the electronic device, the target replacement frame is an image frame of that original file. Correspondingly, in the case where the target file is a composite file generated from original files captured by at least two cameras of the electronic device, not only the composite file but also the original files captured by the at least two cameras (e.g., a first file captured by a first camera and a second file captured by a second camera) need to be saved, and the target replacement frame is an image frame of one of these original files. The target replacement frame is determined, based on the image sharpness of the second area, from the plurality of image frames contained in the original files; specifically, it may be determined based on a selection input of the user, or automatically according to a preset determination rule.
In a specific implementation, in the case where the target replacement frame is determined based on a selection input of the user, a plurality of candidate image frames are first determined among the image frames of the first file and the second file, where the image of the second region of each candidate image frame and the image of the first region of the target image frame may contain the same object, and the sharpness of each second-region image is higher than that of the first-region image; the target replacement frame is then determined from the candidate image frames based on the user's selection input, and the image of the first region in the target image frame is updated based on the image of the second region in the target replacement frame. Correspondingly, in the case where the target replacement frame is determined automatically according to a preset rule, it is determined automatically among the image frames of the first file and the second file, for example by automatically selecting the frame whose second-region image has the highest sharpness, where the image of the second region of the target replacement frame and the image of the first region of the target image frame may contain the same object.
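For the automatic case across two saved original files, the search is simply over the union of both frame sequences. The following is a minimal sketch under that assumption (all names are hypothetical, and the plain intensity variance stands in for whatever sharpness measure an implementation actually uses):

```python
import numpy as np

def region_score(frame: np.ndarray, region: tuple) -> float:
    """Variance of pixel intensities in the region — a crude stand-in
    for a real sharpness measure such as Laplacian variance."""
    y, x, h, w = region
    return float(frame[y:y + h, x:x + w].astype(np.float64).var())

def pick_across_files(first_file_frames, second_file_frames, region):
    """Pick the (file name, frame index) whose second region scores
    highest, searching the frames of both saved original files."""
    best = None
    for name, frames in (("first", first_file_frames),
                         ("second", second_file_frames)):
        for i, frame in enumerate(frames):
            s = region_score(frame, region)
            if best is None or s > best[0]:
                best = (s, name, i)
    return best[1], best[2]
```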
Specifically, after the target replacement frame is determined, the image of the first area in the target image frame may be replaced directly with the image of the second area in the target replacement frame, achieving the effect of refocusing the target object contained in the first area. Further, considering a possible difference between the image size of the second region in the target replacement frame and that of the first region in the target image frame, and in order to ensure the naturalness of the sharpness-updated target image frame, the image of the second region may first be preprocessed, where the preprocessing may include adjusting at least one of the region size, the image sharpness, the image brightness, and the image saturation; the image of the first area is then replaced with the preprocessed image of the second area. This not only refocuses the target object contained in the first area, but also ensures that the sharpness-updated target image frame shows no replacement traces, thereby improving its naturalness.
In a specific implementation, suppose the target replacement frame, determined based on the user's selection input or according to the preset determination rule, is the 5th image frame of the target file. The image of the first area in the target image frame may then be replaced directly with the image of the second area in the 5th image frame. Further, the image of the second region in the 5th image frame may first be preprocessed, where the preprocessing may include adjusting at least one of the region size, the image sharpness, the image brightness, and the image saturation; the image of the first area in the target image frame is then replaced with the preprocessed image of the second area in the 5th image frame, yielding a target image frame whose local image area has updated sharpness.
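As a rough illustration of the replace-with-preprocessing step above (hypothetical helper names; the patent does not prescribe any particular resizing or adjustment algorithm), the second-region patch can be resized to the first region's dimensions and optionally brightness-adjusted before being pasted:

```python
import numpy as np

def resize_nearest(patch: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize so the replacement patch matches the
    first region's size before it is pasted in."""
    in_h, in_w = patch.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return patch[rows][:, cols]

def replace_region(target_frame, first_region, source_frame, second_region,
                   brightness_gain: float = 1.0):
    """Paste the preprocessed second-region patch of the replacement
    frame over the first region of the target frame; the original
    target frame is left untouched."""
    ty, tx, th, tw = first_region
    sy, sx, sh, sw = second_region
    patch = source_frame[sy:sy + sh, sx:sx + sw].astype(np.float64)
    patch = resize_nearest(patch, th, tw) * brightness_gain
    out = target_frame.copy()
    out[ty:ty + th, tx:tx + tw] = np.clip(patch, 0, 255).astype(target_frame.dtype)
    return out
```

A production implementation would use a higher-quality interpolation and blend the patch edges to hide the seam, in keeping with the "no replacement traces" goal described above.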
Specifically, in the case where the target file is a composite file generated from original files captured by two cameras of the electronic device, the viewfinding difference between the first camera and the second camera may be smaller than a preset threshold, so the image frames captured synchronously by the two cameras (i.e., the first file and the second file) contain image regions with the same framing but different sharpness; that is, the image frames of the first file and the second file contain at least one identical object, whose sharpness differs between frames. A target replacement frame can therefore be selected from the image frames of the first file and the second file, and the sharpness of the image of the first region in the target image frame of the target file can be updated based on the sharpness of the image of the second region in the target replacement frame, where the second region's framing differs from that of the first region by less than the preset threshold and contains the same photographic object (i.e., the second region is the region in the target replacement frame corresponding to the first region).
In order to further satisfy the user's need for personalized focusing on a target object in a video file, a plurality of candidate image frames are provided to the user, improving the flexibility of the user's selection. That is, in the case where the target replacement frame is determined based on a selection input of the user, as shown in fig. 2, step 102 of receiving a first input of the user specifically includes:
step 1022, receiving a first sub-input of a user to a first area in a first image frame under the condition that the first image frame is displayed on a file display interface of the target file;
the first image frame is any image frame in a target file, for example, the target file is a video file, and a first sub-input of a user to a first area in the currently played first image frame is received under the condition that the video file is played on a file display interface; for another example, the target file is a moving picture file including a plurality of image frames, and in a case where the moving picture file is displayed on the file display interface frame by frame, a first sub-input of the user to a first area in a first image frame currently displayed is received.
Wherein the first sub-input may include: any one of a click input of a user to a first region in a first image frame displayed on a file display interface, a click input of a user to a designated control displayed on the file display interface, a voice instruction input by the user, and a specific gesture input by the user may be specifically determined according to an actual use requirement, which is not limited in the embodiment of the present application.
Step 1024, responding to the first sub-input, and displaying at least one preview identifier; wherein each preview identifier indicates a candidate image frame, the definition of the image of the third area in the candidate image frame is higher than that of the image of the first area in the first image frame, and the third area is the area corresponding to the first area in the candidate image frame. An identifier in the present application refers to a word, symbol, image, or the like used for indicating information; a control or other container may serve as a carrier for displaying the information, including but not limited to a word identifier, a symbol identifier, and an image identifier.
Correspondingly, the step 104 of updating the sharpness of the image in the first area of the target image frame in the target file includes:
step 1042, updating the sharpness of the image of the first region of the target image frame in the target file based on the candidate image frame indicated by the at least one preview identifier.
The file processing method of the embodiment of the application can be applied to the scene of determining the target replacement frame from a plurality of candidate image frames in the process of refocusing a local image area in a recorded video file. After receiving a first sub-input of the user to a first area in a first image frame, at least one preview identifier is displayed, so that the user can, based on the at least one preview identifier and in combination with actual needs, purposefully select one of the corresponding candidate image frames as the target replacement frame. The definition of the image in the first area of the target image frame is then updated based on the definition of the image in the third area of the candidate image frame selected by the user. This makes updating the definition of the image of the first area in the target image frame flexible, and further meets the user's requirement for personalized setting of image definition.
Specifically, after the first sub-input is received, the first image frame is determined as a target image frame, and the first region in the first image frame is the target region whose image definition is to be updated. The target image frame may include only the first image frame, or may include, in addition to the first image frame, other image frames in the target file, for example other image frames that also contain the photographic subject in the image of the first region. Then, a plurality of candidate image frames corresponding to the first area in the first image frame are determined, preview identifiers corresponding to the candidate image frames are displayed on the file display interface, the candidate image frame selected by the user from the at least one preview identifier is determined as the target replacement frame, and the definition of the image in the first area of the target image frame is updated based on the definition of the image in the third area of the selected candidate image frame. The definition of the image of the first area in the target image frame after updating is higher than its definition before updating; that is, compared with the image of the first area in the target image frame before updating, the image of the first area after updating is clearer. The image of the first area of the target image frame in the recorded video file is thereby refocused, so that the image of the first area in the target image frame appears clearer to the user than the image area outside the first area.
Specifically, taking the case where the target file is a composite file generated based on original files captured by two cameras in the electronic device as an example, the candidate image frames may include all image frames in the first file and the second file whose definition meets a preset requirement, that is, the definition of the image in the third area of a candidate image frame is higher than the definition of the image in the first area of the first image frame. The image of the third area in the candidate image frame and the first area of the target image frame may contain the same shooting object (in which case the third area is determined to be the area corresponding to the first area in the candidate image frame); alternatively, they may contain different photographic subjects, for example two photographic subjects of the same type (in which case the third area may also be determined as the area corresponding to the first area in the candidate image frame). The specific manner of correspondence between the third area in the candidate image frame and the first area in the first image frame may be set according to actual requirements, and all such manners fall within the protection scope of the present application.
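The "definition meets a preset requirement" check can be sketched as follows. The sharpness metric here (variance of horizontal first differences over a region patch) is a crude stand-in chosen for illustration; the application does not specify a particular definition measure:

```python
def sharpness(patch):
    """Crude definition score for a 2D patch of pixel values: variance of
    horizontal first differences. Flat (blurry) patches score near 0."""
    diffs = [row[i + 1] - row[i] for row in patch for i in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def select_candidates(third_region_patches, first_region_patch):
    """Keep the indices of candidate frames whose third-area patch scores
    sharper than the first area of the first image frame."""
    threshold = sharpness(first_region_patch)
    return [i for i, p in enumerate(third_region_patches)
            if sharpness(p) > threshold]
```

In practice a metric such as variance of the Laplacian over the patch would be a more common choice; the comparison logic is the same.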
In a particular implementation, a plurality of file identifiers (e.g., file 1 to file n) are displayed on a video viewing interface of the electronic device, each file identifier indicating one candidate file. A fifth sub-input of the user to a certain candidate file is received, and in response to the fifth sub-input, the candidate file to which the fifth sub-input is directed is determined as the target file. While the first image frame is displayed on the file display interface of the target file, a first sub-input of the user to a first area in the first image frame is received, and in response to the first sub-input, at least one preview identifier (for example, preview identifier 1 to preview identifier i) is displayed. Each preview identifier indicates one candidate image frame, the definition of the image of the third area in the candidate image frame is higher than that of the image of the first area in the first image frame, and the third area is the area corresponding to the first area in the candidate image frame; for example, the image of the third area and the image of the first area contain the same shooting object.
For example, as shown in fig. 3, a fifth sub-input of a user to a file 3 on the video viewing interface is received, and in response to the fifth sub-input, the file 3 to which the fifth sub-input is directed is determined as a target file; in the case of playing a file 3 on a file display interface, when playing to the 5 th second, receiving a first sub-input of a user to an area (i.e. a first area) where an object a is located in an image frame (i.e. a first image frame) corresponding to the 5 th second, and in response to the first sub-input, displaying i preview marks below a playing progress bar, wherein each preview mark indicates a candidate image frame containing the object a, and the definition of the object a contained in each candidate image frame is higher than that of the object a in the image frame corresponding to the 5 th second in the file 3.
Further, in order to simplify the user's operation steps, the video viewing interface and the file display interface may be user interfaces of a target application, and the target application may be an application program for managing target files (for example, a file manager or a gallery application on the electronic device). That is, after a fifth sub-input of the user to a candidate file is received, if it is determined that the user has an image definition updating requirement, the candidate file to which the fifth sub-input is directed is determined as the target file and is opened; after a first sub-input of the user to the first region of the first image frame in the target file is received, the preview identifiers are displayed directly on the file display interface. In this way, the focusing process on any object in a certain image frame can be completed directly within the application program managing the target file, without moving the target file to another specific image processing application. This improves the efficiency of focusing processing on any object in a certain image frame and improves the user experience; moreover, no additional application programs need to be installed, which reduces the occupation of the memory resources of the electronic device caused by installing too many rarely used application programs.
After receiving the first sub-input of the user to the first region in the first image frame, the method may further include: displaying the definition values of the candidate image frames indicated by the at least one preview identifier. In a specific implementation, while the at least one preview identifier is displayed, either only the image of the third area in each candidate image frame may be shown, or the definition value of the image of the third area may be shown alongside it; in addition, a recommendation identifier may be displayed below the candidate image frame with the highest definition value, so that the user can more intuitively select, from the candidate image frames and based on the definition values, a candidate image frame whose definition meets the user's requirements as the target replacement frame. As shown in fig. 4, on the basis of fig. 3, the definition value of the candidate image frame indicated by each preview identifier is displayed below that preview identifier (e.g. "xxx" represents a definition value), and the recommendation identifier is displayed below the candidate image frame with the highest definition, indicated by preview identifier 2.
In addition, while the at least one preview identifier is displayed, the definition values of all candidate image frames containing the target area may be displayed; alternatively, only the preview identifiers corresponding to candidate image frames whose definition is higher than that of the first image frame may be displayed, that is, image frames whose definition is lower than that of the first image frame are filtered out. This reduces the difficulty of the user's selection and allows the user to quickly select, as the target replacement frame, a candidate image frame whose image definition meets the user's requirements.
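Ordering the surviving candidates by definition value and flagging the best one for the recommendation identifier could look like this (the dictionary keys are illustrative, not names from the application):

```python
def build_previews(candidates):
    """candidates: list of (frame_id, definition_value) pairs.
    Returns preview entries ordered by descending definition, with the
    highest-definition candidate flagged as the recommended one."""
    ordered = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [{"frame_id": fid, "definition": val, "recommended": i == 0}
            for i, (fid, val) in enumerate(ordered)]
```

Filtering out frames below the first image frame's definition, as described above, would simply be applied to `candidates` before calling this function.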
Further, for the case where two cameras shoot synchronously to obtain the target file, the target file may be a composite file obtained by shooting synchronously with the two cameras within a preset time, so that the target file includes at least two composite image frames that change continuously over time. Each camera can shoot at least one original image frame every second, and the original image frames shot by each camera within the preset time are stored separately: the at least two original image frames shot by the first camera are stored as a first file, the at least two original image frames shot by the second camera are stored as a second file, and the target file (i.e. the composite file) containing at least two composite image frames is generated based on the original image frames in the first file and the original image frames in the second file. Specifically, for an application scene of dual video recording, a first video file is obtained by recording with the first camera and a second video file is obtained by recording with the second camera, where the first video file comprises a plurality of first original image frames continuously shot within a preset recording duration, and the second video file comprises a plurality of second original image frames continuously shot within the same duration. A first original image frame in the first video file and a second original image frame in the second video file are synthesized to obtain a composite image frame, where the two synthesized original image frames are two original image frames with the same shooting time; a composite video file is then generated based on the composite image frames obtained for the plurality of shooting moments.
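The pairing-by-shooting-time step can be sketched as follows. The per-pixel `max` used as the merge rule is purely a placeholder; the application only requires that the two original frames with the same shooting time be combined into one composite frame:

```python
def merge(frame_a, frame_b):
    # Placeholder merge rule: per-pixel maximum of the two original frames.
    # A real composition would blend per-region, e.g. preferring the sharper view.
    return [[max(a, b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def composite_files(first_file, second_file):
    """first_file / second_file: dicts mapping shooting time -> original
    frame (a 2D pixel grid). Each pair of frames captured at the same moment
    is merged into one composite frame, ordered by shooting time."""
    shared_times = sorted(set(first_file) & set(second_file))
    return [merge(first_file[t], second_file[t]) for t in shared_times]
```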
In a specific implementation, preferably, the target file is a composite file, the composite file is obtained by combining a first file and a second file, the first file is shot by a first camera, and the second file is shot by a second camera; correspondingly, the target image frame is an image frame in a composite file, and the candidate image frame indicated by the preview identifier is an image frame in a first file or a second file.
The file processing method of the embodiment of the application can be applied to a scene in which the target file is obtained by shooting with two cameras. Not only is the composite file obtained from the first file and the second file stored, but the first file shot by the first camera and the second file shot by the second camera also need to be stored, so that the composite file can be displayed on the file display interface for the user with every image frame meeting the preset definition requirement, while the original image frames in the first file and the second file serve as candidate image frames whose corresponding preview identifiers are displayed. Since the candidate image frames come from the first file and the second file captured by the two cameras, the displayed preview identifiers can be guaranteed to cover candidate image frames with more definition levels, improving the user's range of choice for the candidate image frame serving as the target replacement frame.
Specifically, the two cameras used in dual video recording have different focal lengths, so they capture images with different angles of view and may therefore produce a certain focusing difference; in addition, the distance between each camera and the object being captured may change over time, which also produces a focusing difference. These focusing differences in the dual recording process cause a certain difference in the definition of the image frames captured by the first camera and the second camera. Therefore, when the difference in framing between the two cameras is smaller than a preset threshold, image areas with the same framing but different definition exist in the image frames captured synchronously by the two cameras within the preset time (i.e. the image frames in the first file and the second file); that is, the image frames of the first file and the second file may include at least one identical target object whose definition differs between image frames. On this basis, the definition difference of the target object across the image frames of the first file and the second file can be used to update the definition of the image in the first area where the target object is located in the target image frame of the target file. That is, based on the higher-definition image of the second area where the target object is located in a certain original image frame of the first file or the second file (i.e. the target replacement frame), the lower-definition image of the first area where the target object is located in at least one composite image frame of the composite file (i.e. the target image frame) is updated, so that the definition of the image in the first area of the updated target image frame is higher than before updating. This achieves the effect of personalized focusing on the target object in certain composite image frames of the captured target file, and meets the user's requirement for personalized display of the picture content at a certain moment in the video.
As shown in fig. 5, for a specific process of displaying a preview identifier for a user so that the user selects a target replacement frame based on a candidate image frame corresponding to the preview identifier, the step 1024, in response to the first sub-input, displays at least one preview identifier, specifically including:
step 10242, responding to the first sub-input, keeping displaying the first image frame in the first area of the file display interface, and displaying at least one preview mark in the second area of the file display interface;
specifically, after receiving a first sub-input of a user to a first area in a first image frame, a target file being played is paused, so that the first image frame is kept displayed in the first area of the file display interface, that is, a playing picture stays at the first image frame, and at least one preview identifier is displayed in a second area of the file display interface, for example, at least one preview identifier is displayed in sequence below the playing picture of the target file, so that the user selects a target replacement frame in a candidate image frame corresponding to the at least one preview identifier.
Correspondingly, after the step 1024, in response to the first sub-input, displaying at least one preview identifier, the method further includes:
step 1026, receiving a second sub-input of the user to the target preview identifier in the at least one preview identifier;
wherein the second sub-input may include: any one of a click input of a user to any preview identifier displayed on the file display interface, a click input of a user to a designated control displayed on the file display interface, a voice instruction input by the user, and a specific gesture input by the user may be specifically determined according to actual use requirements, which is not limited in the embodiments of the present application.
Correspondingly, in the step 1042, the updating the definition of the image in the first area of the target image frame in the target file based on the candidate image frame indicated by the at least one preview identifier specifically includes:
a step 10422 of updating, in response to the second sub-input, the sharpness of the image of the first region of the target image frame based on the target candidate image frame indicated by the target preview identifier;
specifically, the target candidate image frame is an image frame used for updating the definition of an image in a first region of the target image frame, that is, the target candidate image frame selected by the user based on at least one preview identifier is determined as the target replacement frame, and the definition of the image in the first region of the target image frame is updated by using the definition of the image in a third region of the target candidate image frame.
In particular, the update process for the image definition of the first area of the target image frame takes into account that the image size of the third area in the candidate image frame may differ from the image size of the first area in the target image frame; that is, the third area in the high-definition candidate image frame previewed in the lower part of the file display interface may not match the size of the first area in the first image frame shown in the upper playing picture. To ensure that the target image frame looks natural after its definition is updated, the image of the third area in the target candidate image frame may be preprocessed to obtain a preprocessed third-area image, and the image of the first area in the target image frame may then be replaced with the preprocessed image, thereby achieving the effect of updating the definition of the image of the first area in the target image frame.
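The preprocessing step can be sketched as a size adaptation: before pasting, the third-area image is resized to the first area's dimensions. Nearest-neighbor scaling is used here for simplicity and is an assumption; the application does not prescribe a particular resampling method:

```python
def resize_nearest(patch, new_h, new_w):
    """Nearest-neighbor resize of a 2D patch to new_h x new_w, so that the
    third-area image can replace a first area of a different size."""
    old_h, old_w = len(patch), len(patch[0])
    return [[patch[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]
```

A production implementation would use bilinear or bicubic interpolation (and possibly edge blending) so the pasted region looks natural; nearest-neighbor keeps the sketch dependency-free.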
The file processing method of the embodiment of the application can be applied to a scene of determining a target replacement frame from a plurality of candidate image frames based on the user's input operations in the process of refocusing a local image area in a recorded video file. During playback of the target file, after a first sub-input of the user to a first area in the first image frame is received, the first image frame remains displayed and at least one preview identifier is shown to the user; then, after a second sub-input of the user to a target preview identifier is received, the target candidate image frame corresponding to that target preview identifier is determined as the target replacement frame, and the definition of the image in the first area of the first image frame is updated based on the definition of the image in the corresponding area of the target replacement frame, so that the user can check the update effect in real time. By pausing playback of the target file, displaying at least one preview identifier for the user to select from, and showing the user the target image frame with updated definition, the effect of refocusing the target object is achieved. This improves the user's flexibility of selection, ensures that the user can adjust the finally selected target candidate image frame in light of the actual focusing effect, and makes it convenient for the user to obtain the focusing effect on the target object that the user really desires.
In a specific embodiment, assume that the first camera and the second camera each capture one image frame per second and the capture duration is 10 seconds, so that the first file and the second file each include 10 original image frames. The target file is a composite file obtained by preliminarily optimizing and synthesizing the original image frames in the first file and the second file: it may include 10 composite image frames generated from the 20 original image frames, that is, for each second, one composite image frame is generated based on the original image frame captured that second in the first file and the one captured that second in the second file, and each composite image frame in the composite file meets a preset definition requirement.
In a specific implementation, as shown in fig. 6a, during playback of the composite file on the file display interface, assume that focusing needs to be performed on the object a in the first image frame displayed at the 5th second. When the composite file is played to the 5th second, a first sub-input of the user to the target area where the object a is located in that first image frame is received; in response, the first image frame corresponding to the 5th second remains displayed on the file display interface of the target file (i.e. playback of the target file is paused at the 5th second), and at least one preview identifier (e.g. i preview identifiers) is displayed in a second area of the file display interface (e.g. below the play progress bar of the target file). Each preview identifier indicates one candidate image frame, which is any original image frame in the first file or the second file; after the preview identifiers are displayed, the definition value of the image of the third area in each candidate image frame may be displayed below the corresponding preview identifier. Further, as shown in fig. 6b, a second sub-input of the user to a target preview identifier among the at least one preview identifier is received, and the target candidate image frame (i.e. the target replacement frame) corresponding to the target preview identifier is determined from the plurality of candidate image frames; for example, a second sub-input of the user to preview identifier 2 (the target preview identifier) is received, and the candidate image frame indicated by preview identifier 2 is determined as the target candidate image frame. Then, as shown in fig. 6c, the definition of the image of the region where the object a is located in the target image frame of the target file is updated based on the definition of the image of the region where the object a is located in the target candidate image frame; for example, the definition of the image of the region where the object a is located in the image frame corresponding to the 5th second is updated using the definition of the image of that region in the candidate image frame indicated by preview identifier 2.
Further, considering that the target file may include a plurality of objects on which focusing processing can be performed, in order to make the selection of a candidate focusing object in the target image frame of the target file more intelligent, object identifiers of all photographic objects included in the target file may be displayed for the user, so that the user can, based on the displayed object identifiers, select the focusing objects requiring focusing processing during display of the target file. As shown in fig. 7, before receiving the first input of the user in the above step 102, the method further includes:
step 110, displaying at least one object identifier on a file display interface of a target file, wherein each object identifier is used for indicating one object in any image frame of the target file;
specifically, object recognition is performed on at least two image frames in the target file, duplicate removal processing is performed on a plurality of recognized objects, all objects contained in the target file are determined, and object identifiers corresponding to the objects are displayed on a file display interface according to a preset display mode.
Correspondingly, the step 102 of receiving a first input of a user specifically includes:
step 1028, receiving a third sub-input of the user to the at least one object identifier, wherein the third sub-input is an input of selecting at least one candidate object identifier from the at least one object identifier, and the at least one candidate object identifier indicates at least one candidate focusing object;
wherein, the third sub-input may include: any one of a click input of an object identifier displayed on the file display interface by a user, a click input of a designated control displayed on the file display interface by the user, a voice instruction input by the user and a specific gesture input by the user can be specifically determined according to actual use requirements, and the method is not limited in the embodiment of the application;
specifically, the candidate object identifier is an object identifier selected by the user from the plurality of object identifiers, an object corresponding to the candidate object identifier is used as a candidate focusing object, and correspondingly, an image frame including the candidate focusing object in the target file is determined as a target image frame.
Correspondingly, the step 104 of updating the image definition of the first region of the target image frame in the target file specifically includes:
step 1044 of updating the definition of the image of the first area where the target focusing object is located in the target image frame in response to the third sub-input; wherein the target focusing object comprises part or all of at least one candidate focusing object.
Specifically, the target image frame may include only some of the candidate focusing objects, or the user may need to perform focusing processing on only some of the candidate focusing objects in one target image frame and on other candidate focusing objects in another target image frame; that is, for a given target image frame, not all candidate focusing objects are focused. Therefore, a target focusing object must first be determined from the plurality of candidate focusing objects, and the definition of the image of the first area where the corresponding target focusing object is located in the target image frame is then updated based on the definition of the image of the second area in the target replacement frame.
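Determining the target focusing objects for one target image frame from the candidate set can be sketched as below. The string-encoded focusing types ("all", "single:&lt;name&gt;") are invented for illustration; the application leaves the concrete focusing types open:

```python
def target_focus_objects(frame_objects, candidate_objects, focus_type):
    """Pick, for one target image frame, which candidate focusing objects
    are actually refocused: they must be present in the frame and must
    match the selected focusing type."""
    present = [o for o in candidate_objects if o in frame_objects]
    if focus_type == "all":
        return present          # every candidate appearing in this frame
    if focus_type.startswith("single:"):
        name = focus_type.split(":", 1)[1]
        return [name] if name in present else []
    return []                   # unknown focusing type: focus nothing
```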
The file processing method provided by the embodiment of the application can be applied to a scene of setting at least one target object to be refocused in a target image frame in the process of refocusing a local image area in a recorded video file. All objects on which focusing can be performed are first identified from at least two image frames of the target file, and an object identifier corresponding to each object is displayed on the file display interface of the target file, so that the user can, based on the displayed object identifiers and in combination with actual requirements, select the focusing objects requiring focusing processing during display of the target file. This makes the process of selecting the target focusing object in the target image frame of the target file more intelligent and improves the efficiency of selecting the focusing objects the user wishes to focus on.
In a specific implementation, as shown in fig. 8, at least two image frames of the target file include an object a, an object B, an object C, an object D, and an object E, object identifiers corresponding to the object a, the object B, the object C, the object D, and the object E are displayed on the file display interface, each object identifier indicates an object in the target file, a third sub-input to the at least one object identifier by the user is received (for example, the user selects the object a, the object B, and the object C), the object a, the object B, and the object C are determined as candidate focusing objects, and then, for a certain target image frame, a target focusing object may be determined among the object a, the object B, and the object C.
Further, considering that a certain image frame may contain a plurality of candidate focusing objects, the user may set, in combination with actual requirements, focusing processing to be performed on only some of the candidate focusing objects. Therefore, different focusing types may also be preset for the user, so that the user can select the focusing type corresponding to the actually desired focusing display effect, achieving more flexible focusing on part of the candidate focusing objects in the target image frame and making the focusing effect more interesting during display of the target file. As shown in fig. 9, before the step 1044 of updating, in response to the third sub-input, the definition of the image in the first area of the target image frame in the target file, the method further includes:
step 1046, receiving a fourth sub-input of the user;
wherein, the fourth sub-input may include: any one of click input of a user to a control corresponding to a focusing type identifier displayed on a file display interface, input of drawing a specific track on the file display interface by the user, a voice instruction input by the user and a specific gesture input by the user can be specifically determined according to actual use requirements, and the embodiment of the application is not limited thereto;
step 1048, in response to the fourth sub-input, determining a target focusing type according to the fourth sub-input;
specifically, the fourth sub-input may be a selected input of a control corresponding to the displayed focusing type identifier by the user, and correspondingly, after the fourth sub-input is received, the target focusing type is determined based on the focusing type identifier selected by the user and indicated by the fourth sub-input; the fourth sub-input may also be an input indicating the target focusing type selected by the user through drawing the first track, and correspondingly, after receiving the fourth sub-input, the target focusing type is determined based on the drawing track indicated by the fourth sub-input.
Correspondingly, as shown in fig. 9, in the step 1044, updating the sharpness of the image of the first area where the target focusing object is located in the target image frame in response to the third sub-input, specifically includes:
step 10442, in response to the third sub-input, determining a target focusing object of the at least one candidate focusing object based on the target focusing type;
specifically, for each target image frame, based on the target focusing type, a target focusing object that needs to be focused in the target image frame is selected from at least one candidate focusing object, that is, only the definition of the image in the first area where the target focusing object corresponding to the target focusing type is located in the currently displayed target image frame is updated, and no processing is performed on other candidate focusing objects except the target focusing object in the candidate focusing objects.
In step 10444, the sharpness of the image in the first area where the target focusing object is located in the target image frame is updated.
Specifically, after the corresponding target focusing object is determined for the target image frame, the definition of the image of the first area in the target image frame where the corresponding target focusing object is located is updated based on the definition of the image of the second area in the target replacement frame.
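The region-replacement step above can be sketched as follows. This is a minimal illustration only, assuming the target image frame and the target replacement frame are aligned NumPy arrays and that the first and second areas share the same (hypothetical) bounding-box coordinates; the patent does not specify an implementation.

```python
import numpy as np

def update_region_sharpness(target_frame, replacement_frame, region):
    """Replace the blurred first-area pixels of the target image frame with
    the sharper second-area pixels from the target replacement frame.

    region is a hypothetical (top, left, height, width) bounding box that is
    assumed to locate the same object in both aligned frames.
    """
    top, left, h, w = region
    result = target_frame.copy()  # keep the original frame untouched
    result[top:top + h, left:left + w] = replacement_frame[top:top + h, left:left + w]
    return result
```

In practice the two frames would first need to be registered so the bounding box refers to the same image content in both.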
The file processing method of the embodiment of the application can be applied to a scenario of refocusing a local image area in a recorded video file, in which the target focusing type and the target focusing object in each target image frame need to be determined. A plurality of selectable focusing objects and a plurality of focusing types can be automatically displayed to the user, so that the user can, according to the actual focusing requirement, select the candidate focusing objects that need focusing and the target focusing type to be used, that is, the focusing type corresponding to the actually expected focusing display effect. In this way, focusing processing is performed on only some of the candidate focusing objects in a certain target image frame through personalized setting, the purpose of more flexibly focusing on some of the candidate focusing objects in the target image frame is achieved, and the focusing effect during display of the target file becomes more interesting.
Specifically, in the process of determining the target focusing type, for a case that the fourth sub-input is an input of the user by drawing the first track, a candidate focusing object sequence may be displayed on the file display interface, where the candidate focusing object sequence is used to indicate identifiers of candidate focusing objects respectively included in each target image frame, receive an input of a selection track of the identifiers of the candidate focusing objects by the user, and determine the target focusing type based on the selection track; further, a preset control (e.g., a control indicating depth information or priority) indicating object property information of the candidate focusing object in each target image frame may be displayed, so as to receive a user's selection input for the preset control and an input for a selection track of the candidate focusing object's identification, and determine a target focusing type based on the selection track and the selection input for the preset control.
In a specific implementation, as shown in fig. 10a, a candidate focusing object sequence, a control a, and a control b corresponding to the target image frames of the 5th to 8th seconds in the target file are displayed below the play progress bar of the target file. The candidate focusing object sequence includes the candidate focusing object identifiers respectively corresponding to a plurality of target image frames in the target file; the control a is used for indicating the depth-of-field order of the plurality of candidate focusing objects in each target image frame (e.g., ↓ represents depth of field from deep to shallow), and the control b is used for indicating the priority order of the plurality of candidate focusing objects in each target image frame (e.g., a corresponding symbol represents increasing priority). In addition, to help the user quickly identify the target focusing object that actually needs to be focused in each target image frame, a thumbnail corresponding to each target image frame may also be displayed. Then, a user input of a connecting-line track through the plurality of candidate focusing object identifiers in the candidate focusing object sequence is received based on the thumbnails. For example, if the user draws the connecting-line track A-C-B-A in sequence, that is, the target focusing objects selected by the user in adjacent target image frames are all different, the fourth sub-input is an object selection track input indicating different candidate focusing objects in at least two target image frames, and the target focusing type is determined to be object rotation focusing.
For another example, as shown in fig. 10b, the user clicks the control a (i.e., selects the control a); at this time, for each target image frame, the plurality of candidate focusing object identifiers displayed below the target image frame are sorted by depth of field. If the user draws the connecting-line track B-C-A in sequence, that is, an input of the user on the connecting-line track through the lowest candidate focusing object identifier of the plurality of target image frames is received, it is determined that the target focusing type is the first focusing type (e.g., preferentially focusing on a close-distance object).
For another example, as shown in fig. 10c, the user clicks the control b (i.e., selects the control b); at this time, for each target image frame, the plurality of candidate focusing object identifiers displayed below the target image frame are sorted by priority, that is, the priority decreases from top to bottom. If the user draws the connecting-line track B-C-B-C in sequence, that is, an input of the user on the connecting-line track through the candidate focusing object identifiers in the first row of the plurality of target image frames is received, it is determined that the target focusing type is the third focusing type (e.g., preferentially focusing on a high-priority object). In this way, the corresponding target focusing type can be determined based on the different drawing tracks input by the user, so that the user can set the desired target focusing type by drawing a specific track on the file display interface, which improves the flexibility of setting the target focusing type.
Specifically, in the process of determining the target focusing type, for the case where the fourth sub-input is a selection input by the user on a displayed focusing type identifier, not only the object identifiers but also at least one focusing type identifier is displayed in advance on the file display interface based on a plurality of preset focusing types, and then the fourth sub-input of the user on the at least one focusing type identifier is received. The fourth sub-input is an input of selecting at least one target focusing type identifier from the at least one focusing type identifier, and the at least one target focusing type identifier indicates at least one target focusing type; that is, the target focusing type identifier is a focusing type identifier selected by the user from the plurality of focusing type identifiers, and the focusing type corresponding to the target focusing type identifier is used as the target focusing type. In a specific implementation, as shown in fig. 10d, the focusing type identifiers corresponding to the first focusing type, the second focusing type, and the third focusing type, e.g., focusing type 1, focusing type 2, and focusing type 3, are displayed on the file display interface. If the user selects focusing type 1, focusing type 1 is determined as the target focusing type. Further, the user may also select at least two focusing type identifiers at the same time; if the user selects focusing type 1 and focusing type 2, the combination of focusing type 1 and focusing type 2 is determined as the target focusing type.
Further, the candidate focusing objects and the target focusing type can be set before the target file is played, or the playing of the target file can be paused at any time during playback to set them, so that the user can adjust the candidate focusing objects and the target focusing type in real time based on the actually displayed focusing effect. Specifically, in a case where a third image frame is displayed on the file display interface of the target file, a fifth sub-input of the user is received; at least one focusing object identifier and a focusing type identifier are displayed in response to the fifth sub-input; a sixth sub-input of the user on the at least one focusing object identifier and the focusing type identifier is received; and in response to the sixth sub-input, the candidate focusing objects and the target focusing type are determined, then the target focusing object among the at least one candidate focusing object is determined based on the target focusing type, and the definition of the image in the first area where the target focusing object is located in the target image frame is updated.
Further, in the process of determining the target focusing object to be focused in each target image frame based on the target focusing type, the target focusing type may include: a first focusing type, a second focusing type, a third focusing type, and a combined focusing type, wherein the combined focusing type corresponds to a combination of at least two focusing types;
correspondingly, the determining a target focusing object of the at least one candidate focusing object based on the target focusing type specifically includes:
determining a target focusing object based on depth of field information of at least one candidate focusing object under the condition that the target focusing type is a first focusing type;
in a case where the target focusing type is the second focusing type, for each pair of adjacent first and second target image frames, determining a second target focusing object in the second target image frame based on a first target focusing object in the first target image frame, wherein the first target focusing object and the second target focusing object are different;
determining a target focusing object based on a display order of at least one candidate focusing object in case that the target focusing type is a third focusing type;
and under the condition that the target focusing type is the combined focusing type, determining the target focusing object based on the target object determination modes corresponding to at least two focusing types corresponding to the combined focusing type.
According to the file processing method, in a scenario of refocusing a local image area in a recorded video file in which the target focusing object in each target image frame is determined based on the target focusing type, after the candidate focusing objects selected by the user and the target focusing type to be used are determined, the target focusing object in each target image frame can be determined based on the target focusing type. In this way, the user selects the focusing type corresponding to the actually expected focusing display effect, focusing on only some of the candidate focusing objects in a certain target image frame is achieved automatically through individual setting, some of the candidate focusing objects in the target image frame are focused more flexibly, and the focusing effect during display of the target file becomes more interesting.
In particular implementation, (1) determining a target focusing object based on depth information of at least one candidate focusing object when the target focusing type is the first focusing type;
specifically, the depth-of-field information is used to represent the shooting distance between a focusing object and the camera. For example, if the first focusing type is preferential focusing on a close-distance object, the candidate focusing object with the minimum depth of field among the at least one candidate focusing object included in the target image frame is determined as the target focusing object, that is, the shooting object closest to the camera during shooting is taken as the target focusing object. For another example, if the first focusing type is preferential focusing on a long-distance object, the candidate focusing object with the maximum depth of field is determined as the target focusing object, that is, the shooting object farthest from the camera during shooting is taken as the target focusing object. For another example, if the first focusing type is preferential focusing on a middle-distance object, the candidate focusing object whose depth of field lies in the middle is determined as the target focusing object; that is, the at least one candidate focusing object is sorted in order of depth of field from large to small, and the candidate focusing object ranked in the middle position is determined as the target focusing object. In a specific implementation, the target focusing object in the target image frame may be determined based on the relationship between the target focusing type and the depth-of-field magnitudes of the candidate focusing objects.
As shown in fig. 11a, for the case where the target focusing type is the first focusing type, the candidate focusing objects include an object A, an object B, and an object C, and the first focusing type is taken as preferential focusing on a close-distance object. For example, the image frame 5 corresponding to the 5th second in the target file includes the object A and the object B; if the depth of field of the object A is smaller than that of the object B, the object A is determined as the target focusing object, that is, the image of the area where the object A is located in the image frame 5 is blurred before focusing and sharp after focusing. For another example, the image frame 6 corresponding to the 6th second includes the object B and the object C; if the depth of field of the object C is smaller than that of the object B, the object C is determined as the target focusing object, that is, the image of the area where the object C is located in the image frame 6 is blurred before focusing and sharp after focusing. For another example, the image frame 7 corresponding to the 7th second includes the object A, the object B, and the object C; if the depth of field of the object A is the smallest, the object A is determined as the target focusing object, that is, the image of the area where the object A is located in the image frame 7 is blurred before focusing and sharp after focusing.
In the embodiment provided by the application, a user can select to determine which candidate focusing object in a target image frame is to be subjected to refocusing processing based on the depth of field information of the candidate focusing object, so that a close-distance candidate focusing object in a focusing target image frame, a middle-distance candidate focusing object in a focusing target file or a long-distance candidate focusing object in the focusing target file can be automatically optimized, and intelligent focusing processing according to the depth of field information of the candidate focusing object is realized.
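The depth-based selection for the first focusing type can be illustrated with a short sketch. The function names and the candidate representation are hypothetical; the patent does not prescribe an implementation.

```python
def pick_by_depth(candidates, mode="near"):
    """Select the target focusing object by depth-of-field information.

    candidates: list of (object_id, depth) pairs, where depth is the shooting
    distance between the object and the camera.
    mode: "near" prefers the closest object (minimum depth of field),
          "far" prefers the farthest (maximum depth of field),
          "middle" prefers the object ranked in the middle of the ordering.
    """
    ordered = sorted(candidates, key=lambda c: c[1])
    if mode == "near":
        return ordered[0][0]
    if mode == "far":
        return ordered[-1][0]
    # "middle": the candidate at the middle position of the depth ordering
    return ordered[len(ordered) // 2][0]
```

Applied to the fig. 11a example, `pick_by_depth([("A", 1.0), ("B", 2.0)])` selects object A for image frame 5.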
(2) Determining a second target focusing object in the second target image frame based on a first target focusing object in the first target image frame for each adjacent first target image frame and second target image frame under the condition that the target focusing type is a second focusing type, wherein the first target focusing object and the second target focusing object are different;
specifically, the second focusing type may correspond to alternate focusing, and in a specific implementation, when the user selects the second focusing type, the candidate focusing objects that are not focused in the previous target image frame may be preferentially focused according to a sequence of appearance of the plurality of candidate focusing objects in the target file, so that the target focusing objects in each adjacent first target image frame and second target image frame are different.
As shown in fig. 11b, for the case where the target focusing type is the second focusing type, the candidate focusing objects include an object A, an object B, and an object C. For example, the image frame 5 corresponding to the 5th second in the target file includes the object A and the object B. Here, the image frame 5 is regarded as the 1st first target image frame, and both candidate focusing objects appearing in the image frame 5 are determined as target focusing objects, that is, both the object A and the object B are determined as the target focusing objects in the image frame 5 (i.e., the first target focusing objects in the first target image frame); the images of the areas where the object A and the object B are located in the image frame 5 are blurred before focusing and clear after focusing.
The image frame 6 corresponding to the 6th second in the target file includes the object B and the object C. Here, the image frame 6 is regarded as the 1st second target image frame; since the object C is a candidate focusing object that was not focused in the image frame 5, the object C may be determined as the target focusing object in the image frame 6 (i.e., the second target focusing object in the second target image frame), that is, the image of the area where the object C is located in the image frame 6 is blurred before focusing and clear after focusing.
The image frame 7 corresponding to the 7th second in the target file includes the object A, the object B, and the object C. The image frame 6 is regarded as the 2nd first target image frame, and the image frame 7 is regarded as the 2nd second target image frame; since the object A and the object B are both candidate focusing objects that were not focused in the image frame 6, the object A and the object B may be determined as the target focusing objects in the image frame 7 (i.e., the second target focusing objects in the second target image frame), that is, the images of the areas where the object A and the object B are located in the image frame 7 are blurred before focusing and clear after focusing.
In the embodiment provided by the application, a user can select a mode of automatically performing alternate focusing processing on different candidate focusing objects in adjacent target image frames, so that the target focusing objects in the two adjacent target image frames are different, and the user can view a more dynamic focusing effect.
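The alternate-focusing rule of the second focusing type can be sketched as follows. It reproduces the image frame 5/6/7 walkthrough above under the assumption that every candidate not focused in the previous target image frame is focused in the current one (names are illustrative, not the patent's implementation):

```python
def alternate_focus(frames):
    """Plan target focusing objects per frame for the second focusing type.

    frames: list of candidate-object lists, one per target image frame.
    Each frame prefers the candidates that were NOT focused in the previous
    frame, so the target focusing objects of adjacent frames differ.
    """
    focused_prev = set()
    plan = []
    for candidates in frames:
        fresh = [c for c in candidates if c not in focused_prev]
        targets = fresh if fresh else list(candidates)  # fall back if all repeat
        plan.append(targets)
        focused_prev = set(targets)
    return plan
```

For the fig. 11b frames `[["A", "B"], ["B", "C"], ["A", "B", "C"]]` this yields `[["A", "B"], ["C"], ["A", "B"]]`, matching the walkthrough.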
(3) Determining a target focusing object based on a display order of at least one candidate focusing object in case that the target focusing type is a third focusing type;
specifically, the third focusing type may correspond to focusing by priority, where the higher a candidate focusing object ranks in the display order, the higher its priority. The display order of the plurality of candidate focusing objects may be preset, that is, determined based on a default sorting manner; alternatively, it may be set by the user according to actual requirements, that is, determined based on a sorting manner selected by the user. For example, under a particular sorting manner, the object closer to the camera at the time of shooting may be given a higher priority, so that the foremost object in the target image frame is determined as the target focusing object.
As shown in fig. 11c, for the case where the target focusing type is the third focusing type, the candidate focusing objects include an object A, an object B, and an object C, and the display order is: the object C, the object B, the object A, that is, in order of priority from high to low: the object C, the object B, the object A. For example, the image frame 5 corresponding to the 5th second in the target file includes the object A and the object B; since the priority of the object B is higher than that of the object A, the object B is determined as the target focusing object, that is, the image of the area where the object B is located in the image frame 5 is blurred before focusing and sharp after focusing. For another example, the image frame 6 corresponding to the 6th second includes the object B and the object C; since the priority of the object C is higher than that of the object B, the object C is determined as the target focusing object, that is, the image of the area where the object C is located in the image frame 6 is blurred before focusing and sharp after focusing. For another example, the image frame 7 corresponding to the 7th second includes the object A, the object B, and the object C; since the priority of the object C is the highest, the object C is determined as the target focusing object, that is, the image of the area where the object C is located in the image frame 7 is blurred before focusing and sharp after focusing.
In the embodiment provided by the application, the user can select to determine which candidate focusing object in the target image frame is to be subjected to refocusing processing based on the display order of the candidate focusing objects, so that the candidate focusing object with higher attention in the focusing target image frame can be automatically preferred, and intelligent focusing processing according to the priority order of the candidate focusing objects is realized.
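The priority-based selection of the third focusing type can be sketched as follows (illustrative names; the display order encodes the priority, with an earlier position meaning a higher priority):

```python
def priority_focus(candidates, display_order):
    """Pick the candidate whose identifier appears earliest in display_order.

    candidates: candidate focusing objects present in one target image frame.
    display_order: all candidate identifiers, highest priority first.
    """
    rank = {obj: i for i, obj in enumerate(display_order)}
    return min(candidates, key=lambda c: rank[c])
```

With the fig. 11c order `["C", "B", "A"]`, image frame 5's candidates `["A", "B"]` yield object B, and image frames 6 and 7 yield object C, matching the example above.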
(4) And under the condition that the target focusing type is the combined focusing type, determining the target focusing object based on the target object determination modes corresponding to at least two focusing types corresponding to the combined focusing type.
Specifically, taking the target focusing type as a combination of the second focusing type and the third focusing type as an example, that is, the combined focusing type includes the second focusing type and the third focusing type, and the target object determination method corresponding to the combined focusing type is as follows: for a target image frame comprising a plurality of candidate focusing objects, the target focusing object in each adjacent first target image frame and second target image frame is different, and the target focusing object in the target image frame is determined based on the display order of the candidate focusing objects; for another example, taking the target focusing type as a combination of the first focusing type and the second focusing type as an example, that is, the combined focusing type includes the first focusing type and the second focusing type, and the target object determination method corresponding to the combined focusing type is as follows: for a target image frame including a plurality of candidate focusing objects, the target focusing object in each adjacent first target image frame and second target image frame is different, and the target focusing object in the target image frame is determined based on depth information of the candidate focusing objects.
In a specific implementation, as shown in fig. 11d, for the case where the target focusing type is a combination of the second focusing type and the third focusing type, that is, the combined focusing type includes the second focusing type and the third focusing type, the corresponding target object determination manner is: for a target image frame including a plurality of candidate focusing objects, the target focusing objects in each pair of adjacent first and second target image frames are different, and the candidate focusing object ranked higher in the display order is determined as the target focusing object. Specifically, the candidate focusing objects include an object A, an object B, and an object C, and the display order is: the object C, the object B, the object A, that is, in order of priority from high to low: the object C, the object B, the object A. For example, the image frame 5 corresponding to the 5th second in the target file includes the object A and the object B. Here, the image frame 5 is regarded as the 1st first target image frame; since the priority of the object B is higher than that of the object A, the object B with the higher priority is determined as the target focusing object in the image frame 5 (i.e., the first target focusing object in the first target image frame), that is, the image of the area where the object B is located in the image frame 5 is blurred before focusing and clear after focusing.
The image frame 6 corresponding to the 6th second in the target file includes the object B and the object C. Here, the image frame 6 is regarded as the 1st second target image frame; since the object C is a candidate focusing object that was not focused in the image frame 5 and the priority of the object C is higher than that of the object B, the object C may be determined as the target focusing object in the image frame 6 (i.e., the second target focusing object in the second target image frame), that is, the image of the area where the object C is located in the image frame 6 is blurred before focusing and clear after focusing.
The image frame 7 corresponding to the 7th second in the target file includes the object A, the object B, and the object C. The image frame 6 is regarded as the 2nd first target image frame, and the image frame 7 is regarded as the 2nd second target image frame; since the object A and the object B are both candidate focusing objects that were not focused in the image frame 6 and the priority of the object B is higher than that of the object A, the object B may be determined as the target focusing object in the image frame 7 (i.e., the second target focusing object in the second target image frame), that is, the image of the area where the object B is located in the image frame 7 is blurred before focusing and clear after focusing.
In the embodiment provided by the application, a user can select a target object determination method corresponding to at least two focusing types to determine which candidate focusing object in the target image frame is to be subjected to refocusing processing, so that the candidate focusing object is determined as the target focusing object in the target image frame only under the condition that the candidate focusing object meets the constraint conditions of the target object determination methods corresponding to the at least two focusing types, and the target focusing object selected from the multiple candidate focusing objects is more targeted.
In a specific implementation, in order to improve the flexibility of combining focusing types and increase the diversity of combination manners of the combined focusing type, and considering that the user may need to combine a plurality of focusing types more flexibly, a corresponding option can be set for each minimum (atomic) focusing type. For example, the user may simultaneously select the second focusing type and the third focusing type (that is, the target focusing object in the second target image frame is the candidate focusing object that was not focused in the first target image frame and has the highest priority), or may simultaneously select the first focusing type and the second focusing type (the target focusing object in the second target image frame is a candidate focusing object that was not focused in the first target image frame and whose depth-of-field information meets a preset condition, such as the candidate focusing object with the maximum or minimum depth of field among those not focused in the first target image frame). This enables arbitrary combinations of the minimum focusing types, thereby improving the flexibility of combining focusing types.
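The combination of the second and third focusing types can be sketched as follows; it reproduces the fig. 11d walkthrough under the stated assumptions (illustrative names, not the patent's implementation): among the candidates not focused in the previous frame, the one with the highest priority is chosen.

```python
def combined_focus(frames, display_order):
    """Second focusing type (alternate) combined with the third (priority).

    frames: list of candidate-object lists, one per target image frame.
    display_order: all candidate identifiers, highest priority first.
    For each frame, restrict to candidates not focused in the previous frame
    (falling back to all candidates if none remain), then pick the one with
    the highest priority.
    """
    rank = {obj: i for i, obj in enumerate(display_order)}
    focused_prev = set()
    plan = []
    for candidates in frames:
        pool = [c for c in candidates if c not in focused_prev] or list(candidates)
        target = min(pool, key=lambda c: rank[c])
        plan.append(target)
        focused_prev = {target}
    return plan
```

For the fig. 11d frames `[["A", "B"], ["B", "C"], ["A", "B", "C"]]` with order `["C", "B", "A"]`, the plan is `["B", "C", "B"]`, matching the walkthrough.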
In the process of obtaining the target file by using dual cameras, before receiving the first input of the user in step 102, the method further includes:
controlling the first camera and the second camera to shoot synchronously;
under the condition that shooting is finished, generating a first file shot by a first camera, a second file shot by a second camera and a synthesized file obtained by synthesizing the first file and the second file;
the focal lengths of the first camera and the second camera are different; the target file is a composite file.
The file processing method can be applied to a scenario in which the target file is obtained through dual cameras. The two cameras are controlled to shoot synchronously; since the focal lengths of the first camera and the second camera are different, image frames with different definitions exist in the first file generated by the first camera and the second file generated by the second camera. By storing the first file and the second file separately, the image frames whose definition meets a preset requirement among the image frames of the first file and the second file can be used as candidate image frames, and the preview marks corresponding to the candidate image frames are displayed, which provides the user with rich image definition options and improves the user's selectivity over the candidate image frames serving as target replacement frames.
Specifically, a specific process of controlling the first camera and the second camera to shoot synchronously may be: a processor in the electronic device sends a shooting instruction to the first camera and the second camera simultaneously; after receiving the shooting instruction, the first camera performs framing shooting at a first focal length to obtain the first file, and the second camera performs framing shooting at a second focal length to obtain the second file; the first file and the second file are then synthesized to obtain the composite file (i.e., the target file). Considering that there may be a certain difference between the times at which the first camera and the second camera respectively receive the shooting instruction, the shooting process of one camera may slightly lag behind that of the other; as long as the delay is less than a preset threshold, the first camera and the second camera can be considered to shoot synchronously.
Specifically, after the first file and the second file are obtained by shooting synchronously with the first camera and the second camera, the first file and the second file are synthesized to generate a composite file, and each composite image frame in the composite file meets the preset definition requirement. Not only the composite file but also the first file and the second file need to be saved, so that candidate image frames can subsequently be determined from the first file and the second file.
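One plausible realization of the per-frame synthesis is to keep, for each pair of synchronized frames, the one with the higher definition score. This is a hedged sketch; the frame representation (dicts with a precomputed `sharpness` score) and the selection rule are assumptions, not the embodiment's actual algorithm.

```python
def synthesize(first_file, second_file):
    """For each pair of synchronized frames, keep the sharper one so that
    every composite frame meets the definition requirement.
    Frames are modeled as dicts with a precomputed 'sharpness' score."""
    composite = []
    for f1, f2 in zip(first_file, second_file):
        composite.append(f1 if f1["sharpness"] >= f2["sharpness"] else f2)
    return composite

first = [{"src": "cam1", "sharpness": 0.9}, {"src": "cam1", "sharpness": 0.4}]
second = [{"src": "cam2", "sharpness": 0.5}, {"src": "cam2", "sharpness": 0.8}]
print([f["src"] for f in synthesize(first, second)])  # ['cam1', 'cam2']
```

Note that both source files must still be retained after synthesis, since the candidate image frames shown to the user come from the first and second files, not from the composite.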
Further, considering that the user may have a need to blur the fourth region in at least one image frame of the target file in addition to the need to focus the target object in the target image frame, the file processing method further includes:
step one, receiving a second input of a user to a fourth area in a second image frame under the condition that the second image frame is displayed on a file display interface of a target file;
wherein the second input may include: any one of a click input of a user on the second image frame in the target file, a click input of the user on a designated control on the file display interface, a voice instruction input by the user, and a specific gesture input by the user may be specifically determined according to actual use requirements, and the embodiment of the application is not limited thereto.
Step two, in response to the second input, displaying a blurring adjustment control;
specifically, after receiving the second input, a blurring adjustment control is displayed at a designated position of the file display interface, for example, the blurring adjustment control may be displayed at a position adjacent to the second area.
Step three, receiving a third input of the user to the blurring adjustment control;
wherein the third input may include: any one of the click input of the blurring adjustment control by the user, the voice instruction input by the user, and the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application.
Step four, in response to the third input, performing blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input.
Specifically, after the third input is received, the blurring degree indicated by the third input is determined, the image of the fourth area in the second image frame is subjected to blurring processing based on the blurring degree, and the second image frame after blurring processing is displayed on the file display interface.
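As a rough illustration of degree-controlled region blurring (not the embodiment's actual algorithm), the sketch below applies a pure-Python box blur only inside the fourth area of a grayscale frame, with the blurring degree acting as the blur radius. The frame representation as a 2-D list of integers is an assumption for illustration.

```python
def blur_region(image, region, degree):
    """Box-blur only the pixels inside `region` of a grayscale frame.

    image:  2-D list of ints (rows of pixel intensities)
    region: (top, left, bottom, right), top/left inclusive, bottom/right exclusive
    degree: blur radius; a larger degree means stronger blurring
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # pixels outside the region stay unchanged
    top, left, bottom, right = region
    for y in range(top, bottom):
        for x in range(left, right):
            acc, n = 0, 0
            for dy in range(-degree, degree + 1):
                for dx in range(-degree, degree + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx]
                        n += 1
            out[y][x] = acc // n  # average of the neighborhood
    return out
```

A real implementation would operate on the decoded second image frame and likely use an optimized filter, but the principle — the third input's blurring degree parameterizes the filter strength over the fourth area only — is the same.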
The file processing method provided by the embodiment of the application can be applied to a scene in which a local image area in a recorded video file needs to be blurred. After the second input of the user to the fourth area in the second image frame is received, it is determined that the user wants to blur a certain object in the second image frame, and the corresponding blurring adjustment control is automatically displayed, so that the user can set the blurring degree of the blurred object through the control. Therefore, blurring processing can be performed on any image frame of the target file during video playing, which enriches the variety of personalized image editing on the target file and meets the different image editing requirements of different users for recorded video files.
As shown in fig. 12, during the playing of the target video file, a second input (e.g., a double-click input) of the user to the image frame 10 corresponding to the 10th second is received; the image frame 10 is kept displayed, a control c (i.e., a blurring adjustment control) is displayed, a third input of the user to the blurring adjustment control is received, and the blurring degree indicated by the third input is determined. The control c may be a blurring adjustment control comprising a slider, and the user may adjust the blurring degree by moving the position of the slider. For example, a double-click input of the user on the object B in the image frame 10 corresponding to the 10th second is received; the image frame 10 is kept displayed, a blurring adjustment control c is displayed in the area where the object B is located, a third input of the user to a slider d on the control c is received, and the blurring degree of the image in the area where the object B is located is determined — that is, the image of the area where the object B is located in the image frame 10 is sharp before blurring and blurred after blurring. Alternatively, the blurring degree of the image of the areas other than the area where the object B is located is determined — that is, the image of the other areas of the image frame 10 is sharp before blurring and blurred after blurring. The control c may also include an information input box, and the user may adjust the blurring degree by entering a numerical value in the input box.
In a specific implementation, when the user selects a plurality of blurring objects in the second image frame, only one blurring adjustment control may be displayed on the file display interface, so that the user can drag the blurring adjustment control to the area where the object to be blurred is located and thereby blur that object; alternatively, one blurring adjustment control may be displayed in the area where each blurring object is located in the second image frame, so that a plurality of blurring objects can be blurred simultaneously and their blurring effects can be compared.
In a process of blurring an object in a target file by using a displayed blurring adjustment control, the blurring adjustment control may include a slider, and correspondingly, the third input is an input of moving the slider to a first position by a user;
correspondingly, in the fourth step, before performing the blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input in response to the third input, the method further includes:
In response to the third input, determining the blurring degree according to the first position of the slider in the blurring adjustment control;
the first position may be any position where the slider is moved to the blurring adjustment control, where the first position indicates a blurring degree of blurring the image in the fourth area, and specifically, the user may adjust the blurring degree by moving the position of the slider according to an actual requirement, for example, moving the slider upwards indicates increasing the blurring degree, and correspondingly, moving the slider downwards indicates decreasing the blurring degree; in a specific implementation, a corresponding relationship between the position of the slider and the blurring degree may be preset, and then, based on the first position of the slider and the corresponding relationship, the blurring degree of the image of the fourth area is determined.
Correspondingly, in the fourth step, after performing blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input in response to the third input, the method further includes:
displaying a first processing identifier at a second position on the playing progress bar of the target file, and displaying a second processing identifier at a third position on the playing progress bar;
the second position is a position on the target image frame corresponding to the playing progress bar, and the first processing identifier indicates that the definition of the image in the first area of the target image frame is updated; the third position is the position of the second image frame corresponding to the playing progress bar, and the second processing identifier indicates that the second image frame has completed one blurring processing.
Specifically, a focusing position and a blurring position are marked on a playing progress bar of the target file, that is, a first processing identifier is displayed at the playing time of the target image frame with updated definition, and a second processing identifier is displayed at the playing time of a second image frame with once blurring processing completed, so that a user can quickly locate the positions of the image frame with updated definition and the image frame with once blurring processing completed in the target file.
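Placing the identifiers amounts to mapping each marked frame's position in the file onto the progress bar. A minimal sketch under assumed pixel coordinates (the bar width and frame indexing are illustrative):

```python
def marker_positions(frame_indices, total_frames, bar_width_px):
    """Convert marked frame indices into x-offsets (in pixels) on a playing
    progress bar, where the first and second processing identifiers can
    then be drawn at the corresponding playing times."""
    return [round(i / total_frames * bar_width_px) for i in frame_indices]

# Frames 0, 10 and 20 of a 20-frame file on a 200 px wide bar:
print(marker_positions([0, 10, 20], 20, 200))  # [0, 100, 200]
```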
The file processing method in the embodiment of the application can be applied to a scene after the definition of the first area in the target image frame has been updated and the fourth area in the second image frame has been blurred. The positions of the refocused image frames and of the blurred image frames are marked on the playing progress bar of the target file, so that the user can quickly locate, within the target file, the image frames whose definition has been updated and the image frames that have completed one blurring processing.
As shown in fig. 13, the target file is a video file with a playing duration of 20 seconds. If the 3rd, 5th, 6th, 8th, 12th, and 18th frame images in the video file are target image frames with updated definition, and the 4th, 9th, 10th, and 16th frame images are second image frames that have completed one blurring processing, then the first processing identifier is displayed at the playing times of the 3rd, 5th, 6th, 8th, 12th, and 18th frame images, and the second processing identifier is displayed at the playing times of the 4th, 9th, 10th, and 16th frame images.
Further, the image frames whose definition has been updated and the image frames that have completed one blurring processing can be extracted from the target file, and a clip file can then be generated, so as to quickly clip the segments the user is interested in and make viewing more engaging. Based on this, after the first processing identifier is displayed at the second position on the playing progress bar of the target file and the second processing identifier is displayed at the third position on the playing progress bar, the method further includes:
extracting at least one target image frame indicated by at least one first processing identifier and at least one second image frame indicated by at least one second processing identifier;
generating a clip file based on the extracted at least one target image frame and at least one second image frame.
Specifically, the focused and/or blurred image frames may be stored in the form of individual images, and the optimized segments containing the focused and/or blurred image frames may also be stored in the form of video segments, so as to implement automatic clipping and allow the user to directly use the edited image frames or video segments.
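A sketch of the extraction step, assuming each frame carries its processing identifier as metadata (the `"focus"`/`"blur"` labels are illustrative names for the first and second processing identifiers):

```python
def generate_clip(frames):
    """Extract, in playback order, every frame marked with a first (focus)
    or second (blur) processing identifier and return them as the clip."""
    return [f for f in frames if f.get("identifier") in ("focus", "blur")]

frames = [
    {"id": 3, "identifier": "focus"},
    {"id": 4, "identifier": "blur"},
    {"id": 7, "identifier": None},   # unprocessed frame: not extracted
    {"id": 9, "identifier": "blur"},
]
print([f["id"] for f in generate_clip(frames)])  # [3, 4, 9]
```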
The file processing method provided by the embodiment of the application can be applied to a scene in which focusing and blurring of image frames have been completed in a recorded video file. By marking the corresponding processing identifier on each image frame that has completed focusing and/or blurring, the marked image frames can be extracted based on the processing identifiers and the corresponding clip file can be generated, so that the segments the user is interested in can be clipped quickly and viewing becomes more engaging.
In a specific implementation, the set of the extracted at least one target image frame and at least one second image frame may be directly determined as the clip file. Alternatively, image frame thumbnails may be displayed, where each icon in the thumbnails indicates one target image frame or one second image frame; a second track drawn by the user on the thumbnails is received, and the clip file is generated based on the target image frames and/or second image frames covered by the second track. That is, the user can draw the second track on the user interface displaying the thumbnails, the second track indicating the target image frames and/or second image frames the user has selected, and the clip file is then generated from the frames covered by the track. This further improves the flexibility of clip file generation and better meets the user's requirement for personalized video clips.
As shown in fig. 14, on the basis of fig. 13, if the 3rd, 5th, 6th, 8th, 12th, and 18th frame images in the video file are target image frames with updated definition, and the 4th, 9th, 10th, and 16th frame images are second image frames that have completed one blurring processing, thumbnails of the focused and/or blurred image frames are displayed in sequence on the user interface. To help the user distinguish the processing type of each image frame, a processing identifier may be displayed at a preset position of each icon in the thumbnails — for example, one mark for the first processing identifier corresponding to focusing processing (i.e., updated definition) and a different mark for the second processing identifier corresponding to blurring processing (i.e., one completed blurring processing). Then, if a 3-5-12-16 connection track input by the user is received, the clip file is generated based on the 3rd, 5th, 12th, and 16th frame images in the video file. In a specific implementation, thumbnails of the 3rd, 5th, 6th, 8th, 12th, and 18th frame images may be displayed in a first area of the user interface, thumbnails of the 4th, 9th, 10th, and 16th frame images may be displayed in a second area of the user interface, and the corresponding clip file may then be generated based on the connection track input by the user.
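The connection-track selection in this example can be read as picking out the frame indices the track passes through. A sketch where the track is modeled as a "3-5-12-16"-style string (the string encoding is an assumption for illustration):

```python
def clip_from_track(track: str, frames_by_id: dict) -> list:
    """Build a clip from the frames whose indices appear on the user's
    connection track, e.g. the '3-5-12-16' track in the example above."""
    ids = [int(part) for part in track.split("-")]
    return [frames_by_id[i] for i in ids if i in frames_by_id]

# Hypothetical pool of processed (focused/blurred) frames:
frames_by_id = {i: f"frame_{i}" for i in (3, 4, 5, 6, 8, 9, 10, 12, 16, 18)}
print(clip_from_track("3-5-12-16", frames_by_id))
# ['frame_3', 'frame_5', 'frame_12', 'frame_16']
```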
Further, considering that the user may need to transfer the clip file, in order to improve file transfer efficiency, after the clip file is generated based on the extracted at least one target image frame and at least one second image frame, the method further includes:
Step one, receiving a fourth input of the user to a file sending control;
Step two, in response to the fourth input, displaying a recipient list, wherein the recipient list comprises a plurality of candidate recipients;
Step three, receiving a fifth input of the user to the recipient list, wherein the fifth input is an input of selecting a target recipient from the plurality of candidate recipients;
Step four, sending the clip file to the target recipient corresponding to the fifth input.
The recipient list can be determined based on the user's historical behavior data within a preset time period; specifically, the recipient list is determined based on at least one of the user's application usage records, chat records, and wireless connection records within the preset time period. In a specific implementation, the file recipients to which the user is likely to transfer files are automatically identified based on this historical behavior data, and the identified recipients are taken as candidate recipients and displayed in the form of a list at a preset position on the user interface of the target application. For example, a candidate recipient may be a cloud drive the user has used frequently recently, a friend the user has contacted frequently recently under a certain instant messaging application, or an electronic device with which a wireless communication connection has been established.
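Ranking candidate recipients by how often they appear in the recent behavior records is one plausible realization of the identification step above. A hedged sketch; the event representation is an assumption.

```python
from collections import Counter

def candidate_recipients(history_events, top_n=3):
    """Rank recipients by how often they appear in the user's recent
    application-usage / chat / wireless-connection records."""
    counts = Counter(event["recipient"] for event in history_events)
    return [recipient for recipient, _ in counts.most_common(top_n)]

# Hypothetical behavior records within the preset time period:
history = [{"recipient": r} for r in
           ["friend1", "clouddrive", "friend1", "friend2", "friend1", "clouddrive"]]
print(candidate_recipients(history))  # ['friend1', 'clouddrive', 'friend2']
```

In practice the three record types would likely be weighted differently, but any scoring that orders recipients by recent relevance fits the described behavior.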
The file processing method can be applied to a scene in which the user is determined to have a file transfer requirement: the recipient list is displayed to the user directly within the application program that manages the target file, so that the user can quickly select the target recipient to which the file actually needs to be sent. This further simplifies the user's clipping and sending steps and makes the file transfer process quicker and more convenient.
As shown in fig. 15, on the clip file display interface, the clip file 1 is dragged to the position of friend 1 under the communication application in the recipient list; friend 1 is determined as the target recipient, the clip file 1 selected by the user is transmitted to friend 1, and the interface jumps to the communication chat interface with friend 1. Specifically, after it is determined that the clip file 1 needs to be transmitted to friend 1, the associated application where friend 1 is located is invoked, and the file is transmitted through that associated application.
In the file processing method provided by the embodiment of the application, a first input of a user is received, and in response to the first input, the definition of the image of a first region of a target image frame in a target file is updated; the target file comprises at least two image frames, and the target image frame whose definition is to be updated comprises at least one image frame of the target file. Therefore, after the target file is generated, the user can perform a first input on the user interface of the electronic device according to actual requirements, and after receiving the first input, the electronic device updates the definition of the image in any area of at least one target image frame in the target file, achieving the effect of refocusing the target object contained in that area. In this way, after a video file has been recorded, any target object in a local image area of at least one image frame of the video file can be refocused in a targeted manner, which meets the user's personalized focusing requirements for target objects in the video file and the user's personalized display requirements for the image content at a certain moment in the video file. In addition, re-editing a local image area in at least one image frame of the recorded target file does not require professional software dedicated to post-production video editing, so image processing operations on the target file are more convenient for the user, and file editing efficiency is further improved.
In the file processing method provided by the embodiment of the application, the execution subject may be a file processing apparatus. In the embodiment of the present application, a file processing apparatus executing the file processing method is taken as an example to describe the file processing apparatus provided in the embodiment of the present application.
Specifically, corresponding to the above file processing method, as shown in fig. 16a, the file processing apparatus provided in the embodiment of the present application includes:
a first receiving module 1602, configured to receive a first input of a user;
a first processing module 1604 for updating a sharpness of an image of a first region of a target image frame in a target file in response to the first input;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
In the embodiment provided by the application, after the target file is generated, the user can perform a first input on the user interface of the electronic device according to actual requirements. After receiving the first input, the electronic device updates the definition of the image in any area of at least one target image frame in the target file, achieving the effect of refocusing the target object contained in that area. Thus, after a video file has been recorded, any target object in a local image area of at least one image frame of the video file can be refocused in a targeted manner, which meets the user's personalized focusing requirements for target objects in the video file and the user's personalized display requirements for the image content at a certain moment in the video file. In addition, re-editing a local image area in at least one image frame of the recorded target file does not require professional software dedicated to post-production video editing, so image processing operations on the target file are more convenient for the user, and file editing efficiency is further improved.
Optionally, the first processing module 1604 is specifically configured to:
updating an image of a first region in a target image frame based on an image of a second region in the target replacement frame;
the second area is an area corresponding to the first area in the target replacement frame, and the definition of the image of the second area is higher than that of the image of the first area.
Optionally, the first receiving module 1602 is specifically configured to:
under the condition that a first image frame is displayed on a file display interface of the target file, receiving a first sub-input of a user to a first area in the first image frame;
as shown in fig. 16b, the above apparatus further comprises: a first display module 1606 for:
in response to the first sub-input, displaying at least one preview identifier, each preview identifier indicating one candidate image frame, a third area in the candidate image frame having a higher definition than that of a first area in the first image frame, the third area being an area in the candidate image frame corresponding to the first area.
Optionally, the target file is a synthesized file, the synthesized file is obtained by synthesizing a first file and a second file, the first file is shot by a first camera, and the second file is shot by a second camera;
the target image frame is an image frame in the composite file, and the candidate image frame indicated by the preview identifier is an image frame in the first file or the second file.
Optionally, the apparatus further comprises: a second display module 1608 for displaying the sharpness values of the candidate image frames indicated by the at least one preview indicator.
Optionally, the first display module 1606 is specifically configured to:
the first image frame is kept displayed in a first area of the file display interface, and at least one preview mark is displayed in a second area of the file display interface;
the device further comprises: a second receiving module 1610, configured to receive a second sub-input of the target preview identifier from the at least one preview identifier by the user;
the first processing module 1604 is further specifically configured to:
updating the sharpness of the image of the first region of a target image frame based on a target candidate image frame indicated by the target preview identification in response to the second sub-input;
wherein the target candidate image frame is a target replacement frame.
Optionally, the apparatus further comprises: a third display module 1612 for displaying at least one object identifier on the file display interface of the target file, wherein the object identifier is used for indicating an object in any image frame of the target file;
a third receiving module 1614, configured to receive a third sub-input of the at least one object identifier from the user before updating the sharpness of the first region of the target image frame in the target file, where the third sub-input is an input of selecting at least one candidate object identifier from the at least one object identifier, and the at least one candidate object identifier indicates at least one candidate focusing object;
the first processing module 1604 is further specifically configured to:
updating the definition of an image of a first area in which a target focusing object is positioned in a target image frame;
wherein the target focusing object comprises a part or all of the at least one candidate focusing object.
Optionally, the apparatus further comprises:
a fourth receiving module 1616, configured to receive a fourth sub-input of the user;
a first determining module 1618, configured to determine, in response to the fourth sub-input, a target focusing type according to the fourth sub-input;
the first processing module 1604 is further specifically configured to:
determining a target focusing object of the at least one candidate focusing object based on the target focusing type;
and updating the definition of the image of the first area in the target image frame where the target focusing object is positioned.
Optionally, the target focusing type includes: the focusing system comprises a first focusing type, a second focusing type, a third focusing type and a combined focusing type, wherein the combined focusing type corresponds to the combination of at least two focusing types;
the first processing module 1604 is further specifically configured to:
determining a target focusing object based on depth of field information of the at least one candidate focusing object when the target focusing type is a first focusing type;
determining, for each adjacent first and second target image frames, a second target focusing object in the second target image frame based on a first target focusing object in the first target image frame, the first and second target focusing objects being different, in case the target focusing type is a second focusing type;
determining a target focusing object based on the display order of the at least one candidate focusing object when the target focusing type is a third focusing type;
and under the condition that the target focusing type is a combined focusing type, determining a target focusing object based on target object determination modes corresponding to the at least two focusing types.
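The dispatch on the target focusing type can be sketched as below. The concrete selection rules ("nearest depth wins", "avoid repeating the previous frame's object", "lowest display order wins") are assumptions used only to make the dispatch runnable; the embodiment does not fix them.

```python
def pick_target_focus(candidates, focus_type, previous=None):
    """Illustrative dispatch on the target focusing type: 'depth' picks the
    candidate nearest the camera, 'alternate' avoids repeating the previous
    frame's focusing object, 'order' follows the display order."""
    if focus_type == "depth":        # first focusing type: use depth-of-field info
        return min(candidates, key=lambda c: c["depth"])
    if focus_type == "alternate":    # second focusing type: differ from previous frame
        for c in candidates:
            if previous is None or c["name"] != previous["name"]:
                return c
        return None
    if focus_type == "order":        # third focusing type: use display order
        return min(candidates, key=lambda c: c["order"])
    raise ValueError(f"unknown focusing type: {focus_type}")

candidates = [{"name": "A", "depth": 2.0, "order": 0},
              {"name": "B", "depth": 1.0, "order": 1}]
print(pick_target_focus(candidates, "depth")["name"])                        # B
print(pick_target_focus(candidates, "alternate", previous={"name": "A"})["name"])  # B
print(pick_target_focus(candidates, "order")["name"])                        # A
```

A combined focusing type would chain two or more of these rules, applying each constituent type's selection in turn.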
Optionally, the apparatus further comprises:
the camera control module 1620 is configured to control the first camera and the second camera to perform shooting synchronously;
a first generating module 1622, configured to generate, when shooting is completed, a first file shot by the first camera, a second file shot by the second camera, and a synthesized file obtained by synthesizing the first file and the second file;
wherein the focal lengths of the first camera and the second camera are different; the target file is the composite file.
Optionally, the apparatus further comprises:
the fifth receiving module is used for receiving a second input of a user to a fourth area in a second image frame under the condition that the second image frame is displayed on a file display interface of the target file;
a fourth display module, configured to display a blurring adjustment control in response to the second input;
a sixth receiving module, configured to receive a third input to the blurring adjustment control by a user;
and the second processing module is used for responding to the third input and carrying out blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input.
Optionally, the blurring adjustment control includes a slider, and the third input is an input of a user moving the slider to a first position;
the device further comprises:
a second determining module, configured to determine a blurring degree according to the first position;
a fifth display module, configured to display the first processing identifier at a second position on the playing progress bar of the target file, and display the second processing identifier at a third position on the playing progress bar;
wherein the second position is the position on the playing progress bar corresponding to the target image frame, and the first processing identifier indicates that the definition of the image of the first area of the target image frame has been updated; the third position is the position on the playing progress bar corresponding to the second image frame, and the second processing identifier indicates that the second image frame has completed one blurring processing.
Optionally, the apparatus further comprises:
the image frame extraction module is used for extracting at least one target image frame indicated by at least one first processing identifier and at least one second image frame indicated by at least one second processing identifier;
a second generating module, configured to generate a clip file based on the extracted at least one target image frame and the at least one second image frame.
In the file processing device provided by the embodiment of the application, a first input of a user is received, and in response to the first input, the definition of the image of a first region of a target image frame in a target file is updated; the target file comprises at least two image frames, and the target image frame whose definition is to be updated comprises at least one image frame of the target file. Therefore, after the target file is generated, the user can perform a first input on the user interface of the electronic device according to actual requirements, and after receiving the first input, the electronic device updates the definition of the image in any area of at least one target image frame in the target file, achieving the effect of refocusing the target object contained in that area. In this way, after a video file has been recorded, any target object in a local image area of at least one image frame of the video file can be refocused in a targeted manner, which meets the user's personalized focusing requirements for target objects in the video file and the user's personalized display requirements for the image content at a certain moment in the video file. In addition, re-editing a local image area in at least one image frame of the recorded target file does not require professional software dedicated to post-production video editing, so image processing operations on the target file are more convenient for the user, and file editing efficiency is further improved.
The file processing apparatus in the embodiment of the present application may be an electronic device, and may also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like; the embodiments of the present application are not particularly limited in this respect.
The file processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The file processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments in fig. 1 to fig. 15, and is not described here again to avoid repetition.
Optionally, as shown in fig. 17, an electronic device 1700 according to an embodiment of the present application is further provided, and includes a processor 1701 and a memory 1702, where the memory 1702 stores a program or an instruction that can be executed on the processor 1701, and when the program or the instruction is executed by the processor 1701, the steps of the foregoing file processing method embodiment are implemented, and the same technical effects can be achieved, and are not described again to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 18 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1800 includes, but is not limited to: radio frequency unit 1801, network module 1802, audio output unit 1803, input unit 1804, sensors 1805, display unit 1806, user input unit 1807, interface unit 1808, memory 1809, and processor 1810.
Those skilled in the art will appreciate that the electronic device 1800 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically connected to the processor 1810 via a power management system to perform functions such as managing charging, discharging, and power consumption. The electronic device structure shown in fig. 18 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The user input unit 1807 is configured to receive a first input of a user;
a processor 1810 for updating a sharpness of an image of a first region of a target image frame in a target file in response to the first input;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
In the embodiment of the application, after the target file is generated, the user can provide a first input on a user interface of the electronic device according to actual requirements. After receiving the first input, the electronic device updates the sharpness of the image in any region of at least one target image frame in the target file, achieving the effect of refocusing a target object contained in that region. Therefore, after a video file is recorded, any target object in a local image region of at least one image frame in the video file can be refocused in a targeted manner, which satisfies the user's need for personalized focusing on target objects in the video file and for personalized display of the picture content at a given moment. In addition, re-editing a local image region in at least one image frame of the recorded target file requires no professional video post-editing software, so image processing operations on the target file are more convenient for the user, and file editing efficiency is improved.
Optionally, the processor 1810, configured to update the sharpness of the image of the first region of the target image frame in the target file, includes:
updating an image of a first region in a target image frame based on an image of a second region in the target replacement frame;
the second area is an area corresponding to the first area in the target replacement frame, and the definition of the image of the second area is higher than that of the image of the first area.
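The region-replacement step above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes grayscale NumPy frames, uses a variance-of-Laplacian score as the sharpness measure (the patent does not specify one), and copies the corresponding second region from the replacement frame only when it scores higher.

```python
import numpy as np

def sharpness(region):
    # Variance of a 4-neighbour Laplacian response: a common focus proxy.
    lap = (-4 * region[1:-1, 1:-1]
           + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return float(lap.var())

def update_region(target_frame, replacement_frame, box):
    """Replace the first region of the target frame with the corresponding
    (second) region of the replacement frame when the latter is sharper."""
    x0, y0, x1, y1 = box
    first = target_frame[y0:y1, x0:x1]
    second = replacement_frame[y0:y1, x0:x1]  # same coordinates: corresponding region
    if sharpness(second) > sharpness(first):
        out = target_frame.copy()
        out[y0:y1, x0:x1] = second
        return out
    return target_frame  # no sharper source: leave the frame unchanged
```

In practice the two frames come from files shot with different focal lengths, so the same pixel box in each frame covers the same scene region at different focus.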
Optionally, the user input unit 1807 is configured to receive a first input of a user, and includes:
under the condition that a first image frame is displayed on a file display interface of the target file, receiving a first sub-input of a user to a first area in the first image frame;
a display unit 1806, configured to display at least one preview identifier in response to the first sub-input, where each preview identifier indicates one candidate image frame, and a definition of an image of a third region in the candidate image frame is higher than a definition of an image of a first region in the first image frame, where the third region is a region in the candidate image frame corresponding to the first region.
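The candidate-frame selection behind those preview identifiers can be sketched as below. This is an illustrative assumption, not the device's actual logic: frames are grayscale NumPy arrays, and a mean-squared-gradient score stands in for whatever sharpness measure the device uses.

```python
import numpy as np

def region_sharpness(frame, box):
    # Mean squared gradient inside the region: a simple sharpness measure.
    x0, y0, x1, y1 = box
    g = frame[y0:y1, x0:x1].astype(float)
    gx = np.diff(g, axis=1)
    gy = np.diff(g, axis=0)
    return float((gx * gx).mean() + (gy * gy).mean())

def candidate_previews(first_frame, frames, box):
    """List (frame index, sharpness value) entries for every frame whose
    corresponding (third) region is sharper than the first region of the
    displayed first image frame; one preview identifier per entry."""
    base = region_sharpness(first_frame, box)
    return [(i, s) for i, f in enumerate(frames)
            if (s := region_sharpness(f, box)) > base]
```

The returned sharpness values are what the display unit would show next to each preview identifier (as in the optional step of displaying the candidate frames' sharpness values).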
Optionally, the target file is a synthesized file, the synthesized file is obtained by synthesizing a first file and a second file, the first file is shot by a first camera, and the second file is shot by a second camera;
the target image frame is an image frame in the composite file, and the candidate image frame indicated by the preview identifier is an image frame in the first file or the second file.
Optionally, after receiving a first sub-input of a user to a first region in the first image frame, the method further includes:
a display unit 1806, configured to display the sharpness values of the candidate image frames indicated by the at least one preview identifier.
Optionally, the display unit 1806 is configured to display at least one preview identifier, where the preview identifier includes:
the first image frame is kept displayed in a first area of the file display interface, and at least one preview mark is displayed in a second area of the file display interface;
after the displaying of the at least one preview identifier, the method further comprises:
a user input unit 1807, configured to receive a second sub-input of a target preview identifier of the at least one preview identifier by a user;
a processor 1810 for said updating the sharpness of the image of the first region of the target image frame in the target file comprises:
updating the sharpness of the image of the first region of a target image frame based on a target candidate image frame indicated by the target preview identification in response to the second sub-input;
wherein the target candidate image frame is a target replacement frame.
Optionally, before receiving the first input of the user, the method further includes:
a display unit 1806, configured to display at least one object identifier on a file display interface of the target file, where the object identifier is used to indicate an object in any image frame of the target file;
a user input unit 1807, configured to receive a first input of a user, including:
receiving a third sub-input of the user to the at least one object identifier, the third sub-input being an input of selecting at least one candidate object identifier from the at least one object identifier, the at least one candidate object identifier indicating at least one candidate focusing object;
a processor 1810 for said updating the sharpness of the image of the first region of the target image frame in the target file comprises:
updating the definition of an image of a first area in which a target focusing object is positioned in a target image frame;
wherein the target focusing object comprises a part or all of the at least one candidate focusing object.
Optionally, before the updating the sharpness of the image of the first region of the target image frame in the target file, the method further includes:
a user input unit 1807, configured to receive a fourth sub-input of the user;
a processor 1810, configured to, in response to the fourth sub-input, determine a target focusing type according to the fourth sub-input;
the processor 1810, configured to update the sharpness of the image of the first region where the target focusing object is located in the target image frame, includes:
determining a target focusing object of the at least one candidate focusing object based on the target focusing type;
and updating the definition of the image of the first area in the target image frame where the target focusing object is positioned.
Optionally, the target focusing type includes: a first focusing type, a second focusing type, a third focusing type and a combined focusing type, wherein the combined focusing type corresponds to a combination of at least two focusing types;
a processor 1810 configured to determine a target focusing object of the at least one candidate focusing object based on the target focusing type, including:
determining a target focusing object based on depth of field information of the at least one candidate focusing object when the target focusing type is a first focusing type;
determining, for each adjacent first and second target image frames, a second target focusing object in the second target image frame based on a first target focusing object in the first target image frame, the first and second target focusing objects being different, in case the target focusing type is a second focusing type;
determining a target focusing object based on the display order of the at least one candidate focusing object when the target focusing type is a third focusing type;
and under the condition that the target focusing type is a combined focusing type, determining a target focusing object based on target object determination modes corresponding to the at least two focusing types.
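The per-type selection rules above can be sketched as a small dispatcher. Everything concrete here is an assumption for illustration: the type names, the candidate fields (`name`, `depth` for depth-of-field information, `order` for display order), and the chaining rule for the combined type are not specified by the source.

```python
def pick_focus_object(candidates, focus_type, prev_pick=None):
    """Pick the target focusing object among the candidate focusing objects.
    Each candidate is a dict with illustrative keys: 'name', 'depth'
    (depth-of-field information) and 'order' (display order)."""
    if focus_type == "depth":        # first focusing type: by depth-of-field info
        return min(candidates, key=lambda c: c["depth"])
    if focus_type == "alternate":    # second focusing type: differ from the
        for c in candidates:         # first target focusing object of the
            if prev_pick is None or c["name"] != prev_pick["name"]:  # adjacent frame
                return c
        return candidates[0]
    if focus_type == "order":        # third focusing type: by display order
        return min(candidates, key=lambda c: c["order"])
    if isinstance(focus_type, tuple):  # combined type: chain the component rules
        pick = prev_pick
        for t in focus_type:
            pick = pick_focus_object(candidates, t, pick)
        return pick
    raise ValueError(f"unknown focusing type: {focus_type!r}")
```

For the second type, calling this once per frame with the previous frame's pick as `prev_pick` yields a different target focusing object in each pair of adjacent target image frames.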
Optionally, before receiving the first input of the user, the method further includes:
the processor 1810 is used for controlling the first camera and the second camera to shoot synchronously;
a processor 1810, configured to generate, when shooting is completed, a first file shot by the first camera, a second file shot by the second camera, and a synthesized file obtained by synthesizing the first file and the second file;
wherein the focal lengths of the first camera and the second camera are different; the target file is the composite file.
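The shooting-completion step can be sketched as below. This is only a shape illustration: the per-frame fusion here is a plain average standing in for whatever fusion the device actually performs, and frames are represented abstractly as numbers.

```python
def finish_capture(first_frames, second_frames):
    """On completion of synchronized shooting, package the first file (first
    camera), the second file (second camera), and a composite file obtained
    by fusing the two, keeping all three for later refocusing."""
    if len(first_frames) != len(second_frames):
        raise ValueError("synchronized capture requires equal frame counts")
    # Placeholder fusion: average each synchronized frame pair.
    composite = [0.5 * (a + b) for a, b in zip(first_frames, second_frames)]
    return {"first": first_frames, "second": second_frames, "composite": composite}
```

Keeping the first and second files alongside the composite is what later allows a region of a composite (target) frame to be replaced from whichever source file is sharper there.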
Optionally, the user input unit 1807 is further configured to receive a second input of the user to a fourth area in the second image frame when the second image frame is displayed on the file display interface of the target file;
a display unit 1806, configured to display a blurring adjustment control in response to the second input;
a user input unit 1807, configured to receive a third input to the blurring adjustment control by a user;
a processor 1810, configured to perform, in response to the third input, blurring processing on the image of the fourth area based on a blurring degree corresponding to the third input.
Optionally, the blurring adjustment control includes a slider, and the third input is an input of a user moving the slider to a first position;
before performing the blurring processing on the image of the fourth region based on the blurring degree corresponding to the third input, the method further includes:
a processor 1810 for determining a blurring degree according to the first position;
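The slider-to-blurring mapping can be sketched as follows. The linear mapping, the maximum radius, and the separable box filter are all illustrative assumptions; the patent only states that a blurring degree is determined from the slider's first position.

```python
import numpy as np

def blur_degree(slider_pos, track_len, max_radius=5):
    """Map the slider's position on its track to a blurring radius."""
    return max(1, round(max_radius * slider_pos / track_len))

def box_blur_region(frame, box, radius):
    """Blur only the fourth region of the frame with a separable box filter,
    leaving the rest of the image untouched."""
    x0, y0, x1, y1 = box
    out = frame.astype(float).copy()
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    sub = out[y0:y1, x0:x1]
    for axis in (0, 1):  # separable: filter rows, then columns
        sub = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, sub)
    out[y0:y1, x0:x1] = sub
    return out
```

Moving the slider further along the track raises the radius and hence the blurring degree applied to the fourth region.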
after performing the blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input, the method further includes:
a display unit 1806, configured to display a first processing identifier at a second position on the play progress bar of the target file, and display a second processing identifier at a third position on the play progress bar;
wherein the second position is a position on the target image frame corresponding to the playing progress bar, and the first processing identifier indicates that the definition of the image of the first area of the target image frame is updated; the third position is a position on the second image frame corresponding to the playing progress bar, and the second processing identifier indicates that the second image frame has completed one blurring processing.
Optionally, after the displaying the first processing identifier at the second position on the play progress bar of the target file and the displaying the second processing identifier at the third position on the play progress bar, the method further includes:
a processor 1810 for extracting at least one target image frame indicated by at least one of the first processing identifiers and at least one second image frame indicated by at least one of the second processing identifiers;
a processor 1810 configured to generate a clip file based on the extracted at least one target image frame and the at least one second image frame.
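The clip-generation step can be sketched as below. The mark names are hypothetical stand-ins: `'refocus'` for a first processing identifier (sharpness updated) and `'blur'` for a second processing identifier (blurring applied).

```python
def generate_clip(frames, marks):
    """Build a clip file from the frames flagged on the play progress bar.
    `marks` maps frame index -> processing identifier; only frames carrying
    a first or second processing identifier are extracted, in play order."""
    picked = sorted(i for i, m in marks.items() if m in ("refocus", "blur"))
    return [frames[i] for i in picked]
```

So a user who has refocused some frames and blurred others gets, in one step, a clip containing exactly the edited frames in their original playback order.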
The electronic device provided by the embodiment of the application receives a first input of a user; in response to the first input, it updates the sharpness of an image of a first region of a target image frame in a target file; the target file comprises at least two image frames, and the target image frame whose sharpness is to be updated comprises at least one image frame of the target file. Therefore, after the target file is generated, the user can provide a first input on a user interface of the electronic device according to actual requirements. After receiving the first input, the electronic device updates the sharpness of the image in any region of at least one target image frame in the target file, achieving the effect of refocusing a target object contained in that region. As a result, after a video file is recorded, any target object in a local image region of at least one image frame in the video file can be refocused in a targeted manner, which satisfies the user's need for personalized focusing on target objects in the video file and for personalized display of the picture content at a given moment. In addition, re-editing a local image region in at least one image frame of the recorded target file requires no professional video post-editing software, so image processing operations on the target file are more convenient for the user, and file editing efficiency is improved.
It should be understood that in the embodiment of the present application, the input unit 1804 may include a graphics processing unit (GPU) 18041 and a microphone 18042, where the graphics processing unit 18041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode; specifically, the image capturing device may include a first camera and a second camera, and the graphics processor 18041 generates a composite file (i.e., a target file) based on a first file shot by the first camera and a second file shot by the second camera. The display unit 1806 may include a display panel 18061, and the display panel 18061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1807 includes at least one of a touch panel 18071 and other input devices 18072. The touch panel 18071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 18072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1809 may be used to store software programs as well as various data. The memory 1809 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.), and the like. Further, the memory 1809 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchlink DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1809 in the present embodiment includes, but is not limited to, these and any other suitable types of memory.
Processor 1810 may include one or more processing units; optionally, the processor 1810 may integrate an application processor, which primarily handles operations related to the operating system, user interface, applications, etc., and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 1810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the file processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the file processing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing file processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; e.g., the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A method of file processing, the method comprising:
receiving a first input of a user;
updating a sharpness of an image of a first region of a target image frame in a target file in response to the first input;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
2. The method of claim 1, wherein said updating the sharpness of the image of the first region of the target image frame in the target file comprises:
updating an image of a first region in a target image frame based on an image of a second region in the target replacement frame;
the second area is an area corresponding to the first area in the target replacement frame, and the definition of the image of the second area is higher than that of the image of the first area.
3. The method of claim 1, wherein receiving a first input from a user comprises:
under the condition that a first image frame is displayed on a file display interface of the target file, receiving a first sub-input of a user to a first area in the first image frame;
in response to the first sub-input, displaying at least one preview identifier, each preview identifier indicating one candidate image frame, a third area in the candidate image frame having a higher definition than that of a first area in the first image frame, the third area being an area in the candidate image frame corresponding to the first area.
4. The method of claim 3, wherein the target file is a composite file, the composite file is a composite of a first file and a second file, the first file is captured by a first camera and the second file is captured by a second camera;
the target image frame is an image frame in the composite file, and the candidate image frame indicated by the preview identifier is an image frame in the first file or the second file.
5. The method of claim 3, wherein after receiving a first sub-input from a user to a first region in the first image frame, further comprising:
and displaying the definition value of the candidate image frame indicated by the at least one preview identifier.
6. The method of claim 3, wherein displaying at least one preview identifier comprises:
the first image frame is kept displayed in a first area of the file display interface, and at least one preview mark is displayed in a second area of the file display interface;
after the displaying of the at least one preview identifier, the method further comprises:
receiving a second sub-input of a user to a target preview identifier in the at least one preview identifier;
the updating the sharpness of the image of the first region of the target image frame in the target file comprises:
updating the sharpness of the image of the first region of a target image frame based on a target candidate image frame indicated by the target preview identification in response to the second sub-input;
wherein the target candidate image frame is a target replacement frame.
7. The method of claim 1, wherein prior to receiving the first input from the user, further comprising:
displaying at least one object identifier on a file display interface of the target file, wherein the object identifier is used for indicating an object in any image frame of the target file;
the receiving a first input of a user comprises:
receiving a third sub-input of the user to the at least one object identifier, the third sub-input being an input of selecting at least one candidate object identifier from the at least one object identifier, the at least one candidate object identifier indicating at least one candidate focusing object;
the updating the sharpness of the image of the first region of the target image frame in the target file comprises:
updating the definition of an image of a first area in which a target focusing object is positioned in a target image frame;
wherein the target focusing object comprises a part or all of the at least one candidate focusing object.
8. The method of claim 7, wherein prior to updating the sharpness of the image of the first region of the target image frame in the target file, further comprising:
receiving a fourth sub-input of the user;
in response to the fourth sub-input, determining a target focusing type according to the fourth sub-input;
the updating the definition of the image of the first area where the target focusing object is located in the target image frame comprises the following steps:
determining a target focusing object of the at least one candidate focusing object based on the target focusing type;
and updating the definition of the image of the first area in the target image frame where the target focusing object is positioned.
9. The method of claim 8, wherein the target focusing type comprises: a first focusing type, a second focusing type, a third focusing type and a combined focusing type, wherein the combined focusing type corresponds to a combination of at least two focusing types;
the determining a target focusing object of the at least one candidate focusing object based on the target focusing type comprises:
determining a target focusing object based on depth of field information of the at least one candidate focusing object when the target focusing type is a first focusing type;
determining, for each adjacent first and second target image frames, a second target focusing object in the second target image frame based on a first target focusing object in the first target image frame, the first and second target focusing objects being different, in case the target focusing type is a second focusing type;
determining a target focusing object based on the display order of the at least one candidate focusing object when the target focusing type is a third focusing type;
and under the condition that the target focusing type is a combined focusing type, determining a target focusing object based on target object determination modes corresponding to the at least two focusing types.
10. The method of claim 1, wherein prior to receiving the first input from the user, further comprising:
controlling the first camera and the second camera to shoot synchronously;
under the condition that shooting is finished, generating a first file shot by the first camera, a second file shot by the second camera and a synthesized file obtained by synthesizing the first file and the second file;
wherein the focal lengths of the first camera and the second camera are different; the target file is the composite file.
11. The method of claim 1, further comprising:
receiving a second input of a user to a fourth area in a second image frame under the condition that the second image frame is displayed on a file display interface of the target file;
displaying a blurring adjustment control in response to the second input;
receiving a third input of the blurring adjustment control by a user;
and responding to the third input, and performing blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input.
12. The method of claim 11, wherein the blurring adjustment control comprises a slider, and wherein the third input is an input by a user to move the slider to a first position;
before performing the blurring processing on the image of the fourth region based on the blurring degree corresponding to the third input, the method further includes:
determining a blurring degree according to the first position;
after performing the blurring processing on the image of the fourth area based on the blurring degree corresponding to the third input, the method further includes:
displaying a first processing identifier at a second position on the playing progress bar of the target file, and displaying a second processing identifier at a third position on the playing progress bar;
wherein the second position is a position on the target image frame corresponding to the playing progress bar, and the first processing identifier indicates that the definition of the image of the first area of the target image frame is updated; the third position is a position on the second image frame corresponding to the playing progress bar, and the second processing identifier indicates that the second image frame has completed one blurring processing.
13. The method according to claim 12, wherein after displaying the first processing identifier at the second position on the playing progress bar of the target file and the second processing identifier at the third position on the playing progress bar, further comprising:
extracting at least one target image frame indicated by at least one first processing identifier and at least one second image frame indicated by at least one second processing identifier;
generating a clip file based on the extracted at least one target image frame and the at least one second image frame.
14. A file processing apparatus, characterized in that the apparatus comprises:
the first receiving module is used for receiving a first input of a user;
a first processing module for updating a sharpness of an image of a first region of a target image frame in a target file in response to the first input;
wherein the target file comprises at least two image frames, the target image frame comprising at least one image frame of the target file.
15. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the file processing method of any of claims 1 to 13.
16. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the file processing method according to any one of claims 1 to 13.
CN202111574206.1A 2021-12-21 2021-12-21 File processing method, file processing device, electronic device and medium Pending CN114237800A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111574206.1A CN114237800A (en) 2021-12-21 2021-12-21 File processing method, file processing device, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111574206.1A CN114237800A (en) 2021-12-21 2021-12-21 File processing method, file processing device, electronic device and medium

Publications (1)

Publication Number Publication Date
CN114237800A (en) 2022-03-25

Family

ID=80760703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111574206.1A Pending CN114237800A (en) 2021-12-21 2021-12-21 File processing method, file processing device, electronic device and medium

Country Status (1)

Country Link
CN (1) CN114237800A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112795A (en) * 2023-04-13 2023-05-12 北京城建智控科技股份有限公司 Adaptive focusing control method, camera and storage medium

Similar Documents

Publication Publication Date Title
CN110119700B (en) Avatar control method, avatar control device and electronic equipment
CN108924622B (en) Video processing method and device, storage medium and electronic device
CN113766129B (en) Video recording method, video recording device, electronic equipment and medium
CN113194255A (en) Shooting method and device and electronic equipment
CN111612873A (en) GIF picture generation method and device and electronic equipment
CN111601012B (en) Image processing method and device and electronic equipment
CN113905175A (en) Video generation method and device, electronic equipment and readable storage medium
CN112672061A (en) Video shooting method and device, electronic equipment and medium
CN114422692B (en) Video recording method and device and electronic equipment
CN113207038B (en) Video processing method, video processing device and electronic equipment
CN113010738B (en) Video processing method, device, electronic equipment and readable storage medium
CN114237800A (en) File processing method, file processing device, electronic device and medium
CN112822394B (en) Display control method, display control device, electronic equipment and readable storage medium
CN113794923A (en) Video processing method and device, electronic equipment and readable storage medium
US20230368338A1 (en) Image display method and apparatus, and electronic device
CN113852757B (en) Video processing method, device, equipment and storage medium
CN113852756B (en) Image acquisition method, device, equipment and storage medium
CN112367487B (en) Video recording method and electronic equipment
CN113873319A (en) Video processing method and device, electronic equipment and storage medium
CN114025100A (en) Shooting method, shooting device, electronic equipment and readable storage medium
US20180074688A1 (en) Device, method and computer program product for creating viewable content on an interactive display
CN114222069B (en) Shooting method, shooting device and electronic equipment
CN114520874B (en) Video processing method and device and electronic equipment
CN112506393B (en) Icon display method and device and storage medium
CN114500852B (en) Shooting method, shooting device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination