CN113923353A - Image processing method, image processing device, electronic equipment and storage medium

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number: CN113923353A
Application number: CN202111150189.9A
Authority: CN (China)
Prior art keywords: image, input, algorithm, interface, target
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 陈喆 (Chen Zhe)
Current Assignee: Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee: Vivo Mobile Communication Hangzhou Co Ltd
Application filed by Vivo Mobile Communication Hangzhou Co Ltd; priority to CN202111150189.9A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/53 Control of the integration time
    • H04N25/533 Control of the integration time by using differing integration times for different sensor regions

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium, belonging to the technical field of image processing. The method includes: displaying a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image; receiving a user's first input on a target algorithm identifier among the N algorithm identifiers; and, in response to the first input, performing image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier, where N is a positive integer.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of electronic technology, the shooting performance of electronic devices such as mobile phones has improved greatly. In daily use, however, the images captured by such devices offer only a single, fixed effect, which makes it difficult to meet the shooting requirements of different users or different groups of users.
Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium, which can solve the problem that the single image effect produced by electronic devices such as mobile phones makes it difficult to meet the shooting requirements of different users or different groups of users.
In a first aspect, an embodiment of the present application provides an image processing method, the method including: displaying a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image; receiving a user's first input on a target algorithm identifier among the N algorithm identifiers; and, in response to the first input, performing image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier, where N is a positive integer.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: a display module, configured to display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image; a receiving module, configured to receive a user's first input on a target algorithm identifier among the N algorithm identifiers, where N is a positive integer; and a response module, configured to perform, in response to the first input, image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, by displaying the first image group and the N algorithm identifiers, the user can select a target algorithm identifier from the N algorithm identifiers according to his or her own needs, and the first image group is then processed with the image processing algorithm indicated by the target algorithm identifier. The processed image can therefore satisfy the user's varied shooting requirements, avoiding the problem in the related art that a single image effect makes it difficult to meet the shooting requirements of different users or different groups of users.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 2a is a schematic diagram of an image capture interface provided in an embodiment of the present application.
Fig. 2b is a schematic diagram of an image acquisition principle in a streaming mode according to an embodiment of the present application.
Fig. 2c is a second schematic diagram illustrating an image acquisition principle in a streaming mode according to an embodiment of the present application.
Fig. 3 is a second flowchart of the image processing method according to the embodiment of the present application.
Fig. 4a is a third schematic view of the first interface according to the embodiment of the present application.
Fig. 4b is a fourth schematic view of the first interface provided in the embodiment of the present application.
Fig. 4c is a fifth schematic view of the first interface provided in the embodiment of the present application.
Fig. 4d is a sixth schematic view of the first interface according to the embodiment of the present application.
Fig. 5 is a third schematic flowchart of an image processing method according to an embodiment of the present application.
Fig. 6a is a first schematic diagram of a second interface provided in an embodiment of the present application.
Fig. 6b is a second schematic diagram of the second interface provided in an embodiment of the present application.
Fig. 7 is a fourth flowchart illustrating an image processing method according to an embodiment of the present application.
Fig. 8 is a third schematic diagram of the second interface provided in an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Fig. 11 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, which is a flowchart illustrating an image processing method 100 according to an exemplary embodiment of the present application, the method 100 may be executed by, but not limited to, an electronic device, and in particular may be executed by software and/or hardware installed in the electronic device. The method 100 may include at least the following steps.
Step 110: display the first image group and the N algorithm identifiers on a first interface.
Here, the first interface is an interface available for image processing. In this embodiment, the first interface may differ according to the shooting scene, for example, during or after shooting. During shooting, the first interface may be an image acquisition interface, an image preview interface, an image processing interface, or the like; after shooting, it may be an image preview interface, an image processing interface, or the like entered through an album control. Image processing can thereby be implemented in multiple scenarios, meeting the different image processing requirements of different users.
It should be noted that the image acquisition interface can be understood as a photographing preview interface, a photographing interface, a video recording preview interface, or a video recording interface. The photographing preview interface may be an interface that displays the preview image acquired by the image sensor after the photographing program is started; the photographing interface may be an interface for acquiring images through the image sensor after the photographing program is started; the video recording preview interface is an interface that displays the preview image acquired by the image sensor after the camera program is started; and the video recording interface may be an interface for acquiring images through the image sensor after the video recording program is started.
The image preview interface may be understood as an interface capable of previewing an image or video or the like that has been photographed.
The image processing interface may be understood as an interface dedicated to image editing. Wherein, the aforementioned "editing" can be understood as: the user can select a corresponding image processing algorithm to process the first image group according to the algorithm identification displayed on the image processing interface.
The first image group may include a photographing preview image, a video frame preview image, and the like acquired in real time by an image sensor when the first interface is an image acquisition interface.
When the first interface is an image preview interface, the first image group may include an image selected by a user and saved in an album or a video frame image in a video.
In a case that the first interface is an image processing interface, the first image group may be at least one photographed preview image, video frame preview image, and the like acquired in real time by an image sensor, or may be an image selected by a user and stored in a location such as an album or a video frame image in a video, and is not limited herein.
In addition, in consideration of the reliability of the image processing result, and simultaneously enabling the processed image to meet the requirements of different users and different people, the first image group may include at least one image. For example, the first image group may include one or more images. In the case that the first image group includes at least two images, the at least two images may be images with different image parameters acquired in the same acquisition period, or may be a group of images or a group of video frame images selected by a user according to a requirement of the user, and the like, which is not limited herein.
In this case, as a possible implementation manner, assuming that the images in the first image group are images with different image parameters acquired in the same image acquisition period, the acquisition process of the first image group may include the following process. The image parameters may include an image exposure duration, an image resolution, an image format, and the like.
(1) A sixth input by the user is received.
Wherein the sixth input may be initiated by a user or the like on the image capture interface for implementing image capture in a video capture or streaming mode. Optionally, the sixth input may be a single-click input, a double-click input, a long-press input, a voice input, or the like, and the sixth input may also be a sixth operation, which is not limited herein.
(2) In response to the sixth input, T image sensors (which may also be understood as cameras) are controlled to acquire images according to a preset image acquisition period.
The time length of the image capturing period may be set according to a shooting requirement, for example, in this embodiment, the image capturing period may be 33ms, 36ms, or the like. T is an integer greater than or equal to 1, and in addition, in the case where T is greater than 1, image parameters of different image sensors may be different or the same, and the image parameters may be exposure parameters (such as exposure duration), resolution, image format, and the like.
For example, assuming that T is 1 and S is 3, the single image sensor may sequentially acquire 3 images with different image parameters in a time-division multiplexing manner. For another example, if T is 3 and S is 3, the 3 image sensors can simultaneously acquire 3 images with different image parameters.
(3) The S images with different image parameters acquired by the T image sensors in each image acquisition period are respectively buffered into corresponding image frame queues.
Where S is an integer greater than 1, in this embodiment, one image frame queue may correspond to an image of one image parameter.
Based on the foregoing description of the acquisition process, it is further explained below with reference to examples.
For example, assuming T is 1 and S is 3, referring to fig. 2a, the user may open the camera on the electronic device, click the "stream mode" control, and enter the streaming-mode camera. In streaming mode, the image acquisition interface displays only the normal image with the specified parameters acquired in real time, i.e., the preview image; the lower-left and lower-right corners of the image acquisition interface carry floating translucent controls, namely a quick-album entry and a front/rear image sensor switching control, respectively.
At this time, the image sensor outputs the acquired image stream in real time. The normal image with the specified parameters in the image stream is either displayed directly on the screen of the electronic device as the preview image, or is first reduced in size, lowered in resolution, cropped, or otherwise processed and then displayed as the preview image for the user. Meanwhile, the normal image is buffered in a background image frame queue; images with other image parameters (such as exposure parameters) are acquired in the preview-frame capture interval (i.e., the image acquisition period) and are respectively buffered in several different background image frame queues. It should be understood that an "image stream" means a sequence of consecutive image frames acquired and output by the image sensor in temporal order.
For example, corresponding to the aforementioned S, 3 image frame queues may be provided in this embodiment to store a specified number of image frames with different image parameters (e.g., exposure durations). As shown in fig. 2b, the image sensor continuously outputs (acquires) an image stream along the time axis, with the preview-frame interval as the period (e.g., 33 ms). In one image acquisition period, a preview frame (i.e., a normal image) is first captured and sent to the screen for real-time display; in the remaining time, long-exposure and short-exposure image frames are captured successively according to their different exposure durations and placed in the image frame queues for buffering, to be used when the user takes a photo or processes an image.
Furthermore, in addition to the aforementioned T being equal to 1, when T is greater than 1, i.e. in the case of multiple image sensors, different image sensors acquire images at different image parameters and buffer them into different sets of image frame queues.
For example, as shown in fig. 2c, in the case that there are multiple available image sensors, T image queue groups may be set according to the number of the image sensors, so as to store images corresponding to various image parameters acquired by different image sensors.
Here, all image sensors continuously output an image stream along the time axis, with the preview-frame interval as the period (e.g., 33 ms). For the main image sensor, in each image acquisition period a preview frame is first captured and sent to the screen of the electronic device for real-time display, and different image frames are then captured in the remaining time according to their different image parameters and buffered in the corresponding image frame queues. The other image sensors capture different image frames according to their different image parameters throughout the whole image acquisition period and buffer them in their image frame queues. In this way, multiple image sensors continuously output image streams with different parameters for the user to choose from, which further widens the range of images available for custom image processing and meets the user's shooting requirements in more scenarios. A sketch of this buffering scheme is given below.
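The following Kotlin sketch illustrates the buffering scheme described above for the single-sensor case (T = 1, S = 3). It is a minimal model, not code from the application: the Frame and FrameQueue types, the exposure durations, and the queue capacity are all illustrative assumptions.

```kotlin
import java.util.ArrayDeque

// One frame queue per image parameter (here: exposure duration in ms).
data class Frame(val timestampMs: Long, val exposureMs: Int)

class FrameQueue(private val capacity: Int = 8) {            // capacity is an assumption
    private val queue = ArrayDeque<Frame>()
    @Synchronized fun push(frame: Frame) {
        if (queue.size == capacity) queue.removeFirst()      // drop the oldest frame when full
        queue.addLast(frame)
    }
    @Synchronized fun latest(): Frame? = queue.peekLast()
}

// S = 3 image parameters: short exposure, normal, long exposure, each with its own queue.
val queuesByExposure = mapOf(10 to FrameQueue(), 33 to FrameQueue(), 120 to FrameQueue())

fun display(frame: Frame) = println("preview @ ${frame.timestampMs} ms")

// One acquisition period (e.g., 33 ms), time-division multiplexed on a single sensor:
// the preview (normal) frame is captured and displayed first, then the remaining
// exposures are captured in the rest of the period and buffered.
fun runAcquisitionPeriod(startMs: Long) {
    val preview = Frame(startMs, exposureMs = 33)
    display(preview)                                         // send the preview frame to the screen
    queuesByExposure.getValue(33).push(preview)              // also buffer the normal image
    for (exposure in listOf(10, 120)) {                      // then short and long exposures
        queuesByExposure.getValue(exposure).push(Frame(startMs, exposure))
    }
}
```

For T > 1, one such set of queues would be kept per sensor, matching the T image queue groups of fig. 2c.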
Step 120: receive a first input from the user on a target algorithm identifier among the N algorithm identifiers.
Wherein N is a positive integer. The first input may be a single-click input, a double-click input, a long-press input, a voice input, or the like, and the first input may also be a first operation, which is not limited herein.
It is understood that each algorithm identifier may be respectively associated with or indicate an image processing algorithm, that is, a user can select the associated or indicated image processing algorithm by inputting or operating the algorithm identifier. In addition, the target algorithm identification belongs to N algorithm identifications. The algorithm identifier in the present application is a text, symbol, image, etc. for indicating information, which may use a control or other container as a carrier for displaying information, including but not limited to text identifier, symbol identifier, image identifier, etc.
Step 130: in response to the first input, perform image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
There may be more than one target algorithm identifier; that is, through the first input the user may select one or more image processing algorithms to process the first image group, thereby meeting the user's personalized image processing requirements.
In this embodiment, by displaying the first image group and the N algorithm identifiers, the user can select a target algorithm identifier from the N algorithm identifiers according to his or her own needs, and the first image group is then processed with the image processing algorithm indicated by the target algorithm identifier. The processed image can therefore satisfy the user's varied shooting requirements, avoiding the problem in the related art that a single image effect makes it difficult to meet the shooting requirements of different users or different groups of users. A minimal sketch of this flow follows.
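As a rough illustration of steps 110 to 130, the following Kotlin sketch maps algorithm identifiers to image processing functions and applies the selected one to the first image group. The registry, identifier strings, and stub algorithms are assumptions made for illustration only; the application does not prescribe any particular implementation.

```kotlin
typealias Image = ByteArray
typealias ImageAlgorithm = (List<Image>) -> List<Image>

// Each algorithm identifier shown on the first interface indicates one algorithm.
val algorithmsById: Map<String, ImageAlgorithm> = mapOf(
    "algorithm 1" to { imgs -> imgs },                       // stub: e.g., a beautification pass
    "algorithm 2" to { imgs -> imgs.map { it.copyOf() } }    // stub: e.g., background blurring
)

// Step 130: apply the target algorithm indicated by the identifier the user selected.
fun onFirstInput(targetId: String, firstImageGroup: List<Image>): List<Image> {
    val algorithm = algorithmsById[targetId]
        ?: error("unknown algorithm identifier: $targetId")
    return algorithm(firstImageGroup)
}
```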
As shown in fig. 3, a flowchart of an image processing method 300 according to an exemplary embodiment of the present application is provided, where the method 300 may be executed by, but not limited to, an electronic device, and in particular may be executed by software and/or hardware installed in the electronic device. The method 300 may include at least the following steps.
Step 310: receive a third input from the user on the album identifier displayed on a second interface.
The second interface may include an image acquisition interface, an image preview interface, or another interface on the electronic device that displays an album identifier. Similar to the first interface described in the present application, the image acquisition interface may be understood as an interface for image capture, video capture, or the like; see the foregoing description of the first interface for details.
On this basis, in this embodiment, the second interface may be the same as or different from the first interface. For example, the first interface and the second interface may both be an image acquisition interface or an image preview interface; for another example, the first interface may be an image processing interface while the second interface is an image acquisition interface or an image preview interface. This embodiment does not limit this.
Alternatively, the third input may be an input of a user through a touch device such as a finger or a stylus, such as a single-click input, a double-click input, a long-press input, a voice input, and the like, and the third input may also be a third operation, and the like, which is not limited herein.
Step 320: in response to the third input, display at least two images in the album corresponding to the album identifier.
The at least two images may be images acquired in different image acquisition periods, or the at least two images may be images acquired in the same image acquisition period, or the at least two images may be different video frame images in the same video, and the like, which is not limited herein.
Step 330: receive a fourth input from the user on K images of the at least two images.
The fourth input may be click input, double-click input, long-press input, voice input, or the like, and the fourth input may also be a fourth operation or the like.
Step 340: in response to the fourth input, determine the K images as the first image group.
Here, assuming that K is 1, the image group to which the selected image belongs may be determined as the first image group. It should be understood that the images in one image group are the images acquired within one image acquisition cycle.
For example, referring again to fig. 2b, assume that a normal image, a long-exposure image, and a short-exposure image acquired in one image acquisition cycle form an image group, and that the images with different image parameters in the same group are buffered in different image frame queues. Then, when K is 1, the image group (including the normal, long-exposure, and short-exposure images) to which the selected image belongs may be determined as the first image group, as sketched below.
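A minimal Kotlin sketch of this grouping, assuming each buffered frame carries the timestamp of its acquisition period; the field names are illustrative assumptions.

```kotlin
data class BufferedFrame(val periodStartMs: Long, val exposureMs: Int)

// Frames from the same acquisition period form one image group, so selecting a single
// normal image also pulls in its long- and short-exposure siblings from the other queues.
fun imageGroupOf(selected: BufferedFrame, allQueues: List<List<BufferedFrame>>): List<BufferedFrame> =
    allQueues.flatten().filter { it.periodStartMs == selected.periodStartMs }
```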
Step 350: display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image.
Wherein N is a positive integer.
It is to be understood that the implementation process of step 350 may refer to the related description in the foregoing method embodiment 100, and is not repeated herein for avoiding repetition.
Step 360: receive a first input from the user on a target algorithm identifier among the N algorithm identifiers.
In addition to referring to the related description in the foregoing method embodiment 100 for the implementation of step 360, as a possible implementation in this embodiment, referring to fig. 4a, the first interface may include a first area A and a second area B. The first area A is used to display the first image group; the second area B includes a first sub-area B1 and a second sub-area B2, where the first sub-area B1 is used to display the N algorithm identifiers, such as algorithm 1, algorithm 2, algorithm 3, and so on, and the second sub-area B2 is used to display the target algorithm identifier selected by the first input. In this case, the implementation of step 360 may include step 361, as follows. It should be understood that the aforementioned "algorithm 1", "algorithm 2", and "algorithm 3" may be a beautification algorithm, a background blurring algorithm, an image enhancement algorithm, an image geometric transformation algorithm, an edge detection algorithm, an image segmentation algorithm, a corner detection algorithm, or the like.
Step 361: receive the user's first input on M algorithm identifiers displayed in the first sub-area.
Here, the target algorithm identifiers include the M algorithm identifiers, where M is an integer greater than 1 and M is not greater than N; that is, the user may select one or more target algorithm identifiers from the algorithm identifiers displayed in the first sub-area B1.
Alternatively, the first input may be a user sliding in the first sub-area B1 (as shown in fig. 4 a), clicking, double-clicking, etc. by a finger or other input device, which is not limited herein.
Step 370: in response to the first input, display the M algorithm identifiers in the second sub-area.
For example, as shown in FIG. 4a, assuming that the M algorithms selected by the user via the first input are identified as "Algorithm 2", then "Algorithm 2" may be displayed in the second sub-region B2.
In this embodiment, through the display manner described in steps 361 and 370, image streams with different parameters are continuously output by multiple image sensors, and a stackable algorithm-selection interaction is provided, so that the user can choose the images to be processed and the corresponding image processing algorithms according to personal preference when taking photos or recording video. This offers the user richer custom image processing options and a better interaction mode, meeting image processing needs in more scenarios.
It should be noted that, in addition to the display manners described in the foregoing step 361 and step 370, as another possible implementation manner, please refer to fig. 4B in combination, the first interface may also include only a first area a and a second area B, where the first area a is used for displaying the first image group; the second area B is used for displaying the N algorithm identifiers, such as algorithm 1, algorithm 2, algorithm 3 … …, and the target algorithm identifier selected by the first input.
Step 380: in response to the first input, perform image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
It is understood that, in addition to the implementation process of step 380 referring to the related description in the foregoing method embodiment 100, in this embodiment, as a possible implementation manner, please refer to fig. 4a again, the second sub-area B2 may include at least two algorithm windows S, each algorithm window S includes one algorithm identifier of the M algorithm identifiers; in this case, the implementation process of step 380 may also include step 381 and step 382, as follows.
Step 381: determine the algorithm combination manner of the image processing algorithms indicated by the M algorithm identifiers according to the display information of the algorithm windows.
The display information of the algorithm window S may include the number of algorithm windows, arrangement order information of the algorithm windows, arrangement modes of the algorithm windows, information of algorithm identifiers displayed in the algorithm windows, and the like, which is not limited herein.
For example, when the display information is the arrangement order information, the algorithm combination manner may be to sequentially use the target image algorithms indicated by the algorithm identifiers in the algorithm windows. For example, the algorithm window S in the second sub-region B2 sequentially displays an image enhancement algorithm, an image geometric transformation algorithm, and an edge detection algorithm, and then the algorithm combination may be: the first image group is processed by using an image enhancement algorithm, then the first image group is processed by using an image geometric transformation algorithm, and finally the first image group is processed by using an edge detection algorithm to obtain a final target image.
It should be noted that, besides the foregoing combination manner, the algorithms displayed in the algorithm windows S may also be applied without any fixed order of use; this embodiment does not limit this.
Step 382: perform image processing on the first image group with the target image processing algorithms indicated by the target algorithm identifiers, according to the algorithm combination manner.
In this embodiment, through steps 381 and 382, the user can drag different algorithm identifiers into the algorithm windows (which may also be understood as an algorithm stacking frame). On the one hand, this further improves the flexibility of algorithm selection and combination; on the other hand, the first image group can be processed by multiple algorithms, achieving multiple effects in a single processing pass, so that the processed image better meets the user's own requirements. In addition, as one implementation, the user can view an intuitive demonstration of an algorithm's effect by clicking the algorithm control, helping the user quickly understand its actual effect. A sketch of the ordered combination follows.
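The following Kotlin sketch shows one way the window order could define the combination, assuming the ordered windows yield an ordered list of identifiers; the registry and identifier names are illustrative assumptions. Each algorithm's output feeds the next, forming a left-to-right pipeline.

```kotlin
typealias Image = ByteArray
typealias ImageAlgorithm = (List<Image>) -> List<Image>

// Compose the algorithms in the order their windows are arranged (requires at least
// one identifier; reduce throws on an empty list).
fun combineInWindowOrder(
    windowIds: List<String>,                       // identifiers as ordered in the windows
    algorithmsById: Map<String, ImageAlgorithm>
): ImageAlgorithm = windowIds
    .map { id -> algorithmsById.getValue(id) }
    .reduce { acc, next -> { imgs -> next(acc(imgs)) } }

// Usage, e.g., enhancement -> geometric transformation -> edge detection:
//   val pipeline = combineInWindowOrder(listOf("enhance", "geometry", "edges"), registry)
//   val result = pipeline(firstImageGroup)
```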
Further, on the basis of the foregoing, as a possible implementation, referring to figs. 4c and 4d, the first interface may further include a third area C, which is configured to display at least one image group T. In this case, during image processing, the electronic device may further receive a second input (e.g., a single-click, double-click, long-press, or voice input; the second input may also be a second operation) from the user on a second image group displayed in the third area C, and, in response to the second input, update the first image group displayed in the first area to the second image group; that is, the first image group displayed in the first area is replaced with the selected second image group before image processing is performed.
For example, if the user is not satisfied with the current first image group, an adjacent image group, such as the second image group, can be switched in by sliding left and right in the third area C as shown in figs. 4c and 4d, replacing the first image group. It should be noted that after the second image group has been dragged to the first area A, sliding left and right no longer changes the selected second image group; another image group can, however, be dragged in again to replace it.
Based on the description of the foregoing method embodiment 300, the implementation process thereof is further described below with reference to fig. 4a to 4 d.
For example, assume that the first image group contains 3 frames with different exposure durations but the user is not satisfied with it. In this case, the user may select a satisfactory image group from the third area C and drag it into the first area A to update or replace the first image group. Meanwhile, the user may further select different algorithm identifiers, such as algorithm 1. After selecting the algorithm identifiers and the image group to be processed, the user can click the "start" control, whereupon the image processing algorithms indicated by the selected identifiers are used to process the image group, generating and saving a target image that meets the user's requirements.
As shown in fig. 5, a flowchart of an image processing method 500 provided in an exemplary embodiment of the present application is shown, where the method 500 may be executed by, but not limited to, an electronic device, and in particular may be executed by software and/or hardware installed in the electronic device. The method 500 may include at least the following steps.
Step 510: receive a fifth input from the user on the second interface.
Similar to the foregoing, the second interface may be an image capture interface or a video capture interface, which is not limited here. For example, when the second interface is an image capture interface, as shown in fig. 6a, a preview image is displayed on the second interface so that the user can view the pose of the photographed subject; the user can therefore directly initiate a fifth input on the second interface when satisfied with the preview image.
The fifth input is used for triggering a quick photographing mode or a user-defined photographing mode by a user, and on the premise, the input characteristic of the fifth input can be set according to actual requirements. For example, when the fast photographing mode needs to be triggered, the fast photographing mode is associated with a first preset feature, and thus, as described in step 520 below, when the input feature of the fifth input is the first preset feature, the fast photographing mode is triggered.
For another example, when the user-defined photographing mode needs to be triggered, the user-defined photographing mode is associated with a second preset feature, so that, as described in step 720 below, when the input feature of the fifth input is the second preset feature, the user-defined photographing mode is triggered.
It should be noted that the first preset feature is different from the second preset feature, so that the electronic device can effectively identify whether the quick photographing mode or the custom photographing mode is triggered.
Step 520: in response to the fifth input, when the input feature of the fifth input is a first preset feature, extract a first image from a target image frame queue and store the first image into the album.
The first preset feature may be, without limitation, an upward slide, a downward slide, a leftward slide, or a rightward slide of the user on the screen.
Here, the target image frame queue is used to buffer image frames acquired by an image sensor during shooting preview, its image parameter is of a preset parameter type, and the acquisition time of the first image is earlier than the input time of the fifth input. It should be understood that, in conjunction with the foregoing image acquisition process, the target image frame queue may be the image frame queue storing normal images; that is, the preset parameter type (i.e., the aforementioned specified parameter) may be an exposure parameter between the short exposure and the long exposure, which is not limited here.
In this embodiment, through the settings of steps 510 and 520, fast photographing in streaming mode can be realized, meeting the user's need to shoot quickly. At the same time, the quick photographing mode gives the user a way to view the pose of the photographed subject in the image immediately after shooting, and a way to further apply custom processing to the image. A sketch of the quick-photo path follows.
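A minimal Kotlin sketch of the quick-photo path in step 520, assuming the target frame queue holds the normal (mid-exposure) frames; the album store and all names are illustrative assumptions.

```kotlin
data class Frame(val timestampMs: Long, val exposureMs: Int)

val album = mutableListOf<Frame>()

// On a fifth input whose feature matches the first preset feature (e.g., a downward
// swipe), take the most recent normal frame captured before the input and save it.
fun onQuickPhoto(inputTimeMs: Long, targetQueue: List<Frame>) {
    val first = targetQueue.lastOrNull { it.timestampMs < inputTimeMs }
        ?: return                                  // nothing buffered yet
    album += first                                 // the extracted image is the final result
}
```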
Step 530: display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image.
Step 540: receive a first input from the user on a target algorithm identifier among the N algorithm identifiers.
Wherein N is a positive integer.
Step 550: in response to the first input, perform image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
The implementation process of step 530 to step 550 may refer to the related description in method embodiments 100 and/or 300, and is not repeated herein for avoiding repetition.
The foregoing fast photographing process is described below with reference to fig. 2a, 6a, and 6b, as an example.
(1) As shown in fig. 2a, the camera is opened and the "stream mode" control is clicked to enter the stream mode camera.
(2) As shown in fig. 6a, on the image preview interface (i.e., the second interface), the user swipes down with a single finger to complete one quick photo, or swipes down multiple times to complete multiple quick photos.
As shown in fig. 6b, the album entry K (i.e., the aforementioned album identifier) displays the latest normal image; that is, the image taken from the target image frame queue at the photo time point in fig. 2b is displayed on the album-entry control. From here, the user can click the quick-album entry to view the normal photos, open a full-size view by clicking a photo, and enter the custom image processing interface (i.e., the first interface) by dragging a photo into the edit box L.
As shown in fig. 7, a flowchart of an image processing method 700 provided in an exemplary embodiment of the present application is shown, where the method 700 may be executed by, but not limited to, an electronic device, and in particular may be executed by software and/or hardware installed in the electronic device. The method 700 may include at least the following steps.
Step 710: receive a fifth input from the user on the second interface.
It is to be understood that the implementation of step 710 can refer to the related description in method embodiment 500 (e.g., step 510), and is not limited herein to avoid repetition.
Step 720: in response to the fifth input, when the input feature of the fifth input is a second preset feature, extract a second image from the target image frame queue and determine the image group to which the second image belongs as the first image group.
The target image frame queue is used for caching image frames acquired by an image sensor in a shooting preview process, image parameters of the target image frame queue are of preset parameter types, and the acquisition time of the second image is earlier than the input time of the fifth input.
The second preset feature may be a user's sliding up, sliding down, sliding left, or sliding right on the screen, etc., but it should be noted that the second preset feature is different from the first preset feature.
It is to be understood that the implementation of steps 710 to 720 can refer to the related description in method embodiment 500 (e.g., steps 510 and 520), and is not limited herein to avoid repetition.
The foregoing custom shooting process is described below with reference to fig. 2a and 8 as an example.
(1) As shown in fig. 2a, the camera is opened and the "stream mode" control is clicked to enter the stream mode camera.
(2) As shown in fig. 8, on the image preview interface, the user swipes up with a single finger to trigger one custom shot; the electronic device then exits the image preview interface and enters the first interface, such as the image processing interface. The first image group displayed on the first interface is the image group to which the second image extracted from the target image frame queue belongs.
It should be noted that the quick photographing mode described in method embodiment 500 differs from the custom photographing mode described in this embodiment as follows. In the quick photographing mode, when the electronic device determines in streaming mode that the input feature of the fifth input is the first preset feature, it directly extracts a normal image from the target image frame queue as the final target image, and the user may choose whether to process that image further. In the custom photographing mode, when the input feature of the fifth input is the second preset feature, the electronic device extracts a normal image from the target image frame queue and takes the image group to which that image belongs as the first image group for image processing, obtaining the target image. The contrast is sketched below.
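The following Kotlin sketch contrasts the two dispatch paths of steps 520 and 720, assuming the two preset features correspond to two gestures; the gesture names and the stub handlers are illustrative assumptions.

```kotlin
enum class Gesture { SWIPE_DOWN, SWIPE_UP }
data class Frame(val periodStartMs: Long, val exposureMs: Int)

fun extractNormalImage(inputTimeMs: Long) = Frame(inputTimeMs - 33, exposureMs = 33)  // stub
fun groupOf(f: Frame) = listOf(f.copy(exposureMs = 10), f, f.copy(exposureMs = 120))  // stub
fun saveToAlbum(f: Frame) = println("saved $f")
fun openProcessingInterface(group: List<Frame>) = println("processing ${group.size} frames")

fun onFifthInput(gesture: Gesture, inputTimeMs: Long) = when (gesture) {
    // First preset feature (quick mode): the extracted normal image is the final result.
    Gesture.SWIPE_DOWN -> saveToAlbum(extractNormalImage(inputTimeMs))
    // Second preset feature (custom mode): the image's whole acquisition group becomes
    // the first image group and is handed to the image processing interface.
    Gesture.SWIPE_UP -> openProcessingInterface(groupOf(extractNormalImage(inputTimeMs)))
}
```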
In this embodiment, through the settings of steps 710 and 720, the custom photographing mode gives the user a way to immediately check the pose of the photographed subject during shooting, while also providing a shooting mode in which the captured image can be processed in a customized way, effectively meeting the user's personalized shooting requirements.
Step 730: display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image.
Step 740: receive a first input from the user on a target algorithm identifier among the N algorithm identifiers.
Wherein N is a positive integer.
Step 750: in response to the first input, perform image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
The implementation process of step 730 to step 750 may refer to the related description in method embodiments 100, 300, or 500, and is not repeated herein for avoiding repetition.
It should be noted that, in the image processing methods 100 to 700 provided in the embodiments of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiments of the present application is described below by taking an image processing apparatus executing the image processing method as an example.
As shown in fig. 9, which is a schematic structural diagram of an image processing apparatus 900 according to an embodiment of the present application, the apparatus 900 includes: a display module 910, configured to display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image; a receiving module 920, configured to receive a user's first input on a target algorithm identifier among the N algorithm identifiers, where N is a positive integer; and a response module 930, configured to perform, in response to the first input, image processing on the first image group with the target image processing algorithm indicated by the target algorithm identifier.
In one optional implementation, the first interface includes: an image acquisition interface, an image preview interface or an image processing interface; the first image group comprises a shooting preview image, a video preview image or a video frame in a target video, which are acquired by an image sensor.
In another possible implementation manner, the first interface includes a first area and a second area, and the first area is used for displaying the first image group; the second area comprises a first sub-area and a second sub-area, the first sub-area is used for displaying the N algorithm identifications, and the second sub-area is used for displaying the target algorithm identification selected by the first input; the receiving module 920 is configured to receive a first input of M algorithm identifiers displayed in the first sub-area from a user, where the target algorithm identifier includes the M algorithm identifiers, M is an integer greater than 1, and M is not greater than N; the response module 930 is further configured to display the M algorithm identifications in the second sub-area in response to the first input.
In another possible implementation manner, the second sub-region includes at least two algorithm windows, and each algorithm window includes one algorithm identifier of the M algorithm identifiers; the response module 930 is configured to determine, according to the display information of the algorithm window, an algorithm combination manner of the image processing algorithms indicated by the M algorithm identifiers; and according to the algorithm combination mode, carrying out image processing on the first image group through a target image processing algorithm indicated by the target algorithm identification.
In another possible implementation manner, the first interface further includes a third area, and the third area is used for displaying at least one image group; the receiving module 920 is further configured to receive a second input of the user to the second image group displayed in the third area; the responding module 930 is further configured to update the first image group displayed in the first area to the second image group in response to the second input.
In another possible implementation manner, the receiving module 920 is further configured to receive a third input of an album identifier displayed on a second interface by the user, where the second interface includes an image acquisition interface or an image preview interface; the receiving module 920 is further configured to respond to the third input, and display at least two images in the album corresponding to the album identifier; the receiving module 920 is further configured to receive a fourth input of the user to K images of the at least two images, where K is a positive integer; the response module 930 is further configured to determine K images as the first image group in response to the fourth input.
In another possible implementation manner, the receiving module 920 is further configured to receive a fifth input to the second interface from the user; the response module 930, further configured to, in response to the fifth input, extract a first image from the target image frame queue and store the first image into an album if an input feature of the fifth input is a first preset feature; the target image frame queue is used for caching image frames acquired by an image sensor in a shooting preview process, image parameters of the target image frame queue are of preset parameter types, and the acquisition time of the first image is earlier than the input time of the fifth input.
In another possible implementation manner, the responding module 930 is further configured to, in response to the fifth input, extract a second image from the target image frame queue and determine the image group of the second image as the first image group in a case that an input feature of the fifth input is a second preset feature.
In another possible implementation manner, the receiving module 920 is further configured to receive a sixth input from the user; the response module 930, further configured to, in response to the sixth input, control the T image sensors to perform image acquisition according to a preset image acquisition period, where image parameters of different image sensors are different; respectively caching S images with different image parameters, which are acquired by the T image sensors in each image acquisition period, into corresponding image frame queues; wherein, one image frame queue corresponds to an image of an image parameter, T is an integer greater than or equal to 1, and S is an integer greater than 1.
In another possible implementation, in a case that the first image group includes at least two images, the at least two images are images with different image parameters acquired in a same acquisition cycle.
The image processing apparatus 900 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus 900 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 8, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 1000 is further provided in this embodiment of the present application, and includes a processor 1001, a memory 1002, and a program or an instruction stored in the memory 1002 and executable on the processor 1001, where the program or the instruction is executed by the processor 1001 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The display unit 1106 is configured to display a first image group and N algorithm identifiers on a first interface, where the first image group includes at least one image; a user input unit 1107 for receiving a first input from a user to a target algorithm identifier of the N algorithm identifiers, and for responding to the first input, and controlling the processor 1110 to perform image processing on the first image group by using the target image processing algorithm indicated by the target algorithm identifier, where N is a positive integer.
In one optional implementation, the first interface includes: an image acquisition interface, an image preview interface or an image processing interface; the first image group comprises a shooting preview image, a video preview image or a video frame in a target video, which are acquired by an image sensor.
In another possible implementation manner, the first interface includes a first area and a second area, and the first area is used for displaying the first image group; the second area comprises a first sub-area and a second sub-area, the first sub-area is used for displaying the N algorithm identifications, and the second sub-area is used for displaying the target algorithm identification selected by the first input; the user input unit 1107 is configured to receive a first input of the M algorithm identifiers displayed in the first sub-area from a user, where the target algorithm identifier includes the M algorithm identifiers, and in response to the first input, control the display unit 1106 to display the M algorithm identifiers in the second sub-area, where M is an integer greater than 1, and M is not greater than N.
In another possible implementation manner, the second sub-area includes at least two algorithm windows, and each algorithm window includes one of the M algorithm identifiers. The processor 1110 is configured to determine an algorithm combination manner of the image processing algorithms indicated by the M algorithm identifiers according to display information of the algorithm windows, and to perform image processing on the first image group, according to the algorithm combination manner, through the target image processing algorithm indicated by the target algorithm identifier.
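As a sketch of one plausible combination rule (the embodiment leaves the combination manner abstract; using each window's on-screen position as the display information is an assumption of this example), the M identified algorithms could be composed into a single pipeline in window order:

```python
from typing import Callable, List, Tuple

Image = List[List[int]]
Algorithm = Callable[[Image], Image]

def combine_by_window_order(windows: List[Tuple[int, Algorithm]]) -> Algorithm:
    """Derive an algorithm combination manner from display information: here
    the on-screen x-position of each algorithm window determines execution
    order (one possible reading; the rule is left abstract in the text)."""
    ordered = [algorithm for _, algorithm in sorted(windows, key=lambda w: w[0])]

    def pipeline(image: Image) -> Image:
        for algorithm in ordered:  # run the windowed algorithms in order
            image = algorithm(image)
        return image

    return pipeline

# Usage: a "denoise" window displayed left (x = 40) of a "sharpen" window
# (x = 120) runs first; both functions are stand-ins for real algorithms.
denoise: Algorithm = lambda img: img
sharpen: Algorithm = lambda img: img
process = combine_by_window_order([(120, sharpen), (40, denoise)])
result = process([[1, 2], [3, 4]])
```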
In another possible implementation manner, the first interface further includes a third area, and the third area is used for displaying at least one image group. The user input unit 1107 is further configured to receive a second input from the user on a second image group displayed in the third area, and to update, in response to the second input, the first image group displayed in the first area to the second image group.
In another possible implementation manner, the user input unit 1107 is further configured to receive a third input from the user on an album identifier displayed on a second interface, where the second interface includes an image acquisition interface or an image preview interface, and to display, in response to the third input, at least two images in the album corresponding to the album identifier. The user input unit 1107 is further configured to receive a fourth input from the user on K images of the at least two images, where K is a positive integer, and to determine, in response to the fourth input, the K images as the first image group.
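A minimal sketch of this selection flow, assuming images are addressed by index and represented by file paths (both assumptions of the example, not details from the embodiment):

```python
from typing import List

def select_first_image_group(album: List[str], selected_indices: List[int]) -> List[str]:
    """Handle the 'fourth input': the K images the user selected from the
    album become the first image group (paths and indices are illustrative)."""
    return [album[i] for i in selected_indices]

album = ["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"]        # shown after the third input
first_image_group = select_first_image_group(album, [0, 2])  # K = 2 selected images
```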
In another possible implementation manner, the user input unit 1107 is further configured to receive a fifth input from the user on the second interface and, in response to the fifth input, in a case that an input feature of the fifth input is a first preset feature, extract a first image from a target image frame queue and store the first image into an album. The target image frame queue is used for caching image frames acquired by an image sensor during shooting preview, the image parameters of the target image frame queue are of a preset parameter type, and the acquisition time of the first image is earlier than the input time of the fifth input.
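The retroactive extraction described here (returning a frame captured before the moment of the fifth input) can be sketched with a bounded queue; the capacity, the lookup rule, and the class names below are assumptions of this example rather than details fixed by the embodiment:

```python
import time
from collections import deque
from dataclasses import dataclass
from typing import Deque, Optional

@dataclass
class Frame:
    capture_time: float
    data: bytes  # placeholder for real pixel data

class TargetImageFrameQueue:
    """A bounded cache of preview frames of one preset parameter type,
    sketching the 'target image frame queue' described above."""

    def __init__(self, capacity: int = 30):
        self._frames: Deque[Frame] = deque(maxlen=capacity)

    def cache(self, frame: Frame) -> None:
        self._frames.append(frame)  # oldest frames are evicted automatically

    def extract_before(self, input_time: float) -> Optional[Frame]:
        """Return the newest cached frame whose capture time is earlier
        than the input time of the fifth input, if any frame qualifies."""
        candidates = [f for f in self._frames if f.capture_time < input_time]
        return max(candidates, key=lambda f: f.capture_time) if candidates else None

queue = TargetImageFrameQueue()
queue.cache(Frame(capture_time=time.time() - 0.1, data=b"..."))
first_image = queue.extract_before(time.time())  # stored to the album on a first preset feature
```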
In another possible implementation manner, the user input unit 1107 is further configured to, in response to the fifth input, in a case that the input feature of the fifth input is a second preset feature, extract a second image from the target image frame queue and determine the image group of the second image as the first image group.
In another possible implementation manner, the user input unit 1107 is further configured to receive a sixth input from the user and, in response to the sixth input, control T image sensors to acquire images according to a preset image acquisition period, where image parameters of different image sensors are different, and to cache, into corresponding image frame queues respectively, the S images with different image parameters acquired by the T image sensors in each image acquisition period, where one image frame queue corresponds to images of one image parameter, T is an integer greater than or equal to 1, and S is an integer greater than 1.
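A rough simulation of this acquisition scheme, assuming for concreteness that the differing image parameter is exposure time and that T = 1 sensor produces S = 3 images per period (values and names chosen only for illustration):

```python
from collections import defaultdict, deque
from typing import Deque, Dict, List, Tuple

# Illustrative configuration: S = 3 exposure parameters per acquisition period.
EXPOSURES_MS = [4, 16, 64]

def acquire_period(period_index: int) -> List[Tuple[int, str]]:
    """Simulate one image acquisition period: return (parameter, image) pairs."""
    return [(exp, f"frame_p{period_index}_e{exp}ms") for exp in EXPOSURES_MS]

# One image frame queue per image parameter, as the embodiment describes.
frame_queues: Dict[int, Deque[str]] = defaultdict(lambda: deque(maxlen=10))

for period in range(3):
    for exposure, image in acquire_period(period):
        frame_queues[exposure].append(image)

# Images from the same period but with different parameters can later be
# grouped, e.g. as inputs to a multi-frame algorithm such as HDR fusion.
same_period_group = [frame_queues[exp][-1] for exp in EXPOSURES_MS]
```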
In another possible implementation, in a case that the first image group includes at least two images, the at least two images are images with different image parameters acquired in a same acquisition period.
It should be understood that, in this embodiment of the present application, the input unit 1104 may include a graphics processing unit (GPU) 11041 and a microphone 11042, and the graphics processing unit 11041 processes image data of a still picture or a video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes a touch panel 11071, also referred to as a touch screen, and other input devices 11072. The touch panel 11071 may include two parts: a touch detection device and a touch controller. The other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1109 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1110 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may alternatively not be integrated into the processor 1110.
An embodiment of the present application further provides a readable storage medium on which a program or an instruction is stored. When executed by a processor, the program or the instruction implements each process of the above embodiment of the image processing method and can achieve the same technical effect; to avoid repetition, details are not described here again.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above embodiment of the image processing method and achieve the same technical effect; to avoid repetition, details are not described here again.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, and may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or certainly by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (22)

1. An image processing method, characterized in that the method comprises:
displaying a first image group and N algorithm identifiers on a first interface, wherein the first image group comprises at least one image;
receiving a first input from a user on a target algorithm identifier in the N algorithm identifiers;
in response to the first input, performing image processing on the first image group through a target image processing algorithm indicated by the target algorithm identifier;
wherein N is a positive integer.
2. The method of claim 1, wherein the first interface comprises: an image acquisition interface, an image preview interface or an image processing interface;
the first image group comprises a shooting preview image or a video preview image acquired by an image sensor, or a video frame in a target video.
3. The method of claim 1, wherein the first interface comprises a first area and a second area, the first area is used for displaying the first image group, and the second area comprises a first sub-area and a second sub-area, the first sub-area being used for displaying the N algorithm identifiers and the second sub-area being used for displaying the target algorithm identifier selected by the first input;
the receiving a first input from a user on a target algorithm identifier in the N algorithm identifiers comprises:
receiving a first input from a user on M algorithm identifiers displayed in the first sub-area, wherein the target algorithm identifier comprises the M algorithm identifiers, M is an integer greater than 1, and M is not greater than N;
after the receiving a first input from a user on a target algorithm identifier in the N algorithm identifiers, the method further comprises:
in response to the first input, displaying the M algorithm identifiers in the second sub-area.
4. The method of claim 3, wherein the second sub-area comprises at least two algorithm windows, each algorithm window comprising one of the M algorithm identifiers;
the performing image processing on the first image group through the target image processing algorithm indicated by the target algorithm identifier comprises:
determining an algorithm combination manner of the image processing algorithms indicated by the M algorithm identifiers according to display information of the algorithm windows; and
performing image processing on the first image group, according to the algorithm combination manner, through the target image processing algorithm indicated by the target algorithm identifier.
5. The method of claim 3, wherein the first interface further comprises a third area for displaying at least one image group;
after the displaying a first image group and N algorithm identifiers on a first interface, the method further comprises:
receiving a second input from the user on a second image group displayed in the third area; and
updating the first image group displayed in the first area to the second image group in response to the second input.
6. The method of claim 1, wherein before the displaying a first image group and N algorithm identifiers on a first interface, the method further comprises:
receiving a third input from a user on an album identifier displayed on a second interface, wherein the second interface comprises an image acquisition interface or an image preview interface;
in response to the third input, displaying at least two images in an album corresponding to the album identifier;
receiving a fourth input from the user on K images of the at least two images; and
determining the K images as the first image group in response to the fourth input;
wherein K is a positive integer.
7. The method of claim 6, wherein before the receiving a third input from a user on an album identifier displayed on a second interface, the method further comprises:
receiving a fifth input from the user on the second interface; and
in response to the fifth input, in a case that an input feature of the fifth input is a first preset feature, extracting a first image from a target image frame queue and storing the first image into an album;
wherein the target image frame queue is used for caching image frames acquired by an image sensor during shooting preview, image parameters of the target image frame queue are of a preset parameter type, and the acquisition time of the first image is earlier than the input time of the fifth input.
8. The method of claim 7, wherein after the receiving a fifth input from the user on the second interface, the method further comprises:
in response to the fifth input, in a case that the input feature of the fifth input is a second preset feature, extracting a second image from the target image frame queue and determining the image group of the second image as the first image group.
9. The method of claim 1, wherein before the displaying a first image group and N algorithm identifiers on a first interface, the method further comprises:
receiving a sixth input from a user;
in response to the sixth input, controlling T image sensors to acquire images according to a preset image acquisition period, wherein image parameters of different image sensors are different; and
caching, into corresponding image frame queues respectively, S images with different image parameters acquired by the T image sensors in each image acquisition period;
wherein one image frame queue corresponds to images of one image parameter, T is an integer greater than or equal to 1, and S is an integer greater than 1.
10. The method of claim 9, wherein in a case that the first image group comprises at least two images, the at least two images are images with different image parameters acquired in a same acquisition period.
11. An image processing apparatus, characterized in that the apparatus comprises:
a display module, configured to display a first image group and N algorithm identifiers on a first interface, wherein the first image group comprises at least one image;
a receiving module, configured to receive a first input from a user on a target algorithm identifier in the N algorithm identifiers, wherein N is a positive integer; and
a response module, configured to perform, in response to the first input, image processing on the first image group through a target image processing algorithm indicated by the target algorithm identifier.
12. The apparatus of claim 11, wherein the first interface comprises: an image acquisition interface, an image preview interface or an image processing interface;
the first image group comprises a shooting preview image or a video preview image acquired by an image sensor, or a video frame in a target video.
13. The apparatus of claim 11, wherein the first interface comprises a first area and a second area, the first area is used for displaying the first image group, and the second area comprises a first sub-area and a second sub-area, the first sub-area being used for displaying the N algorithm identifiers and the second sub-area being used for displaying the target algorithm identifier selected by the first input;
the receiving module is configured to receive a first input from a user on M algorithm identifiers displayed in the first sub-area, wherein the target algorithm identifier comprises the M algorithm identifiers, M is an integer greater than 1, and M is not greater than N; and
the response module is further configured to display the M algorithm identifiers in the second sub-area in response to the first input.
14. The apparatus of claim 13, wherein the second sub-area comprises at least two algorithm windows, each algorithm window comprising one of the M algorithm identifiers;
the response module is configured to determine an algorithm combination manner of the image processing algorithms indicated by the M algorithm identifiers according to display information of the algorithm windows; and
to perform image processing on the first image group, according to the algorithm combination manner, through the target image processing algorithm indicated by the target algorithm identifier.
15. The apparatus of claim 13, wherein the first interface further comprises a third area for displaying at least one image group;
the receiving module is further configured to receive a second input from the user on a second image group displayed in the third area; and
the response module is further configured to update the first image group displayed in the first area to the second image group in response to the second input.
16. The apparatus of claim 11,
the receiving module is further configured to receive a third input from the user on an album identifier displayed on a second interface, wherein the second interface comprises an image acquisition interface or an image preview interface;
the receiving module is further configured to, in response to the third input, display at least two images in the album corresponding to the album identifier;
the receiving module is further configured to receive a fourth input from the user on K images of the at least two images, wherein K is a positive integer; and
the response module is further configured to determine the K images as the first image group in response to the fourth input.
17. The apparatus of claim 16,
the receiving module is further configured to receive a fifth input from the user on the second interface; and
the response module is further configured to, in response to the fifth input, in a case that an input feature of the fifth input is a first preset feature, extract a first image from a target image frame queue and store the first image into an album;
wherein the target image frame queue is used for caching image frames acquired by an image sensor during shooting preview, image parameters of the target image frame queue are of a preset parameter type, and the acquisition time of the first image is earlier than the input time of the fifth input.
18. The apparatus of claim 17,
the response module is further configured to, in response to the fifth input, in a case that the input feature of the fifth input is a second preset feature, extract a second image from the target image frame queue and determine the image group of the second image as the first image group.
19. The apparatus of claim 11,
the receiving module is further configured to receive a sixth input from the user; and
the response module is further configured to, in response to the sixth input, control T image sensors to acquire images according to a preset image acquisition period, wherein image parameters of different image sensors are different, and to cache, into corresponding image frame queues respectively, S images with different image parameters acquired by the T image sensors in each image acquisition period;
wherein one image frame queue corresponds to images of one image parameter, T is an integer greater than or equal to 1, and S is an integer greater than 1.
20. The apparatus of claim 19, wherein in a case that the first image group comprises at least two images, the at least two images are images with different image parameters acquired in a same acquisition period.
21. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 10.
22. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 10.
CN202111150189.9A 2021-09-29 2021-09-29 Image processing method, image processing device, electronic equipment and storage medium Pending CN113923353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111150189.9A CN113923353A (en) 2021-09-29 2021-09-29 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113923353A true CN113923353A (en) 2022-01-11

Family

ID=79236890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111150189.9A Pending CN113923353A (en) 2021-09-29 2021-09-29 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113923353A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140218554A1 (en) * 2013-02-07 2014-08-07 Lg Electronics Inc. Electronic device and method of controlling the same
CN104247392A (en) * 2012-03-06 2014-12-24 苹果公司 Fanning user interface controls for media editing application
CN104322050A (en) * 2012-05-22 2015-01-28 株式会社尼康 Electronic camera, image display device, and image display program
JP2015073185A (en) * 2013-10-02 2015-04-16 キヤノン株式会社 Image processing device, image processing method and program
CN108419012A (en) * 2018-03-18 2018-08-17 广东欧珀移动通信有限公司 Photographic method, device, storage medium and electronic equipment
CN110070497A (en) * 2019-03-08 2019-07-30 维沃移动通信(深圳)有限公司 A kind of image processing method and terminal device
CN113194255A (en) * 2021-04-29 2021-07-30 南京维沃软件技术有限公司 Shooting method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN113093968B (en) Shooting interface display method and device, electronic equipment and medium
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN111857512A (en) Image editing method and device and electronic equipment
CN112911147B (en) Display control method, display control device and electronic equipment
CN112995500A (en) Shooting method, shooting device, electronic equipment and medium
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112486390A (en) Display control method and device and electronic equipment
CN113794829A (en) Shooting method and device and electronic equipment
CN113794834A (en) Image processing method and device and electronic equipment
CN113709368A (en) Image display method, device and equipment
CN112822394A (en) Display control method and device, electronic equipment and readable storage medium
CN112734661A (en) Image processing method and device
CN113010738A (en) Video processing method and device, electronic equipment and readable storage medium
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112136309B (en) System and method for performing rewind operations with a mobile image capture device
CN111866379A (en) Image processing method, image processing device, electronic equipment and storage medium
WO2022247766A1 (en) Image processing method and apparatus, and electronic device
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN113271378B (en) Image processing method and device and electronic equipment
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN113596331B (en) Shooting method, shooting device, shooting equipment and storage medium
CN113923353A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113873168A (en) Shooting method, shooting device, electronic equipment and medium
CN113542599A (en) Image shooting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination