CN114972011A - Image editing method, device, terminal and storage medium

Image editing method, device, terminal and storage medium

Info

Publication number
CN114972011A
Authority
CN
China
Prior art keywords
image
filter effect
area
category
editing
Prior art date
Legal status
Pending
Application number
CN202210350730.9A
Other languages
Chinese (zh)
Inventor
屈占祥
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202210350730.9A
Publication of CN114972011A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation


Abstract

The application discloses an image editing method, an image editing device, a terminal and a storage medium, belonging to the field of internet technology. The method comprises the following steps: in response to an editing instruction for a first image, determining at least one object area in the first image and the category of each object area, wherein an object area is an area containing an object to be identified and the category is the category to which the object contained in the object area belongs; determining a filter effect corresponding to each object area based on the category of each object area; and editing each object area in the first image based on its corresponding filter effect to obtain a second image. By determining a filter effect suited to each category of object area, filter effects are intelligently matched to local areas of the first image, so that the second image obtained by editing with the determined filter effects better meets the user's viewing requirements, thereby improving both the editing effect and the aesthetics of the image.

Description

Image editing method, device, terminal and storage medium
Technical Field
The embodiment of the application relates to the technical field of internet, in particular to an image editing method, an image editing device, a terminal and a storage medium.
Background
Users now have increasingly high expectations for the visual appeal of images, and images are often edited to meet a user's viewing requirements. When an image is edited, a filter effect is first determined and then added to the whole image. However, the editing effect obtained with this approach is monotonous.
Disclosure of Invention
The embodiment of the application provides an image editing method, an image editing device, a terminal and a storage medium, which can improve the image editing effect. The technical scheme is as follows:
in one aspect, an image editing method is provided, and the method includes:
in response to an editing instruction of a first image, determining at least one object area and a category of each object area in the first image, wherein the object area is an area containing an object to be identified, and the category is a category to which the object contained in the object area belongs;
determining a filter effect corresponding to each object region based on the category of each object region;
and respectively editing each object area in the first image based on the filter effect corresponding to each object area to obtain a second image.
In some embodiments, the determining, based on the category of each of the object regions, a filter effect corresponding to each of the object regions includes:
for each object region, determining, from a plurality of preset categories, a target category whose matching parameter with the category of the object region is higher than a matching threshold;
and determining the filter effect corresponding to the target category as the filter effect corresponding to the object area.
In some embodiments, the editing each object region in the first image based on the filter effect corresponding to each object region respectively to obtain a second image includes:
respectively editing each object area in the first image based on the filter effect corresponding to each object area;
and generating the second image based on a background area and each edited object area, wherein the background area is an area except the at least one object area in the first image.
In some embodiments, the generating the second image based on the background region and each edited object region includes:
determining a filter effect corresponding to the first image;
editing the background area and each edited object area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
In some embodiments, the generating the second image based on the background region and each edited object region includes:
determining a filter effect corresponding to the first image;
editing the background area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
In some embodiments, the determining a filter effect corresponding to the first image includes:
determining a scene category to which a scene of the first image belongs;
and determining the filter effect corresponding to the scene type as the filter effect corresponding to the first image.
In some embodiments, after determining the filter effect corresponding to each of the object regions based on the category of each of the object regions, the method further comprises:
displaying a parameter adjusting control corresponding to each object region in the first image, wherein the parameter adjusting control is used for triggering filter parameters for adjusting a filter effect corresponding to the object region, and the filter parameters are used for describing the strength of the filter effect;
the editing each object region in the first image based on the filter effect corresponding to each object region includes:
and responding to the triggering operation of the parameter adjusting control, and editing the object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control.
In some embodiments, the method further comprises:
displaying a parameter adjusting control corresponding to each object area in the second image, wherein the parameter adjusting control is used for triggering filter parameters for adjusting the filter effect added to the object area, and the filter parameters are used for describing the strength of the effect of the added filter effect;
and responding to the triggering operation of any parameter adjusting control, and editing an object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control to obtain a third image.
In some embodiments, the method further comprises:
displaying the first image and a filter adding control in an image editing interface, wherein the filter adding control is used for triggering the addition of a filter effect to at least one object area in the first image;
the determining at least one object region in the first image and the category of each object region in response to the editing instruction for the first image comprises:
in response to a triggering operation of the filter addition control, at least one object region and a category of each object region in the first image are determined.
In some embodiments, the determining at least one object region and a category of each object region in the first image in response to the editing instruction for the first image comprises:
in response to an editing instruction of the first image, determining a selected area in the first image as the object area;
and identifying the category to which the object contained in the object area belongs.
In another aspect, there is provided an image editing apparatus, the apparatus including:
a determining module, used for determining, in response to an editing instruction for a first image, at least one object area in the first image and the category of each object area, wherein the object area is an area containing an object to be identified, and the category is the category to which the object contained in the object area belongs;
the determining module is further configured to determine, based on the category of each object region, a filter effect corresponding to each object region;
and the editing module is used for respectively editing each object area in the first image based on the filter effect corresponding to each object area to obtain a second image.
In some embodiments, the determining module is to:
for each object region, determining, from a plurality of preset categories, a target category whose matching parameter with the category of the object region is higher than a matching threshold;
and determining the filter effect corresponding to the target category as the filter effect corresponding to the object area.
In some embodiments, the editing module comprises:
the editing unit is used for respectively editing each object area in the first image based on the filter effect corresponding to each object area;
a generating unit, configured to generate the second image based on a background area and each edited object area, where the background area is an area of the first image except for the at least one object area.
In some embodiments, the generating unit is configured to:
determining a filter effect corresponding to the first image;
editing the background area and each edited object area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
In some embodiments, the generating unit is configured to:
determining a filter effect corresponding to the first image;
editing the background area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
In some embodiments, the generating unit is configured to:
determining a scene category to which a scene of the first image belongs;
and determining the filter effect corresponding to the scene type as the filter effect corresponding to the first image.
In some embodiments, the apparatus further comprises:
a display module, configured to display a parameter adjustment control corresponding to each object region in the first image, where the parameter adjustment control is used to trigger a filter parameter for adjusting a filter effect corresponding to the object region, and the filter parameter is used to describe a strength of the filter effect;
and the editing unit is used for responding to the triggering operation of the parameter adjusting control and editing the object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control.
In some embodiments, the apparatus further comprises:
a display module, configured to display a parameter adjustment control corresponding to each object region in the second image, where the parameter adjustment control is used to trigger a filter parameter for adjusting a filter effect added to the object region, and the filter parameter is used to describe a strength of the added filter effect;
the editing module is further configured to, in response to a trigger operation on any one of the parameter adjustment controls, edit an object region corresponding to the parameter adjustment control based on the filter parameter indicated by the parameter adjustment control, so as to obtain a third image.
In some embodiments, the apparatus further comprises:
the display module is used for displaying the first image and a filter adding control in an image editing interface, wherein the filter adding control is used for triggering the addition of a filter effect to at least one object area in the first image;
the determining module is used for responding to the triggering operation of the filter adding control, and determining at least one object area in the first image and the category of each object area.
In some embodiments, the determining module is to:
in response to an editing instruction of the first image, determining a selected area in the first image as the object area;
and identifying the category to which the object contained in the object area belongs.
In another aspect, a terminal is provided, which includes a processor and a memory, where at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement the image editing method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the image editing method of the above aspect.
In another aspect, the present application provides a computer program product, where at least one computer program is stored in the computer program product, and the at least one computer program is loaded and executed by a processor to implement the image editing method according to the above aspect.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the embodiment of the application provides a scheme for respectively adding filter effects to local areas of images, at least one object area in a first image and the category of each object area are determined, so that the filter effects suitable for the category can be determined according to the object area of each category, the process of intelligently matching the filter effects to the local areas of the first image is realized, and a second image obtained by editing based on the determined filter effects can better meet the visual and sensory requirements of a user, so that the editing effect of the images is improved, and the attractiveness of the images is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of an image editing method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another image editing method provided in an embodiment of the present application;
FIG. 3 is a flow chart of an image editing process provided by an embodiment of the present application;
fig. 4 is a flowchart of a further image editing method provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image editing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
It should be noted that information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, displayed data, etc.), and signals referred to in this application are authorized by the user or sufficiently authorized by various parties, and the collection, use, and processing of the relevant data is required to comply with relevant laws and regulations and standards in relevant countries and regions. For example, the images referred to in this application are all acquired with sufficient authorization.
The image editing method provided by the embodiment of the application is executed by the terminal. Optionally, the terminal is a mobile phone, a tablet computer, a computer, or the like.
The image editing method can be applied to image editing scenarios. When a user wants to edit an image, the user triggers the terminal to display the image, and the terminal adds a corresponding filter effect to at least one object area in the image through the image editing method provided by the embodiments of the application to obtain an edited image. The edited image better meets the user's viewing requirements, improving the image editing effect.
For example, an album application is installed on the terminal. The user selects an image in the album application, and the terminal adds a corresponding filter effect to at least one object area in the image through the image editing method provided by the embodiments of the application, obtaining an edited image that better meets the user's viewing requirements.
For another example, a viewpoint sharing application is installed on the terminal. When the user shares a viewpoint through the application, the user selects an image from the images displayed in the application, and the terminal adds a corresponding filter effect to at least one object area in the image through the image editing method provided by the embodiments of the application, obtaining an edited image that better meets the user's viewing requirements, so that the user can share the viewpoint based on the edited image.
For another example, a session application is installed on the terminal. When the user has a conversation through the session application, the user selects an image from the images displayed in the application, and the terminal adds a corresponding filter effect to at least one object area in the image through the image editing method provided by the embodiments of the application, obtaining an edited image that better meets the user's viewing requirements, so that the user can converse with other users based on the edited image.
Fig. 1 is a flowchart of an image editing method provided in an embodiment of the present application, and referring to fig. 1, the method is applied in a terminal, and the method includes:
101. the terminal responds to an editing instruction of the first image, and determines at least one object area and the category of each object area in the first image, wherein the object area is an area containing an object to be identified, and the category is a category to which the object contained in the object area belongs.
The first image contains objects of at least one category, and each object area contains objects of one category. The objects and their categories can be set as needed, which is not limited in the embodiments of the present application. Optionally, the objects include people, animals, articles and so on; articles include food, scenery, daily necessities and so on; and the categories of the objects include a person category, an animal category, a food category, a scenery category, a daily-necessities category and so on. For example, if the object is a dog, it belongs to the animal category; if the object is a steamed stuffed bun, it belongs to the food category.
The editing instruction is used for triggering editing of the image. Optionally, the editing instruction triggers the addition of a filter effect to the image. Filters can be used to achieve various special effects on an image. Adding a filter effect to an image can be understood as superimposing the effect on the image, or as applying some transformation to it. The filter effect can take various forms, which are not limited in this application: for example, it may act on color, such as a red filter, a blue filter or a gray filter, or it may act on the shape of the image subject, such as a face-slimming or leg-slimming effect.
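For illustration only, a color-type filter effect of the kind mentioned above can be sketched as a per-pixel transformation. The following Python snippet is a minimal sketch and not part of the patent disclosure; the channel weights and the `warmth` value are assumptions made for the example.

```python
import numpy as np

def apply_red_filter(image: np.ndarray, warmth: float = 0.15) -> np.ndarray:
    """Superimpose a simple 'red' color filter on an RGB uint8 image:
    boost the red channel and slightly suppress the blue channel.
    `warmth` is a hypothetical strength value chosen for illustration."""
    out = image.astype(np.float32)
    out[..., 0] *= 1.0 + warmth   # red channel up (assumes RGB channel order)
    out[..., 2] *= 1.0 - warmth   # blue channel down
    return np.clip(out, 0, 255).astype(np.uint8)

def apply_gray_filter(image: np.ndarray) -> np.ndarray:
    """A 'gray' filter: replace each pixel with its luminance."""
    gray = image.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return np.repeat(gray[..., None], 3, axis=2).astype(np.uint8)
```

Shape-oriented effects such as face slimming would instead warp pixel coordinates rather than remap colors, and are outside the scope of this sketch.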
102. And the terminal determines the filter effect corresponding to each object area based on the category of each object area.
In the embodiment of the application, the terminal determines the filter effect corresponding to the category of each object area, and determines the filter effect as the filter effect corresponding to the object area.
103. And the terminal respectively edits each object area in the first image based on the filter effect corresponding to each object area to obtain a second image.
And the second image is an image obtained by adding a filter effect to the first image. In the embodiment of the application, after the filter effect corresponding to each object region is determined, the terminal edits each object region, that is, adds the filter effect corresponding to the object region to each object region, thereby obtaining the second image.
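Read together, steps 101 to 103 amount to the control flow sketched below. The helpers `detect_object_regions`, `filter_for_category` and `apply_filter` are hypothetical placeholders for the segmentation/recognition models and the filter library; the method does not tie them to any particular implementation.

```python
import numpy as np

def edit_image(first_image: np.ndarray,
               detect_object_regions,   # image -> [(bool mask, category str), ...]
               filter_for_category,     # category str -> filter effect
               apply_filter) -> np.ndarray:  # (image, effect) -> filtered image
    """Sketch of steps 101-103: per-area filter matching and editing."""
    # 101: at least one object area and the category of each object area
    regions = detect_object_regions(first_image)
    # 102: a filter effect per object area, chosen from its category
    effects = [filter_for_category(category) for _, category in regions]
    # 103: edit each object area with its own filter effect -> second image
    second_image = first_image.copy()
    for (mask, _), effect in zip(regions, effects):
        second_image[mask] = apply_filter(first_image, effect)[mask]
    return second_image
```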
The embodiments of the application provide a scheme for adding filter effects to local areas of an image individually. By determining at least one object area in the first image and the category of each object area, a filter effect suited to each category can be determined for the corresponding object area, so that filter effects are intelligently matched to local areas of the first image. The second image obtained by editing with the determined filter effects therefore better meets the user's viewing requirements, improving both the editing effect and the aesthetics of the image.
In some embodiments, when a user wants to edit an image, the user may trigger the terminal to display an image editing interface corresponding to the image, so that the image is edited based on the image editing interface. Optionally, taking the image as the first image as an example, the terminal determines the first image and displays an image editing interface corresponding to the first image. The implementation mode of the terminal for determining the first image comprises the following steps: the terminal displays at least one image, and in response to any one of the at least one image being selected, determines the selected image as a first image.
Optionally, a target application is installed on the terminal. The target application is any application with an image editing function, such as an album, a viewpoint sharing application or a social application. For example, if the target application is an album application, the terminal displays at least one image in the album application, the user selects one of the images, and the terminal, in response to the selection, determines the selected image as the first image. As another example, the target application is a viewpoint sharing application. The user triggers the terminal to run the viewpoint sharing application, which includes a sharing control. When the user triggers the sharing control, the terminal displays an image selection interface showing at least one image; the user selects an image, and the terminal, in response to the selection in the image selection interface, determines the selected image as the first image.
Optionally, after the first image is determined, the terminal directly displays an image editing interface corresponding to the first image. Or the target application comprises an image editing control, and the image editing control is used for triggering the display of the image editing interface. After the first image is determined, the terminal displays the first image and an image editing control in an image display interface of the target application, the user triggers the image editing control, and the terminal responds to the triggering of the image editing control and displays an image editing interface corresponding to the first image. The display mode of the first image and the image editing control may be set as required, which is not limited in the embodiment of the present application, for example, the image editing control is displayed below the first image in an "editing" style.
In the embodiment of the application, the image editing interface is triggered and displayed, so that a user can trigger the terminal to edit the image in the image editing interface, a channel for editing the image is provided for the user, the convenience for editing the image is further improved, and the use experience of the user is improved.
Taking as an example the case where the terminal adds a filter effect to the first image based on the image editing interface to obtain the second image, fig. 2 is a flowchart of another image editing method provided in an embodiment of the present application. Referring to fig. 2, the method includes:
201. the terminal displays a first image and a filter adding control in an image editing interface, wherein the filter adding control is used for triggering the addition of a filter effect to at least one object area in the image, and the object area is an area containing an object to be identified.
The display modes of the first image and the filter adding control can be set according to needs, and the display modes are not limited in the embodiment of the application. For example, the filter addition control is displayed below the first image in the word "smart filter".
It should be noted that the first image displayed by the terminal in the image editing interface may be an original image, that is, an unedited image, or an image obtained through other editing operations, for example cropping, adding text or adding stickers.
In the embodiment of the application, a user views the first image through an image editing interface displayed by the terminal. When it is desired to add a filter effect to the image, the user can trigger the filter addition control, thereby triggering the terminal to perform the operation of step 202. The trigger operation may be set as required, which is not limited in the embodiment of the present application. For example, the trigger operation is a single click or a double click.
202. And the terminal responds to the triggering operation of the filter adding control, and determines at least one object area in the first image and the category of each object area, wherein the category is the category to which the object contained in the object area belongs.
The terminal determines at least one object area in the first image, identifies the category to which the object contained in each object area belongs, and determines that category as the category of the object area. In some embodiments, the terminal is deployed with an image segmentation model and an image recognition model. The image segmentation model is used for segmenting an input image to obtain at least one object region in the image, and the image recognition model is used for recognizing the category to which the object contained in an input image belongs. The terminal invokes the image segmentation model and inputs the first image into it, and the image segmentation model outputs the at least one object region. After obtaining the at least one object region of the first image, the terminal invokes the image recognition model and inputs each object region into it, and the image recognition model outputs the category of each object region. In this embodiment, the first image is processed by means of the image segmentation model and the image recognition model to obtain the at least one object region and the category of each object region, so no manual processing is needed and the determination is efficient.
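The segmentation-then-recognition pipeline described here could be wired up roughly as follows. The sketch assumes both models are already loaded as callables returning boolean masks and category strings respectively; the patent does not specify concrete networks or frameworks.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ObjectRegion:
    mask: np.ndarray   # boolean mask with the same height/width as the image
    category: str      # category of the object contained in the region

def detect_regions(image: np.ndarray,
                   segmentation_model,    # image -> list of boolean masks
                   recognition_model) -> List[ObjectRegion]:  # crop -> category str
    """Sketch of step 202: segment the first image, then classify each region."""
    regions: List[ObjectRegion] = []
    for mask in segmentation_model(image):
        ys, xs = np.nonzero(mask)
        if ys.size == 0:                  # skip empty masks
            continue
        # crop the region to its bounding box before classification
        crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        regions.append(ObjectRegion(mask=mask, category=recognition_model(crop)))
    return regions
```

The server-assisted variant in the next paragraph would run the same two calls on the server side and return the masks and categories to the terminal.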
In further embodiments, the server is deployed with an image segmentation model and an image recognition model, and the terminal determines at least one object region and a class of each object region in the first image by means of the server. Correspondingly, the terminal responds to the triggering operation of the filter adding control and sends an image identification request to the server, and the image identification request carries the first image. The server responds to the image recognition request, calls an image segmentation model, inputs a first image into the image segmentation model, the image segmentation model outputs at least one object area, calls the image recognition model, inputs each object area into the image recognition model, the image recognition model outputs the category of each object area, and returns at least one object area in the first image and the category of each object area to the terminal. In this embodiment, by determining at least one object region of the first image and the category of each object region by means of the server, a large number of calculation processes are transferred from the terminal to the server, so that the terminal saves considerably on calculation resources and does not need to deploy a model, which also saves on storage space.
In some embodiments, step 202 is an implementation in which the terminal determines at least one object region in the first image and a category of each object region in response to an editing instruction for the first image. And triggering the filter adding control by the user to trigger an editing instruction of the first image.
In the embodiment of the application, the image editing control is displayed in the image editing interface, so that a user only needs to trigger the image editing control without performing other operations, and the terminal can be triggered to determine at least one object area of the first image and the category of each object area, thereby not only providing a man-machine interaction mode for image editing, but also reducing the operation steps for determining the object areas and the categories of the object areas, and improving the determination efficiency.
In some embodiments, upon receiving an editing instruction for the first image, the terminal may automatically determine at least one object region in the first image, for example by an image segmentation model. Or, the user may customize at least one object region in the first image, and accordingly, the implementation manner of the terminal responding to the editing instruction of the first image to determine at least one object region in the first image and the category of each object region includes: the terminal responds to an editing instruction of the first image, and determines the selected area in the first image as an object area; and identifying the category to which the object contained in the object area belongs.
The selected area is the area enclosed by the detected sliding track of a trigger operation. Accordingly, the terminal displays the first image, detects the sliding track of the trigger operation, and determines the area enclosed by the sliding track as the object area. Optionally, the way the terminal identifies the category to which the object contained in the object area belongs is the same as the way the category of each object area is determined based on the image recognition model in step 202, and is not repeated here.
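One way to turn the sliding track into an object area is to close the stroke into a polygon and rasterise it as a mask. The sketch below uses Pillow and assumes the track is available as a list of (x, y) touch points; the representation of the track is not fixed by the patent.

```python
import numpy as np
from PIL import Image, ImageDraw

def region_from_track(track_points, image_size):
    """Rasterise a closed sliding track into a boolean region mask.
    `track_points` is a list of at least three (x, y) coordinates and
    `image_size` is (width, height), as used by Pillow."""
    mask_img = Image.new("L", image_size, 0)
    ImageDraw.Draw(mask_img).polygon(track_points, outline=1, fill=1)
    return np.array(mask_img, dtype=bool)   # shape (height, width)
```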
It should be noted that the terminal may also determine the at least one object region of the first image by combining the two implementations. For example, the terminal determines object regions in the first image based on the image segmentation model and then also determines the selected region as an object region, obtaining the at least one object region in the first image. Or, the terminal first determines the selected region as an object region and then determines object regions in the first image based on the image segmentation model, obtaining the at least one object region in the first image. This is not limited in the present application.
In the embodiment of the application, the selected area in the first image is determined as the object area, so that a mode of customizing the object area is provided for a user, the user can determine the object area by himself or herself, the determination mode of the object area is enriched, the determined object area can better meet the requirements of the user, and the use experience of the user is improved.
203. And the terminal determines the filter effect corresponding to each object area based on the category of each object area.
In some embodiments, the terminal stores a plurality of preset categories and a filter effect corresponding to each preset category. The preset categories and their corresponding filter effects can be set as needed, which is not limited in this application; for example, a preset category is the food category and its corresponding filter effect is a red filter. Correspondingly, the terminal determines the filter effect corresponding to each object region based on its category as follows: for each object area, the terminal determines, from the plurality of preset categories, a target category whose matching parameter with the category of the object area is higher than a matching threshold, and determines the filter effect corresponding to the target category as the filter effect corresponding to the object area.
Wherein the matching parameter is used to indicate the degree of matching between the two categories. The matching threshold may be set as needed, which is not limited in this application. If the matching parameter is higher than the matching threshold, it indicates that the category of the object region is relatively matched with the target category, that is, the category is relatively close to the target category, and the filter effect corresponding to the target category may be determined as the filter effect corresponding to the category; if the matching parameter is not higher than the matching threshold, it indicates that the category of the object region is not matched with the target category, that is, the category is not close to the target category, and the filter effect corresponding to the target category cannot be determined as the filter effect corresponding to the category.
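The matching step can be sketched as scoring the identified category against each preset category and keeping the best score above the matching threshold. The preset table, the similarity function and the 0-to-1 score range are assumptions made for the example; the patent only requires a matching parameter and a threshold.

```python
# Hypothetical preset categories and the filter effects stored for them.
PRESET_FILTERS = {
    "food": "warm_red",
    "animal": "soft_contrast",
    "person": "portrait_smooth",
}

def match_filter(region_category: str,
                 similarity,                  # (str, str) -> float in [0, 1]
                 matching_threshold: float = 0.7):
    """Return the filter effect of the preset (target) category whose matching
    parameter with the region's category exceeds the threshold, else None."""
    best_category, best_score = None, matching_threshold
    for preset_category in PRESET_FILTERS:
        score = similarity(region_category, preset_category)
        if score > best_score:
            best_category, best_score = preset_category, score
    return PRESET_FILTERS.get(best_category)
```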
In the embodiments of the application, by determining, from the plurality of preset categories, the target category whose matching parameter with the category of the object region is higher than the threshold, a category close to that of the object region can be found, and the filter effect corresponding to the target category can be directly determined as the filter effect corresponding to the object region. The determined filter effect therefore conforms better to the category of the object region and is more accurate.
204. And the terminal edits each object area in the first image respectively based on the filter effect corresponding to each object area.
In some embodiments, for each object region, the terminal adds a filter effect corresponding to the object region to obtain an edited object region.
205. And the terminal generates a second image based on the background area and each edited object area, wherein the background area is an area except for at least one object area in the first image.
In some embodiments, the terminal directly composes the background region and each edited object region into the second image. Or, the terminal may further add a filter effect to the background area, and optionally, the implementation manner of generating the second image based on the background area and each edited object area by the terminal includes: the terminal determines a filter effect corresponding to the first image; editing the background area based on the filter effect corresponding to the first image; and forming a second image by the edited background area and each edited object area.
The filter effect corresponding to the first image is the overall filter effect of the first image. After this filter effect is determined, the terminal edits the background area, that is, the area outside the object areas to which no filter effect has yet been added, by adding the filter effect corresponding to the first image to it.
Optionally, the terminal determines the filter effect corresponding to the first image based on the scene of the first image. Accordingly, the terminal determines the filter effect corresponding to the first image by: determining the scene category to which the scene of the first image belongs; and determining the filter effect corresponding to that scene category as the filter effect corresponding to the first image. The terminal is deployed with a recognition model used for identifying the scene category of an image; the terminal invokes the recognition model, inputs the first image into it, and the model outputs the scene category to which the scene of the first image belongs. For example, if the scene of the first image is a seaside, the scene category is the seaside category, and if the scene of the first image is a restaurant, the scene category is the restaurant category.
In the embodiments of the application, the scene category of the first image is largely embodied in its background area. By identifying the scene category of the first image, a filter effect that conforms to the scene category can be determined, and the determined filter effect matches the background area of the first image, so that the edited background area better meets the user's viewing requirements and the image editing effect is improved.
Or, the terminal may further add a filter effect to both the background area and the edited object area, and optionally, the implementation manner in which the terminal generates the second image based on the background area and each edited object area includes: the terminal determines a filter effect corresponding to the first image; editing the background area and each edited object area based on the filter effect corresponding to the first image; and forming a second image by the edited background area and each edited object area.
The implementation manner of determining the filter effect corresponding to the first image by the terminal refers to the above optional implementation manner, which is not described herein again. In the embodiment of the application, the determined filter effect corresponding to the first image is the overall filter effect of the first image, and the overall filter effect of the edited image can be unified by adding the filter effect to the background region and the edited object region, so that the editing effect of the image is improved.
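Steps 204 and 205, in either variant, can be sketched as pasting the edited object areas back into the image and then applying the image-level filter. The flag distinguishing the two variants and the callable signatures are assumptions for illustration.

```python
import numpy as np

def compose_second_image(first_image: np.ndarray,
                         edited_regions,        # [(bool mask, edited image), ...]
                         background_filter,     # image -> image (filter of the first image)
                         also_filter_regions: bool = False) -> np.ndarray:
    """Sketch of steps 204-205: paste the edited object areas back and apply
    the filter effect corresponding to the first image. With
    `also_filter_regions=True` the image-level filter is applied on top of the
    edited areas as well (second variant); otherwise only the background gets it."""
    object_mask = np.zeros(first_image.shape[:2], dtype=bool)
    second = first_image.copy()
    for mask, edited in edited_regions:
        second[mask] = edited[mask]
        object_mask |= mask
    filtered = background_filter(second)
    if also_filter_regions:
        return filtered                               # unified overall filter
    second[~object_mask] = filtered[~object_mask]     # background only
    return second
```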
In some embodiments, steps 204 to 205 are an implementation manner in which the terminal edits each object region in the first image respectively based on the filter effect corresponding to each object region to obtain the second image.
After the second image is obtained, the terminal displays the second image so that the user can view the second image. Optionally, each object region and the background region are still divided in the second image, so that the user can compare the filter effect of each object region and the background region respectively.
In some embodiments, after step 205, the terminal can further adjust the filter effects added in the second image. Accordingly, the image editing method provided by the embodiments of the application further includes the following steps: the terminal displays a parameter adjustment control corresponding to each object area in the second image, where the parameter adjustment control is used for triggering adjustment of the filter parameter of the filter effect added to the object area, and the filter parameter describes the strength of the added filter effect; and in response to a trigger operation on any parameter adjustment control, the terminal edits the object area corresponding to that control based on the filter parameter indicated by it, obtaining a third image.
The range of the filter parameter can be set as needed, for example 0 to 100%, and accordingly the parameter adjustment control can be displayed in the form of a slider. Optionally, in response to a trigger operation on any parameter adjustment control, the terminal adds the filter effect at the indicated filter parameter to the object area corresponding to that control, obtaining the edited object area. For example, if the filter parameter is 20%, the terminal adds 20% of the filter effect to the object area.
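One common reading of "adding 20% of the filter effect" is a linear blend between the original pixels and the fully filtered pixels; the linear blend is an assumption, since the patent leaves the exact meaning of the filter parameter open.

```python
import numpy as np

def apply_with_strength(region: np.ndarray, filter_effect, strength: float) -> np.ndarray:
    """Blend the filtered region with the original according to the filter
    parameter `strength` in [0, 1]; strength=0.2 matches the 20% example."""
    filtered = filter_effect(region).astype(np.float32)
    original = region.astype(np.float32)
    blended = (1.0 - strength) * original + strength * filtered
    return np.clip(blended, 0, 255).astype(np.uint8)
```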
The embodiments of the application provide the user with a way to edit local areas of an image independently, and further with a way to adjust the strength of the filter effect of a local area by themselves. Through the displayed parameter adjustment control, the user can adjust the strength of the filter effect of an object area in the second image, so that the edited object area better meets the user's viewing requirements, further improving the editing effect and the aesthetics of the image.
Taking the above embodiments as an example, fig. 3 illustrates an image editing process according to an embodiment of the present application. Referring to fig. 3, the process includes: the user triggers the terminal to select a first image; the terminal obtains the first image and determines at least one object area in it and the category of each object area; the terminal determines a corresponding filter effect for each object area, adds that filter effect to the area, and then adds the filter effect corresponding to the first image to the background area and the edited object areas, obtaining a second image; the user adjusts the filter effect corresponding to an object area in the second image, the terminal adds the adjusted filter effect to that area, and then adds the filter effect corresponding to the first image to the background area and each object area in the second image, obtaining the final image.
For example, a user at a restaurant takes an image with the terminal, and the image contains food and restaurant scenery (such as tables and ornaments). The user triggers the terminal to edit the image, and the terminal determines a plurality of object areas in it: object area 1 contains food, so the filter effect corresponding to the food category can be used for it; object area 2 contains an ornament, a vase, so the filter effect corresponding to the vase category can be used for it; and object area 3 contains a table, so the filter effect corresponding to the table category can be used for it. Finally, the scene category of the first image is determined; if it is the restaurant category, the filter effect corresponding to the restaurant category is used for the background area. The edited image obtained in this way better meets the user's viewing requirements.
The embodiments of the application provide a scheme for adding filter effects to local areas of an image individually. By determining at least one object area in the first image and the category of each object area, a filter effect suited to each category can be determined for the corresponding object area, so that filter effects are intelligently matched to local areas of the first image. The second image obtained by editing with the determined filter effects therefore better meets the user's viewing requirements, improving both the editing effect and the aesthetics of the image.
In some embodiments, after the filter effect corresponding to each object region in the first image is determined, the terminal does not add the filter effect to the object region immediately; instead, it provides the user with a way to set the strength of the filter effect, so that the user can choose how strongly the filter effect is applied to the object region and the edited object region better meets the user's viewing requirements. Accordingly, fig. 4 is a flowchart of another image editing method provided in an embodiment of the present application. Referring to fig. 4, the method is applied to a terminal and includes:
401. the terminal displays a first image and a filter adding control in an image editing interface, wherein the filter adding control is used for triggering the addition of a filter effect to at least one object area in the image, and the object area is an area containing an object to be identified.
402. And the terminal responds to the triggering operation of the filter adding control, and determines at least one object area in the first image and the category of each object area, wherein the category is the category to which the object contained in the object area belongs.
403. And the terminal determines the filter effect corresponding to each object area based on the category of each object area.
The implementation manners of steps 401 to 403 are the same as those of steps 201 to 203, and are not described herein again.
404. The terminal displays a parameter adjustment control corresponding to each object area in the first image, where the parameter adjustment control is used for triggering adjustment of the filter parameter of the filter effect corresponding to the object area, and the filter parameter describes the strength of the filter effect.
After the filter effect corresponding to each object region is determined, the terminal does not directly add the filter effect to the object region, but displays a parameter adjusting control corresponding to the object region, so that a user can adjust the strength of the filter effect according to the parameter adjusting control.
405. And the terminal responds to the triggering operation of the parameter adjusting control and edits an object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control.
The user triggers the parameter adjustment control to select the desired strength of the filter effect. Accordingly, the terminal adds the filter effect at the filter parameter indicated by the current parameter adjustment control to the object area corresponding to that control, obtaining the edited object area.
It should be noted that, a user may adjust filter parameters for a part of object regions in at least one object region, and then, for an object region for which filter parameters are not adjusted, the terminal adds a filter effect under preset filter parameters to the object region. The preset filter parameter may be set as needed, which is not limited in the present application, for example, the preset filter parameter is 100%.
406. And the terminal generates a second image based on the background area and each edited object area, wherein the background area is an area except for at least one object area in the first image.
The implementation of step 406 is the same as that of step 205, and is not described herein again.
In the embodiment of the application, after the filter effect corresponding to each object region is determined, the parameter adjustment control corresponding to the object region is displayed, so that a user can set the effect intensity degree of the filter effect corresponding to the object region by triggering the parameter adjustment control, the filter effect meeting the user requirement is added to the object region, the use experience of the user is improved, the editing effect of an image is improved, and the attractiveness of the image is improved.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 5 is a schematic structural diagram of an image editing apparatus provided in an embodiment of the present application, and referring to fig. 5, the apparatus includes:
a determining module 501, configured to determine, in response to an editing instruction for a first image, at least one object region and a category of each object region in the first image, where the object region is a region including an object to be identified, and the category is a category to which an object included in the object region belongs;
the determining module 501 is further configured to determine, based on the category of each object region, a filter effect corresponding to each object region;
the editing module 502 is configured to edit each object region in the first image based on the filter effect corresponding to each object region, to obtain a second image.
In some embodiments, the determining module 501 is configured to:
for each object region, determining, from a plurality of preset categories, a target category whose matching parameter with the category of the object region is higher than a matching threshold;
and determining the filter effect corresponding to the target category as the filter effect corresponding to the object area.
In some embodiments, the editing module 502 comprises:
the editing unit is used for respectively editing each object area in the first image based on the filter effect corresponding to each object area;
and the generating unit is used for generating a second image based on the background area and each edited object area, wherein the background area is an area except for at least one object area in the first image.
In some embodiments, a generation unit to:
determining a filter effect corresponding to the first image;
editing the background area and each edited object area based on the filter effect corresponding to the first image;
and forming a second image by the edited background area and each edited object area.
In some embodiments, a generation unit to:
determining a filter effect corresponding to the first image;
editing the background area based on the filter effect corresponding to the first image;
and forming a second image by the edited background area and each edited object area.
In some embodiments, a generation unit to:
determining a scene category to which a scene of the first image belongs;
and determining the filter effect corresponding to the scene type as the filter effect corresponding to the first image.
In some embodiments, the apparatus further comprises:
the display module is used for displaying a parameter adjusting control corresponding to each object area in the first image, the parameter adjusting control is used for triggering filter parameters for adjusting the filter effect corresponding to the object area, and the filter parameters are used for describing the strength of the filter effect;
and the editing unit is used for responding to the triggering operation of the parameter adjusting control and editing the object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control.
In some embodiments, the apparatus further comprises:
the display module is used for displaying a parameter adjusting control corresponding to each object area in the second image, the parameter adjusting control is used for triggering filter parameters for adjusting the filter effect added to the object area, and the filter parameters are used for describing the strength of the effect of the added filter effect;
the editing module 502 is further configured to, in response to a trigger operation on any parameter adjustment control, edit an object region corresponding to the parameter adjustment control based on a filter parameter indicated by the parameter adjustment control, so as to obtain a third image.
In some embodiments, the apparatus further comprises:
the display module is used for displaying the first image and a filter adding control in the image editing interface, and the filter adding control is used for triggering the addition of a filter effect to at least one object area in the first image;
a determining module 501, configured to determine at least one object region and a category of each object region in the first image in response to a triggering operation of the filter addition control.
In some embodiments, the determining module 501 is configured to:
in response to an editing instruction for the first image, determining a selected area in the first image as an object area;
and identifying the category to which the object contained in the object region belongs (a sketch of this manual-selection path follows).
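For the manual-selection path, a minimal sketch (classify_object is a hypothetical classifier stub, not part of the disclosure) crops the user-selected rectangle and classifies the crop:

```python
import numpy as np

def classify_object(crop: np.ndarray) -> str:
    # Placeholder for an image-classification model inference.
    return "food"

def region_from_selection(first_image: np.ndarray,
                          x: int, y: int, w: int, h: int):
    """Treat the user-selected rectangle as the object region and identify
    the category of the object it contains."""
    crop = first_image[y:y + h, x:x + w]
    return crop, classify_object(crop)
```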
The embodiments of this application provide a scheme for adding filter effects to local areas of an image separately. By determining at least one object region in the first image and the category of each object region, a filter effect suited to each category can be selected for the corresponding object region, so that filter effects are matched intelligently to local areas of the first image. A second image edited with the determined filter effects therefore better meets the user's visual expectations, which improves the editing effect and the aesthetic quality of the image.
It should be understood that the division into functional modules shown in fig. 5 is only one example of how the image editing apparatus may implement its functions. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the terminal can be divided into different functional modules to perform all or part of the functions described above. In addition, the image editing apparatus provided in the above embodiment and the image editing method provided in the method embodiment belong to the same concept; its specific implementation process is detailed in the method embodiment and is not described here again.
Fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 600 is the terminal in the above embodiments. The terminal 600 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, a desktop computer, a head-mounted device, or any other intelligent terminal. The terminal 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 601 may be implemented in at least one hardware form among a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content to be displayed on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 602 is used to store at least one program code for execution by the processor 601 to implement the image editing methods provided by the method embodiments herein.
In some embodiments, the terminal 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display, it can also capture touch signals on or over its surface. A touch signal may be input to the processor 601 as a control signal for processing. In this case, the display 605 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, disposed on the front panel of the terminal 600; in other embodiments, there may be at least two displays 605, disposed on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the terminal 600. The display 605 may even be arranged in a non-rectangular, irregular shape, i.e., an irregularly shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize background blurring, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 601 for processing or inputting the electric signals to the radio frequency circuit 604 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 600. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.
In some embodiments, the terminal 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may identify the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 600. For example, the acceleration sensor 611 may be used to identify the components of the gravitational acceleration in three coordinate axes. The processor 601 may control the display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may identify a body direction and a rotation angle of the terminal 600, and the gyro sensor 612 may cooperate with the acceleration sensor 611 to acquire a 3D motion of the user on the terminal 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on the side bezel of the terminal 600 and/or underneath the display 605. When the pressure sensor 613 is disposed on the side bezel of the terminal 600, a holding signal applied by the user to the terminal 600 can be detected, and the processor 601 performs left/right-hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the display 605, the processor 601 controls operability controls on the UI according to the pressure applied by the user to the display 605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the terminal 600. When a physical button or vendor Logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of display screen 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the display screen 605 is increased; when the ambient light intensity is low, the display brightness of the display screen 605 is adjusted down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
A proximity sensor 616, also referred to as a distance sensor, is disposed on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front surface of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the terminal 600 gradually decreases, the processor 601 controls the display 605 to switch from the bright-screen state to the off-screen state; when the proximity sensor 616 detects that this distance gradually increases, the processor 601 controls the display 605 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not limit the terminal 600, which may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the image editing method in the above-described embodiments. Alternatively, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the image editing method in the above-described embodiments.
In some embodiments, the computer program according to the embodiments of the present disclosure may be deployed to be executed on one electronic device, on a plurality of electronic devices located at one site, or on a plurality of electronic devices distributed across a plurality of sites and interconnected by a communication network; a plurality of electronic devices distributed across a plurality of sites and interconnected by a communication network may constitute a blockchain system. The electronic device may be provided as a terminal.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (13)

1. An image editing method, characterized in that the method comprises:
in response to an editing instruction of a first image, determining at least one object area and a category of each object area in the first image, wherein the object area is an area containing an object to be identified, and the category is a category to which the object contained in the object area belongs;
determining a filter effect corresponding to each object region based on the category of each object region;
and respectively editing each object area in the first image based on the filter effect corresponding to each object area to obtain a second image.
2. The method of claim 1, wherein determining the filter effect corresponding to each of the object regions based on the category of each of the object regions comprises:
for each object region, determining a target class with a matching parameter higher than a matching threshold value with the class of the object region from a plurality of preset classes;
and determining the filter effect corresponding to the target category as the filter effect corresponding to the object area.
3. The method according to claim 1, wherein the editing each object region in the first image based on the filter effect corresponding to each object region to obtain a second image comprises:
respectively editing each object area in the first image based on the filter effect corresponding to each object area;
and generating the second image based on a background area and each edited object area, wherein the background area is an area except the at least one object area in the first image.
4. The method of claim 3, wherein generating the second image based on the background region and each edited object region comprises:
determining a filter effect corresponding to the first image;
editing the background area and each edited object area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
5. The method of claim 3, wherein generating the second image based on the background region and each edited object region comprises:
determining a filter effect corresponding to the first image;
editing the background area based on the filter effect corresponding to the first image;
and forming the second image by the edited background area and each edited object area.
6. The method of claim 4 or 5, wherein the determining the filter effect corresponding to the first image comprises:
determining a scene category to which a scene of the first image belongs;
and determining the filter effect corresponding to the scene type as the filter effect corresponding to the first image.
7. The method of claim 3, wherein after determining the filter effect corresponding to each of the object regions based on the category of each of the object regions, the method further comprises:
displaying a parameter adjusting control corresponding to each object region in the first image, wherein the parameter adjusting control is used for triggering filter parameters for adjusting a filter effect corresponding to the object region, and the filter parameters are used for describing the strength of the filter effect;
the editing each object region in the first image based on the filter effect corresponding to each object region includes:
and responding to the triggering operation of the parameter adjusting control, and editing the object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control.
8. The method according to any one of claims 1-5, further comprising:
displaying a parameter adjusting control corresponding to each object area in the second image, wherein the parameter adjusting control is used for triggering filter parameters for adjusting the filter effect added to the object area, and the filter parameters are used for describing the strength of the effect of the added filter effect;
and responding to the triggering operation of any parameter adjusting control, and editing an object area corresponding to the parameter adjusting control based on the filter parameter indicated by the parameter adjusting control to obtain a third image.
9. The method according to any one of claims 1-5, further comprising:
displaying the first image and a filter adding control in an image editing interface, wherein the filter adding control is used for triggering the addition of a filter effect to at least one object area in the first image;
the determining at least one object region in the first image and the category of each object region in response to the editing instruction for the first image comprises:
in response to a triggering operation of the filter addition control, at least one object region and a category of each object region in the first image are determined.
10. The method according to any one of claims 1-5, wherein determining at least one object region and a category of each object region in the first image in response to the editing instruction for the first image comprises:
in response to an editing instruction of the first image, determining a selected area in the first image as the object area;
and identifying the category to which the object contained in the object area belongs.
11. An image editing apparatus, characterized in that the apparatus comprises:
the image processing device comprises a determining module, a judging module and a judging module, wherein the determining module is used for responding to an editing instruction of a first image, determining at least one object area in the first image and the category of each object area, the object area is an area containing an object to be identified, and the category is a category to which the object contained in the object area belongs;
the determining module is further configured to determine, based on the category of each object region, a filter effect corresponding to each object region;
and the editing module is used for respectively editing each object area in the first image based on the filter effect corresponding to each object area to obtain a second image.
12. A terminal, characterized in that the terminal comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor, to cause the terminal to implement the image editing method according to any one of claims 1 to 10.
13. A computer-readable storage medium, having stored therein at least one program code, which is loaded and executed by a processor, to cause a terminal to implement the image editing method according to any one of claims 1 to 10.
CN202210350730.9A 2022-04-02 2022-04-02 Image editing method, device, terminal and storage medium Pending CN114972011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210350730.9A CN114972011A (en) 2022-04-02 2022-04-02 Image editing method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210350730.9A CN114972011A (en) 2022-04-02 2022-04-02 Image editing method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN114972011A true CN114972011A (en) 2022-08-30

Family

ID=82978533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210350730.9A Pending CN114972011A (en) 2022-04-02 2022-04-02 Image editing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114972011A (en)

Similar Documents

Publication Publication Date Title
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN109191549B (en) Method and device for displaying animation
CN110865754B (en) Information display method and device and terminal
CN110545476B (en) Video synthesis method and device, computer equipment and storage medium
CN108965757B (en) Video recording method, device, terminal and storage medium
CN112533017B (en) Live broadcast method, device, terminal and storage medium
CN110300274B (en) Video file recording method, device and storage medium
CN112363660B (en) Method and device for determining cover image, electronic equipment and storage medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN111142838B (en) Audio playing method, device, computer equipment and storage medium
CN112788359B (en) Live broadcast processing method and device, electronic equipment and storage medium
CN111880888B (en) Preview cover generation method and device, electronic equipment and storage medium
CN111368114B (en) Information display method, device, equipment and storage medium
CN113407291A (en) Content item display method, device, terminal and computer readable storage medium
CN112052897A (en) Multimedia data shooting method, device, terminal, server and storage medium
CN112822544B (en) Video material file generation method, video synthesis method, device and medium
CN112738606B (en) Audio file processing method, device, terminal and storage medium
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN113609358A (en) Content sharing method and device, electronic equipment and storage medium
CN110152309B (en) Voice communication method, device, electronic equipment and storage medium
CN110808021A (en) Audio playing method, device, terminal and storage medium
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN114972011A (en) Image editing method, device, terminal and storage medium
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN110888710A (en) Method and device for adding subtitles, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination