CN115880139A - Image processing method and device, electronic device and storage medium - Google Patents

Image processing method and device, electronic device and storage medium

Info

Publication number
CN115880139A
CN115880139A (application CN202211493154.XA)
Authority
CN
China
Prior art keywords
image
target area
target
mole
area
Prior art date
Legal status
Pending
Application number
CN202211493154.XA
Other languages
Chinese (zh)
Inventor
李英英 (Li Yingying)
Current Assignee
Wuxi Wentai Information Technology Co., Ltd.
Original Assignee
Wuxi Wentai Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Wuxi Wentai Information Technology Co., Ltd.
Priority to CN202211493154.XA
Publication of CN115880139A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The embodiment of the application relates to the technical field of image processing and discloses an image processing method and device, an electronic device, and a storage medium. The method includes: acquiring a first image and determining whether the first image includes a target area, the target area including at least an image of a mole; if the first image includes the target area, storing pixel information corresponding to the target area and performing a preset image processing operation on the first image to obtain a second image; and restoring, according to the stored pixel information corresponding to the target area, the image included in the target area to the corresponding area of the second image to obtain a target image. By implementing the embodiment of the application, the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of image processing technology, after capturing an image with an image pickup device, a user can apply image processing operations such as beautification and skin smoothing to the image to improve its visual effect.
However, in practice it is found that after excessive image processing is performed on an image, the result sometimes fails to fully meet the user's requirements, which degrades the user experience.
Disclosure of Invention
The embodiment of the application discloses an image processing method and device, an electronic device, and a storage medium, which can avoid the situation in which a processed image fails to meet the user's requirements because of excessive image processing.
A first aspect of an embodiment of the present application discloses an image processing method, where the method includes:
acquiring a first image, and determining whether the first image comprises a target area, wherein the target area at least comprises an image of a mole;
if the first image comprises a target area, storing pixel information corresponding to the target area, and executing preset image processing operation on the first image to obtain a second image;
and according to the stored pixel information corresponding to the target area, restoring the image included in the target area to the corresponding area of the second image to obtain a target image.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the determining whether the first image includes the target region includes:
determining, through a semantic segmentation model, whether the first image includes a target area, where the semantic segmentation model is obtained by training a semantic segmentation network on sample images and is used to add corresponding color labels to objects of different classes in an input image, different color labels being used to mark objects of different classes.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the determining, by using a semantic segmentation model, whether the first image includes the target region includes:
inputting the first image into a semantic segmentation model to obtain a third image output by the semantic segmentation model;
if the third image comprises an area marked by a first color label, determining that the first image comprises a target area, wherein the first color label indicates that an object marked by the first color label is a mole;
if the area marked by the first color label is not included in the third image, it is determined that the target area is not included in the first image.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the sample images include a sample image with a mole and a sample image without a mole, the sample image includes one or more of a background region, a face region, and a mole region, and the background region, the face region, and the mole region are respectively marked by different color labels.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the target area is an area of an image including a mole; and if the first image comprises a target area, storing pixel information corresponding to the target area, including:
if the first image comprises a target area, acquiring the target color of a mole included in the target area;
and if the target color is a first color, storing pixel information corresponding to the target area, wherein the first color comprises red, yellow, purple or green.
As an optional implementation manner, in the first aspect of this embodiment of the present application, if the first image includes a target area, storing pixel information corresponding to the target area includes:
if the first image comprises a target area, determining whether the first image belongs to a target identification photo type;
and if the first image does not belong to the target identification photo type, storing pixel information corresponding to the target area.
As an optional implementation manner, in the first aspect of this embodiment of the present application, after the determining whether the first image includes the target region, the method further includes:
if the first image does not comprise the target area, directly executing the preset image processing operation on the first image to obtain a fourth image;
and taking the fourth image as a target image.
A second aspect of the embodiments of the present application discloses an image processing apparatus, including:
the device comprises a determining unit, a storage unit and a restoring unit, wherein the determining unit is used for acquiring a first image and determining whether a target area is included in the first image, and the target area at least comprises an image of a mole;
the storage unit is used for storing pixel information corresponding to a target area when the first image is determined to comprise the target area, and executing preset image processing operation on the first image to obtain a second image;
and the restoring unit is used for restoring the image included in the target area to the area corresponding to the second image according to the stored pixel information corresponding to the target area so as to obtain the target image.
A third aspect of an embodiment of the present application discloses an electronic device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the image processing method disclosed by the first aspect of the embodiment of the application.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute an image processing method disclosed in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application discloses a computer program product, which, when run on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect of embodiments of the present application.
A sixth aspect of embodiments of the present application discloses an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform part or all of the steps of any one of the methods of the first aspect of the embodiments of the present application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
In the embodiment of the application, after the first image is acquired, if it is determined that the first image contains a target area including a mole image, pixel information corresponding to the target area may be stored, and a preset image processing operation is then performed on the first image to obtain a processed second image; further, the mole image may be restored to the corresponding area in the second image according to the stored pixel information, so as to obtain a target image in which the mole image is retained. Therefore, by implementing the embodiment of the application, a mole image that the user wants to keep can be prevented from being erased by the preset image processing operation, and the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided, thereby improving the user experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of an image processing method disclosed in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing effect disclosed in an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram of another image processing method disclosed in the embodiments of the present application;
FIG. 4 is a schematic flowchart of another image processing method disclosed in the embodiment of the present application;
FIG. 5 is a schematic structural diagram of an image processing apparatus disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second", "third" and "fourth", etc. in the description and claims of the present application are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and "having," and any variations thereof, of the embodiments of the present application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses an image processing method and device, an electronic device, and a storage medium, which can avoid the situation in which a processed image fails to meet the user's requirements because of excessive image processing.
The technical solution of the present application will be described in detail with reference to specific embodiments.
In order to describe the image processing method and device, electronic device, and storage medium disclosed in the embodiments of the present application more clearly, an image processing method in the related art is first introduced. In the related art, after a user obtains an image through a camera device, the image can be processed through image processing operations such as beautification and skin smoothing to improve its visual effect. In particular, for a face image, the related art can identify areas such as spots and moles through related algorithms and erase these areas, so that the face appears whiter and cleaner.
However, in practice it is found that some moles on the face are actually drawn deliberately by the user, for example a lucky mole drawn for decoration, and the user does not want such a mole to be erased during image processing. Directly applying the image processing method of the related art therefore cannot fully meet the user's requirements, which reduces the user experience.
Therefore, the image processing method disclosed in the embodiment of the application can prevent a mole image that the user wants to keep from being erased by the preset image processing operation, and can avoid the situation in which the processed image fails to meet the user's requirements because of excessive image processing, thereby improving the user experience.
Based on this, the image processing method disclosed in the embodiment of the present application is described below.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application. Optionally, the method may be applied to various electronic devices with image processing capability, for example: a mobile phone, a camera, a smart watch, or the like, but not limited thereto. The method may comprise the steps of:
102. acquiring a first image, and determining whether the first image comprises a target area, wherein the target area at least comprises an image of a mole.
In the embodiment of the present application, the electronic device may further have an image capturing capability in addition to the image processing capability, and the first image may be any image captured by the electronic device. Optionally, the first image may also be an image obtained by the electronic device from another electronic device or from the internet, which is not limited herein. Alternatively, the first image may be a face image including a human face.
As an alternative embodiment, the electronic device may determine whether the first image includes the target region through a preset image recognition algorithm. Optionally, the preset image recognition algorithm may include an edge recognition algorithm, a texture recognition algorithm, and the like, which are not limited herein.
Alternatively, the target area may be an area of the image that includes a mole. It will be appreciated that a tattoo or an accessory on the user may also serve as decoration, so the user may not want them to be erased during image processing; accordingly, in other alternative embodiments, the target area may be an area including a tattoo image or an area including an accessory image, which is not limited herein.
In an optional embodiment, the electronic device may identify target contour features of each object in the first image through an edge recognition algorithm, and determine that the first image includes the target area if the target contour features match any one of first contour features corresponding to a mole, second contour features corresponding to a tattoo, and third contour features corresponding to an accessory.
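As a rough illustration of this contour-matching idea (a sketch only, not the application's own algorithm), the following uses OpenCV edge detection and shape matching; the reference contour and the matching threshold are assumptions introduced here for illustration.

```python
import cv2

def find_mole_like_regions(first_image, reference_contour, match_threshold=0.15):
    """Return contours in first_image whose shape roughly matches a reference
    mole contour; reference_contour and match_threshold are illustrative."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge recognition
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    matches = []
    for contour in contours:
        # cv2.matchShapes returns 0 for identical shapes; smaller is more similar.
        score = cv2.matchShapes(contour, reference_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < match_threshold:
            matches.append(contour)
    return matches
```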
104. And if the first image comprises the target area, storing pixel information corresponding to the target area, and executing preset image processing operation on the first image to obtain a second image.
In the embodiment of the application, when determining that the first image includes the target area, the electronic device can extract the pixel information corresponding to the target area and store the extracted pixel information in a corresponding storage medium so that it can be retrieved and used later.
Optionally, the pixel information may include first pixel information corresponding to each pixel point included in the target area, where the first pixel information may include the coordinate position of the pixel point in the first image, the color parameters of the pixel point (for example, RGB color parameters, i.e., the proportions of red, green, and blue), and the like, which is not limited herein.
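A minimal sketch of one possible representation of this first pixel information is shown below, assuming the target area is given as a boolean mask; the dictionary layout is purely illustrative.

```python
import numpy as np

def extract_pixel_info(first_image: np.ndarray, target_mask: np.ndarray):
    """For every pixel in the target area, record its (row, col) coordinate in
    the first image and its color values (BGR order if loaded with OpenCV)."""
    ys, xs = np.nonzero(target_mask)
    return [
        {"coord": (int(y), int(x)), "color": tuple(int(v) for v in first_image[y, x])}
        for y, x in zip(ys, xs)
    ]
```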
Further, the electronic device may perform a preset image processing operation on the first image to obtain a processed second image. Optionally, the preset image processing operation may include, but is not limited to, a beautification operation, a skin-smoothing operation, and the like. Optionally, the preset image processing operation may be set by a developer according to development experience, or may be set by the user according to usage requirements, which is not limited herein.
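The application does not fix a concrete beautification or skin-smoothing algorithm; purely as an illustrative stand-in, a bilateral filter is a common skin-smoothing choice because it smooths skin texture while preserving edges.

```python
import cv2

def preset_image_processing(first_image):
    """Illustrative stand-in for the preset image processing operation."""
    return cv2.bilateralFilter(first_image, d=9, sigmaColor=75, sigmaSpace=75)
```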
106. And according to the stored pixel information corresponding to the target area, restoring the image included in the target area to the corresponding area of the second image to obtain a target image.
In the embodiment of the present application, after the electronic device performs the preset image processing operation on the first image, the target area in the first image is inevitably processed as well, so the image in the target area may be faded or erased. In this regard, the electronic device may restore each pixel point included in the target area to the corresponding area in the second image according to the stored pixel information corresponding to the target area, so as to obtain a target image in which the image of the target area is retained.
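Continuing the illustrative representation sketched above, the restoration step could look as follows (a hypothetical helper, not the application's implementation): each stored pixel is written back at its original coordinate in the processed second image.

```python
import numpy as np

def restore_pixel_info(second_image: np.ndarray, pixel_info) -> np.ndarray:
    """Write each stored pixel back into the corresponding position of the
    processed (second) image to obtain the target image."""
    target_image = second_image.copy()
    for entry in pixel_info:
        y, x = entry["coord"]
        target_image[y, x] = entry["color"]
    return target_image
```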
Referring to fig. 2, fig. 2 is a schematic diagram illustrating an image processing effect according to an embodiment of the present disclosure. After the first image 200 is subjected to the preset image processing operation, a second image 210 with the lucky mole A erased is obtained; after restoring the lucky mole A to the corresponding area of the second image according to the stored pixel information corresponding to the lucky mole A, the electronic device can obtain the target image 220 in which the lucky mole A is retained.
Fig. 2 is provided for convenience of description only and should not be construed as limiting the embodiment of the present application.
By implementing the method disclosed in each of the above embodiments, after the first image is obtained, if it is determined that the first image contains a target area including a mole image, pixel information corresponding to the target area may be stored, and a preset image processing operation is then performed on the first image to obtain a processed second image; further, the mole image may be restored to the corresponding area in the second image according to the stored pixel information, so as to obtain a target image in which the mole image is retained. Therefore, a mole image that the user wants to keep can be prevented from being erased by the preset image processing operation, and the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided, thereby improving the user experience.
Referring to fig. 3, fig. 3 is a schematic flowchart of another image processing method disclosed in the embodiment of the present application. Optionally, the method may be applied to various electronic devices with image processing capability, for example: a mobile phone, a camera, a smart watch, or the like, but not limited thereto. The method may comprise the steps of:
302. acquiring a first image, and determining whether the first image comprises a target area or not through a semantic segmentation model.
As an alternative embodiment, after acquiring the first image, the electronic device may determine whether the first image includes the target region through a semantic segmentation model. The semantic segmentation model can add corresponding color labels to different classes of objects included in the input first image, and then the subsequent electronic device can determine whether the first image includes the target area through the color labels.
As an optional implementation manner, after acquiring the first image, the electronic device may input the first image to the semantic segmentation model, and then the semantic segmentation model may add different color labels to different classes of objects in the first image, and output a third image including the color labels.
Further, the electronic device may identify each color label included in the third image. If the third image includes an area marked by the first color label, the electronic device may determine that the first image includes the target area and determine the area marked by the first color label as the target area, where the first color label indicates that the marked object is a mole.
Optionally, if the third image includes an area marked by the second color label, the electronic device may determine that the first image includes the target area and determine the area marked by the second color label as the target area, where the second color label indicates that the marked object is a tattoo.
Optionally, if the third image includes an area marked by the third color label, the electronic device may determine that the first image includes the target area and determine the area marked by the third color label as the target area, where the third color label indicates that the marked object is an accessory.
In another alternative embodiment, if the third image does not include an area marked by the first color label, the second color label, or the third color label, it may be determined that the first image does not include the target area.
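A minimal sketch of this label check is shown below. The concrete label colors chosen for mole, tattoo, and accessory are assumptions; the application only requires that different classes receive different color labels.

```python
import numpy as np

# Assumed label colors (BGR) in the segmentation model's output third image.
FIRST_COLOR = (0, 0, 255)    # mole
SECOND_COLOR = (255, 0, 0)   # tattoo
THIRD_COLOR = (0, 255, 0)    # accessory

def find_target_area(third_image: np.ndarray):
    """Return (object kind, boolean mask) for the first matching color label,
    or None if the third image contains no area marked by any of the labels."""
    labels = (("mole", FIRST_COLOR), ("tattoo", SECOND_COLOR), ("accessory", THIRD_COLOR))
    for kind, color in labels:
        mask = np.all(third_image == color, axis=-1)
        if mask.any():
            return kind, mask
    return None
```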
By implementing the method, the electronic device can determine whether the first image includes the target region through the semantic segmentation model, and can determine whether the first image includes the target region more quickly and accurately compared with other methods such as image recognition.
As an alternative embodiment, the semantic segmentation model may be obtained by training a semantic segmentation network on sample images. Optionally, the sample images may include sample images with moles and sample images without moles, which is not limited herein. Each sample image can be divided into one or more of a background region, a face region, and a mole region, and optionally, the background region, the face region, and the mole region can be marked with different color labels respectively.
Optionally, the background region, the face region, and the mole region in a sample image may be determined through an edge extraction algorithm and then marked with different color labels. In another embodiment, after the background region, the face region, and the mole region in the sample image are determined through the edge extraction algorithm, the division result can be output, and developers can then correct the division result to obtain a corrected result and mark the background region, the face region, and the mole region in the corrected result with different color labels, which improves the marking accuracy and the quality of the sample image.
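As an illustration of how such color-labeled sample images might be assembled from per-region masks (whether produced by edge extraction or corrected by a developer), consider the sketch below; the specific label colors are assumptions.

```python
import numpy as np

# Assumed color labels (BGR) for the three region classes.
LABEL_COLORS = {
    "background": (0, 0, 0),
    "face": (0, 255, 255),
    "mole": (0, 0, 255),
}

def build_label_image(shape, region_masks):
    """Combine per-class boolean masks into a single color-labeled sample image.

    shape: (height, width) of the sample image.
    region_masks: dict mapping "background"/"face"/"mole" to boolean masks.
    """
    label_image = np.zeros((*shape, 3), dtype=np.uint8)
    for name, mask in region_masks.items():
        label_image[mask] = LABEL_COLORS[name]
    return label_image
```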
In an alternative embodiment, the semantic segmentation network can be a hypernetwork-based HyperSeg model built on a U-Net network structure. Optionally, the HyperSeg model may include an encoding network, a decoding network, and a head module; the encoding network is used to extract image features, the decoding network is used to color and segment the original image based on the image features, and the head module is used to generate the weights of the decoding network, thereby accelerating the training process.
Optionally, the electronic device may train the semantic segmentation network on the sample images to obtain the semantic segmentation model. In another embodiment, the semantic segmentation network may be trained on the sample images by a training device (e.g., a computer or a server) to obtain the semantic segmentation model, and the training device may then send the trained semantic segmentation model to the electronic device; in this way, the electronic device does not need to execute the training task, which reduces its computational load and power consumption.
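Under these assumptions, the training step could follow a standard supervised segmentation loop. The sketch below is a generic PyTorch stand-in (it is not the HyperSeg implementation), assuming a data loader that yields images paired with per-pixel class indices for background, face, and mole.

```python
import torch
import torch.nn as nn

def train_segmentation_model(model, train_loader, epochs=10, lr=1e-3, device="cpu"):
    """Generic training loop for a semantic segmentation network.

    train_loader is assumed to yield (images, labels) with images of shape
    (N, 3, H, W) and labels of shape (N, H, W) holding class indices 0..2
    (background / face / mole).
    """
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)            # (N, num_classes, H, W)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```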
By implementing the method, the semantic segmentation network can be trained on high-quality sample images to obtain a semantic segmentation model with better performance, which improves the accuracy of subsequently determining the target area.
304. And if the first image comprises the target area, storing pixel information corresponding to the target area, and executing preset image processing operation on the first image to obtain a second image.
As an alternative implementation, if it is determined that the target area is not included in the first image, the electronic device may directly perform a preset image processing operation on the first image to obtain a fourth image, and use the fourth image as the target image.
By implementing the method, the preset image processing operation can be performed directly on the first image when the first image does not include the target area, so the electronic device does not need to store and restore pixel information, which reduces its computational load.
As an alternative implementation, if the target area is included in the first image and it is determined that the preset image processing operation to be performed on the first image is a target image processing operation that does not process the target area, the electronic device may directly perform the preset image processing operation on the first image to obtain a fifth image, and use the fifth image as the target image.
Optionally, the target area is generally an area of an image including moles, and the moles are generally located in the face area, and the target image processing operation may include a background blurring operation, a hair blurring operation, and the like, which is not limited herein.
By implementing the method, the preset image processing operation can be performed directly on the first image when it is determined that the operation does not process the target area; the target area is thus preserved without the electronic device having to store and restore pixel information, which reduces its computational load.
As an optional implementation manner, the first image may be captured by the electronic device, and the electronic device may obtain the target position at which the first image was captured. If the target position is a sports competition venue, the electronic device may further determine, when the first image is determined to include the target area, whether the image included in the target area is a national flag image; if it is, the electronic device performs the step of storing the pixel information corresponding to the target area and performing the preset image processing operation on the first image to obtain the second image.
It can be understood that when cheering for athletes of his or her own country at a stadium, a user often paints the national flag on the face or another part of the body and does not want the painted flag to be erased when shooting. By retaining the national flag, the electronic device makes the subsequently generated target image better meet the user's needs, thereby improving the user experience.
306. And according to the stored pixel information corresponding to the target area, restoring the image included in the target area to the corresponding area of the second image to obtain a target image.
As an optional implementation manner, after restoring the image included in the target area to the corresponding area of the second image to obtain the target image, the electronic device may delete the stored pixel information corresponding to the target area to free its storage space.
By implementing the method disclosed in each of the above embodiments, a mole image that the user wants to keep can be prevented from being erased by the preset image processing operation, and the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided, thereby improving the user experience. Whether the first image includes the target area can be determined through the semantic segmentation model, which is faster and more accurate than other methods such as image recognition. The semantic segmentation network can be trained on high-quality sample images to obtain a semantic segmentation model with better performance, improving the accuracy of subsequently determining the target area. In addition, the preset image processing operation can be performed directly on the first image when it is determined that the operation does not process the target area, so the target area is preserved without the electronic device having to store and restore pixel information, which reduces its computational load.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating another image processing method according to an embodiment of the present disclosure. Optionally, the method may be applied to various electronic devices with image processing capability, for example: a mobile phone, a camera, a smart watch, or the like, but not limited thereto. The method may comprise the steps of:
402. acquiring a first image, and determining whether the first image comprises a target area, wherein the target area at least comprises an image of a mole.
404. And if the first image comprises the target area, acquiring the target color of the image comprised by the target area.
In this embodiment, the target area may be an area of the image that includes a mole. Therefore, when it is determined that the first image includes the target area, the pixel information of each pixel point included in the target area may be further obtained, and the target color of the mole included in the target area may then be determined according to the pixel information.
406. And if the target color is the first color, storing the pixel information corresponding to the target area, and executing preset image processing operation on the first image to obtain a second image.
It will be appreciated that a mole the user needs to keep is typically a lucky mole that the user has drawn deliberately for decoration, and the color of a lucky mole is typically red, yellow, or the like. Optionally, the electronic device may further determine whether the target color of the mole is a first color, which may include, but is not limited to, red, yellow, purple, or green, the common colors of lucky moles.
And if the target color is determined to be the first color, the electronic device may store the pixel information corresponding to the target area, and perform a preset image processing operation on the first image to obtain a second image.
If it is determined that the target color is not the first color, the electronic device may treat the mole included in the target area as a mole that the user does not need to keep, and may then directly perform the preset image processing operation on the first image; in this way, the electronic device does not need to store and restore pixel information, which reduces its computational load.
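A rough sketch of this color check is given below, using HSV hue ranges as illustrative thresholds for red, yellow, green, and purple; the ranges and the saturation cutoff are assumptions, not values specified by the application.

```python
import cv2
import numpy as np

# Illustrative hue ranges (OpenCV hue is 0-179) for the "first color" set.
FIRST_COLOR_HUES = {
    "red": [(0, 10), (170, 179)],
    "yellow": [(20, 35)],
    "green": [(40, 85)],
    "purple": [(130, 160)],
}

def target_color_is_first_color(first_image, target_mask, min_saturation=60):
    """Check whether the mean color of the mole pixels is red/yellow/purple/green."""
    if not target_mask.any():
        return False
    hsv = cv2.cvtColor(first_image, cv2.COLOR_BGR2HSV)
    hue, sat, _ = hsv[target_mask].mean(axis=0)
    if sat < min_saturation:  # dull or dark moles are treated as not decorative
        return False
    return any(lo <= hue <= hi
               for ranges in FIRST_COLOR_HUES.values()
               for lo, hi in ranges)
```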
By implementing the method, the target area is retained only when the mole included in the target area is determined to be a lucky mole that the user needs to keep, so that the subsequently generated target image, in which the target area is retained, better meets the user's requirements, thereby improving the user experience.
As an alternative embodiment, when the electronic device determines that the first image includes the target area, the electronic device may further determine whether the first image belongs to a target identification photo type; if the first image does not belong to the target identification photo type, the step of storing the pixel information corresponding to the target area is performed, where the target identification photo type may include, but is not limited to, identity card photos, passport photos, and the like.
If the first image belongs to the target identification photo type, the electronic device may directly perform the preset image processing operation on the first image; in this way, the electronic device does not need to store and restore pixel information, which reduces its computational load.
It can be understood that an identification photo usually does not allow the face region to be retouched or moles to be removed. By implementing the method, the electronic device retains the target area only when it is determined that the first image will not be used as an identification photo, which improves the flexibility of the method and makes the subsequently generated target image better meet the user's requirements.
As an optional implementation manner, the electronic device may acquire the background area of the first image; if the background area is a solid-color area of a second color, it may be determined that the first image belongs to the target identification photo type, and if the background area is not a solid-color area of the second color, it may be determined that the first image does not belong to the target identification photo type, where the second color may include red, blue, or the like, which is not limited herein.
It can be understood that the background of an identification photo is usually pure red or pure blue. The electronic device can therefore determine whether the first image belongs to the target identification photo type through the color of the background area of the first image; this determination method is simple and accurate, which reduces the implementation difficulty of the method.
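A minimal sketch of this background-color check is shown below; the border width, the uniformity threshold, and the channel margins are illustrative assumptions.

```python
import numpy as np

def looks_like_id_photo(first_image: np.ndarray, border: int = 20,
                        std_threshold: float = 12.0) -> bool:
    """Heuristic check: sample the top border strip (usually pure background in
    an identification photo) and test that it is a near-uniform red or blue area."""
    strip = first_image[:border].reshape(-1, 3).astype(np.float32)  # BGR pixels
    if strip.std(axis=0).max() > std_threshold:  # not a solid-color area
        return False
    b, g, r = strip.mean(axis=0)
    is_blue = b > 120 and b > r + 40 and b > g + 40
    is_red = r > 120 and r > b + 40 and r > g + 40
    return is_blue or is_red
```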
408. And according to the stored pixel information corresponding to the target area, restoring the image included in the target area to the corresponding area of the second image to obtain a target image.
As an optional implementation manner, after restoring the image included in the target area to the corresponding area of the second image to obtain the target image, the electronic device may output the target image for the user's reference.
As another optional implementation manner, after performing the preset image processing operation on the first image to obtain the second image, the electronic device may store the second image; and after restoring the image included in the target area to the corresponding area of the second image to obtain the target image, the electronic device may output the second image and the target image separately for the user's reference.
By implementing the method, the electronic device can output both the target image in which the target area is retained and the second image in which it is not, which gives the user more options and improves the user experience.
By implementing the method disclosed in each of the above embodiments, a mole image that the user wants to keep can be prevented from being erased by the preset image processing operation, and the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided, thereby improving the user experience. The target area is retained only when the mole included in the target area is determined to be a lucky mole that the user needs to keep, so that the subsequently generated target image better meets the user's requirements. In addition, the electronic device retains the target area only when it is determined that the first image will not be used as an identification photo, which improves the flexibility of the method and makes the subsequently generated target image better meet the user's requirements.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure. Optionally, the apparatus may be applied to various electronic devices with image processing capability, such as: a mobile phone, a camera, a smart watch, or the like, but not limited thereto. The apparatus may comprise a first determining unit 502, a storing unit 504 and a restoring unit 506, wherein:
a first determining unit 502, configured to acquire a first image, and determine whether the first image includes a target area, where the target area includes at least an image of a mole;
a storage unit 504, configured to store pixel information corresponding to a target area when it is determined that the first image includes the target area, and perform a preset image processing operation on the first image to obtain a second image;
and a restoring unit 506, configured to restore, according to the stored pixel information corresponding to the target area, the image included in the target area to an area corresponding to the second image, so as to obtain the target image.
By implementing the apparatus, after the first image is acquired, if a target area including a mole image exists in the first image, pixel information corresponding to the target area can be stored, and a preset image processing operation is then performed on the first image to obtain a processed second image; further, the mole image may be restored to the corresponding area in the second image according to the stored pixel information, so as to obtain a target image in which the mole image is retained. Therefore, a mole image that the user wants to keep can be prevented from being erased by the preset image processing operation, and the situation in which the processed image fails to meet the user's requirements because of excessive image processing can be avoided, thereby improving the user experience.
As an optional implementation manner, the first determining unit 502 is further configured to determine whether the first image includes the target region through a semantic segmentation model, where the semantic segmentation model is obtained by training a semantic segmentation network according to the sample image, and the semantic segmentation model is configured to add corresponding color labels to different classes of objects included in the input image, where the different color labels are used to label the different classes of objects.
By implementing the device, whether the first image comprises the target area can be determined through the semantic segmentation model, and whether the first image comprises the target area can be determined more quickly and accurately compared with other methods such as image recognition.
As an optional implementation manner, the first determining unit 502 is further configured to input the first image into the semantic segmentation model to obtain a third image output by the semantic segmentation model; if the third image comprises the area marked by the first color label, determining that the first image comprises the target area, wherein the first color label represents that the marked object is a mole; and if the region marked by the first color label is not included in the third image, determining that the target region is not included in the first image.
By implementing the device, whether the first image comprises the target area can be determined through the semantic segmentation model, and compared with other methods such as image recognition, whether the first image comprises the target area can be determined more quickly and accurately.
As an alternative embodiment, the sample image includes a sample image with a mole and a sample image without a mole, the sample image includes one or more of a background region, a face region, and a mole region, and the background region, the face region, and the mole region are respectively marked by different color labels.
By implementing the device, the semantic segmentation network can be trained through the high-quality sample image so as to obtain a semantic segmentation model with better effect, and the accuracy of subsequently determining the target area is improved.
As an alternative embodiment, the target area is an area of the image that includes a mole; the storage unit 504 is further configured to, when it is determined that the first image includes the target area, obtain a target color of a mole included in the target area; and if the target color is a first color, storing pixel information corresponding to the target area, wherein the first color comprises red, yellow, purple or green.
By implementing the apparatus, the target area is retained only when the mole included in the target area is determined to be a lucky mole that the user needs to keep, so that the subsequently generated target image, in which the target area is retained, better meets the user's requirements, thereby improving the user experience.
As an alternative embodiment, the storage unit 504 is further configured to determine whether the first image belongs to a target identification photo type when the target area is included in the first image; and if the first image does not belong to the target identification photo type, storing pixel information corresponding to the target area.
By implementing the device, the target area can be reserved only when the first image is determined not to be used as the identification photo, so that the flexibility of the method is improved, and the subsequently generated target image can better meet the requirements of users.
As an alternative implementation, the apparatus shown in fig. 5 may further include a processing unit and a second determining unit, which are not shown in the drawing, wherein:
the processing unit is used for, after it is determined whether the first image includes the target area, directly performing the preset image processing operation on the first image to obtain a fourth image if the first image does not include the target area;
and a second determination unit configured to take the fourth image as the target image.
By implementing the apparatus, the preset image processing operation can be performed directly on the first image when the first image does not include the target area, so the electronic device does not need to store and restore pixel information, which reduces its computational load.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 6, the electronic device may include:
a memory 601 in which executable program code is stored;
a processor 602 coupled to a memory 601;
the processor 602 calls the executable program code stored in the memory 601 to execute the image processing method disclosed in the above embodiments.
The embodiment of the application discloses a computer readable storage medium, which stores a computer program, wherein the computer program enables a computer to execute the image processing method disclosed by each embodiment.
The embodiment of the present application further discloses an application publishing platform, wherein the application publishing platform is used for publishing a computer program product, and when the computer program product runs on a computer, the computer is caused to execute part or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the sequence numbers of the above-mentioned processes do not imply a necessary order of execution, and the order of execution of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-described methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by program instructions in conjunction with hardware, and the program may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic tape, or any other computer-readable medium capable of storing data.
The image processing method and apparatus, the electronic device, and the storage medium disclosed in the embodiments of the present application are described in detail above, and specific examples are applied herein to explain the principles and embodiments of the present application, and the description of the embodiments above is only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific implementation manner and the application scope may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first image, and determining whether the first image comprises a target area, wherein the target area at least comprises an image of a mole;
if the first image comprises a target area, storing pixel information corresponding to the target area, and executing preset image processing operation on the first image to obtain a second image;
and according to the stored pixel information corresponding to the target area, restoring the image included in the target area to the corresponding area of the second image to obtain a target image.
2. The method of claim 1, wherein determining whether a target region is included in the first image comprises:
determining whether the first image comprises a target area or not through a semantic segmentation model, wherein the semantic segmentation model is obtained by training a semantic segmentation network according to a sample image, and is used for adding corresponding color labels to different types of objects in an input image, and the different color labels are used for marking the different types of objects.
3. The method of claim 2, wherein the determining whether the first image includes the target region through a semantic segmentation model comprises:
inputting the first image into a semantic segmentation model to obtain a third image output by the semantic segmentation model;
if the third image comprises an area marked by a first color label, determining that the first image comprises a target area, wherein the first color label indicates that an object marked by the first color label is a mole;
and if the third image does not comprise the area marked by the first color label, determining that the first image does not comprise the target area.
4. The method according to claim 2 or 3, wherein the sample images include a sample image with a mole and a sample image without a mole, the sample images include one or more of a background region, a face region and a mole region, and the background region, the face region and the mole region are respectively marked by different color labels.
5. The method of claim 1, wherein the target area is an area of an image comprising a mole; and if the first image includes a target area, storing pixel information corresponding to the target area, including:
if the first image comprises a target area, acquiring the target color of a mole contained in the target area;
and if the target color is a first color, storing pixel information corresponding to the target area, wherein the first color comprises red, yellow, purple or green.
6. The method according to claim 1, wherein if the first image includes a target area, storing pixel information corresponding to the target area comprises:
if the first image comprises a target area, determining whether the first image belongs to a target identification photo type;
and if the first image does not belong to the target identification photo type, storing pixel information corresponding to the target area.
7. The method of any of claims 1-3, 5, or 6, wherein after said determining whether a target region is included in said first image, said method further comprises:
if the first image does not comprise the target area, directly executing the preset image processing operation on the first image to obtain a fourth image;
and taking the fourth image as a target image.
8. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises a first determining unit, a storage unit and a restoring unit, wherein the first determining unit is used for acquiring a first image and determining whether a target area is included in the first image, and the target area at least comprises an image of a mole;
the storage unit is used for storing pixel information corresponding to a target area when the first image is determined to comprise the target area, and executing preset image processing operation on the first image to obtain a second image;
and the restoring unit is used for restoring the image included in the target area to the area corresponding to the second image according to the stored pixel information corresponding to the target area so as to obtain the target image.
9. An electronic device comprising a memory storing executable program code, and a processor coupled to the memory; wherein the processor invokes the executable program code stored in the memory to perform the method of any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211493154.XA 2022-11-25 2022-11-25 Image processing method and device, electronic device and storage medium Pending CN115880139A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211493154.XA CN115880139A (en) 2022-11-25 2022-11-25 Image processing method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211493154.XA CN115880139A (en) 2022-11-25 2022-11-25 Image processing method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN115880139A true CN115880139A (en) 2023-03-31

Family

ID=85764033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211493154.XA Pending CN115880139A (en) 2022-11-25 2022-11-25 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115880139A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination