CN115423827B - Image processing method, image processing device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115423827B
CN115423827B (application number CN202211366792.5A)
Authority
CN
China
Prior art keywords
image
beautification
objects
target
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211366792.5A
Other languages
Chinese (zh)
Other versions
CN115423827A (en)
Inventor
胡晓文 (Hu Xiaowen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211366792.5A
Publication of CN115423827A
Application granted
Publication of CN115423827B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a storage medium, which relate to the technical field of artificial intelligence, specifically to the technical fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and can be applied to scenes such as image processing and the metaverse. The implementation scheme is as follows: obtaining a target image, wherein the target image comprises a plurality of objects; performing segmentation processing on the target image to obtain a plurality of segmented images, the plurality of segmented images including an object segmentation image corresponding to each of the plurality of objects; and performing beautification processing on the plurality of objects in the target image, respectively, based on the plurality of object segmentation images to obtain a target object beautification image corresponding to the target image.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of artificial intelligence, specifically to the technical fields of augmented reality, virtual reality, computer vision, deep learning, and the like, and in particular to an image processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline that studies making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), covering technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies, among others.
Artificial-intelligence-based image processing technology beautifies objects in an image by processing the image so that the objects reach a preset color or shape, making the image clearer and more attractive and thereby improving the user experience.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides an image processing method, an apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided an image processing method including: obtaining a target image, wherein the target image comprises a plurality of objects; performing segmentation processing on the target image to obtain a plurality of segmented images, wherein the plurality of segmented images comprise an object segmented image corresponding to each object in the plurality of objects; and performing beautification processing on the plurality of objects in the target image based on the plurality of object segmentation images respectively to obtain a target object beautification image corresponding to the target image.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: a target image obtaining unit configured to obtain a target image including a plurality of objects; an image segmentation unit configured to perform segmentation processing on the target image, obtaining a plurality of segmented images including an object segmented image corresponding to each of the plurality of objects; and a first beautification unit configured to perform beautification processing on the plurality of objects in the target image based on a plurality of object segmentation images corresponding to the plurality of objects, respectively, so as to obtain a target object beautification image corresponding to the target image.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the method according to embodiments of the present disclosure when executed by a processor.
According to one or more embodiments of the present disclosure, an object segmentation image corresponding to each object is obtained by segmenting the target image, and each object is beautified based on its segmentation image, so that the beautification of each object in the target image is not affected by the other objects and the beautification effect is improved; in particular, when the plurality of objects occlude one another, the beautification effect can be significantly improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the embodiments and, together with the description, serve to explain the exemplary implementations of the embodiments. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 shows a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to embodiments of the present disclosure;
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a process of performing beautification processing on a plurality of objects in a target image based on a plurality of object segmentation images, respectively, in an image processing method according to an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a process of performing, for each of a plurality of objects, beautification processing on the object in its corresponding object segmentation image, in an image processing method according to an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a process of obtaining an object beautification image corresponding to a first object based on a completed image, in an image processing method according to an embodiment of the present disclosure;
FIG. 6 shows a flowchart of a process of performing beautification processing, based on a background segmentation image, on a second area different from a first area where a plurality of objects are located in a target object beautification image to obtain a target beautification image, in an image processing method according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of the structure of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 8 shows a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with embodiments of the present disclosure. Referring to FIG. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In an embodiment of the present disclosure, the server 120 may run one or more services or software applications that enable the execution of the image processing method according to an embodiment of the present disclosure.
In some embodiments, the server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating client device 101, client device 102, client device 103, client device 104, client device 105, and/or client device 106 may, in turn, utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use client device 101, client device 102, client device 103, client device 104, client device 105, and/or client device 106 to receive the target object beautification image obtained in the image processing method according to the embodiment of the present disclosure. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablets, personal digital assistants (PDAs), and the like. Wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming system may include a variety of handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 can also run any of a variety of additional server applications and/or mid-tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like.
In some implementations, server 120 may include one or more applications to analyze and merge data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client device 101, client device 102, client device 103, client device 104, client device 105, and/or client device 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system, intended to overcome the shortcomings of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The database 130 may be of different types. In certain embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to the commands.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the related art, human body key point recognition or human body contour recognition is used to obtain human body contour key points in an image, and the human bodies in the image are beautified based on those contour key points. However, when multiple people appear in the image, beautification of the multiple people based on contour key points obtained in this way cannot process all the human bodies at once, or yields a poor beautification effect, especially when the people occlude one another.
According to an aspect of the present disclosure, there is provided an image processing method. Referring to fig. 2, an image processing method 200 according to some embodiments of the present disclosure includes:
step S210: obtaining a target image, wherein the target image comprises a plurality of objects;
step S220: performing segmentation processing on the target image to obtain a plurality of segmented images, wherein the plurality of segmented images comprise an object segmented image corresponding to each object in the plurality of objects; and
step S230: and performing beautification processing on the plurality of objects in the target image respectively based on the plurality of object segmentation images to obtain a target object beautification image corresponding to the target image.
The target image is segmented to obtain an object segmentation image corresponding to each object, and each object is beautified based on its segmentation image, so that the beautification of each object in the target image is not influenced by the other objects and the beautification effect is improved; in particular, when the plurality of objects occlude one another, the beautification effect can be remarkably improved.
In some embodiments, the target image may be any image containing multiple objects, such as an image captured by a camera.
In some embodiments, each of the plurality of subjects may be a human, an animal, or any other subject, etc.
In some embodiments, the plurality of segmented images are obtained by inputting the target image into an image segmentation network.
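As a minimal, self-contained illustration of what such a network's output provides, the sketch below separates objects in a binary mask by 4-connected component labelling (the approach named in classification G06T7/187). This is a hypothetical stand-in, not the disclosure's method: a real system would use a trained segmentation network, and `label_components` is an illustrative name.

```python
# Hypothetical sketch (not from the disclosure): 4-connected component
# labelling on a binary mask, standing in for the per-object masks that
# an image segmentation network would produce.
def label_components(mask):
    """Return (labels, count): labels[y][x] is 0 for background,
    otherwise the 1-based index of the object the pixel belongs to."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                count += 1
                stack = [(y, x)]  # iterative flood fill
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy][cx] and labels[cy][cx] == 0):
                        labels[cy][cx] = count
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, count

mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
labels, count = label_components(mask)
print(count)  # 2
```

Each labelled region then corresponds to one object segmentation image in the terminology of the disclosure.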
According to an embodiment of the present disclosure, the plurality of segmented images includes an object segmented image corresponding to each of a plurality of objects in the target image, wherein the object segmented image contains the respective object.
In some embodiments, in the object segmentation image, the region corresponding to the object is identical to that region in the target image, and every pixel position outside that region has a preset pixel value, for example 255.
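The preset-value convention described above can be sketched as follows. The function name and the nested-list representation of images are illustrative assumptions; 255 is the preset value from the text.

```python
# Illustrative helper (name and list-of-lists image representation are
# assumptions): keep pixels inside the object's mask, and write the
# preset pixel value (255, as in the text) at every other position.
PRESET = 255

def object_segmented_image(image, mask):
    return [[pix if m else PRESET for pix, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

print(object_segmented_image([[10, 20], [30, 40]],
                             [[True, False], [False, True]]))
# [[10, 255], [255, 40]]
```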
In some embodiments, the target image includes, in addition to the areas where the plurality of objects are located, a background area different from the area where each of the plurality of objects is located; the plurality of segmented images obtained by segmenting the target image then include an object segmentation image corresponding to each of the plurality of objects and a background segmentation image corresponding to the background area. It is understood that the background segmentation image includes the background area of the target image as well as other regions different from the background area, and those other regions may have a preset pixel value, for example 255.
In some embodiments, the background area may be a solid-color area having a uniform pixel value at each pixel position, or may be an image area obtained by photographing other objects different from the plurality of objects; for example, the other objects may be plants, the sky, roads, and the like.
In some embodiments, for each object segmentation image, the corresponding object in the target image is beautified by performing beautification processing, based on the object segmentation image, on the area of the target image to which the object segmentation image corresponds.
In some embodiments, beautifying the object in the target image may include adjusting a pixel value of a region in which the object is located in the target image such that the pixel value of the region in which the object is located in the target image reaches a preset pixel value.
In some embodiments, beautification processing of an object in a target image may include adjusting a size of an area of the object in the target image such that the size is reduced or increased to a preset size.
In some embodiments, the preset pixel value and the preset size may be set by a user.
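A hedged sketch of the pixel-value adjustment described in the preceding paragraphs: each pixel is moved toward the user-set preset value by a chosen strength. The function name `blend_toward` is illustrative, not from the disclosure.

```python
# Move a pixel value toward a user-chosen preset value; strength is in
# [0, 1], where 1.0 reaches the preset value exactly. An illustrative
# sketch of the adjustment described in the text, not its actual method.
def blend_toward(pixel, preset, strength):
    return round(pixel + (preset - pixel) * strength)

print(blend_toward(100, 200, 0.5))  # 150
```

With strength 1.0 the pixel reaches the preset value exactly, while intermediate strengths give a softer adjustment; the same idea applies per channel for color images.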
It can be understood that beautification processing on an object in the target image may include changing the skin color, body shape, and the like of the object by adjusting the pixels, area size, and the like of the area where the object is located, so that the color, shape, and the like of the object in the adjusted target image conform to the preset color and shape, and the target image is clearer and more aesthetically pleasing to a viewer.

In some embodiments, as shown in FIG. 3, performing beautification processing on the plurality of objects in the target image based on the plurality of object segmentation images, respectively, comprises:
step S310: for each object in the plurality of objects, performing beautification processing on the object in the object segmentation image corresponding to the object to obtain an object beautification image corresponding to the object; and
step S320: obtaining the target object beautification image based on a plurality of object beautification images corresponding to the plurality of objects.
Performing beautification on each object based on its corresponding object segmentation image to obtain a corresponding object beautification image, and then obtaining the target object beautification image from the plurality of object beautification images, means that each object's beautification is carried out independently within its own segmentation image, which avoids mutual interference among the beautification processes of the objects.
In some embodiments, for each of the plurality of objects, beautification of the object in its corresponding object segmentation image may be achieved by adjusting the pixel values and the size of the area in which the object is located in that segmentation image.
In some embodiments, a first object of the plurality of objects is partially occluded in the target image, so that the occluded portion of the first object is not included in the object segmentation image of the first object; in this case, as shown in FIG. 4, performing beautification processing on each object in its corresponding object segmentation image includes:
step S410: obtaining a completed image corresponding to the first object based on the object segmentation image corresponding to the first object, wherein the completed image includes the occluded portion of the first object; and
step S420: obtaining an object beautification image corresponding to the first object based on the completed image.
When a partially occluded first object exists in the target image, its occluded portion is completed before beautification, so that the first object is beautified as a whole; this makes the beautification of the first object more accurate and comprehensive and improves the accuracy of the beautification effect.
In some embodiments, the first object may be partially occluded in the target image because it is blocked by other objects among the plurality of objects, or because only a partial region of the first object was captured when the target image was taken.
In some embodiments, the completed image corresponding to the first object is obtained by inputting the object segmentation image corresponding to the first object into an image generation model.
It is understood that the completed image contains the portion of the first object that is occluded in the target image, so that the completed image resembles an image of the entire first object captured on its own.
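In the disclosure, the completion step is performed by an image generation model. As a stand-in that only illustrates the input/output contract, the toy function below "completes" occluded pixels, marked with the preset value 255, in a single image row by propagating the nearest known pixel from the left; the function name is hypothetical and a real system would use the learned model instead.

```python
# Toy stand-in for the image generation model (hypothetical name): fill
# occluded pixels (the preset value 255) in one row by copying the
# nearest known pixel from the left. Illustrates only the contract of
# the completion step, not the learned model itself.
def complete_row(row, hole=255):
    out, last = [], None
    for p in row:
        if p == hole and last is not None:
            out.append(last)      # propagate last known pixel
        else:
            out.append(p)
            if p != hole:
                last = p
    return out

print(complete_row([10, 255, 255, 40]))  # [10, 10, 10, 40]
```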
In some embodiments, the first object is beautified by adjusting the pixels and the size of the area where the first object is located in the completed image, thereby obtaining the object beautification image corresponding to the first object.
In some embodiments, as shown in FIG. 5, obtaining an object beautification image corresponding to the first object based on the completed image comprises:
step S510: detecting the completed image to obtain a plurality of contour key points of the first object in the completed image;
step S520: dividing the area where the first object is located in the completed image into a plurality of triangles based on the plurality of contour key points to obtain a plurality of triangle vertexes; and
step S530: performing beautification processing on the first object in the completed image based on the plurality of triangle vertexes to obtain an object beautification image corresponding to the first object.
Obtaining a plurality of contour key points of the first object in the completed image and performing beautification processing on the triangles into which the completed image is divided based on those key points further improves the beautification effect.
In some embodiments, the plurality of contour key points of the first object in the completed image are obtained by inputting the completed image into a key point detection model.
In some embodiments, the area in which the first object is located in the completed image is divided into a plurality of triangles using a triangulation technique.
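A minimal illustration of the triangle-division step, under the simplifying assumption that the contour key points form a convex polygon: a fan triangulation from the first vertex yields the triangle vertices. The disclosure does not specify the triangulation method, and practical systems more likely use Delaunay-style triangulation; the function name is illustrative.

```python
# Fan triangulation of a convex polygon given by contour key points
# (a simplifying stand-in for the unspecified triangulation technique).
def fan_triangulate(points):
    # Every triangle shares the first vertex: (p0, p_i, p_{i+1}).
    return [(points[0], points[i], points[i + 1])
            for i in range(1, len(points) - 1)]

tris = fan_triangulate([(0, 0), (1, 0), (1, 1), (0, 1)])
print(len(tris))  # 2
```

An n-point convex contour always yields n - 2 triangles this way, and the resulting vertices are what the subsequent vertex-moving step operates on.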
In some embodiments, 2D rendering technology is used to move the triangle vertices and re-render the corresponding triangles, so as to beautify the first object, thereby obtaining the object beautification image corresponding to the first object.
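A minimal NumPy sketch of this vertex-moving warp (the actual 2D rendering pipeline is not specified in the disclosure): moving a triangle's vertices defines an affine map, and re-rendering the triangle's interior under that map is what reshapes, for example slims, the object region.

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """2x3 affine matrix mapping triangle src_tri onto dst_tri."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3
    dst = np.asarray(dst_tri, float)                                # 3x2
    coeffs, *_ = np.linalg.lstsq(src, dst, rcond=None)              # exact for
    return coeffs.T                                                 # 3 points

def apply_affine(matrix, points):
    """Apply a 2x3 affine matrix to an Nx2 array of points."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return pts @ matrix.T

# Pull one vertex inward ("slimming"): x is scaled by 0.8, y is unchanged.
src = [(0, 0), (10, 0), (0, 10)]
dst = [(0, 0), (8, 0), (0, 10)]
M = triangle_affine(src, dst)
moved = apply_affine(M, [(10, 0), (5, 0)])  # -> [[8, 0], [4, 0]]
```

In a full renderer, every pixel inside each triangle would be resampled through its triangle's affine map, one map per triangle of the division from step S520.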
In some embodiments, after the object beautification images of the respective objects are obtained, they are merged to obtain the target object beautification image corresponding to the target image.
In some embodiments, the image processing method according to the present disclosure further includes: obtaining a hierarchical relationship between the plurality of objects based on the target image, the hierarchical relationship indicating the front-to-back spatial positions of the plurality of objects in the target image; and the obtaining the target object beautification image based on the plurality of object beautification images corresponding to the plurality of objects comprises:
merging the plurality of object beautification images based on the hierarchical relationship.
By obtaining the hierarchical relationship among the plurality of objects and merging the object beautification images based on that relationship, the hierarchical (spatial position) relationship of the objects in the resulting target object beautification image is the same as that of the objects in the target image, which improves the accuracy of the target object beautification image. In addition, obtaining the target object beautification image by merging the object beautification images simplifies the processing steps and reduces the amount of data to be processed.
It can be understood that, in the case where some of the plurality of objects occlude one another, the target object beautification image obtained in this way is more accurate: each object in it is closer to the corresponding object in the target image, and the beautification effect is better.
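A sketch of the hierarchy-aware merge (the layer names and the back-to-front painter's ordering are assumptions for illustration):

```python
import numpy as np

def composite_by_depth(layers, back_to_front):
    """Merge per-object beautification images so nearer objects occlude farther ones.

    layers: dict name -> (HxWx3 image, HxW bool mask of that object's pixels).
    back_to_front: object names ordered from farthest to nearest, i.e. the
    hierarchical relationship recovered from the target image.
    """
    first_img, _ = layers[back_to_front[0]]
    canvas = np.zeros_like(first_img)
    for name in back_to_front:          # painter's algorithm
        img, mask = layers[name]
        canvas[mask] = img[mask]
    return canvas

# Two overlapping squares; the "front" one wins in the overlap region.
red = np.zeros((8, 8, 3), np.uint8); red[:, :] = (200, 0, 0)
blue = np.zeros((8, 8, 3), np.uint8); blue[:, :] = (0, 0, 200)
m_red = np.zeros((8, 8), bool); m_red[0:6, 0:6] = True
m_blue = np.zeros((8, 8), bool); m_blue[3:8, 3:8] = True
out = composite_by_depth({"back": (red, m_red), "front": (blue, m_blue)},
                         ["back", "front"])
```

Painting from farthest to nearest is what preserves the spatial position relationship of the target image in the merged result.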
In some embodiments, the hierarchical relationship is obtained by inputting the target image into a recognition model, which recognizes the relative spatial positions of the plurality of objects in the target image.
In some embodiments, the target image includes a background region different from a region in which each of the plurality of objects is located, the plurality of segmented images further include a background segmented image corresponding to the background region, and the image processing method according to the present disclosure further includes:
and performing beautification processing on a second area, which is different from the first area where the plurality of objects are located, in the target object beautification image based on the background segmentation image to obtain a target beautification image.
And under the condition that the target image comprises the background area, performing beautification processing on a second area of the first area where a plurality of objects are respectively located in the beautified target image, and further improving the overall beautification effect of the target image.
For example, when the background region is an image region captured for another object (for example, a forest including various plants), the second region in the target object beautification image is segmented based on the background, so that the obtained target beautification image includes a plurality of objects in the target image and the background region at the same time, and the target image is beautified and the background region in the target image is beautified, thereby improving the consistency between the target beautification image and the target image.
In some embodiments, the background segmentation image is directly merged with the target object beautification image, so that the second area in the target object beautification image is the corresponding area in the background segmentation image, thereby obtaining the target beautification image.
In some embodiments, as shown in fig. 6, performing the beautification processing on the second area of the target object beautification image, different from the first area in which the plurality of objects are located, based on the background segmentation image to obtain the target beautification image includes:
step S610: obtaining a background beautification image based on the background segmentation image; and
step S620: merging the background beautification image and the target object beautification image to obtain the target beautification image.
Obtaining the background beautification image and merging it with the target object beautification image to obtain the target beautification image simplifies the processing steps and reduces the amount of data to be processed.
Because beautification can make the areas occupied by the plurality of objects in the target object beautification image smaller than the areas those objects occupied in the target image, directly merging the background segmentation image with the target object beautification image would leave the background area mismatched with the object areas (for example, gaps between the background area and the areas in which the objects are located), resulting in a poor beautification effect. According to the embodiments of the present disclosure, the background segmentation image is first subjected to beautification processing to obtain the background beautification image, which is then merged with the target object beautification image, so that the pixels between the background area and the object areas in the resulting target beautification image transition naturally and smoothly, improving the beautification effect of the target image.
In some embodiments, obtaining the background beautification image based on the background segmentation image comprises:
obtaining a background completed image based on the background segmentation image, wherein, in the background completed image, the similarity between the areas corresponding to the plurality of objects and the background area is greater than a preset similarity threshold; and
obtaining the background beautification image based on the background completed image.
Completing the background image makes the obtained background beautification image vivid and yields a good beautification effect.
In some embodiments, the background completed image is obtained by copying parts of the background segmentation image that coincide with the background region of the target image into the regions corresponding to the plurality of objects in the target image.
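This copy-based completion can be sketched as follows (nearest-visible-pixel copying per column is an assumed, deliberately simple strategy; the disclosure only requires that visible background content be copied into the object regions):

```python
import numpy as np

def complete_background(bg_seg, object_mask):
    """Copy visible background pixels into the regions the objects occupied.

    bg_seg: HxWx3 background segmentation image (object regions blanked out).
    object_mask: HxW bool, True where an object covered the background.
    """
    out = bg_seg.copy()
    for x in range(object_mask.shape[1]):
        visible = np.flatnonzero(~object_mask[:, x])
        if visible.size == 0:
            continue  # entire column was covered by objects
        for y in np.flatnonzero(object_mask[:, x]):
            nearest = visible[np.abs(visible - y).argmin()]
            out[y, x] = bg_seg[nearest, x]
    return out

# Demo: a uniform "forest" background with a rectangular object hole.
bg = np.zeros((16, 16, 3), np.uint8); bg[:, :] = (50, 120, 50)
hole = np.zeros((16, 16), bool); hole[6:10, 6:10] = True
bg[hole] = 0
filled = complete_background(bg, hole)
```

The learned background generation model described next plays the same role, but synthesizes rather than copies the missing content.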
In some embodiments, the background segmentation image is input into a background generation model to generate the background completed image. The background generation model is trained on training background images, which include a first training background image lacking a partial area and a second training background image containing a complete background.
In some embodiments, the background completed image is used directly as the background beautification image.
In some embodiments, the background beautification image is obtained by adjusting the pixel value at each pixel position in the background completed image.
In some embodiments, after the background beautification image is obtained, the target beautification image is obtained by merging the background beautification image with the target object beautification image. For example, the area of the background beautification image corresponding to the area outside the object areas of the target object beautification image is taken and combined with the object areas of the target object beautification image to obtain the target beautification image.
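The per-pixel adjustment and the final merge described above can be sketched together (the brightness curve and the function names are illustrative assumptions):

```python
import numpy as np

def beautify_background(bg, gain=1.1, lift=8):
    """Simple per-pixel value adjustment standing in for background beautification."""
    out = bg.astype(np.float32) * gain + lift
    return np.clip(out, 0, 255).astype(np.uint8)

def merge_final(background, objects_img, objects_mask):
    """Object pixels come from the beautified object image; everything else
    comes from the beautified background, yielding the target beautification image."""
    out = background.copy()
    out[objects_mask] = objects_img[objects_mask]
    return out

bg = np.full((4, 4, 3), 100, np.uint8)       # beautified background stand-in
objs = np.full((4, 4, 3), 200, np.uint8)     # target object beautification image
mask = np.zeros((4, 4), bool); mask[1:3, 1:3] = True
result = merge_final(beautify_background(bg), objs, mask)
```

Because the background was completed behind the object regions first, any shrinkage of the objects after slimming exposes plausible background instead of gaps.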
In another aspect according to the present disclosure, there is also provided an image processing apparatus, as shown in fig. 7, including: a target image obtaining unit 710 configured to obtain a target image including a plurality of objects; an image segmentation unit 720 configured to perform segmentation processing on the target image to obtain a plurality of segmented images including an object segmented image corresponding to each of the plurality of objects; and a first beautification unit 730 configured to perform beautification processing on the plurality of objects in the target image based on the plurality of object segmentation images, respectively, to obtain a target object beautification image corresponding to the target image.
In some embodiments, the first beautification unit 730 includes: an object segmentation image beautification unit configured to, for each of the plurality of objects, perform beautification processing on the object in the object segmentation image corresponding to the object to obtain an object beautification image corresponding to the object; and a beautification image obtaining subunit configured to obtain the target object beautification image based on a plurality of object beautification images corresponding to the plurality of objects.
In some embodiments, a first object of the plurality of objects is partially occluded in the target image such that the occluded portion of the first object is not included in the object segmentation image of the first object, and wherein the object segmentation image beautification unit comprises: an object completion unit configured to obtain a completed image corresponding to the first object based on the object segmentation image corresponding to the first object, wherein the completed image corresponding to the first object includes the portion of the first object that is occluded; and a first segmentation image beautification subunit configured to obtain an object beautification image corresponding to the first object based on the completed image.
In some embodiments, the first segmentation image beautification subunit includes: a detection unit configured to detect the completed image to obtain a plurality of contour key points of the first object in the completed image; a triangulation unit configured to divide the area in which the first object is located in the completed image into a plurality of triangles based on the plurality of contour key points to obtain a plurality of triangle vertices; and a second segmentation image beautification subunit configured to perform beautification processing on the first object in the completed image based on the plurality of triangle vertices to obtain the object beautification image corresponding to the first object.
In some embodiments, the apparatus 700 further comprises: a hierarchical relationship obtaining unit configured to obtain a hierarchical relationship between the plurality of objects based on the target image, the hierarchical relationship indicating front and rear positions of the plurality of objects in space in the target image; and the beautified image acquiring subunit includes: a first merging unit configured to merge the plurality of object beautification images based on the hierarchical relationship.
In some embodiments, the target image includes a background region different from a region in which each of the plurality of objects is located, the plurality of segmented images further includes a background segmented image corresponding to the background region, the apparatus further includes: a second beautification unit configured to perform beautification processing on a second area, which is different from the first area where the plurality of objects are located, in the target object beautification image based on the background segmentation image to obtain the target beautification image.
In some embodiments, the second beautifying unit includes: a background segmentation image beautification unit configured to obtain a background beautification image based on the background segmentation image; and a second merging unit configured to merge the background beautification image and the target object beautification image to obtain the target beautification image.
In some embodiments, the background segmentation image beautification unit comprises: a background completion unit configured to obtain a background completed image based on the background segmentation image, wherein, in the background completed image, the similarity between the areas corresponding to the plurality of objects and the background area is greater than a preset similarity threshold; and a background segmentation image beautification subunit configured to obtain the background beautification image based on the background completed image.
In some embodiments, the plurality of objects includes persons or animals.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of the users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 8, a block diagram of an electronic device 800, which may be a server or a client of the present disclosure and is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. The term electronic device is intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the electronic device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the electronic device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote controller. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 808 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the method 200 in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples, but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Furthermore, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (13)

1. An image processing method comprising:
obtaining a target image, wherein the target image comprises a plurality of objects and a background area which is different from an area where each object in the plurality of objects is located;
performing segmentation processing on the target image to obtain a plurality of segmentation images, wherein the plurality of segmentation images comprise an object segmentation image corresponding to each object in the plurality of objects and a background segmentation image corresponding to the background area;
performing beautification processing on the plurality of objects in the target image respectively based on the plurality of object segmentation images to obtain a target object beautification image corresponding to the target image;
obtaining a background completed image based on the background segmentation image, wherein, in the background completed image, the similarity between the areas corresponding to the plurality of objects and the background area is greater than a preset similarity threshold;
adjusting the pixel value at each pixel position in the background completed image to obtain a background beautification image; and
merging the background beautification image and the target object beautification image to obtain a target beautification image.
2. The method of claim 1, wherein the performing the beautification processing on the plurality of objects in the target image based on the plurality of object segmentation images, respectively, comprises:
for each object in the plurality of objects, performing beautification processing on the object in the object segmentation image corresponding to the object to obtain an object beautified image corresponding to the object; and
and obtaining the target beautified object image based on a plurality of object beautified images corresponding to the plurality of objects.
3. The method of claim 2, wherein a first object of the plurality of objects is partially occluded in the target image such that the occluded portion of the first object is not included in the object segmentation image of the first object, and wherein the performing the beautification processing on each object of the plurality of objects in the object segmentation image corresponding to the object comprises:
obtaining a completed image corresponding to the first object based on the object segmentation image corresponding to the first object, wherein the completed image includes the portion of the first object that is occluded; and
obtaining an object beautification image corresponding to the first object based on the completed image.
4. The method of claim 3, wherein the obtaining an object beautification image corresponding to the first object based on the completed image comprises:
detecting the completed image to obtain a plurality of contour key points of the first object in the completed image;
dividing the area in which the first object is located in the completed image into a plurality of triangles based on the plurality of contour key points to obtain a plurality of triangle vertices; and
performing beautification processing on the first object in the completed image based on the plurality of triangle vertices to obtain the object beautification image corresponding to the first object.
5. The method of any of claims 2-4, further comprising:
obtaining a hierarchical relationship between the plurality of objects based on the target image, the hierarchical relationship indicating front and back positions of the plurality of objects in space in the target image; and the obtaining the target beautified object image based on the plurality of beautified object images corresponding to the plurality of objects comprises:
merging the plurality of object beautification images based on the hierarchical relationship.
6. The method of claim 1, wherein the plurality of objects comprises a person, an animal, an avatar, or an item.
7. An image processing apparatus comprising:
a target image acquisition unit configured to obtain a target image including a plurality of objects and a background area different from an area where each of the plurality of objects is located;
an image segmentation unit configured to perform segmentation processing on the target image to obtain a plurality of segmented images including an object segmented image corresponding to each of the plurality of objects and a background segmented image corresponding to the background region;
a first beautification unit configured to perform beautification processing on the plurality of objects in the target image based on the plurality of object segmentation images respectively to obtain a target object beautification image corresponding to the target image;
a background completion unit configured to obtain a background completed image based on the background segmentation image, wherein, in the background completed image, the similarity between the areas corresponding to the plurality of objects and the background area is greater than a preset similarity threshold;
a background segmentation image beautification subunit configured to adjust the pixel value at each pixel position in the background completed image to obtain a background beautification image; and
a second merging unit configured to merge the background beautification image and the target object beautification image to obtain a target beautification image.
8. The apparatus of claim 7, wherein the first beautification unit comprises:
an object segmentation image beautification unit configured to perform beautification processing on each of the plurality of objects in the object segmentation image corresponding to the object to obtain an object beautification image corresponding to the object;
a beautification image obtaining subunit configured to obtain the target beautification image based on a plurality of beautification images of the object corresponding to the plurality of objects.
9. The apparatus of claim 8, wherein a first object of the plurality of objects is partially occluded in the target image such that the occluded portion of the first object is not included in an object segmentation image of the first object, and wherein the object segmentation image beautification unit comprises:
an object completion unit configured to obtain a completed image corresponding to the first object based on the object segmentation image corresponding to the first object, wherein the completed image corresponding to the first object includes the portion of the first object that is occluded; and
a first segmentation image beautification subunit configured to obtain an object beautification image corresponding to the first object based on the filled-up image.
10. The apparatus of claim 9, wherein the first split image beautifying subunit comprises:
a detection unit configured to detect the completed image to obtain a plurality of contour key points of the first object in the completed image;
a triangulation unit configured to divide the area in which the first object is located in the completed image into a plurality of triangles based on the plurality of contour key points to obtain a plurality of triangle vertices; and
a second segmentation image beautification subunit configured to perform beautification processing on the first object in the completed image based on the plurality of triangle vertices to obtain an object beautification image corresponding to the first object.
11. The apparatus of any of claims 8-10, further comprising:
a hierarchical relationship obtaining unit configured to obtain a hierarchical relationship between the plurality of objects based on the target image, the hierarchical relationship indicating front and rear positions of the plurality of objects in space in the target image; and the beautified image acquiring subunit includes:
a first merging unit configured to merge the plurality of object beautification images based on the hierarchical relationship.
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
13. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202211366792.5A 2022-11-03 2022-11-03 Image processing method, image processing device, electronic equipment and storage medium Active CN115423827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366792.5A CN115423827B (en) 2022-11-03 2022-11-03 Image processing method, image processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115423827A CN115423827A (en) 2022-12-02
CN115423827B true CN115423827B (en) 2023-03-24

Family

ID=84208218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211366792.5A Active CN115423827B (en) 2022-11-03 2022-11-03 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115423827B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325907A (en) * 2018-09-18 2019-02-12 北京旷视科技有限公司 Image landscaping treatment method, apparatus and system
CN111489311A (en) * 2020-04-09 2020-08-04 北京百度网讯科技有限公司 Face beautifying method and device, electronic equipment and storage medium
CN115147306A (en) * 2022-06-30 2022-10-04 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730449B (en) * 2017-11-07 2021-12-14 深圳市云之梦科技有限公司 Method and system for beautifying facial features
CN109584151B (en) * 2018-11-30 2022-12-13 腾讯科技(深圳)有限公司 Face beautifying method, device, terminal and storage medium
CN110135428B (en) * 2019-04-11 2021-06-04 北京航空航天大学 Image segmentation processing method and device
CN113793247A (en) * 2021-07-08 2021-12-14 福建榕基软件股份有限公司 Ornament image beautifying method and terminal
CN113421204A (en) * 2021-07-09 2021-09-21 北京百度网讯科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN115205161B (en) * 2022-08-18 2023-02-21 荣耀终端有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN115423827A (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN114972958B (en) Key point detection method, neural network training method, device and equipment
CN112967355A (en) Image filling method and device, electronic device and medium
CN112967356A (en) Image filling method and device, electronic device and medium
CN115482325A (en) Picture rendering method, device, system, equipment and medium
CN115170819A (en) Target identification method and device, electronic equipment and medium
CN114723949A (en) Three-dimensional scene segmentation method and method for training segmentation model
CN115661375B (en) Three-dimensional hair style generation method and device, electronic equipment and storage medium
CN116245998B (en) Rendering map generation method and device, and model training method and device
CN114120448B (en) Image processing method and device
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN114119935B (en) Image processing method and device
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN115423827B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115761855A (en) Face key point information generation, neural network training and three-dimensional face reconstruction method
CN114596476A (en) Key point detection model training method, key point detection method and device
CN114913549A (en) Image processing method, apparatus, device and medium
CN114119154A (en) Virtual makeup method and device
CN114092556A (en) Method, apparatus, electronic device, medium for determining human body posture
CN114494797A (en) Method and apparatus for training image detection model
CN115345981B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115359194B (en) Image processing method, image processing device, electronic equipment and storage medium
CN115937430B (en) Method, device, equipment and medium for displaying virtual object
CN116030191B (en) Method, device, equipment and medium for displaying virtual object
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment
CN114120412B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant