CN116385641B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN116385641B
Authority
CN
China
Prior art keywords
region
pixel
target
area
hairstyle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310325431.4A
Other languages
Chinese (zh)
Other versions
CN116385641A (en)
Inventor
彭昊天
陈睿智
赵晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310325431.4A priority Critical patent/CN116385641B/en
Publication of CN116385641A publication Critical patent/CN116385641A/en
Application granted granted Critical
Publication of CN116385641B publication Critical patent/CN116385641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T3/06
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data

Abstract

The disclosure provides an image processing method and device, an electronic device, and a storage medium, relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, augmented reality, virtual reality, deep learning, and the like, and can be applied to scenarios such as the metaverse and digital humans. The implementation scheme is as follows: acquiring a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image comprises a hairstyle at a target viewing angle, and the first projection image is a projection image, at the target viewing angle, of a three-dimensional model corresponding to the hairstyle; in response to the area of a region to be filled in the first projection image being greater than a threshold, expanding a first hairstyle region in the first hairstyle image to obtain a target hairstyle region, wherein the region to be filled is the region of a second hairstyle region in the first projection image that is not covered by the first hairstyle region; and filling the region to be filled based on the target hairstyle region.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, augmented reality, virtual reality, deep learning, and the like, and can be applied to scenarios such as the metaverse and digital humans. The disclosure relates in particular to an image processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline of making a computer mimic certain human thought processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning), and it spans both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Three-dimensional avatars have wide application value in social networking, live streaming, gaming, and other user scenarios. Artificial-intelligence-based three-dimensional avatar generation creates customized avatars from face images and can effectively meet the personalized needs of users.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides an image processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided an image processing method including: acquiring a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image comprises a hairstyle at a target viewing angle, and the first projection image is a projection image, at the target viewing angle, of a three-dimensional model corresponding to the hairstyle; in response to the area of a region to be filled in the first projection image being greater than a threshold, expanding a first hairstyle region in the first hairstyle image to obtain a target hairstyle region, wherein the region to be filled is the region of a second hairstyle region in the first projection image that is not covered by the first hairstyle region; and filling the region to be filled based on the target hairstyle region.
According to an aspect of the present disclosure, there is provided an image processing apparatus including: an acquisition module configured to acquire a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image comprises a hairstyle at a target viewing angle, and the first projection image is a projection image, at the target viewing angle, of a three-dimensional model corresponding to the hairstyle; an expansion module configured to, in response to the area of a region to be filled in the first projection image being greater than a threshold, expand a first hairstyle region in the first hairstyle image to obtain a target hairstyle region, wherein the region to be filled is the region of a second hairstyle region in the first projection image that is not covered by the first hairstyle region; and a filling module configured to fill the region to be filled based on the target hairstyle region.
According to an aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to an aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement the above-described method.
According to one or more embodiments of the present disclosure, the integrity and fineness of the texture of the three-dimensional hairstyle can be improved, thereby improving the quality of the three-dimensional hairstyle.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 shows a flowchart of an image processing method according to an embodiment of the present disclosure;
FIGS. 3A-3C show schematic diagrams of a first hairstyle image, a first projection image, and a region to be filled, respectively, according to an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of multiple expansions of a first hairstyle region in accordance with an embodiment of the present disclosure;
fig. 5 shows a block diagram of the structure of an image processing apparatus according to an embodiment of the present disclosure; and
fig. 6 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of an element is not specifically limited, the element may be one or more. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the technical solutions of the disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
A person's appearance is largely determined by their hairstyle. Therefore, when generating a three-dimensional avatar for a user, it is necessary to ensure the quality of the generated three-dimensional hairstyle.
In view of the above problems, an embodiment of the present disclosure provides an image processing method, which can improve the integrity and fineness of a three-dimensional hairstyle texture, thereby improving the quality of the three-dimensional hairstyle.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the client devices 101, 102, 103, 104, 105, and 106 and the server 120 may run one or more services or software applications that enable execution of the image processing methods.
In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The client devices 101, 102, 103, 104, 105, and/or 106 may provide interfaces that enable a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, vehicle-mounted devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, or Linux or Linux-like operating systems; or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. Gaming systems may include various handheld gaming devices, Internet-enabled gaming devices, and the like. A client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
For purposes of embodiments of the present disclosure, in the example of FIG. 1, client applications for avatar generation may be included in client devices 101-106. Accordingly, the server 120 may be a server to which the client application corresponds. The user may upload his own photos through the client application. The server 120 generates a customized three-dimensional avatar for the user based on the head image uploaded by the user. The three-dimensional avatar includes not only a three-dimensional facial model but also a three-dimensional hairstyle model.
In some cases, the quality of the three-dimensional hairstyle model may not be good enough. For example, the hairstyle region in the projection image of the three-dimensional hairstyle model at a certain viewing angle may not coincide with the hairstyle region in the image of the user's head at that viewing angle, making the three-dimensional hairstyle model look unrealistic and leaving hair texture missing. In view of this problem, according to some embodiments, the server 120 may further execute the image processing method 200 of the embodiments of the present disclosure after generating the three-dimensional hairstyle model for the user, and texture-fill the projection image of the three-dimensional hairstyle model based on the head image of the user, so that the integrity and fineness of the three-dimensional hairstyle texture can be improved, thereby improving the quality of the three-dimensional hairstyle model.
In other embodiments, the client devices 101-106 may also texture-fill the projected image of the three-dimensional hairstyle model generated by the server 120 by performing the image processing method 200 of the embodiments of the present disclosure to improve the quality of the three-dimensional hairstyle model.
Fig. 2 shows a flowchart of an image processing method 200 according to an embodiment of the present disclosure. As described above, the subject of execution of the steps of method 200 may be a client (e.g., client devices 101-106 shown in FIG. 1) or a server (e.g., server 120 shown in FIG. 1).
As shown in fig. 2, the method 200 includes steps S210-S230.
In step S210, a first projection image corresponding to a first hairstyle image is acquired. The first hairstyle image comprises a hairstyle at a target viewing angle, and the first projection image is a projection image, at the target viewing angle, of a three-dimensional model corresponding to the hairstyle.
In step S220, in response to the area of the region to be filled in the first projection image being greater than a threshold, the first hairstyle region in the first hairstyle image is expanded to obtain a target hairstyle region. The region to be filled is the region of the second hairstyle region in the first projection image that is not covered by the first hairstyle region.
In step S230, the region to be filled is filled based on the target hairstyle region.
According to the embodiments of the present disclosure, by expanding the hairstyle region (i.e., the first hairstyle region) in the original hairstyle image (i.e., the first hairstyle image), the expanded hairstyle region (i.e., the target hairstyle region) can be made to completely cover the hairstyle region in the projection image of the three-dimensional hairstyle (i.e., the first projection image). Filling the texture-missing region (i.e., the region to be filled) in the projection image with the expanded hairstyle region avoids missing hair texture in the three-dimensional hairstyle, improves the integrity and fineness of the three-dimensional hairstyle texture, and thus improves the quality of the three-dimensional hairstyle.
The steps of method 200 are described in detail below.
In step S210, a first projection image corresponding to the first hairstyle image is acquired.
The first hairstyle image may be a head image of an object uploaded by the user. The object may be a real person (e.g., the user himself or another user) or a virtual character (e.g., a two-dimensional anime character, a cartoon character, etc.).
The first hairstyle image includes a hairstyle at a target viewing angle. In embodiments of the present disclosure, a hairstyle refers to hair having a certain length, color, and shape. The target viewing angle may be, for example, a front view, a left view, a top view, etc. In some embodiments, the target viewing angle may be represented by the attitude angle of the image acquisition device at the time the first hairstyle image was acquired.
Based on the first hairstyle image, a three-dimensional avatar model of the user may be generated. The three-dimensional avatar may be an image of the user himself or may be a virtual character. Further, the three-dimensional avatar model includes a three-dimensional model of the face and a three-dimensional model of the hairstyle. In some embodiments, the three-dimensional avatar model may be generated based on first hairstyle images from different viewing angles.
It should be noted that the present disclosure does not limit the method for generating the three-dimensional avatar model. For example, neural surface reconstruction (NeuS), neural radiance fields (NeRF), or the like may be used to generate the three-dimensional avatar model of the user, including the three-dimensional model of the hairstyle.
The first projection image is a projection image of a three-dimensional model of the hairstyle at the target viewing angle.
In an embodiment of the present disclosure, the region in which the hairstyle is located in the first hairstyle image is referred to as a first hairstyle region, and the region in which the hairstyle is located in the first projection image is referred to as a second hairstyle region.
According to some embodiments, the hairstyle region in an image may be obtained using a trained hairstyle segmentation network. For example, pixels in the first hairstyle image may be semantically segmented using the hairstyle segmentation network to obtain a mask for the first hairstyle region, i.e., a first mask. A mask can be understood as a binary image identifying a region of interest (Region Of Interest, ROI), where the pixels of the region of interest have a pixel value of 1 and the pixels of other regions have a pixel value of 0. Accordingly, the first mask corresponding to the first hairstyle region is a binary image identifying the first hairstyle region. In the first mask, the pixel value of a pixel belonging to the first hairstyle region is 1, and the pixel value of a pixel not belonging to the first hairstyle region is 0. Based on the first mask, the partial image where the hairstyle is located, i.e., the first hairstyle region, may be extracted from the first hairstyle image.
Similarly, a second hairstyle area may be extracted from the first projection image using a hairstyle segmentation network.
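For illustration only, a minimal Python sketch of this mask-based extraction is given below. It assumes a hypothetical segment_hair function wrapping the trained hairstyle segmentation network; the function name and the NumPy-based representation are assumptions, not part of the disclosure.

```python
import numpy as np

# `segment_hair` is a hypothetical wrapper around the trained hairstyle
# segmentation network; it returns a binary mask (1 = hair pixel, 0 = other).
def extract_hair_region(image: np.ndarray, segment_hair):
    mask = segment_hair(image)          # binary mask, shape (H, W)
    region = image * mask[..., None]    # keep hair pixels, zero out the rest
    return mask, region

# Usage sketch: the same routine yields the first hairstyle region from the
# first hairstyle image and the second hairstyle region from the projection.
# first_mask, hair = extract_hair_region(first_hairstyle_image, segment_hair)
# proj_mask, _ = extract_hair_region(first_projection_image, segment_hair)
```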
It will be appreciated that, because the generated three-dimensional model of the hairstyle has some error, the second hairstyle region in the first projection image will not normally coincide exactly with the first hairstyle region in the first hairstyle image. Some pixels in the second hairstyle region may not be covered by the first hairstyle region, so that part of the hairstyle texture is missing in the first projection image. In embodiments of the present disclosure, the region of the second hairstyle region in the first projection image that is not covered by the first hairstyle region is denoted the region to be filled. To improve the quality of the three-dimensional hairstyle, the region to be filled needs to be texture-filled.
Figs. 3A-3C show schematic diagrams of a first hairstyle image 310, a first projection image 320, and a region to be filled 330, respectively, according to an embodiment of the disclosure.
As shown in fig. 3A, the first hairstyle image 310 is a frontal-view head image of a user 312, which includes a first hairstyle region 314.
As shown in fig. 3B, the first projection image 320 is a projection image of a three-dimensional model of an avatar 322 (including a three-dimensional model of a hairstyle) generated from the first hairstyle image 310 of the user 312 at a front viewing angle. The image includes a second hairstyle area 324 therein.
Overlaying the first hairstyle image 310 on the first projection image 320 yields FIG. 3C. As shown in fig. 3C, the region of the second hairstyle region 324 not covered by the first hairstyle region 314 is the region to be filled 330.
In step S220, in response to the area of the region to be filled in the first projection image being greater than the threshold, the first hairstyle region in the first hairstyle image is expanded to obtain the target hairstyle region.
The area of the region to be filled may be, for example, the number of pixels included in the region to be filled. The threshold may be set as desired. To ensure a more complete and fine texture in a three-dimensional hairstyle, the threshold value may be set to a small value, such as 0, 5, 10, etc.
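By way of illustration, the region to be filled and its pixel-count area can be computed from the two masks of the preceding sketch as follows; the threshold value of 0 is one of the example values above.

```python
import numpy as np

# Continues the previous sketch: first_mask / proj_mask are the binary masks
# of the first and second hairstyle regions.
to_fill = (proj_mask == 1) & (first_mask == 0)  # second region not covered by first
area = int(np.count_nonzero(to_fill))           # area measured in pixels
THRESHOLD = 0                                   # example value from the description

if area > THRESHOLD:
    pass  # expand the first hairstyle region (steps S222-S226 below)
```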
According to some embodiments, the first hairstyle region may be expanded according to the following steps S222-S226.
In step S222, a dilation operation is performed on the first mask corresponding to the first hairstyle region to obtain a second mask. The non-overlapping region of the second mask and the first mask is an expansion region, and the expansion region comprises at least one extended pixel.
In step S224, for any extended pixel in the expansion region, a target pixel corresponding to the extended pixel is determined in the first hairstyle region.
In step S226, the pixel value of the extended pixel is determined based on the pixel value of the target pixel.
According to the above embodiment, the dilation operation expands the first hairstyle region outward uniformly along its edge, and the expanded target hairstyle region can keep a shape similar to that of the first hairstyle region, thereby ensuring the realism of the target hairstyle region.
It will be understood that, for the above step S222, in the case where the region of interest is represented by a pixel value of 1 in the mask, the extended pixel is a pixel having a pixel value of 1 in the second mask and a pixel value of 0 in the first mask. The extended area is composed of one or more extended pixels.
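The following Python sketch illustrates step S222 with OpenCV morphology, continuing the earlier sketches; the 3x3 kernel and single iteration are assumptions rather than values prescribed by the disclosure.

```python
import cv2
import numpy as np

# Step S222 sketch: dilate the first mask to obtain the second mask.
kernel = np.ones((3, 3), np.uint8)  # assumed structuring element
second_mask = cv2.dilate(first_mask, kernel, iterations=1)

# Extended pixels have value 1 in the second mask and 0 in the first mask.
mask_exp = (second_mask == 1) & (first_mask == 0)  # expansion region
```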
According to some embodiments, in step S224, the pixel in the first hairstyle region closest to the extended pixel may be taken as the target pixel.
According to other embodiments, the target pixel may also be determined according to the following steps S2242 and S2244. That is, step S224 may further include step S2242 and step S2244.
In step S2242, an erosion operation is performed on the first mask to obtain a third mask. The non-overlapping region of the first mask and the third mask is a sampling region, which includes at least one sampling pixel.
In step S2244, the sampling pixel closest to the extended pixel in the sampling region is taken as the target pixel.
It will be appreciated that the dilation operation expands outward along the edge of the hairstyle, while the erosion operation erodes inward along the edge of the hairstyle; the two operations are approximately mirror images of each other, so the dilated pixels (i.e., the extended pixels) have colors similar to those of the eroded pixels (i.e., the sampling pixels). According to the above steps S2242 and S2244, the color of an extended pixel is determined by the color of an eroded pixel, so the extended pixels can keep colors similar to those of the original hair, thereby improving the integrity and fineness of the hair texture.
It will be appreciated that, in the case where the region of interest is represented by a pixel value of 1 in the mask for step S2242 described above, the sampling pixel is a pixel having a pixel value of 1 in the first mask and a pixel value of 0 in the third mask. The sampling region is made up of one or more sampling pixels.
According to some embodiments, in the above step S226, the pixel value of the target pixel may be directly determined as the pixel value of the extended pixel. This simplifies the computation while ensuring the integrity and fineness of the hair texture, thereby improving computational efficiency.
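Continuing the sketch, steps S2242-S2244 together with the direct-assignment embodiment of step S226 can be written as follows; SciPy's distance transform is used here as one convenient way to find the nearest sampling pixel, an implementation choice rather than a requirement of the disclosure.

```python
import cv2
import numpy as np
from scipy import ndimage

# Steps S2242-S2244 sketch: erode the first mask to obtain the third mask;
# sampling pixels have value 1 in the first mask and 0 in the third mask.
kernel = np.ones((3, 3), np.uint8)
third_mask = cv2.erode(first_mask, kernel, iterations=1)
mask_smp = (first_mask == 1) & (third_mask == 0)  # sampling region

# distance_transform_edt on ~mask_smp yields, for every pixel, the row/column
# indices of the nearest sampling pixel (the nearest zero of its input).
_, (iy, ix) = ndimage.distance_transform_edt(~mask_smp, return_indices=True)

# Step S226 (direct assignment): copy each target pixel's color to its
# extended pixel in `hair`, the color image of the first hairstyle region.
hair[mask_exp] = hair[iy[mask_exp], ix[mask_exp]]
```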
According to other embodiments, in the step S226, a certain process may be performed on the pixel value of the target pixel, and the processing result is determined as the pixel value of the extended pixel. The above-mentioned processing may be, for example, adding, subtracting, multiplying, dividing by a certain value, inputting the pixel value of the target pixel to a certain trained neural network, or the like.
According to some embodiments, in the case where the area of the region to be filled is greater than the threshold, the first hairstyle region in the first hairstyle image may be expanded at least once to obtain the target hairstyle region. That is, the first hairstyle region may be expanded only once, with the expansion result used as the target hairstyle region; or the first hairstyle region may be expanded multiple times, with the expansion result of the last expansion used as the target hairstyle region.
According to this embodiment, the first hairstyle region is expanded at least once, so that it gradually approaches the target hairstyle region; this improves the fineness of the texture expansion and avoids unnecessary computation.
It will be appreciated that each expansion of the first hairstyle region may be achieved through steps S222-S226 described above.
According to some embodiments, for each expansion of the at least one expansion, the area of the region to be filled may be updated based on the expansion result obtained by that expansion. It can be understood that the expansion result obtained by each expansion is the expanded first hairstyle region, whose area is larger than that of the first hairstyle region before the expansion. As a result of the current expansion, the area of the first hairstyle region increases; accordingly, the area of the second hairstyle region not covered by the first hairstyle region, i.e., the area of the region to be filled, decreases.
According to some embodiments, in response to the updated area of the region to be filled being less than or equal to a threshold (e.g., 0), the current expansion result is taken as the target hairstyle region. When the area of the region to be filled is less than or equal to the threshold, the second hairstyle region can be completely covered by the target hairstyle region. The target hairstyle region can thus be used to fill the region to be filled in the second hairstyle region (note that the region to be filled here is the original, not the updated, region to be filled), thereby optimizing the texture of the three-dimensional hairstyle.
According to some embodiments, in response to the updated area of the region to be filled being greater than the threshold, the next expansion is performed on the current expansion result.
Fig. 4 illustrates a schematic diagram of multiple expansions of a first hairstyle region in accordance with an embodiment of the present disclosure. The first hairstyle region before each expansion and the first hairstyle region after each expansion are color images. The sampling region and the expansion region are represented by masks, in which the pixel values of the sampling pixels in the sampling region and the extended pixels in the expansion region are 1.
The expansion process shown in fig. 4 is as follows:
1. A hairstyle Mask (i.e., the first mask) in the first hairstyle image is extracted using the trained hairstyle segmentation network, and the image Hair of the first hairstyle region is extracted using the Mask.
2. The initial value of the previous dilation mask is set to Mask, i.e., last_dilate = Mask; and the initial value of the previous erosion mask is set to Mask, i.e., last_erode = Mask.
3. A dilation operation is performed on the previous dilation mask last_dilate to obtain the current dilation mask (i.e., the second mask): mask_dilate = Dilate(last_dilate). An erosion operation is performed on the previous erosion mask last_erode to obtain the current erosion mask (i.e., the third mask): mask_erode = Erode(last_erode).
4. The non-overlapping region of the previous dilation mask last_dilate and the current dilation mask mask_dilate is computed, i.e., the expansion region mask_exp = (mask_dilate != last_dilate). The non-overlapping region of the previous erosion mask last_erode and the current erosion mask mask_erode is computed, i.e., the sampling region mask_smp = (mask_erode != last_erode).
5. For each pixel in mask_exp (i.e., each extended pixel), the nearest pixel in mask_smp (i.e., the target pixel) is retrieved, and the pixel value (i.e., color) of that pixel in mask_exp is filled with the pixel value of the nearest pixel in mask_smp.
6. The previous dilation mask is updated, last_dilate = mask_dilate, and the previous erosion mask is updated, last_erode = mask_erode.
7. Steps 3-6 above are repeated until the number of pixels in the first projection image of the three-dimensional hairstyle that are not covered by the first hairstyle region (i.e., the area of the region to be filled) is less than or equal to a threshold (e.g., 0). The first hairstyle region after the last expansion is the target hairstyle region. An illustrative implementation of this loop is sketched below.
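The following self-contained Python sketch combines the earlier fragments into the loop above; OpenCV/SciPy, the kernel size, and the helper's name are implementation assumptions, not part of the claimed method.

```python
import cv2
import numpy as np
from scipy import ndimage

def expand_hair_region(hair, first_mask, proj_mask, threshold=0, ksize=3):
    """Iteratively expand the first hairstyle region (steps 1-7 above).

    hair: color image of the first hairstyle region; first_mask / proj_mask:
    binary masks of the first / second hairstyle regions. The kernel size and
    the OpenCV/SciPy primitives are implementation assumptions.
    """
    kernel = np.ones((ksize, ksize), np.uint8)
    last_dilate = first_mask.copy()          # step 2
    last_erode = first_mask.copy()
    covered = first_mask.astype(bool)

    # Step 7: repeat while the region to be filled is larger than the threshold.
    while np.count_nonzero((proj_mask == 1) & ~covered) > threshold:
        mask_dilate = cv2.dilate(last_dilate, kernel)      # step 3
        mask_erode = cv2.erode(last_erode, kernel)
        mask_exp = mask_dilate != last_dilate              # step 4: expansion region
        mask_smp = mask_erode != last_erode                # step 4: sampling region
        if not mask_smp.any():                             # nothing left to sample
            break
        # Step 5: fill each extended pixel from its nearest sampling pixel.
        _, (iy, ix) = ndimage.distance_transform_edt(~mask_smp, return_indices=True)
        hair[mask_exp] = hair[iy[mask_exp], ix[mask_exp]]
        covered |= mask_exp
        last_dilate, last_erode = mask_dilate, mask_erode  # step 6

    return hair, covered  # expanded image and mask of the target hairstyle region
```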
In the embodiment shown in fig. 4, the target hairstyle region 420 is obtained after 10 expansions of the first hairstyle region. As can be seen in fig. 4, the target hairstyle region 420 obtained by the 10 expansions has an increased area relative to the original hairstyle region 410 before expansion, while maintaining hair texture similar to that of the original hairstyle region 410, so that the hair texture of the target hairstyle region 420 is realistic and fine.
After the target hairstyle area is obtained through step S220, step S230 is performed. In step S230, the region to be filled is filled based on the target hairstyle region.
According to some embodiments, the pixel value of a pixel in the region to be filled may be determined directly as the pixel value of the corresponding pixel in the target hairstyle region. It will be appreciated that a pixel in the region to be filled has the same pixel coordinates as its corresponding pixel in the target hairstyle region. According to this embodiment, the projection image of the three-dimensional hairstyle is filled directly with the expanded target hairstyle region, which can effectively improve the integrity and fineness of the three-dimensional hairstyle texture.
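Expressed against the earlier sketches, this embodiment reduces to a single masked copy, where expanded_hair denotes the expanded color image (e.g., the first value returned by the expand_hair_region sketch above):

```python
# Copy pixel values from the expanded target hairstyle region into the
# projection image at the coordinates of the region to be filled.
first_projection_image[to_fill] = expanded_hair[to_fill]
```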
According to further embodiments, the three-dimensional model of the hairstyle may be updated based on a second hairstyle image that includes the target hairstyle region, and a second projection image of the updated three-dimensional model at the target viewing angle may be acquired. According to this embodiment, the three-dimensional model of the hairstyle is updated using the target hairstyle region, and the projection image is regenerated from that model, so that the region to be filled in the original projection image is filled; this can improve the integrity and fineness of the three-dimensional hairstyle texture while ensuring the semantic consistency of projection images from different viewing angles.
It will be appreciated that after the first hairstyle region is expanded to obtain the target hairstyle region, the first hairstyle image changes accordingly, yielding the second hairstyle image. That is, the second hairstyle image includes the expanded target hairstyle region. Based on the second hairstyle image, the three-dimensional model of the hairstyle may be updated by methods such as neural surface reconstruction (NeuS) or neural radiance fields (NeRF). A second projection image of the updated three-dimensional model at the target viewing angle is then acquired. In this second projection image, the region corresponding to the region to be filled in the first projection image can be filled with complete hair texture under the guidance of the target hairstyle region.
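Purely as a structural sketch of this alternative, the flow can be written as a function taking the reconstruction and rendering routines as injected callables; reconstruct and render are hypothetical stand-ins for a NeuS/NeRF-style pipeline, not APIs of any particular library.

```python
from typing import Any, Callable
import numpy as np

def refill_via_reconstruction(
    second_hairstyle_image: np.ndarray,
    reconstruct: Callable[[np.ndarray], Any],         # hypothetical NeuS/NeRF pipeline
    render: Callable[[Any, np.ndarray], np.ndarray],  # hypothetical renderer
    target_view: np.ndarray,                          # pose of the target viewing angle
) -> np.ndarray:
    # Update the three-dimensional hairstyle model from the second hairstyle
    # image, then re-project it at the target viewing angle.
    model = reconstruct(second_hairstyle_image)
    return render(model, target_view)  # second projection image
```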
According to an embodiment of the present disclosure, there is also provided an image processing apparatus. Fig. 5 shows a block diagram of the image processing apparatus 500 according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus 500 includes an acquisition module 510, an expansion module 520, and a population module 530.
The acquisition module 510 is configured to acquire a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image includes a hairstyle at a target viewing angle, and the first projection image is a projection image, at the target viewing angle, of a three-dimensional model corresponding to the hairstyle.
The expansion module 520 is configured to expand a first hairstyle region in the first hairstyle image to obtain a target hairstyle region in response to the area of the region to be filled in the first projection image being greater than a threshold, wherein the region to be filled is the region of a second hairstyle region in the first projection image that is not covered by the first hairstyle region.
The filling module 530 is configured to fill the region to be filled based on the target hairstyle region.
According to the embodiments of the present disclosure, by expanding the hairstyle region (i.e., the first hairstyle region) in the original hairstyle image (i.e., the first hairstyle image), the expanded hairstyle region (i.e., the target hairstyle region) can be made to completely cover the hairstyle region in the projection image of the three-dimensional hairstyle (i.e., the first projection image). Filling the texture-missing region (i.e., the region to be filled) in the projection image with the expanded hairstyle region avoids missing hair texture in the three-dimensional hairstyle, improves the integrity and fineness of the three-dimensional hairstyle texture, and thus improves the quality of the three-dimensional hairstyle.
According to some embodiments, the expansion module 520 includes: a dilation unit configured to perform a dilation operation on a first mask corresponding to the first hairstyle region to obtain a second mask, wherein the non-overlapping region of the second mask and the first mask is an expansion region, and the expansion region includes at least one extended pixel; a first determination unit configured to determine, for any extended pixel in the expansion region, a target pixel corresponding to the extended pixel in the first hairstyle region; and a second determination unit configured to determine the pixel value of the extended pixel based on the pixel value of the target pixel.
According to some embodiments, the first determination unit includes: an erosion unit configured to perform an erosion operation on the first mask to obtain a third mask, wherein the non-overlapping region of the first mask and the third mask is a sampling region, and the sampling region includes at least one sampling pixel; and a third determination unit configured to take, as the target pixel, the sampling pixel in the sampling region closest to the extended pixel.
According to some embodiments, the second determination unit is further configured to determine the pixel value of the target pixel as the pixel value of the extended pixel.
According to some embodiments, the expansion module 520 is further configured to perform at least one expansion of the first hairstyle region, wherein for each expansion of the at least one expansion: the area of the region to be filled is updated based on the expansion result obtained by that expansion; and in response to the updated area being less than or equal to the threshold, the expansion result is taken as the target hairstyle region.
According to some embodiments, the filling module 530 is further configured to determine the pixel value of a pixel in the region to be filled as the pixel value of the corresponding pixel in the target hairstyle region.
According to some embodiments, the filling module 530 includes: an updating unit configured to update the three-dimensional model based on a second hairstyle image including the target hairstyle region; and an acquisition unit configured to acquire a second projection image of the updated three-dimensional model at the target viewing angle.
It should be appreciated that the various modules and units of the apparatus 500 shown in fig. 5 may correspond to the various steps in the method 200 described with reference to fig. 2. Thus, the operations, features and advantages described above with respect to method 200 are equally applicable to apparatus 500 and the modules and units comprising the same. For brevity, certain operations, features and advantages are not described in detail herein.
Although specific functions are discussed above with reference to specific modules, it should be noted that the functions of the various modules discussed herein may be divided into multiple modules and/or at least some of the functions of the multiple modules may be combined into a single module.
It should also be appreciated that various techniques may be described herein in the general context of software, hardware elements, or program modules. The various units described above with respect to fig. 5 may be implemented in hardware or in hardware combined with software and/or firmware. For example, these units may be implemented as computer program code/instructions configured to be executed in one or more processors and stored in a computer-readable storage medium. Alternatively, these units may be implemented as hardware logic/circuitry. For example, in some embodiments, one or more of the units 510-530 may be implemented together in a System on Chip (SoC). The SoC may include an integrated circuit chip including one or more components of a processor (e.g., a Central Processing Unit (CPU), microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), memory, one or more communication interfaces, and/or other circuitry, and may optionally execute received program code and/or include embedded firmware to perform functions.
There is also provided, in accordance with an embodiment of the present disclosure, an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the image processing methods of the embodiments of the present disclosure.
According to an embodiment of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image processing method of the embodiment of the present disclosure.
According to an embodiment of the present disclosure, there is also provided a computer program product comprising computer program instructions which, when executed by a processor, implement the image processing method of the embodiments of the present disclosure.
Referring to fig. 6, a block diagram of an electronic device 600 that may serve as a server or a client of the present disclosure will now be described; it is an example of a hardware device that may be applied to aspects of the present disclosure. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 can also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606, an output unit 607, a storage unit 608, and a communication unit 609. The input unit 606 may be any type of device capable of inputting information to the electronic device 600, the input unit 606 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 607 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 608 may include, but is not limited to, magnetic disks, optical disks. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth devices, 802.11 devices, wi-Fi devices, wiMAX devices, cellular communication devices, and/or the like.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. One or more of the steps of the method 200 described above may be performed when a computer program is loaded into RAM 603 and executed by the computing unit 601. Alternatively, in other embodiments, computing unit 601 may be configured to perform method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatuses are merely illustrative embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (13)

1. An image processing method, comprising:
acquiring a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image includes a hairstyle at a target viewing angle, and the first projection image is a projection image of a three-dimensional model corresponding to the hairstyle at the target viewing angle;
expanding a first hairstyle region in the first hairstyle image to obtain a target hairstyle region in response to an area of a region to be filled in the first projection image being greater than a threshold, wherein the region to be filled is a region, in a second hairstyle region in the first projection image, that is not covered by the first hairstyle region, and the expanding the first hairstyle region in the first hairstyle image comprises:
performing a dilation operation on a first mask corresponding to the first hairstyle region to obtain a second mask, wherein a non-overlapping region of the second mask and the first mask is an extension region, and the extension region comprises at least one extended pixel;
performing an erosion operation on the first mask to obtain a third mask, wherein a non-overlapping region of the first mask and the third mask is a sampling region, and the sampling region comprises at least one sampling pixel;
for any extended pixel in the extension region, taking a sampling pixel closest to the extended pixel in the sampling region as a target pixel corresponding to the extended pixel; and
determining a pixel value of the extended pixel based on a pixel value of the target pixel; and
filling the region to be filled based on the target hairstyle region.
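For illustration only (not part of the claims): the dilation/erosion/nearest-sampling steps of claim 1 map onto standard morphology routines. The following is a minimal sketch assuming OpenCV and SciPy; the function name expand_hair_region, the 5x5 structuring element, and the 0/1 uint8 mask convention are illustrative assumptions rather than details recited in the patent.

    import cv2
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def expand_hair_region(image, first_mask, kernel_size=5):
        # `first_mask` is assumed to be a 0/1 uint8 mask of the first
        # hairstyle region, aligned with `image`.
        kernel = np.ones((kernel_size, kernel_size), np.uint8)

        # Dilation gives the second mask; the ring of pixels gained outside
        # the first mask is the extension region.
        second_mask = cv2.dilate(first_mask, kernel)
        extension = (second_mask == 1) & (first_mask == 0)

        # Erosion gives the third mask; the ring of pixels lost at the rim
        # of the first mask is the sampling region.
        third_mask = cv2.erode(first_mask, kernel)
        sampling = (first_mask == 1) & (third_mask == 0)
        if not sampling.any():
            return image, second_mask  # mask thinner than the kernel

        # The distance transform of the complement of `sampling` returns,
        # for every pixel, the indices of the closest zero (i.e. sampling)
        # pixel, which is exactly the target pixel of claim 1.
        _, idx = distance_transform_edt(~sampling, return_indices=True)
        rows, cols = idx

        # Each extended pixel takes the value of its nearest sampling pixel.
        out = image.copy()
        out[extension] = image[rows[extension], cols[extension]]
        return out, second_mask

The final copy also illustrates claim 2's case, where the extended pixel directly adopts the target pixel's value.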
2. The method of claim 1, wherein the determining the pixel value of the extended pixel based on the pixel value of the target pixel comprises:
determining the pixel value of the target pixel as the pixel value of the extended pixel.
3. The method of claim 1, wherein the expanding the first hairstyle region in the first hairstyle image to obtain a target hairstyle region comprises:
performing at least one expansion of the first hairstyle region,
wherein, for each of the at least one expansion:
updating the area of the region to be filled based on an expansion result obtained by the expansion; and
in response to the updated area being less than or equal to the threshold, taking the expansion result as the target hairstyle region.
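For illustration only (not part of the claims): claim 3's loop, sketched on top of the hypothetical expand_hair_region helper shown after claim 1. Here second_region is assumed to be a 0/1 mask of the projection image's hairstyle region, and max_rounds is merely a safety bound, not a limitation recited in the claim.

    def expand_until_covered(image, first_mask, second_region, threshold,
                             max_rounds=16):
        mask = first_mask
        for _ in range(max_rounds):
            image, mask = expand_hair_region(image, mask)
            # Update the area of the region to be filled: pixels of the
            # projection's hairstyle region not yet covered by the expansion.
            area = int(((second_region == 1) & (mask == 0)).sum())
            if area <= threshold:
                break  # this expansion result is the target hairstyle region
        return image, mask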
4. The method of claim 1, wherein the filling the region to be filled based on the target hairstyle region comprises:
determining the pixel value of a pixel in the region to be filled as the pixel value of a corresponding pixel in the target hairstyle region.
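For illustration only (not part of the claims): claim 4's fill is a per-coordinate copy. A sketch under the assumption that the projection image, the target hairstyle image, and the boolean to_fill mask share the same resolution and alignment:

    import numpy as np

    def fill_from_target(projection, target_image, to_fill):
        # Each pixel of the region to be filled takes the value at the same
        # coordinates in the target hairstyle image.
        out = projection.copy()
        out[to_fill] = target_image[to_fill]
        return out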
5. The method of claim 1, wherein the filling the region to be filled based on the target hairstyle region comprises:
updating the three-dimensional model based on a second hairstyle image including the target hairstyle region; and
acquiring a second projection image of the updated three-dimensional model at the target viewing angle.
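For illustration only (not part of the claims): claim 5 routes the fill through the three-dimensional model rather than copying pixels. The sketch below is purely schematic; model, renderer, and renderer.render stand in for whatever texturing and projection pipeline produced the first projection image, which the patent does not name.

    def refresh_projection(model, expanded_hair_image, target_view, renderer):
        # Update the model with the second hairstyle image (the one that
        # includes the target hairstyle region), then re-render it at the
        # same target viewing angle to obtain the second projection image.
        model.texture = expanded_hair_image
        return renderer.render(model, target_view)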
6. An image processing apparatus, comprising:
an acquisition module configured to acquire a first projection image corresponding to a first hairstyle image, wherein the first hairstyle image includes a hairstyle at a target viewing angle, and the first projection image is a projection image of a three-dimensional model corresponding to the hairstyle at the target viewing angle;
an expansion module configured to expand a first hairstyle region in the first hairstyle image to obtain a target hairstyle region in response to an area of a region to be filled in the first projection image being greater than a threshold, wherein the region to be filled is a region, in a second hairstyle region in the first projection image, that is not covered by the first hairstyle region, the expansion module comprising:
a dilation unit configured to perform a dilation operation on a first mask corresponding to the first hairstyle region to obtain a second mask, wherein a non-overlapping region of the second mask and the first mask is an extension region, and the extension region comprises at least one extended pixel;
an erosion unit configured to perform an erosion operation on the first mask to obtain a third mask, wherein a non-overlapping region of the first mask and the third mask is a sampling region, and the sampling region comprises at least one sampling pixel;
a third determination unit configured to, for any extended pixel in the extension region, take a sampling pixel closest to the extended pixel in the sampling region as a target pixel corresponding to the extended pixel; and
a second determination unit configured to determine a pixel value of the extended pixel based on a pixel value of the target pixel; and
a filling module configured to fill the region to be filled based on the target hairstyle region.
7. The apparatus of claim 6, wherein the second determination unit is further configured to:
determine the pixel value of the target pixel as the pixel value of the extended pixel.
8. The apparatus of claim 6, wherein the expansion module is further configured to:
perform at least one expansion of the first hairstyle region,
wherein, for each of the at least one expansion:
the area of the region to be filled is updated based on an expansion result obtained by the expansion; and
in response to the updated area being less than or equal to the threshold, the expansion result is taken as the target hairstyle region.
9. The apparatus of claim 6, wherein the filling module is further configured to:
determine the pixel value of a pixel in the region to be filled as the pixel value of a corresponding pixel in the target hairstyle region.
10. The apparatus of claim 6, wherein the filling module comprises:
an updating unit configured to update the three-dimensional model based on a second hairstyle image including the target hairstyle region; and
an acquisition unit configured to acquire a second projection image of the updated three-dimensional model at the target viewing angle.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
13. A computer program product comprising computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1-5.
CN202310325431.4A 2023-03-29 2023-03-29 Image processing method and device, electronic equipment and storage medium Active CN116385641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310325431.4A CN116385641B (en) 2023-03-29 2023-03-29 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116385641A (en) 2023-07-04
CN116385641B (en) 2024-03-19

Family

ID=86974393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310325431.4A Active CN116385641B (en) 2023-03-29 2023-03-29 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116385641B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040208385A1 (en) * 2003-04-18 2004-10-21 Medispectra, Inc. Methods and apparatus for visually enhancing images

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200828182A (en) * 2006-12-27 2008-07-01 Ind Tech Res Inst Method of utilizing multi-view images to solve occlusion problem for photorealistic model reconstruction
CN112967356A (en) * 2021-03-05 2021-06-15 北京百度网讯科技有限公司 Image filling method and device, electronic device and medium
CN112990331A (en) * 2021-03-26 2021-06-18 共达地创新技术(深圳)有限公司 Image processing method, electronic device, and storage medium
CN113206992A (en) * 2021-04-20 2021-08-03 聚好看科技股份有限公司 Method for converting projection format of panoramic video and display equipment
WO2022261828A1 (en) * 2021-06-15 2022-12-22 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN113888431A (en) * 2021-09-30 2022-01-04 Oppo广东移动通信有限公司 Training method and device of image restoration model, computer equipment and storage medium
CN114049290A (en) * 2021-11-10 2022-02-15 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN115222886A (en) * 2022-07-18 2022-10-21 北京奇艺世纪科技有限公司 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN115131260A (en) * 2022-07-22 2022-09-30 北京字跳网络技术有限公司 Image processing method, device, equipment, computer readable storage medium and product

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Femoral strength can be predicted from 2D projections using a 3D statistical deformation and texture model with finite element analysis; Lukas Steiner et al.; Medical Engineering & Physics; pp. 72-82 *
An instance segmentation scheme combining multiple image segmentation algorithms; Zhan Qiliang, Chen Shengyong, Hu Haigen, Li Xiaoxin, Zhou Qianwei; Journal of Chinese Computer Systems (No. 04); full text *
Implementation and application of a mobile-platform-based 3D virtual hairstyle try-on system; Zou Xiao, Chen Zhengming, Zhu Hongqiang, Tong Jing; Journal of Graphics (No. 02); full text *
Background inpainting technology for videos captured by cameras with complex motion; Xu Zhan, Cao Zhe; Journal of Computer Applications (No. 12); full text *
Zhang Shanwen, Zhang Chuanlei, Chi Yuhong, Guo Jing (eds.); Image Pattern Recognition; Xidian University Press, 2020, pp. 42-46 *

Also Published As

Publication number Publication date
CN116385641A (en) 2023-07-04

Similar Documents

Publication Publication Date Title
EP4033444A2 (en) Method and apparatus for enhancing image quality, device, and medium
CN115409922B (en) Three-dimensional hairstyle generation method, device, electronic equipment and storage medium
CN116051729B (en) Three-dimensional content generation method and device and electronic equipment
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN112749685B (en) Video classification method, apparatus and medium
CN112967196A (en) Image restoration method and device, electronic device and medium
CN116228867B (en) Pose determination method, pose determination device, electronic equipment and medium
CN112967355A (en) Image filling method and device, electronic device and medium
CN112967356A (en) Image filling method and device, electronic device and medium
EP3855386B1 (en) Method, apparatus, device and storage medium for transforming hairstyle and computer program product
CN117274491A (en) Training method, device, equipment and medium for three-dimensional reconstruction model
CN115661375B (en) Three-dimensional hair style generation method and device, electronic equipment and storage medium
CN116245998B (en) Rendering map generation method and device, and model training method and device
CN114119935B (en) Image processing method and device
CN116030185A (en) Three-dimensional hairline generating method and model training method
CN116385641B (en) Image processing method and device, electronic equipment and storage medium
CN115393514A (en) Training method of three-dimensional reconstruction model, three-dimensional reconstruction method, device and equipment
CN114119154A (en) Virtual makeup method and device
CN114049472A (en) Three-dimensional model adjustment method, device, electronic apparatus, and medium
CN113223128B (en) Method and apparatus for generating image
CN115423827B (en) Image processing method, image processing device, electronic equipment and storage medium
CN114120412B (en) Image processing method and device
CN115937430B (en) Method, device, equipment and medium for displaying virtual object
CN115331077B (en) Training method of feature extraction model, target classification method, device and equipment
CN116030191B (en) Method, device, equipment and medium for displaying virtual object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant