CN115797661A - Image processing method and device, electronic device and storage medium - Google Patents

Image processing method and device, electronic device and storage medium

Info

Publication number
CN115797661A
Authority
CN
China
Prior art keywords
image
target
color
fragment
target object
Prior art date
Legal status
Pending
Application number
CN202211584685.XA
Other languages
Chinese (zh)
Inventor
刘东东
胡晓文
梁烁
孙瑞
孙昊
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211584685.XA
Publication of CN115797661A


Landscapes

  • Image Analysis (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a storage medium, relating to the technical field of artificial intelligence, in particular to augmented reality, virtual reality, computer vision, deep learning, and the like, and applicable to scenes such as the metaverse and virtual digital humans. The specific implementation scheme is as follows: detecting key points of a target object in an image to be processed; determining feature points corresponding to the key points in a reference image, where the color features of a target area in the reference image differ from those of other areas, and the target area is the area corresponding to the target object; re-coloring the image to be processed according to the key points, the feature points, and the reference image to obtain a target image; and performing color sampling in the target image according to the position of a target fragment, and determining whether the target fragment is located in the area where the target object is located according to the sampling result.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to the field of artificial intelligence technology, specifically augmented reality, virtual reality, computer vision, deep learning, and the like, and can be applied to scenes such as the metaverse and virtual digital humans.
Background
Image processing is a technique in which a computer analyzes an image to achieve a desired result. Image processing generally refers to digital image processing, where a digital image is a two-dimensional array captured by devices such as industrial cameras, video cameras, scanners, and mobile terminals; the elements of this two-dimensional array are called pixels.
With the continued advance of artificial intelligence in image processing, a computer can use AI techniques to process an image in a personalized way, for example by beautifying or repairing an object in the image, so as to obtain a processing result that meets the user's actual needs and thereby improve the user experience.
Disclosure of Invention
The disclosure provides an image processing method and apparatus, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided an image processing method including: detecting key points of a target object in an image to be processed; determining feature points corresponding to the key points in a reference image, where the reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object; re-coloring the image to be processed according to the key points, the feature points, and the reference image to obtain a target image, where the color features of the target object in the target image correspond to the color features of the target area in the reference image, and the color features of other objects correspond to the color features of the other areas in the reference image; and performing color sampling in the target image according to the position of a target fragment, and determining whether the target fragment is located in the area where the target object is located according to the sampling result, the target fragment being any fragment corresponding to the image to be processed.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: a key point detection module for detecting key points of a target object in an image to be processed; a feature point determining module for determining feature points corresponding to the key points in a reference image, where the reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object; an image coloring module for re-coloring the image to be processed according to the key points, the feature points, and the reference image to obtain a target image, where the color features of the target object in the target image correspond to the color features of the target area in the reference image, and the color features of other objects correspond to the color features of the other areas in the reference image; and a region determining module for performing color sampling in the target image according to the position of a target fragment and determining whether the target fragment is in the region where the target object is located according to the sampling result, the target fragment being any fragment corresponding to the image to be processed.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the image processing method described above.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the image processing method described above.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a reference image of an embodiment of the present disclosure;
FIG. 3 is a scene diagram of an image processing method in which embodiments of the present disclosure may be implemented;
FIG. 4 is a schematic diagram of step S104 according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of step S104 according to another embodiment of the present disclosure;
FIG. 6 is a block diagram of the structure of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of an electronic device for implementing an image processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present disclosure, there is provided an image processing method. It is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that described here.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method includes the following steps S101 to S104:
Step S101, detecting key points of the target object in the image to be processed.
The target object may be a human body or an object, or may be a part of a human body or a part of an object, such as a face of a human body, eyebrows in a face, or tires of an automobile.
In a specific implementation of step S101, a target detection algorithm may be used to detect key points of the target object. In a specific example, the target object is a human face, and the key points of the human face in the image to be processed are detected by using a human face detection algorithm, where the key points may include points in a face contour region, and may also include points in an eyebrow, an eye, a nose, and a mouth region. The face detection algorithm may also be referred to as a face recognition algorithm, and may be implemented specifically by a template matching-based method, a singular value feature-based method, a subspace analysis method, a local preserving projection method, a principal component analysis method, or the like.
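By way of illustration only (the following sketch is not part of the original disclosure), step S101 could be implemented with an off-the-shelf face-landmark detector. The example below assumes MediaPipe FaceMesh and OpenCV; the input file name is hypothetical.

```python
# A minimal sketch of step S101, assuming MediaPipe FaceMesh as the
# face-landmark detector; any key point detector would serve equally well.
import cv2
import mediapipe as mp

def detect_face_keypoints(image_bgr):
    """Return a list of (x, y) pixel coordinates of face key points."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return []  # no face found in the image to be processed
    # MediaPipe returns normalized coordinates; convert them to pixels.
    return [(lm.x * w, lm.y * h)
            for lm in result.multi_face_landmarks[0].landmark]

keypoints = detect_face_keypoints(cv2.imread("to_be_processed.png"))
```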
Step S102, determining feature points corresponding to the key points in the reference image. The reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object. The other areas in the reference image are the areas other than the target area.
In a specific implementation, the color feature may be a color, that is, an RGB value, a luminance value converted from an RGB value, a gray value converted from an RGB value, or the like.
In a specific example, the target object is a human face and the reference image is an image corresponding to the human face. As shown in fig. 2, the color of the target area in the reference image, i.e., the area corresponding to the human face, is white, RGB = (255, 255, 255), and the color of the other areas is black, RGB = (0, 0, 0).
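As a minimal sketch (an assumption for illustration; the disclosure does not prescribe how the reference image is produced), a fig. 2-style reference image could be built by filling a face-contour polygon with white on a black canvas, assuming OpenCV:

```python
# A sketch of building a fig. 2-style reference image: the target area
# (the face) is filled white, RGB=(255,255,255); all other areas stay
# black, RGB=(0,0,0). `face_contour` is a hypothetical Nx2 polygon.
import numpy as np
import cv2

def make_reference_image(size, face_contour):
    """size: (height, width); face_contour: Nx2 array of (x, y) points."""
    ref = np.zeros((*size, 3), dtype=np.uint8)  # black background
    cv2.fillPoly(ref, [np.asarray(face_contour, dtype=np.int32)],
                 color=(255, 255, 255))         # white face region
    return ref
```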
It should be noted that the target region in the reference image may have a single color feature or multiple color features. In a specific example, the area corresponding to the human face in the reference image, i.e., the target area, has two colors, white and yellow: the eyebrow, eye, nose, and mouth regions are white, and the face contour region is yellow.
The location of each key point on the target object corresponds to the location of a feature point on the reference image. In a specific implementation of step S102, the feature points in the reference image may be detected using the same target detection algorithm as in step S101.
Step S103, re-coloring the image to be processed according to the key points, the feature points, and the reference image to obtain a target image. The color features of the target object in the target image correspond to the color features of the target region in the reference image, and the color features of other objects correspond to the color features of the other regions in the reference image. Other objects in the target image are the objects in the target image other than the target object.
The target image is obtained by re-coloring the image to be processed. Specifically, the region corresponding to the key points in the image to be processed, i.e., the region where the target object is located, may be colored according to the color features of the region corresponding to the feature points, i.e., the target region in the reference image, and the region outside the target object in the image to be processed may be colored according to the color features of the other regions outside the target region in the reference image.
Step S104, performing color sampling in the target image according to the position of the target fragment, and determining whether the target fragment is located in the area where the target object is located according to the sampling result.
The target fragment is any fragment corresponding to the image to be processed; the position of the target fragment refers to its position in the image to be processed, and color sampling is performed at the same position in the target image to obtain the sampling result. In a specific implementation, the sampling result may be an RGB value.
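A CPU-side sketch of this sampling step (for illustration; in practice the sampling runs in the fragment shader), assuming the target image is an H×W×3 NumPy array and the fragment position is given in normalized (u, v) coordinates:

```python
# Sampling the target image at the target fragment's normalized position.
import numpy as np

def sample_target_image(target_img, u, v):
    """target_img: HxWx3 uint8 array; (u, v) in [0, 1]. Returns an RGB value."""
    h, w = target_img.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return target_img[y, x]  # the sampling result, e.g. array([255, 255, 255])
```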
In some application scenarios, when the image to be processed is handled in a fragment shader, it is necessary to determine whether the target fragment lies in the region where the target object is located, and then process the target fragment accordingly. In the embodiments of the present disclosure, the target object and other objects in the image to be processed are first recolored according to the color features of the target region and the other regions in the reference image to obtain the target image; color sampling is then performed in the target image at the target fragment's position in the image to be processed; finally, the region in which the target fragment lies in the image to be processed is determined from the sampling result.
In an alternative embodiment of step S103, the image to be processed may be recolored using a vertex shader and a fragment shader to obtain the target image. Specifically, as shown in fig. 3, the coordinates of the key points and the coordinates of the feature points are input into the vertex shader, and the reference image together with the coordinates of the feature points is input into the fragment shader, so as to obtain the target image.
In the rendering pipeline of a GPU (Graphics Processing Unit), the vertex shader controls the transformation of vertex coordinates. The stage after the vertex shader is primitive assembly, which combines the vertices output by the vertex shader into primitives; a primitive is a geometric object such as a triangle, a line, or a point. The stage after primitive assembly is rasterization, which converts each primitive into a set of two-dimensional fragments. The fragment shader computes the color of each fragment generated by the rasterization stage.
In this embodiment, the coordinates of the key points and the coordinates of the feature points are input into the vertex shader. In the rendering pipeline of the GPU, the vertex shader generates two-dimensional fragments based on triangle primitives built from the key-point coordinates, and passes the feature-point coordinates on to the fragment shader. The reference image is also input into the fragment shader, where each fragment corresponds to a feature point: color sampling is performed at each feature point's position on the reference image, and the corresponding fragment is colored with the color feature sampled there; likewise, color sampling is performed at non-feature-point positions on the reference image, and the corresponding fragments are colored with the color features sampled at those positions. The target image is thereby obtained. Non-feature points are the pixels of the reference image other than the feature points.
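As a rough CPU-side analogue of this GPU pass (a sketch under stated assumptions, not the disclosed implementation): assuming scikit-image, a piecewise-affine warp estimated from the key point/feature point correspondences mimics the triangle-based interpolation between the vertex and fragment shaders; with the black-background reference of the earlier example, pixels outside the key-point mesh simply fall back to black fill.

```python
# CPU-side sketch of the recoloring pass, assuming scikit-image.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def recolor(image_shape, keypoints, feature_points, reference_img):
    """keypoints / feature_points: Nx2 (x, y) arrays in pixel coordinates."""
    tform = PiecewiseAffineTransform()
    # warp() maps output coordinates to input coordinates, so estimate the
    # transform from key points (target image) to feature points (reference).
    tform.estimate(np.asarray(keypoints), np.asarray(feature_points))
    # Pixels outside the triangulated key-point mesh get warp's constant
    # fill (0, i.e. black), matching the black "other areas" of the example.
    target = warp(reference_img, tform, output_shape=image_shape[:2])
    return (target * 255).astype(np.uint8)  # the target image
```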
In some examples, other equivalent terms are used: a fragment is sometimes called a pixel fragment, and the fragment shader is sometimes called a pixel shader.
In an optional embodiment of step S103, the color feature of the target object in the target image is the same as the color feature of the target area in the reference image, and the color features of other objects are the same as the color features of the other areas in the reference image.
Specifically, in an example where the image to be processed is recolored with a vertex shader and a fragment shader, the fragment shader samples the color at a feature point's position on the reference image and outputs the sampled color feature directly to color the corresponding fragment of the image to be processed, so that the color feature of the target object in the resulting target image is the same as that of the target area in the reference image. Likewise, it samples the color at a non-feature-point position on the reference image and outputs the sampled color feature directly to color the corresponding fragment, so that the color features of other objects in the target image are the same as those of the other areas in the reference image.
In this embodiment, the target object in the image to be processed is colored directly with the color features of the feature points on the reference image, i.e., the color features of the target area, and other objects are colored directly with the color features of the non-feature points, i.e., the color features of the other areas, which improves the efficiency of re-coloring the image to be processed.
In another optional embodiment of step S103, the color feature of the target object in the target image is different from the color feature of the target area in the reference image, and the color features of other objects are also different from the color features of the other areas in the reference image.
In a specific implementation, a first color feature corresponding to the color feature of the target region in the reference image and a second color feature corresponding to the color feature of the other region in the reference image may be set according to an actual requirement, and the target object in the image to be processed is rendered according to the first color feature and the other object in the image to be processed is rendered according to the second color feature to obtain the target image. In a specific example, the color of the target area in the reference image is white, and the color of the other areas is black. A first color corresponding to the color of the target area in the reference image may be set to yellow, and the target object in the image to be processed may be colored to yellow. A second color corresponding to the color of the other region in the reference image may be set to green, and the other objects in the image to be processed may be colored to green.
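A minimal sketch of this remapping, assuming the white/black reference colors and the yellow/green output colors of the example above:

```python
# Remapping sampled reference colors to freely chosen output colors.
import numpy as np

FIRST_COLOR = np.array([255, 255, 0], dtype=np.uint8)  # yellow: target object
SECOND_COLOR = np.array([0, 255, 0], dtype=np.uint8)   # green: other objects

def remap_colors(sampled):
    """sampled: HxWx3 uint8 array of colors sampled from the reference image."""
    in_target = (sampled == 255).all(axis=-1)  # white pixels = target area
    return np.where(in_target[..., None], FIRST_COLOR, SECOND_COLOR).astype(np.uint8)
```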
In this embodiment, the target object in the image to be processed is rendered according to the color feature corresponding to the color feature of the target region in the reference image, and the other objects in the image to be processed are rendered according to the color feature corresponding to the color feature of the other regions in the reference image, so that the flexibility of re-rendering the image to be processed can be improved without being limited to the color feature in the reference image.
In an alternative embodiment, as shown in fig. 4, step S104 specifically includes the following steps S401 to S403:
step S401, determining whether the color feature corresponding to the sampling result matches with the color feature of the target object in the target image, if yes, performing step S402, and if no, performing step S403.
Step S402, determining that the target fragment is located in the area of the target object.
Step S403, determining that the target fragment is not located in the region where the target object is located, that is, determining that the target fragment is located in the region where the other objects are located in the target image.
In particular implementations, the R, G, and/or B values in the sampling result may be matched against the R, G, and/or B values of the target object in the target image. In a specific example, the color feature corresponding to the sampling result is considered to match the color feature of the target object only if the R, G, and B values in the sampling result all match those of the target object in the target image. In another specific example, a match on the R value alone is sufficient: if the color of the target object in the target image is yellow, RGB = (255, 255, 0), and the color of the other objects is black, RGB = (0, 0, 0), then a sampling result with an R value of 255 matches the R value of the target object, and its color feature is considered to match the color feature of the target object in the target image.
In this embodiment, the color feature corresponding to the sampling result is matched against the color feature of the target object in the target image, and the region in which the target fragment lies in the image to be processed is determined from the matching result.
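Following the yellow-on-black example above, the region test of steps S401 to S403 could be as simple as the sketch below (one possible matching rule, assumed for illustration):

```python
# Region test: with a yellow target object, RGB=(255, 255, 0), on a black
# background, matching the R channel alone is sufficient.
def in_target_region(sample_rgb):
    """sample_rgb: the (R, G, B) sampling result for the target fragment."""
    return sample_rgb[0] == 255  # R value matches the target object's color
```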
In another alternative embodiment, as shown in fig. 5, step S104 specifically includes the following steps S501 to S503:
step S501, determining whether the color feature corresponding to the sampling result matches with the color feature of another object in the target image, if not, executing step S502, and if so, executing step S503.
Step S502, determining that the target fragment is located in the area of the target object.
Step S503, determining that the target fragment is not located in the region where the target object is located, that is, determining that the target fragment is located in the region where the other object is located in the target image.
In particular implementations, the R, G, and/or B values in the sampling result may be matched against the R, G, and/or B values of other objects in the target image. In a specific example, the color feature corresponding to the sampling result is considered to match the color feature of the other objects only if the R, G, and B values in the sampling result all match those of the other objects in the target image. In another specific example, a match on the R value alone is sufficient.
In this embodiment, the color feature corresponding to the sampling result is matched against the color features of other objects in the target image, and the region in which the target fragment lies in the image to be processed is determined from the matching result.
In an optional embodiment, the image processing method further includes: performing different operations on fragments located in the region where the target object is located in the image to be processed and on fragments located in the regions where other objects are located. The operations may include enlarging, reducing, rotating, image enhancement, image restoration, coloring, and the like.
In this embodiment, different operations are performed on the fragment according to whether the region in which the fragment is located in the image to be processed is the region in which the target object is located or the regions in which other objects are located, so that different image processing can be performed on the target object and other objects in the image to be processed, and thus an image meeting the user requirements is obtained.
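For illustration, a sketch of such region-dependent processing, assuming OpenCV; the concrete operations (brightening the target object's region, blurring the rest) are placeholders chosen for this example rather than operations mandated by the method:

```python
# Apply different operations to fragments inside and outside the region
# where the target object is located, assuming OpenCV.
import cv2
import numpy as np

def process_by_region(image, target_mask):
    """image: HxWx3 uint8 array; target_mask: HxW bool array (True = target)."""
    brightened = cv2.convertScaleAbs(image, alpha=1.0, beta=40)  # lighten target
    blurred = cv2.GaussianBlur(image, (15, 15), 0)               # soften the rest
    return np.where(target_mask[..., None], brightened, blurred)
```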
It should be noted that, when the target object in this embodiment is a human body or a part of a human body, the reference image is not an image of any specific user, cannot reflect any specific user's personal information, and is derived from a public data set.
According to an embodiment of the present disclosure, an image processing apparatus is further provided. Fig. 6 is a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure; the apparatus includes a key point detection module 601, a feature point determining module 602, an image coloring module 603, and a region determining module 604. The key point detection module 601 is used for detecting key points of a target object in an image to be processed. The feature point determining module 602 is used for determining feature points corresponding to the key points in the reference image; the reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object. The image coloring module 603 is used for re-coloring the image to be processed according to the key points, the feature points, and the reference image to obtain a target image; the color features of the target object in the target image correspond to the color features of the target area in the reference image, and the color features of other objects correspond to the color features of the other areas. The region determining module 604 is used for performing color sampling in the target image according to the position of a target fragment and determining whether the target fragment is in the region where the target object is located according to the sampling result; the target fragment is any fragment corresponding to the image to be processed.
It should be noted that the key point detection module 601, the feature point determining module 602, the image coloring module 603, and the region determining module 604 correspond to steps S101 to S104 of the above embodiment; the four modules share the same examples and application scenarios as the corresponding steps, but are not limited to the disclosure of that embodiment.
In an optional embodiment, the image coloring module is specifically configured to input the coordinates of the key points and the coordinates of the feature points into a vertex shader, and to input the reference image and the coordinates of the feature points into a fragment shader, so as to obtain the target image.
In an alternative embodiment, the color characteristics of the target object in the target image are the same as the color characteristics of the target area in the reference image, and the color characteristics of other objects are the same as the color characteristics of the other areas in the reference image.
In an optional embodiment, the region determining module is specifically configured to determine that the target fragment is located in the region where the target object is located when the color feature corresponding to the sampling result matches the color feature of the target object in the target image.
In an optional embodiment, the region determining module is specifically configured to determine that the target fragment is located in the region where the target object is located when the color feature corresponding to the sampling result does not match the color features of other objects in the target image.
In an optional embodiment, the apparatus further includes a fragment operation module, configured to perform different operations on a fragment in the region where the target object is located in the to-be-processed image and a fragment in the region where the other object is located.
The above-described embodiments of the apparatus are merely illustrative; the modules or units described as separate components may or may not be physically separate, and may be located in one place or distributed across a plurality of network units. Some or all of the modules or units may be selected according to actual needs to achieve the purpose of the solution of the present disclosure.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved all comply with the relevant laws and regulations and do not violate public order or good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 701 performs the methods and processes described above, such as the image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. An image processing method comprising:
detecting key points of a target object in an image to be processed;
determining feature points corresponding to the key points in a reference image; the reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object;
re-coloring the image to be processed according to the key points, the feature points and the reference image to obtain a target image; wherein the color features of the target object in the target image correspond to the color features of the target region in the reference image, and the color features of the other objects correspond to the color features of the other regions in the reference image;
performing color sampling in the target image according to the position of the target fragment, and determining whether the target fragment is located in the area where the target object is located according to the sampling result; and the target fragment is any fragment corresponding to the image to be processed.
2. The image processing method according to claim 1, wherein the re-coloring the image to be processed according to the key points, the feature points and the reference image to obtain a target image comprises: inputting the coordinates of the key points and the coordinates of the feature points into a vertex shader, and inputting the reference image and the coordinates of the feature points into a fragment shader, so as to obtain the target image.
3. The image processing method according to claim 1, wherein the color feature of the target object in the target image is the same as the color feature of the target region in the reference image, and the color feature of the other object is the same as the color feature of the other region in the reference image.
4. The image processing method according to claim 1, wherein the determining whether the target fragment is located in an area where the target object is located according to the sampling result comprises:
and if the color feature corresponding to the sampling result is matched with the color feature of the target object in the target image, determining that the target fragment is located in the area where the target object is located.
5. The image processing method according to claim 1, wherein the determining whether the target fragment is located in a region where the target object is located according to the sampling result comprises:
and if the color features corresponding to the sampling result are not matched with the color features of other objects in the target image, determining that the target fragment is located in the area where the target object is located.
6. The image processing method according to any one of claims 1 to 5, further comprising:
and executing different operations on the fragment located in the region where the target object is located in the image to be processed and the fragment located in the region where the other objects are located.
7. An image processing apparatus comprising:
the key point detection module is used for detecting key points of a target object in an image to be processed;
the feature point determining module is used for determining the feature points corresponding to the key points in the reference image; the reference image is an image corresponding to the target object, the color features of a target area in the reference image are different from those of other areas, and the target area is the area corresponding to the target object;
the image coloring module is used for re-coloring the image to be processed according to the key points, the feature points and the reference image to obtain a target image; wherein the color features of the target object in the target image correspond to the color features of the target region in the reference image, and the color features of the other objects correspond to the color features of the other regions in the reference image;
the region determining module is used for carrying out color sampling in the target image according to the position of the target fragment and determining whether the target fragment is in the region of the target object according to a sampling result; and the target fragment is any fragment corresponding to the image to be processed.
8. The image processing apparatus according to claim 7, wherein the image coloring module is specifically configured to input the coordinates of the key points and the coordinates of the feature points into a vertex shader, and to input the reference image and the coordinates of the feature points into a fragment shader, so as to obtain the target image.
9. The image processing apparatus according to claim 7, wherein the color feature of the target object in the target image is the same as the color feature of the target region in the reference image, and the color feature of the other object is the same as the color feature of the other region in the reference image.
10. The image processing apparatus according to claim 7, wherein the region determining module is specifically configured to determine that the target fragment is located in the region where the target object is located when a color feature corresponding to the sampling result matches a color feature of the target object in the target image.
11. The image processing apparatus according to claim 7, wherein the region determining module is specifically configured to determine that the target fragment is located in the region where the target object is located when the color feature corresponding to the sampling result does not match the color feature of another object in the target image.
12. The image processing apparatus according to any one of claims 7 to 11, further comprising:
and the fragment operation module is used for executing different operations on the fragments located in the region where the target object is located and the fragments located in the regions where the other objects are located in the image to be processed.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the image processing method according to any one of claims 1 to 6.
15. A computer program product comprising a computer program which, when executed by a processor, implements an image processing method according to any one of claims 1-6.
CN202211584685.XA 2022-12-09 2022-12-09 Image processing method and device, electronic device and storage medium Pending CN115797661A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211584685.XA CN115797661A (en) 2022-12-09 2022-12-09 Image processing method and device, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN115797661A 2023-03-14

Family

ID=85418630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211584685.XA Pending CN115797661A (en) 2022-12-09 2022-12-09 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN115797661A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116824028A (en) * 2023-08-30 2023-09-29 腾讯科技(深圳)有限公司 Image coloring method, apparatus, electronic device, storage medium, and program product
CN116824028B (en) * 2023-08-30 2023-11-17 腾讯科技(深圳)有限公司 Image coloring method, apparatus, electronic device, storage medium, and program product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination