CN111462007A - Image processing method, device, equipment and computer storage medium


Info

Publication number
CN111462007A
Authority
CN
China
Prior art keywords
target
image
image block
area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010244882.1A
Other languages
Chinese (zh)
Other versions
CN111462007B (en)
Inventor
庞文杰 (Pang Wenjie)
洪智滨 (Hong Zhibin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010244882.1A
Publication of CN111462007A
Application granted
Publication of CN111462007B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, apparatus, device, and computer storage medium, relating to the field of computer technology and in particular to the field of image processing. The implementation scheme is as follows: determining a target area in a first image; determining, according to the target area, a first target image block in the first image that includes the target area; determining a second target image block in a second image according to the first target image block; and, in the target area, fusing the content of the second target image block with the content of the first target image block.

Description

Image processing method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of computer technology, and more particularly, to the field of image processing.
Background
With the development of mobile terminals, users can take pictures anytime and anywhere with portable electronic devices. Mobile photography has correspondingly evolved from an initial focus on increasing pixel counts to diversified usage modes such as beautification and image matting.
Today, most mobile terminal applications involve image acquisition and processing to some degree. As users grow more dependent on mobile applications, finding breakthroughs in image acquisition and processing is an important consideration for improving image processing and the applications built on it.
Disclosure of Invention
In order to solve at least one problem in the prior art, embodiments of the present application provide an image processing method, an apparatus, a device, and a computer storage medium.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a target region in the first image;
determining a first target image block comprising a target area in a first image according to the target area;
determining a second target image block in a second image according to the first target image block;
and in the target area, fusing the content in the second target image block with the content in the first target image block.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a target area module, configured to determine a target area in a first image;
a first target image block module, configured to determine, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module, configured to determine a second target image block in a second image according to the first target image block;
and a fusion module, configured to fuse, in the target area, the content of the second target image block with the content of the first target image block.
In a third aspect, an embodiment of the present application provides an electronic device, which includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform an image processing method provided by any one of the embodiments of the present application.
In a fourth aspect, the present application provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are configured to cause a computer to execute an image processing method provided in any one of the embodiments of the present application.
One embodiment of the above application has the following advantages or benefits: it can provide a new direction of development for terminal image capture technology. By determining the target area in the first image, then determining the first target image block, then determining a second target image block in the second image that is correlated with the first target image block, and finally fusing the content of the second target image block into the target area of the first image, elements of the second image can be migrated into the first image. This gives the user a rich set of image processing means and thus solves the technical problem of a single image capture mode.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 3 is a diagram illustrating an image processing method according to another embodiment of the present application;
FIG. 4 is a diagram illustrating an image processing method according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a facial image according to an embodiment of the present application;
FIG. 6 is a diagram illustrating an image processing method according to another embodiment of the present application;
FIG. 7 is a diagram illustrating an image processing method according to another embodiment of the present application;
FIGS. 8A and 8B are effect diagrams of a first image and a second image after processing according to an example of the present application;
FIGS. 9A and 9B are diagrams illustrating the effect of the first image and the second image after processing according to an example of the present application;
FIG. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 13 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 14 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 15 is a block diagram of an electronic device for implementing the image processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
In the image processing method provided by the embodiments of the present application, a first target image block is selected in a first image to be processed, the image block in a second image that is closest to the first target image block is found and taken as a second target image block, and the content of the second target image block is then migrated into the target area of the first target image block.
An embodiment of the present application first provides an image processing method, as shown in fig. 1, including:
step 101: a target region in the first image is determined.
In this embodiment of the application, the target area may be obtained according to preset information. For example, if the first image is a face image, the makeup areas in the face image may include an eye makeup area, a cheek makeup area, a lip makeup area, and an eyebrow makeup area, and the target area may be at least one of these makeup areas. Alternatively, the first image is a face image including facial organs such as the eyes, cheeks, lips, and eyebrows, and the target area is at least one of the facial organ areas.
In one embodiment of the present application, the first image is a face image and the target area is one of the makeup areas. Makeup areas of different extents can be selected according to the makeup type. For example, makeup types may include light makeup, heavy makeup, bright makeup, Beijing opera facial makeup, and the like. The makeup type may be determined from a user-selected parameter, or from the first image itself; for example, if bright colors occupy a large area ratio of the first image, the makeup type may be determined to be bright makeup. The makeup type may also be determined from the second image; for example, if the second image is mostly light or dark, the makeup type may be determined to be light makeup. Different makeup types correspond to different makeup extents and therefore to different target areas. For example, smoky eye makeup corresponds to a larger eye makeup extent and thus a larger target area, whereas light makeup corresponds to a smaller eye makeup extent and a smaller target area.
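To make the bright-color criterion concrete, the following is a minimal Python/OpenCV sketch of inferring a makeup type from the proportion of bright, saturated pixels. The HSV cutoffs, the threshold, and the function name are illustrative assumptions; the embodiment states only the principle that a large bright-color area ratio implies a bright makeup type.

```python
import cv2
import numpy as np

def estimate_makeup_type(image_bgr, bright_ratio_threshold=0.25):
    """Guess a makeup type from the proportion of bright, saturated pixels.

    The HSV cutoffs and the threshold are illustrative assumptions; the
    embodiment only states that a large bright-color area ratio implies
    a "bright" makeup type.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    saturation, value = hsv[..., 1], hsv[..., 2]
    bright_mask = (saturation > 128) & (value > 160)
    bright_ratio = bright_mask.mean()  # fraction of bright pixels
    return "bright" if bright_ratio > bright_ratio_threshold else "light"
```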
The target area may be acquired according to a type of the first image. For example, in one embodiment, the first image is a face image, and the target area is a makeup area in the face image. In another embodiment, the first image may be other images, such as a building image, an animal face image, a sculpture face image, and the like, and the target area is a preset area in which the content needs to be changed.
Step 102: and determining a first target image block comprising the target area in the first image according to the target area.
In the embodiment of the present application, the first target image block may coincide with the target area.
In another embodiment of the present application, the first target image block is the circumscribed rectangle of the target area. For example, if the first image is a face image, the target area may be one of an eye makeup area, an eyebrow makeup area, a cheek makeup area, and a lip makeup area, and may be an irregular figure. In this case, the first target image block may be the rectangle circumscribing the target area.
As another example, if the first image is a face image and the target area is one of an eye area, an eyebrow area, a cheek area, and a lip area, the first target image block may likewise be the rectangle circumscribing the target area.
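Where the target area is given as a binary mask, its circumscribed rectangle can be computed directly. A minimal sketch with OpenCV, under the assumption that the target area is supplied as a uint8 mask (the mask format and function name are illustrative):

```python
import cv2

def first_target_block(image, target_mask):
    """Crop the circumscribed (bounding) rectangle of an irregular target area.

    `target_mask` is assumed to be a uint8 mask the same size as `image`,
    with nonzero pixels marking the target area.
    """
    x, y, w, h = cv2.boundingRect(target_mask)  # axis-aligned circumscribed rectangle
    return image[y:y + h, x:x + w], (x, y, w, h)
```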
Step 103: and determining a second target image block in the second image according to the first target image block.
In the embodiment of the present application, the second target image block may be an image block (patch) in the second image whose relationship to the first target image block satisfies a set criterion. For example, the second target image block may be the image block in the second image whose image characteristics are closest to those of the first target image block.
Step 104: and in the target area, fusing the content in the second target image block with the content in the first target image block.
In the embodiment of the present application, fusing the content of the second target image block with the content of the first target image block may mean selectively displaying the content of the second target image block within the target area of the first target image block.
In the embodiment of the present application, fusing the two may also mean changing part or all of the image features of the first target image block using part or all of the image features of the second target image block.
Where a plurality of target areas exist in the first image, the operations of steps 101-104 above may be performed on each target area one by one.
In the embodiment of the application, a second target image block is determined in the second image according to the target area of the first image, and the content of the second target image block is then fused with the content of the first target image block in the target area, achieving the technical effect of processing and adjusting the first image according to elements of the second image. In practice, the method can be applied to scenarios in which a user's face is made up with reference to a picture, providing diversified processing modes for terminal image beautification, video beautification, and similar applications.
In one embodiment, as shown in fig. 2, step S104 may include:
step 201: determining a target mask layer corresponding to a target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
step 202: and in the target area, presenting the corresponding content of the first target image block in a first presentation proportion, and presenting the corresponding content of the second target image block in a second presentation proportion.
In the embodiment of the present application, the first presentation proportion and the second presentation proportion may apply to all pixels in the target area, or only to a part of the pixels in the target area.
In one embodiment, the sum of the first presentation proportion and the second presentation proportion is 1. For example, 30% of the content of the first target image block is presented and 70% of the content of the second target image block is presented. In this way, content migration is achieved while features such as texture in the first target image block are retained.
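A uniform fusion whose proportions sum to 1 is ordinary alpha blending. A minimal sketch under that reading (the function name and the default 30%/70% split are illustrative):

```python
import numpy as np

def fuse_uniform(first_block, second_block, first_proportion=0.3):
    """Blend two equally sized image blocks at a uniform proportion.

    Presents `first_proportion` of the first target image block and
    (1 - first_proportion) of the second, matching the embodiment in
    which the two presentation proportions sum to 1.
    """
    second_proportion = 1.0 - first_proportion
    fused = (first_proportion * first_block.astype(np.float32)
             + second_proportion * second_block.astype(np.float32))
    return fused.clip(0, 255).astype(np.uint8)
```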
In the embodiment of the present application, the plurality of preset mask layers may correspond to different types of target areas. For example, if the first image is a face image and the target area is an eye makeup area, the target mask layer is the preset mask layer corresponding to the eye makeup area.
In the embodiment of the application, presenting the content of the first target image block in the target area at the set first presentation proportion, and the content of the second target image block at the set second presentation proportion, migrates the content of the second target image block of the second image into the target area of the first image. The first image thereby takes on the content and style of the second image, opening up a new function for video and image capture on current terminals.
In one embodiment, as shown in fig. 3, step S104 may include:
step 301: determining a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion for each pixel in the target area;
step 302: for each pixel in the target area, presenting the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
In the embodiment of the present application, different pixels in the target area may correspond to different presentation proportions. Thus, when the method is applied to making up a face image, a natural transition forms between the makeup area and the non-makeup area, giving a better makeup effect.
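Per-pixel proportions amount to blending through a soft (feathered) mask layer. A minimal sketch, assuming the target mask layer is stored as a float array `alpha` in [0, 1] giving each pixel's second presentation proportion (the storage format is an assumption):

```python
import numpy as np

def fuse_per_pixel(first_block, second_block, alpha):
    """Blend two image blocks through a per-pixel proportion map.

    `alpha` has shape (H, W) or (H, W, 1); alpha[y, x] is the second
    target image block's presentation proportion at that pixel, and
    (1 - alpha[y, x]) is the first's. Soft edges in `alpha` produce the
    natural transition between makeup and non-makeup areas.
    """
    alpha = np.atleast_3d(alpha).astype(np.float32)
    fused = ((1.0 - alpha) * first_block.astype(np.float32)
             + alpha * second_block.astype(np.float32))
    return fused.clip(0, 255).astype(np.uint8)
```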
In one embodiment, as shown in fig. 4, the first image is a face image, the target area is a target makeup area, and the step S101 may include:
step 401: determining a sample makeup area in the sample face image;
step 402: and according to the feature points of the first image, affine transforming the sample makeup area to a face area in the first image to obtain a target area.
In the embodiment of the present application, a sample face is set. As shown in fig. 5, a plurality of predetermined feature points 501 exist in the sample face image, and these feature points form a plurality of triangular regions of the face. At least one sample makeup area is set on the sample face, shown as the dotted-line regions in the figure: sample eye makeup areas 502 and 503, sample lip makeup area 504, and sample cheek makeup areas 505 and 506, where the sample lip makeup area 504 coincides with the lip area. The same predetermined feature points also exist in the user's face image, so the sample makeup area of the sample image can be affine-transformed into the user's face image.
An affine transformation, also called an affine mapping, is a geometric transformation that maps one vector space into another by a non-singular linear transformation followed by a translation. In the embodiment of the present application, projecting the sample makeup area into the user's face image by affine transformation determines the makeup area in the user's face image and makes that area conform to the facial characteristics of the specific user.
Because the sample makeup area is affine-transformed into the user's face image, the target area is determined in the user's face image, and a makeup area conforming to the user's own facial characteristics is obtained from the actual face image. For example, a user with a large eye area will have a correspondingly large eye makeup area.
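As an illustration, the warp for one facial triangle can be computed from three pairs of corresponding feature points with OpenCV; applying it triangle by triangle carries the sample makeup area onto the user's face. A minimal sketch (the point format and function name are illustrative assumptions):

```python
import cv2
import numpy as np

def warp_sample_region(sample_mask, sample_tri, user_tri, out_shape):
    """Affine-transform a sample makeup area onto the user's face image.

    `sample_tri` and `user_tri` are 3x2 arrays of corresponding feature
    points (one facial triangle). Warping each triangle separately adapts
    the sample makeup area to the specific user's facial geometry.
    """
    m = cv2.getAffineTransform(np.float32(sample_tri), np.float32(user_tri))
    h, w = out_shape[:2]
    return cv2.warpAffine(sample_mask, m, (w, h))
```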
In one example, the target area may include a first target area and a second target area. In that case, step S102 may include: determining, according to the first target area, a first target image block in the first image that includes the first target area; and step S104 may include: fusing, in a second target area corresponding to the first target area, the content of the second target image block with the content of the first target image block. For example, the first image is a face image and the first target area is a facial organ area, such as the eye area. The second target area may be an area circumscribing the facial organ, such as the area around the eyes where makeup is desired.
In some cases, the second target area may be larger than the first target area; for example, the target makeup area may be larger than the corresponding facial organ area, as when the eye makeup area is larger than the eye area. In other cases, the second target area may be smaller than the first target area; for example, the target makeup area may be smaller than the corresponding facial organ area, as when the cheek makeup area is smaller than the cheek area.
In one embodiment, as shown in fig. 6, determining a second target image block in a second image according to the first target image block includes:
step 601: determining at least one candidate image block in the second image;
step 602: determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
In the embodiment of the application, the candidate image block with the highest similarity to the first target image block may be selected from the candidate image blocks as the second target image block.
In the embodiment of the present application, sliding windows of different sizes may be used to find the second target image block whose characteristics are most similar to those of the first target image block.
Specifically, the candidate image block closest to the first target image block may be determined as the second target image block according to neural patch-based similarity.
By using similarity to determine the second target image block in the second image, the migrated content of the second image is adapted to the content of the first target image block, making the fused first image more harmonious.
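A brute-force version of this search can be written as a sliding-window scan that scores every candidate image block against the first target image block; neural patch-based similarity would score patches in a CNN feature space rather than pixel space. A minimal pixel-space sketch (the stride, the normalized cross-correlation score, and the names are assumptions):

```python
import numpy as np

def find_second_target_block(second_image, first_block, stride=4):
    """Scan the second image and return the most similar candidate block.

    Similarity here is normalized cross-correlation on raw pixels; the
    embodiment's neural patch-based similarity would instead compare
    patches of VGG features.
    """
    bh, bw = first_block.shape[:2]
    f = first_block.astype(np.float32).ravel()
    f = (f - f.mean()) / (f.std() + 1e-6)
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(0, second_image.shape[0] - bh + 1, stride):
        for x in range(0, second_image.shape[1] - bw + 1, stride):
            c = second_image[y:y + bh, x:x + bw].astype(np.float32).ravel()
            c = (c - c.mean()) / (c.std() + 1e-6)
            score = float(f @ c) / f.size  # normalized cross-correlation
            if score > best_score:
                best_score, best_xy = score, (x, y)
    x, y = best_xy
    return second_image[y:y + bh, x:x + bw], best_xy
```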
In an example of the present application, as shown in fig. 7, an image processing method includes:
step 701: a second image is obtained. The second image contains makeup requirements of the user. The second image may specifically be a drawing.
Step 702: a makeup example diagram, i.e., a sample face image, is obtained. The makeup example diagram includes at least one makeup area.
Step 703: according to the makeup example diagram, the first image is split to obtain a plurality of first image blocks. Specifically, the first image may be split according to characteristics such as the length, width, and outer contour of facial features such as the eyebrows. The face parts (target areas) to be made up in the first image may first be determined from the makeup example diagram, and the first image then split according to those face parts.
Step 704: a first target image block is determined among the first image blocks.
Step 705: according to a VGG (Visual Geometry Group) network, a second target image block closest to the first target image block is found in the second image.
Step 706: a mask layer corresponding to the target area is selected.
Step 707: the content of the second target image block is fused into the target area using the mask layer, thereby transferring the makeup. Face image segmentation techniques such as facial-feature segmentation can be used to locate the makeup areas of the face, such as the eye circumference and the lips.
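Putting steps 701-707 together, the flow can be sketched end to end using the helper functions sketched earlier in this description. The composition below is an assumption for illustration, not the patent's reference implementation; it presumes `target_mask` (e.g. from warp_sample_region) and the preset `alpha_mask` layer are already registered to the first image:

```python
def transfer_makeup(first_image, second_image, target_mask, alpha_mask):
    """End-to-end sketch of the fig. 7 flow, composed from earlier sketches.

    Relies on first_target_block, find_second_target_block, and
    fuse_per_pixel as defined above.
    """
    # Steps 703-704: crop the first target image block around the target area.
    first_block, (x, y, w, h) = first_target_block(first_image, target_mask)
    # Step 705: find the closest image block in the second image.
    second_block, _ = find_second_target_block(second_image, first_block)
    # Steps 706-707: fuse through the mask layer and write the result back.
    alpha_crop = alpha_mask[y:y + h, x:x + w]
    result = first_image.copy()
    result[y:y + h, x:x + w] = fuse_per_pixel(first_block, second_block, alpha_crop)
    return result
```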
In one example of the present application, the second image may be a landscape as shown in fig. 8B. The first image may be a face image, and the eye makeup effect map is shown in fig. 8A.
In one example of the present application, the second image may be a portrait as shown in fig. 9B. The first image may be a face image, and the face makeup effect map is shown in fig. 9A.
An embodiment of the present application further provides an image processing apparatus, as shown in fig. 10, including:
target area module 1001: configured to determine a target area in a first image;
first target image block module 1002: configured to determine, according to the target area, a first target image block in the first image that includes the target area;
second target image block module 1003: configured to determine a second target image block in a second image according to the first target image block;
fusion module 1004: configured to fuse, in the target area, the content of the second target image block with the content of the first target image block.
In one embodiment, as shown in fig. 11, the fusion module 1004 includes:
first mask layer unit 1101: configured to determine a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
first presentation unit 1102: configured to present, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.
In one embodiment, as shown in fig. 12, the fusion module 1004 includes:
second mask layer unit 1201: configured to determine a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion for each pixel in the target area;
second presentation unit 1202: configured to present, for each pixel in the target area, the corresponding content of the first target image block on the pixel at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
In one embodiment, as shown in fig. 13, the first image is a face image, the target area is a target makeup area, and the target area module 1001 includes:
makeup area determination module 1301: configured to determine a sample makeup area in a sample face image according to the target area;
affine transformation module 1302: configured to affine-transform the sample makeup area to a face area in the first image according to the feature points of the first image, to obtain the target area.
In one embodiment, as shown in fig. 14, the second target image block module 1003 includes:
candidate image block unit 1501: configured to determine at least one candidate image block in the second image;
second target image block unit 1502: configured to determine the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 15 is a block diagram of an electronic device for the image processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 15, the electronic device includes: one or more processors 1601, a memory 1602, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Likewise, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 15 illustrates the example of a single processor 1601.
The memory 1602 is a non-transitory computer-readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the image processing method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the image processing method provided herein.
The memory 1602, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the image processing method in the embodiments of the present application (e.g., the target area module 1001, the first target image block module 1002, the second target image block module 1003, and the fusion module 1004 shown in fig. 10). By executing the non-transitory software programs, instructions, and modules stored in the memory 1602, the processor 1601 performs the various functional applications and data processing of the server, i.e., implements the image processing method in the above method embodiments.
The memory 1602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device for image processing, and the like. Further, the memory 1602 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1602 may optionally include memory located remotely from the processor 1601, which may be connected to image processing electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the image processing method may further include: an input device 1603 and an output device 1604. The processor 1601, the memory 1602, the input device 1603, and the output device 1604 may be connected by a bus or in other ways; connection by a bus is illustrated in fig. 15.
The input device 1603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronic device; examples include a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, track ball, and joystick. The output device 1604 may include a display device, auxiliary lighting (e.g., an LED), a tactile feedback device (e.g., a vibrating motor), and the like.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals.
To provide interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the target area in the first image is determined, then the first target image block is determined, then a second target image block in the second image that is correlated with the first target image block is determined, and finally the content of the second target image block is fused into the target area of the first image. This solves the technical problem of image capture modes needing diversification and achieves the technical effect of enriching image capture means. It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; there is no limitation here, as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. An image processing method, comprising:
determining a target region in the first image;
determining a first target image block comprising the target area in the first image according to the target area;
determining a second target image block in a second image according to the first target image block;
and in the target area, fusing the content in the second target image block with the content in the first target image block.
2. The method according to claim 1, wherein fusing the content of the second target image block with the content of the first target image block in the target area comprises:
determining a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
and in the target area, presenting the corresponding content of the first target image block at the first presentation proportion, and presenting the corresponding content of the second target image block at the second presentation proportion.
3. The method according to claim 1, wherein fusing the content of the second target image block with the content of the first target image block in the target area comprises:
determining a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion for each pixel in the target area;
and for each pixel in the target area, presenting the corresponding content of the first target image block on the pixel at the first presentation proportion corresponding to the pixel, and presenting the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
4. The method of any one of claims 1 to 3, wherein the first image is a face image, the target area is a target makeup area, and determining the target area in the first image comprises:
determining a sample makeup area in the sample face image;
and affine transforming the sample makeup area to a face area in the first image according to the feature points of the first image to obtain the target area.
5. The method of claim 1, wherein determining a second target image block in a second image according to the first target image block comprises:
determining at least one candidate image block in the second image;
and determining the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
6. An image processing apparatus, comprising:
a target area module, configured to determine a target area in a first image;
a first target image block module, configured to determine, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module, configured to determine a second target image block in a second image according to the first target image block;
and a fusion module, configured to fuse, in the target area, the content of the second target image block with the content of the first target image block.
7. The apparatus of claim 6, wherein the fusion module comprises:
a first mask layer unit, configured to determine a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
and a first presentation unit, configured to present, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.
8. The apparatus of claim 6, wherein the fusion module comprises:
a second mask layer unit, configured to determine a target mask layer corresponding to the target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion for each pixel in the target area;
and a second presentation unit, configured to present, for each pixel in the target area, the corresponding content of the first target image block on the pixel at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
9. The apparatus of any one of claims 6 to 8, wherein the first image is a face image and the target area is a target makeup area, the target area module comprising:
a makeup area determination module, configured to determine a sample makeup area in a sample face image;
and an affine transformation module, configured to affine-transform the sample makeup area to a face area in the first image according to the feature points of the first image, to obtain the target area.
10. The apparatus of claim 6, wherein the second target image block module comprises:
a candidate image block unit, configured to determine at least one candidate image block in the second image;
and a second target image block unit, configured to determine the second target image block from the candidate image blocks according to the similarity between each candidate image block and the first target image block.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202010244882.1A 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium Active CN111462007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244882.1A CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244882.1A CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN111462007A 2020-07-28
CN111462007B (en) 2023-06-09

Family

ID=71680187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244882.1A Active CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111462007B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348755A (en) * 2020-10-30 2021-02-09 咪咕文化科技有限公司 Image content restoration method, electronic device and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779218A (en) * 2007-08-10 2010-07-14 株式会社资生堂 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN102708575A (en) * 2012-05-17 2012-10-03 彭强 Daily makeup design method and system based on face feature region recognition
CN103236066A (en) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual trial make-up method based on human face feature analysis
CN103870821A (en) * 2014-04-10 2014-06-18 上海影火智能科技有限公司 Virtual make-up trial method and system
CN104899825A (en) * 2014-03-06 2015-09-09 腾讯科技(深圳)有限公司 Method and device for modeling picture figure
US20150261998A1 (en) * 2014-03-13 2015-09-17 Panasonic Intellectual Property Management Co., Ltd. Image processing apparatus and image processing method
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method
CN107123083A (en) * 2017-05-02 2017-09-01 中国科学技术大学 Face edit methods
CN108257084A (en) * 2018-02-12 2018-07-06 北京中视广信科技有限公司 A kind of automatic cosmetic method of lightweight face based on mobile terminal
US20180260983A1 * 2015-12-28 2018-09-13 Panasonic Intellectual Property Management Co., Ltd. Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
WO2018188534A1 (en) * 2017-04-14 2018-10-18 深圳市商汤科技有限公司 Face image processing method and device, and electronic device
CN110136054A (en) * 2019-05-17 2019-08-16 北京字节跳动网络技术有限公司 Image processing method and device
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template
US20200051298A1 (en) * 2016-10-14 2020-02-13 Panasonic Intellectual Property Management Co., Ltd. Virtual make-up apparatus and virtual make-up method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779218A (en) * 2007-08-10 2010-07-14 株式会社资生堂 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN102708575A (en) * 2012-05-17 2012-10-03 彭强 Daily makeup design method and system based on face feature region recognition
CN103236066A (en) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual trial make-up method based on human face feature analysis
CN104899825A (en) * 2014-03-06 2015-09-09 腾讯科技(深圳)有限公司 Method and device for modeling picture figure
US20150261998A1 (en) * 2014-03-13 2015-09-17 Panasonic Intellectual Property Management Co., Ltd. Image processing apparatus and image processing method
CN103870821A (en) * 2014-04-10 2014-06-18 上海影火智能科技有限公司 Virtual make-up trial method and system
US20180260983A1 * 2015-12-28 2018-09-13 Panasonic Intellectual Property Management Co., Ltd. Makeup simulation assistance apparatus, makeup simulation assistance method, and non-transitory computer-readable recording medium storing makeup simulation assistance program
US20200051298A1 (en) * 2016-10-14 2020-02-13 Panasonic Intellectual Property Management Co., Ltd. Virtual make-up apparatus and virtual make-up method
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method
WO2018188534A1 (en) * 2017-04-14 2018-10-18 深圳市商汤科技有限公司 Face image processing method and device, and electronic device
CN107123083A (en) * 2017-05-02 2017-09-01 中国科学技术大学 Face edit methods
CN108257084A (en) * 2018-02-12 2018-07-06 北京中视广信科技有限公司 A kind of automatic cosmetic method of lightweight face based on mobile terminal
CN110136054A (en) * 2019-05-17 2019-08-16 北京字节跳动网络技术有限公司 Image processing method and device
CN110390632A (en) * 2019-07-22 2019-10-29 北京七鑫易维信息技术有限公司 Image processing method, device, storage medium and terminal based on dressing template

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI, JIE: "Research on Real-Time Virtual Makeup and Recommendation Methods Based on Image Processing", China Master's Theses Full-Text Database, Information Science and Technology, pages 1-60 *
ZHEN, BEIBEI: "A Digital Face Makeup Technique Based on Example Pictures", China Master's Theses Full-Text Database, Information Science and Technology, pages 1-41 *
HUANG, YAN; HE, ZEWEN; ZHANG, WENSHENG: "A Multi-Path Region-Wise Fast Makeup Transfer Deep Network", Journal of Software, pages 3549-3566 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348755A (en) * 2020-10-30 2021-02-09 咪咕文化科技有限公司 Image content restoration method, electronic device and storage medium

Also Published As

Publication number Publication date
CN111462007B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN111652828B (en) Face image generation method, device, equipment and medium
US9600869B2 (en) Image editing method and system
WO2021218040A1 (en) Image processing method and apparatus
CN109688346A (en) A kind of hangover special efficacy rendering method, device, equipment and storage medium
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN111768356A (en) Face image fusion method and device, electronic equipment and storage medium
CN112672185B (en) Augmented reality-based display method, device, equipment and storage medium
JP2022036319A (en) Image rendering method, device, electronic device, computer readable storage medium, and computer program
JP7213291B2 (en) Method and apparatus for generating images
CN112053370A (en) Augmented reality-based display method, device and storage medium
US20230326110A1 (en) Method, apparatus, device and media for publishing video
CN112328345A (en) Method and device for determining theme color, electronic equipment and readable storage medium
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
CN112337091A (en) Man-machine interaction method and device and electronic equipment
CN110266926B (en) Image processing method, image processing device, mobile terminal and storage medium
CN111462007A (en) Image processing method, device, equipment and computer storage medium
JP2005327314A (en) Image display method and device
CN110858409A (en) Animation generation method and device
JP2019512141A (en) Face model editing method and apparatus
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
CN115311397A (en) Method, apparatus, device and storage medium for image rendering
CN114419253A (en) Construction and live broadcast method of cartoon face and related device
CN114066715A (en) Image style migration method and device, electronic equipment and storage medium
TWM589834U (en) Augmented Reality Integration System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant