CN111462007B - Image processing method, device, equipment and computer storage medium


Info

Publication number
CN111462007B
CN111462007B (application CN202010244882.1A)
Authority
CN
China
Prior art keywords
target
image
image block
area
determining
Prior art date
Legal status
Active
Application number
CN202010244882.1A
Other languages
Chinese (zh)
Other versions
CN111462007A (en)
Inventor
庞文杰
洪智滨
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010244882.1A
Publication of CN111462007A
Application granted
Publication of CN111462007B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, apparatus, device, and computer storage medium, relating to the field of computer technology and in particular to the field of image processing. The specific implementation scheme is as follows: determine a target area in a first image; determine, according to the target area, a first target image block in the first image that includes the target area; determine a second target image block in a second image according to the first target image block; and fuse, in the target area, the content of the second target image block with the content of the first target image block.

Description

Image processing method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of computer technology, and in particular, to the field of image processing.
Background
With the development of mobile terminals, users can take photos with portable electronic devices anytime and anywhere. The focus of mobile photography has also shifted from the initial race for more pixels toward diversified usage modes such as image editing and image matting.
Today, most mobile applications involve image acquisition and processing to some degree. As users come to depend more heavily on mobile applications, finding breakthroughs in image acquisition and processing has become an important question for image processing, and for optimizing and improving applications in general.
Disclosure of Invention
In order to solve at least one problem in the prior art, embodiments of the present application provide an image processing method, apparatus, device, and computer storage medium.
In a first aspect, an embodiment of the present application provides an image processing method, including:
determining a target area in the first image;
determining a first target image block comprising a target area in the first image according to the target area;
determining a second target image block in the second image according to the first target image block;
and fusing the content in the second target image block with the content in the first target image block in the target area.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a target area module: for determining a target area in the first image;
a first target image block module: for determining, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module: for determining a second target image block in a second image according to the first target image block;
and a fusion module: for fusing, in the target area, the content in the second target image block with the content in the first target image block.
In a third aspect, embodiments of the present application provide an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing methods provided in any one of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image processing method provided in any one of the embodiments of the present application.
One embodiment of the above application has the following advantages or benefits: it can provide a new direction of development for terminal image capture technology. According to the embodiments of the present application, a target area in a first image is determined, a first target image block is determined, a second target image block in a second image that has a certain correlation with the first target image block is determined, and finally the content of the second target image block is fused into the target area of the first image. Elements of the second image can thus be migrated into the first image, providing the user with rich image processing means and solving the technical problem of a single image-capture mode.
Other effects of the above alternatives will be described below in connection with specific embodiments.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a schematic diagram of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 5 is a schematic illustration of a facial image according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 7 is a schematic diagram of an image processing method according to another embodiment of the present application;
FIG. 8A and FIG. 8B are effect diagrams of a first image and a second image after processing, in one example of the present application;
FIG. 9A and FIG. 9B are effect diagrams of a first image and a second image after processing, in another example of the present application;
FIG. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 12 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 13 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 14 is a schematic diagram of an image processing apparatus according to another embodiment of the present application;
FIG. 15 is a block diagram of an electronic device for implementing the image processing method of the embodiments of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
According to the image processing method of the embodiments of the present application, a first target image block is selected from a first image to be processed, the image block closest to the first target image block is then searched for in a second image and taken as a second target image block, and the content of the second target image block is then migrated into the target area of the first target image block.
The embodiment of the application first provides an image processing method, as shown in fig. 1, including:
step 101: a target region in the first image is determined.
In the embodiments of the present application, the target area may be obtained according to preset information. For example, if the first image is a facial image, the makeup areas in the facial image may include an eye makeup area, a cheek makeup area, a lip makeup area, and an eyebrow makeup area, and the target area may be at least one of these makeup areas. Alternatively, the first image is a facial image including facial organs such as eyes, cheeks, lips, and eyebrows, and the target area is at least one of the facial organ areas.
In one embodiment of the present application, the first image is a facial image and the target area is one of the makeup areas. Makeup areas of different extents can be selected according to the makeup type. For example, the makeup type may be light makeup, heavy makeup, bright makeup, Beijing opera facial makeup, and so on. The makeup type may be determined from a user-selected parameter, or it may be determined from the first image. For example, if vivid colors occupy a large area of the first image, the makeup type can be determined to be bright. The makeup type may also be determined from the second image. For example, if the second image is mostly light or dark in tone, the makeup type may be determined to be light makeup. Different makeup types correspond to different makeup ranges and hence different target areas. For example, smoky eye makeup covers a larger area around the eyes, so the target area is larger; light makeup covers a smaller area around the eyes, so the corresponding target area is smaller.
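As an illustration of the color-area heuristic above, the following minimal Python/OpenCV sketch classifies the makeup type from the share of vividly colored pixels. The HSV saturation criterion, both thresholds, and the two-way light/bright split are assumptions added for illustration, not a configuration prescribed by this embodiment.

    import cv2
    import numpy as np

    def infer_makeup_type(image_bgr, saturation_thresh=120, area_ratio_thresh=0.25):
        # Measure the share of vividly colored pixels via HSV saturation (assumed criterion).
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        saturation = hsv[:, :, 1]
        vivid_ratio = np.count_nonzero(saturation > saturation_thresh) / saturation.size
        # A large vividly colored area suggests a bright makeup type.
        return "bright" if vivid_ratio > area_ratio_thresh else "light"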
The target area may also be obtained according to the type of the first image. For example, in one embodiment, the first image is a facial image and the target area is a makeup area in the facial image. In another embodiment, the first image may be another kind of image, such as a building image, an animal face image, or a sculpture face image, and the target area is a preset area whose content needs to be changed.
Step 102: and determining a first target image block comprising the target area in the first image according to the target area.
In an embodiment of the present application, the first target image block may coincide with the target area.
In another embodiment of the present application, the first target image block is the circumscribed rectangle of the target area. For example, the first image is a facial image, the target area may be one of an eye makeup area, an eyebrow makeup area, a cheek makeup area, and a lip makeup area, and the target area may be an irregular shape. In this case, the first target image block may be the rectangle circumscribing the target area.
For another example, the first image is a facial image, the target area may be one of an eye area, an eyebrow area, a cheek area, and a lip area, and the first target image block may be the rectangle circumscribing the target area.
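A minimal sketch of this step in Python/OpenCV: given a binary mask of the (possibly irregular) target area, the circumscribed rectangle is the bounding rectangle of the mask's nonzero pixels. The mask representation and the helper name are illustrative assumptions.

    import cv2

    def first_target_block(image, region_mask):
        # Tightest axis-aligned rectangle around the nonzero pixels of the target area mask.
        x, y, w, h = cv2.boundingRect(region_mask)
        # The first target image block is the image content inside that rectangle.
        return image[y:y + h, x:x + w], (x, y, w, h)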
Step 103: and determining a second target image block in the second image according to the first target image block.
In the embodiments of the present application, the second target image block may be an image block (patch) in the second image whose relationship to the first target image block satisfies a set condition. For example, the second target image block may be the image block in the second image whose image characteristics are closest to those of the first target image block.
Step 104: and fusing the content in the second target image block with the content in the first target image block in the target area.
In the embodiments of the present application, fusing the content of the second target image block with the content of the first target image block may mean selectively displaying the content of the second target image block in the target area of the first target image block.
It may also mean changing some or all of the image features of the first target image block using some or all of the image features of the second target image block.
Where the first image contains multiple target areas, the operations of steps 101-104 above may be performed on each target area in turn.
In the embodiments of the present application, the second target image block can be determined in the second image according to the target area in the first image, and the content of the second target image block is then fused, in the target area, with the content of the first target image block, so that the first image can be processed and adjusted according to elements of the second image. In practice, the method can be applied to scenarios such as applying makeup to a user's face by reference to a picture, providing diversified processing modes for applications such as terminal image beautification or video beautification.
In one embodiment, as shown in fig. 2, step 104 may include:
step 201: determining a target mask layer corresponding to the target region from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
step 202: and in the target area, the corresponding content of the first target image block is presented in a first presentation proportion, and the corresponding content of the second target image block is presented in a second presentation proportion.
In the embodiments of the present application, the first presentation proportion and the second presentation proportion may apply to all pixels in the target area, or to only some pixels in the target area.
In one embodiment, the sum of the first presentation proportion and the second presentation proportion is 1. For example, 30% of the first target image block's content is presented and 70% of the second target image block's content is presented. In this way, the purpose of content migration is achieved while characteristics such as the texture of the first target image block are retained.
In this embodiment of the present application, the plurality of preset mask layers may correspond to different target region types. For example, the first image is a facial image, and if the target region is an eye makeup region, the mask layer is a mask layer corresponding to the eye makeup region.
In the embodiments of the present application, in the target area, the content of the first target image block is presented at the set first presentation proportion and the content of the second target image block is presented at the set second presentation proportion, so that the content of the second target image block of the second image is migrated into the target area of the first image. The first image thus takes on the content and style of the second image, opening up a new function for video and image capture on current terminals.
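A sketch of this fusion with fixed, region-level presentation proportions, using the 30%/70% split from the example above. It assumes the two blocks are already aligned to the same size; the resizing or warping needed to achieve that is omitted.

    import numpy as np

    def fuse_region(first_block, second_block, target_mask, p_first=0.3, p_second=0.7):
        assert abs(p_first + p_second - 1.0) < 1e-6  # the two proportions sum to 1
        f = first_block.astype(np.float32)
        s = second_block.astype(np.float32)
        out = f.copy()
        # Blend only inside the target area; pixels outside keep the first block's content.
        out[target_mask > 0] = (p_first * f + p_second * s)[target_mask > 0]
        return out.astype(np.uint8)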
In one embodiment, as shown in fig. 3, step 104 may include:
step 301: determining a target mask layer corresponding to a target area from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion of each pixel in the target area;
step 302: for each pixel in the target area, the corresponding content of the first target image block is presented on the pixel at a first presentation scale corresponding to the pixel, and the corresponding content of the second target image block is presented at a second presentation scale corresponding to the pixel.
In the embodiments of the present application, different pixels in the target area may correspond to different presentation proportions. Thus, when the method is applied to applying makeup to a facial image, there is a natural transition between the made-up area and the bare area, giving a better makeup effect.
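A sketch of the per-pixel variant: here the target mask layer is represented as a single-channel float array holding, for every pixel, the second presentation proportion in [0, 1], with the first proportion as its complement; this representation is an illustrative assumption. A mask whose values decay toward the region border produces the natural transition described above.

    import numpy as np

    def fuse_per_pixel(first_block, second_block, alpha_mask):
        a = alpha_mask[..., None].astype(np.float32)  # H x W x 1, broadcasts over color channels
        f = first_block.astype(np.float32)
        s = second_block.astype(np.float32)
        # Per-pixel presentation: (1 - a) of the first block plus a of the second block.
        return ((1.0 - a) * f + a * s).astype(np.uint8)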
In one embodiment, as shown in fig. 4, the first image is a facial image, the target area is a target make-up area, and step S101 may include:
step 401: determining a sample make-up area in the sample facial image;
step 402: the sample makeup area is affine transformed to a face area in the first image according to the feature points of the first image to obtain a target area.
In the embodiments of the present application, a sample face is set. As shown in fig. 5, the image of the sample face contains a plurality of predetermined feature points 501; these feature points 501 form a plurality of triangular areas of the face, and at least one sample makeup area, i.e., the dashed areas in the figure, is set on the sample face: for example, sample eye makeup areas 502 and 503, sample lip makeup area 504, and sample cheek makeup areas 505 and 506. The sample lip makeup area 504 coincides with the lip area. The same predetermined feature points are also present in the user's facial image, so the sample makeup area of the sample image can be affine transformed into the user's facial image.
An affine transformation, also called an affine mapping, maps one vector space to another by a linear transformation followed by a translation; geometrically, it consists of a non-singular linear transformation composed with a translation. In the embodiments of the present application, projecting the sample makeup area into the user's facial image by affine transformation makes it possible to determine, in the user's facial image, a makeup area that conforms to the facial features of the specific user.
By affine transforming the sample makeup areas into the user's facial image, the target areas can be determined in the user's facial image in accordance with the user's actual face. For example, some users have larger eye areas, and the corresponding eye makeup areas are correspondingly larger.
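A sketch of the per-triangle affine mapping: each triangle of the sample face's feature-point mesh is mapped onto the corresponding triangle of the user's face, carrying the sample makeup mask with it; the union over all triangles yields the target area. The triangulation bookkeeping is omitted here as an assumption.

    import cv2
    import numpy as np

    def warp_triangle(sample_mask, sample_tri, user_tri, user_shape):
        # sample_tri / user_tri: 3x2 float32 arrays of corresponding feature points.
        M = cv2.getAffineTransform(np.float32(sample_tri), np.float32(user_tri))
        # Non-singular linear part plus translation, applied to the sample makeup mask.
        return cv2.warpAffine(sample_mask, M, (user_shape[1], user_shape[0]),
                              flags=cv2.INTER_NEAREST)  # nearest neighbour keeps the mask binary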
In one example, the target area may include a first target area and a second target area. Step 102 may then include: determining, according to the first target area, a first target image block in the first image that includes the first target area; and step 104 may include: fusing, in a second target area corresponding to the first target area, the content of the second target image block with the content of the first target image block. For example, the first image is a facial image and the first target area is a facial organ area, such as an eye area. The second target area may be a circumscribed area of the facial organ, such as the area around the eyes where makeup is desired.
In some cases, the second target area may be larger than the first target area; for example, the target makeup area is larger than the corresponding facial organ area, as when an eye makeup area is larger than the eye area. In other cases, the second target area may be smaller than the first target area; for example, the target makeup area is smaller than the corresponding facial organ area, as when a cheek makeup area is smaller than the cheek area.
In one embodiment, as shown in fig. 6, determining a second target image block in the second image according to the first target image block includes:
Step 601: determining at least one candidate image block in the second image;
Step 602: determining the second target image block from the candidate image blocks according to the similarity between the candidate image blocks and the first target image block.
In the embodiments of the present application, the candidate image block with the highest similarity to the first target image block may be chosen from the candidate image blocks as the second target image block.
Sliding windows of different sizes may be used to find the second target image block whose features are most similar to those of the first target image block.
Specifically, the candidate image block closest to the first target image block may be determined as the second target image block based on neural patch-based similarity (Neural Patch-Based Similarity).
By determining the second target image block in the second image using similarity, the embodiments of the present application adapt the migrated content of the second image to the content of the first target image block, so that the fused first image is more harmonious.
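A sketch of the neural patch-based matching under a PyTorch/torchvision assumption: candidate patches are embedded with an ImageNet-pretrained VGG16 and compared to the first target image block by cosine similarity. The layer cut, the fixed 224x224 resize, and the choice of cosine similarity are illustrative assumptions rather than the exact configuration of this application.

    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms

    # Truncate VGG16 at an intermediate convolutional stage (the cut point is an assumption).
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
    prep = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),  # equal-sized features regardless of patch size
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def vgg_feature(patch):  # patch: H x W x 3 uint8 array or PIL image
        return vgg(prep(patch).unsqueeze(0)).flatten(1)

    @torch.no_grad()
    def most_similar_patch(first_block, candidates):
        # Return the candidate whose VGG feature is closest to the first block's feature.
        ref = vgg_feature(first_block)
        sims = [F.cosine_similarity(ref, vgg_feature(c)).item() for c in candidates]
        return candidates[max(range(len(sims)), key=sims.__getitem__)]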
In one example of the present application, as shown in fig. 7, the image processing method includes:
Step 701: obtain a second image. The second image embodies the user's makeup requirements. The second image may in particular be a picture.
Step 702: obtain a makeup look diagram, i.e., a sample facial image. The makeup look diagram includes at least one makeup area.
Step 703: split the first image according to the makeup look diagram to obtain a plurality of first image blocks. Specifically, the first image may be split into a plurality of first image blocks according to characteristics such as the length, width, and outer contour of the eyebrows and other organs. That is, the facial organs (target areas) to be made up in the first image may be determined according to the makeup look diagram, and the first image may then be split according to the facial organs to be made up.
Step 704: determine a first target image block among the first image blocks.
Step 705: find the second target image block closest to the first target image block in the second image using a VGG (Visual Geometry Group) network.
Step 706: select the mask layer corresponding to the target area.
Step 707: fuse the content of the second target image block into the target area using the mask layer to achieve makeup migration. Facial image segmentation techniques such as face parsing are used to locate the makeup areas of the face, such as the eye surroundings and the lips.
In one example of the present application, the second image may be a landscape, as shown in fig. 8B. The first image may be a facial image; the resulting eye makeup effect is shown in fig. 8A.
In another example of the present application, the second image may be a portrait, as shown in fig. 9B. The first image may be a facial image; the resulting facial makeup effect is shown in fig. 9A.
The embodiments of the present application also provide an image processing apparatus, as shown in fig. 10, including:
a target area module 1001: for determining a target area in the first image;
a first target image block module 1002: for determining, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module 1003: for determining a second target image block in a second image according to the first target image block;
and a fusion module 1004: for fusing, in the target area, the content in the second target image block with the content in the first target image block.
In one embodiment, as shown in fig. 11, the fusion module 1004 includes:
a first mask layer unit 1101: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion;
a first presentation unit 1102: for presenting, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.
In one embodiment, as shown in fig. 12, the fusion module 1004 includes:
a second mask layer unit 1201: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion for each pixel in the target area;
a second presentation unit 1202: for presenting, on each pixel in the target area, the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
In one embodiment, as shown in fig. 13, the first image is a facial image, the target area is a target makeup area, and the target area module 1001 includes:
a makeup area determination module 1301: for determining a sample makeup area in a sample facial image;
an affine transformation module 1302: for affine transforming the sample makeup area to a face area in the first image according to the feature points of the first image, to obtain the target area.
In one embodiment, as shown in fig. 14, the second target image block module 1003 includes:
a candidate image block unit 1501: for determining at least one candidate image block in the second image;
a second target image block unit 1502: for determining the second target image block from the candidate image blocks according to the similarity between the candidate image blocks and the first target image block.
For the functions of each module in each apparatus of the embodiments of the present application, reference may be made to the corresponding descriptions in the methods above, which are not repeated here.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 15, there is a block diagram of an electronic device for the image processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit the implementations of the application described and/or claimed herein.
As shown in fig. 15, the electronic device includes: one or more processors 1601, a memory 1602, and interfaces for connecting the components, including a high-speed interface and a low-speed interface. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 1601 is shown in fig. 15 as an example.
Memory 1602 is a non-transitory computer-readable storage medium provided herein. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the methods of image processing provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of image processing provided herein.
The memory 1602 is a non-transitory computer readable storage medium that can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules (e.g., the target region module 1001, the first target image block module 1002, the second target image block module 1003, and the fusion module 1004 shown in fig. 10) corresponding to the method of image processing in the embodiments of the present application. The processor 1601 executes various functional applications of the server and data processing, i.e., a method of implementing image processing in the above-described method embodiment, by executing non-transitory software programs, instructions, and modules stored in the memory 1602.
The memory 1602 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application program required for functionality, and the data storage area may store data created according to the use of the electronic device for image processing, and the like. In addition, the memory 1602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 1602 may optionally include memory located remotely from the processor 1601, which may be connected to the image processing electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the image processing method may further include: an input device 1603 and an output device 1604. The processor 1601, the memory 1602, the input device 1603, and the output device 1604 may be connected by a bus or in other ways; connection by a bus is taken as the example in fig. 15.
The input device 1603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronic device, and may be, for example, a touch screen, keypad, mouse, trackpad, touchpad, pointing stick, one or more mouse buttons, trackball, or joystick. The output device 1604 may include a display device, auxiliary lighting devices (e.g., LEDs), haptic feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, and that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solutions of the embodiments of the present application, a target area in a first image is determined, a first target image block is determined, a second target image block in a second image having a certain correlation with the first target image block is determined, and finally the content of the second target image block is fused into the target area of the first image. This solves the technical problem of a single image-capture mode and achieves the technical effect of enriching image-capture means. It should be appreciated that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
determining a target area in the first image;
determining a first target image block comprising the target area in the first image according to the target area;
determining a second target image block in a second image according to the first target image block;
fusing the content in the second target image block with the content in the first target image block in the target area;
the first image is a facial image, the target area is a target make-up area, and determining the target area in the first image includes:
determining a sample make-up area in the sample facial image;
affine transforming the sample makeup area to a face area in the first image according to the feature points of the first image to obtain the target area;
the target region comprises a first target region and a second target region, wherein the first target region is a facial organ region, and the second target region is a circumscribed region of the facial organ.
2. The method of claim 1, wherein fusing the content in the second target image block with the content in the first target image block in the target region comprises:
determining a target mask layer corresponding to the target region from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion;
and in the target area, presenting the corresponding content of the first target image block according to the first presentation proportion, and presenting the corresponding content of the second target image block according to the second presentation proportion.
3. The method of claim 1, wherein fusing the content in the second target image block with the content in the first target image block in the target region comprises:
determining a target mask layer corresponding to the target region from a plurality of preset mask layers, wherein the target mask layer comprises a first presentation proportion and a second presentation proportion of each pixel in the target region;
for each pixel in the target area, presenting the corresponding content of the first target image block on the pixel in a first presentation proportion corresponding to the pixel, and presenting the corresponding content of the second target image block in a second presentation proportion corresponding to the pixel.
4. The method of claim 1, wherein determining a second target image block in a second image according to the first target image block comprises:
determining at least one candidate image block in the second image;
and determining the second target image block from the candidate image blocks according to the similarity between the candidate image blocks and the first target image block.
5. An image processing apparatus, comprising:
a target area module: for determining a target area in the first image;
a first target image block module: for determining, according to the target area, a first target image block in the first image that includes the target area;
a second target image block module: for determining a second target image block in a second image according to the first target image block;
and a fusion module: for fusing, in the target area, the content in the second target image block with the content in the first target image block;
the first image is a facial image, the target area is a target make-up area, and the target area module comprises:
a makeup area determination module: for determining a sample makeup area in a sample facial image;
an affine transformation module: for affine transforming the sample makeup area to a face area in the first image according to the feature points of the first image, to obtain the target area;
the target region comprises a first target region and a second target region, wherein the first target region is a facial organ region, and the second target region is a circumscribed region of the facial organ.
6. The apparatus of claim 5, wherein the fusion module comprises:
a first mask layer unit: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion;
a first presentation unit: for presenting, in the target area, the corresponding content of the first target image block at the first presentation proportion and the corresponding content of the second target image block at the second presentation proportion.
7. The apparatus of claim 5, wherein the fusion module comprises:
a second mask layer unit: for determining a target mask layer corresponding to the target area from a plurality of preset mask layers, the target mask layer including a first presentation proportion and a second presentation proportion for each pixel in the target area;
a second presentation unit: for presenting, on each pixel in the target area, the corresponding content of the first target image block at the first presentation proportion corresponding to the pixel, and the corresponding content of the second target image block at the second presentation proportion corresponding to the pixel.
8. The apparatus of claim 5, wherein the second target image block module comprises:
a candidate image block unit: for determining at least one candidate image block in the second image;
a second target image block unit: for determining the second target image block from the candidate image blocks according to the similarity between the candidate image blocks and the first target image block.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-4.
CN202010244882.1A 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium Active CN111462007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010244882.1A CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010244882.1A CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN111462007A CN111462007A (en) 2020-07-28
CN111462007B (en) 2023-06-09

Family

ID=71680187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010244882.1A Active CN111462007B (en) 2020-03-31 2020-03-31 Image processing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111462007B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348755A (en) * 2020-10-30 2021-02-09 咪咕文化科技有限公司 Image content restoration method, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779218A (en) * 2007-08-10 2010-07-14 株式会社资生堂 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN103870821A (en) * 2014-04-10 2014-06-18 上海影火智能科技有限公司 Virtual make-up trial method and system
WO2018188534A1 (en) * 2017-04-14 2018-10-18 深圳市商汤科技有限公司 Face image processing method and device, and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102708575A (en) * 2012-05-17 2012-10-03 彭强 Daily makeup design method and system based on face feature region recognition
CN103236066A (en) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual trial make-up method based on human face feature analysis
CN104899825B (en) * 2014-03-06 2019-07-05 腾讯科技(深圳)有限公司 A kind of method and apparatus of pair of picture character moulding
US9501689B2 (en) * 2014-03-13 2016-11-22 Panasonic Intellectual Property Management Co., Ltd. Image processing apparatus and image processing method
EP3399493A4 (en) * 2015-12-28 2018-12-05 Panasonic Intellectual Property Management Co., Ltd. Makeup simulation assistance device, makeup simulation assistance method, and makeup simulation assistance program
JP6876941B2 (en) * 2016-10-14 2021-05-26 パナソニックIpマネジメント株式会社 Virtual make-up device, virtual make-up method and virtual make-up program
CN106952221B (en) * 2017-03-15 2019-12-31 中山大学 Three-dimensional Beijing opera facial makeup automatic making-up method
CN107123083B (en) * 2017-05-02 2019-08-27 中国科学技术大学 Face edit methods
CN108257084B (en) * 2018-02-12 2021-08-24 北京中视广信科技有限公司 Lightweight face automatic makeup method based on mobile terminal
CN110136054B (en) * 2019-05-17 2024-01-09 北京字节跳动网络技术有限公司 Image processing method and device
CN110390632B (en) * 2019-07-22 2023-06-09 北京七鑫易维信息技术有限公司 Image processing method and device based on dressing template, storage medium and terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101779218A (en) * 2007-08-10 2010-07-14 株式会社资生堂 Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN103870821A (en) * 2014-04-10 2014-06-18 上海影火智能科技有限公司 Virtual make-up trial method and system
WO2018188534A1 (en) * 2017-04-14 2018-10-18 深圳市商汤科技有限公司 Face image processing method and device, and electronic device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhen Beibei, "A Digital Face Makeup Technique Based on Example Pictures," China Master's Theses Full-text Database, Information Science and Technology, pp. 1-41 *
Huang Yan, He Zewen, Zhang Wensheng, "A Multi-channel Region-wise Fast Makeup Transfer Deep Network," Journal of Software, pp. 3549-3566 *
Li Jie, "Research on Real-time Virtual Makeup and Recommendation Methods Based on Image Processing," China Master's Theses Full-text Database, Information Science and Technology, pp. 1-60 *

Also Published As

Publication number Publication date
CN111462007A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111652828B (en) Face image generation method, device, equipment and medium
US11074437B2 (en) Method, apparatus, electronic device and storage medium for expression driving
CN111563855B (en) Image processing method and device
CN112328345B (en) Method, apparatus, electronic device and readable storage medium for determining theme colors
CN109688346A (en) A kind of hangover special efficacy rendering method, device, equipment and storage medium
CN111583379B (en) Virtual model rendering method and device, storage medium and electronic equipment
CN111768356A (en) Face image fusion method and device, electronic equipment and storage medium
CN112634282B (en) Image processing method and device and electronic equipment
CN111291218B (en) Video fusion method, device, electronic equipment and readable storage medium
CN111259183B (en) Image recognition method and device, electronic equipment and medium
WO2022152116A1 (en) Image processing method and apparatus, device, storage medium, and computer program product
CN112337091A (en) Man-machine interaction method and device and electronic equipment
CN113240783A (en) Stylized rendering method and device, readable storage medium and electronic equipment
CN111462007B (en) Image processing method, device, equipment and computer storage medium
JP7160495B2 (en) Image preprocessing method, device, electronic device and storage medium
CN111462205A (en) Image data deformation and live broadcast method and device, electronic equipment and storage medium
US20210279928A1 (en) Method and apparatus for image processing
CN114863008B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
WO2022022260A1 (en) Image style transfer method and apparatus therefor
CN115546082A (en) Lens halo generation method and device and electronic device
CN114299207A (en) Virtual object rendering method and device, readable storage medium and electronic device
KR20230149934A (en) Method and system for processing image for lip makeup based on augmented reality
CN116459516A (en) Split screen special effect prop generation method, device, equipment and medium
CN107153808B (en) Face shape positioning method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant