WO2020093798A1 - Method and apparatus for displaying target image, terminal, and storage medium - Google Patents

Method and apparatus for displaying target image, terminal, and storage medium

Info

Publication number
WO2020093798A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
facial
target
topology
skin color
Prior art date
Application number
PCT/CN2019/107085
Other languages
French (fr)
Chinese (zh)
Inventor
刘莹
杨浩
辛光
Original Assignee
北京达佳互联信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京达佳互联信息技术有限公司
Publication of WO2020093798A1

Classifications

    • G06T3/18
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • The present application relates to the field of image processing, and in particular to a method, apparatus, terminal, and storage medium for displaying a target image.
  • At present, many users are interested in simulating the appearance of a future baby; simulating the image of the future baby from the images of two people is entertaining and has become a popular pastime.
  • In the related art, when simulating a baby image from the parents' images, the user first uploads the father's image and the mother's image to the terminal. After receiving the two images, the terminal randomly selects one of them and retrieves baby images pre-stored in an image library. The selected image is matched against each baby image to obtain a similarity score, and the baby image with the highest similarity is displayed on the terminal's screen as the simulation result.
  • The present application provides a method, apparatus, terminal, and storage medium for displaying a target image, which can address the problem of a poor simulation effect.
  • A method for displaying a target image is provided, including: acquiring a first facial image and a second facial image; generating a first facial topology image based on the first facial image and a second facial topology image based on the second facial image; fusing the first facial topology image with the second facial topology image to generate a target image; and displaying the target image on a target interface.
  • an apparatus for displaying a target image including:
  • An obtaining unit configured to obtain a first facial image and a second facial image
  • a generating unit configured to generate a first facial topology image based on the first facial image, and generate a second facial topology image based on the second facial image;
  • the generating unit is further configured to fuse the first facial topology image and the second facial topology image to generate a target image
  • the display unit is configured to display the target image on the target interface.
  • A terminal is provided, including:
  • a processor; and
  • a memory for storing instructions executable by the processor;
  • wherein the processor is configured to perform the above method of displaying a target image.
  • A non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a server, the server is enabled to perform the method of displaying a target image.
  • An application program is provided which, when running on a terminal, causes the terminal to execute the method of displaying a target image.
  • The technical solution provided by the embodiments of the present application may have the following beneficial effects: the terminal acquires the first facial image and the second facial image, and then generates the target image by fusing the first facial image with the second facial image.
  • In the related art, by contrast, the baby image with the highest similarity to the parents' image is selected as the target image from pre-stored baby images.
  • The target image selected in that way is merely the most similar image within a limited set of baby images; it does not incorporate the features of the parents' images, so it may not actually resemble the parents at all. The similarity between the target image and the parents' images is therefore low, the actual simulation result falls short of the theoretically achievable one, and the simulation effect is poor.
  • In the present application, the generated target image fuses the features of the first facial image and the second facial image. Therefore, compared with the related art, the target image generated by the present application has a greater similarity to the first facial image and the second facial image, so the simulation effect is better.
  • Fig. 1 is a flow chart of a method for displaying a target image according to an exemplary embodiment.
  • Fig. 2 is a flow chart of a method for displaying a target image according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram showing an interface for displaying a target image according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram showing a scene of displaying a target image according to an exemplary embodiment.
  • Fig. 5 is a schematic diagram showing an interface for displaying a target image according to an exemplary embodiment.
  • Fig. 6 is a block diagram of a device for displaying a target image according to an exemplary embodiment.
  • Fig. 7 is a schematic block diagram showing a structure of a terminal according to an exemplary embodiment
  • Fig. 8 is a schematic block diagram of a specific structure of a terminal 700 according to an exemplary embodiment.
  • Fig. 1 is a flowchart of a method for displaying a target image according to an exemplary embodiment. As shown in Fig. 1, the method is used in a terminal and includes the following steps.
  • In step 101, a first facial image and a second facial image are acquired.
  • In step 102, a first facial topology image is generated based on the first facial image, and a second facial topology image is generated based on the second facial image.
  • In step 103, the first facial topology image and the second facial topology image are fused to generate a target image.
  • In step 104, the target image is displayed on the target interface.
  • fusing the first facial topology image with the second facial topology image to generate the target image includes:
  • a plurality of first region sub-images in the first facial topology image and a plurality of corresponding second region sub-images in the second facial topology image are respectively fused to generate a target image.
  • fusing the plurality of first region sub-images in the first facial topology image with the corresponding plurality of second region sub-images in the second facial topology image to generate the target image includes: adjusting the shape and size of the corresponding plurality of second region sub-images in the second facial topology image according to the plurality of first region sub-images in the first facial topology image; and attaching the adjusted plurality of second region sub-images to the corresponding plurality of first region sub-images, and fusing, according to a pre-stored pixel fusion algorithm, each first region sub-image with the corresponding second region sub-image to generate the target image.
  • fusing the first region sub-image in the first facial topology image with the corresponding second region sub-image in the second facial topology image to generate the target image includes:
  • adjusting the first facial topology image according to a first weight corresponding to the first facial topology image to obtain an adjusted first facial topology image; adjusting the second facial topology image according to a second weight corresponding to the second facial topology image to obtain an adjusted second facial topology image; and fusing the first region sub-image in the adjusted first facial topology image with the corresponding second region sub-image in the adjusted second facial topology image to generate the target image.
  • the first weight is determined randomly, and the second weight is the difference between 1 and the first weight.
  • after the target image is generated, the method further includes:
  • acquiring a first skin color value at a preset position of the first facial topology image, and acquiring a second skin color value at the preset position of the second facial topology image; determining a target skin color value at the preset position of the target image according to the first skin color value and the second skin color value; and adjusting the skin color value at the preset position in the target image according to the target skin color value.
  • determining the target skin color value at the preset position of the target image includes:
  • acquiring a preset skin color adjustment value; and obtaining the sum of the first skin color value and the second skin color value, and determining the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value at the preset position of the target facial topology image.
  • acquiring the first facial image and the second facial image includes:
  • when a target image generation instruction is received, collecting video frames through a camera, and randomly selecting one frame from the collected video frames as a source image; and extracting the first facial image and the second facial image from the source image;
  • after the first facial image and the second facial image are acquired, the method further includes:
  • displaying the first facial image and the second facial image on the target interface.
  • generating a first facial topology image based on the first facial image and generating a second facial topology image based on the second facial image include: determining a first group of facial feature points in the first facial image, and connecting the facial feature points in the first group according to a preset connection rule to generate the first facial topology image; and determining a second group of facial feature points in the second facial image, and connecting the facial feature points in the second group according to the preset connection rule to generate the second facial topology image.
  • the method further includes:
  • acquiring an update patch package, where the update patch package carries instructions for calling central processing unit (CPU) functions and instructions for calling graphics processing unit (GPU) functions, the CPU functions including using sensors, detecting touch events, detecting trigger events, and determining facial feature points; and loading the update patch package.
  • The embodiments of the present application describe the method of displaying a target image with reference to a specific implementation.
  • The method may be implemented by a terminal on which an application program for displaying the target image is installed.
  • The terminal includes at least a graphics card and a central processing unit (CPU).
  • In this embodiment, the first facial image and the second facial image are the father's facial image and the mother's facial image, respectively, and the generated target image is a baby image; this example is used to describe the method for generating the target image.
  • As shown in the flowchart of Fig. 2, the processing flow of the method may include the following steps.
  • In step 201, when the terminal receives a target image generation instruction, it collects video frames through the camera, randomly selects one of the collected video frames as the source image, extracts the first facial image and the second facial image from the source image, and then displays the first facial image and the second facial image on the target interface of the terminal.
  • The source image refers to an image containing at least two facial images to be fused.
  • When the user wants to generate a baby image, that is, a target image, the user can first open the application installed on the terminal and then tap the option for generating a baby image; the instruction the terminal receives through this option is the target image generation instruction.
  • The terminal then starts the camera function and collects video frames through the camera.
  • When the terminal receives a video capture stop instruction, or when the duration of capturing video frames reaches a preset duration, the terminal stops capturing video frames and randomly selects at least one of the captured video frames as the source image.
  • The terminal can use a pre-stored face recognition algorithm to identify the two faces in the obtained source image and thereby obtain the first facial image and the second facial image; the terminal then displays the first facial image and the second facial image on the target interface.
  • The face recognition algorithm may be a local feature analysis method, an eigenface method, a recognition algorithm based on an elastic model, a neural network recognition algorithm, a hidden Markov model algorithm, or the like; any algorithm that can extract the first facial image and the second facial image from the source image may be used, and this is not limited in the present application.
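  • For illustration, a minimal sketch of this face extraction step is given below; OpenCV's Haar-cascade detector is assumed here merely as a stand-in for the unspecified pre-stored face recognition algorithm, and the size-based selection of the two faces is also an assumption.

```python
# Sketch: extract two facial images from a source frame (assumed OpenCV detector).
import cv2

def extract_two_faces(source_bgr):
    gray = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) < 2:
        # Mirrors the behaviour above: prompt the user to upload the source image again.
        raise ValueError("fewer than two faces recognized in the source image")
    # Keep the two largest detections as the first and second facial images.
    boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)[:2]
    return [source_bgr[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```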
  • When more than two facial images are recognized, the terminal can crop the recognized facial images and display them to the user, so that the user can manually select two of them as the first facial image and the second facial image, as shown in Fig. 3.
  • Alternatively, the terminal may send an image error prompt message to prompt the user to upload the source image again; this is not limited in the present application.
  • If the terminal cannot recognize two facial images in the source image, it sends an image error prompt message to prompt the user to upload the source image again.
  • There are many ways to send the image error prompt message, such as displaying prompt text, displaying a prompt picture, or issuing a voice prompt. Any method that prompts the user to re-upload the source image may be used; the present application does not limit the manner in which the image error prompt message is sent.
  • The above describes one way in which the terminal determines the source image.
  • Alternatively, the terminal may receive at least one image uploaded by the user as the source image.
  • The user can choose to upload an image already stored in the terminal, or take a photo and upload it.
  • The user can upload at least one image, and the terminal uses the received image as the source image.
  • The terminal may also receive at least one image and video uploaded by the user as the source; in this way, the user can upload at least one image together with a piece of video.
  • The terminal then selects at least one video frame from the received video, and uses the selected video frame together with the received image as the source image.
  • The terminal can thus determine the source image in various ways, and a suitable way can be selected according to actual needs; this is not limited in the present application.
  • In step 202, the terminal generates a first facial topology image based on the first facial image, and generates a second facial topology image based on the second facial image.
  • The step of generating a facial topology image from a facial image in step 202 may be as follows: in the facial image, determine a group of facial feature points; then, according to a preset connection rule, connect the facial feature points in the group to generate the facial topology image, as shown in Fig. 4.
  • Taking the first facial image as an example, the steps may be: in the first facial image, determine a first group of facial feature points; then, according to the preset connection rule, connect the facial feature points in the first group to generate the first facial topology image.
  • Specifically, the above step 202 includes the following steps 2021 and 2022.
  • In step 2021, the terminal may use a pre-stored facial feature point recognition algorithm to calibrate a group of facial feature points in the first facial image (that is, the first group of facial feature points). Using the same algorithm, the second group of facial feature points is calibrated in the second facial image.
  • The facial feature point recognition algorithm may be a constrained local model (CLM) algorithm, a cascaded regression algorithm (a facial feature point positioning method), a convolutional neural network (CNN) algorithm, or the like; any algorithm that can calibrate facial feature points in a facial image may be used, and this is not limited in the present application.
  • When the same facial feature point recognition algorithm is used to calibrate different facial images, the number of calibrated facial feature points is the same, each facial feature point has an identifier, and each identifier has a fixed meaning. For example, the facial feature points with identifiers 19 to 24 represent the upper edge of the left eyebrow in the facial image, and the facial feature point with identifier 49 represents the left corner of the left eye in the facial image.
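  • As an illustration, the sketch below calibrates facial feature points with dlib's 68-point landmark predictor, which is assumed here as a stand-in for the CLM, cascaded regression, or CNN algorithms mentioned above; the model file name is an assumption, and identifier numbering differs between landmark models.

```python
# Sketch: calibrate a group of facial feature points (assumed dlib 68-point model).
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def facial_feature_points(gray_image):
    face = detector(gray_image, 1)[0]
    shape = predictor(gray_image, face)
    # The index of each point serves as its identifier, so points in two facial
    # images can be matched by identifier.
    return [(p.x, p.y) for p in shape.parts()]
```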
  • In step 2022, the terminal acquires a preset connection rule, connects the facial feature points, and generates the first facial topology image of the first facial image.
  • The preset connection rule may be an identifier-based connection rule for the facial feature points.
  • The facial feature points specified by the identifier connection rule are connected according to that rule, so that the first facial image is divided into multiple areas; each area can be called a region sub-image, and the region sub-images do not overlap one another.
  • For example, the identifier connection rule may stipulate that certain groups of three facial feature points are connected, so that the divided region sub-images are triangles. Of course, the rule may instead specify that other numbers of facial feature points are connected according to actual needs, so that the divided region sub-images have other shapes; this is not limited in the present application.
  • Correspondingly, the processing for generating the second facial topology image may be: in the second facial image, determine the second group of facial feature points; then, according to the preset connection rule, connect the facial feature points in the second group to generate the second facial topology image.
  • For details, the processing for generating the second facial topology image can refer to the above processing for generating the first facial topology image, which is not repeated here.
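  • As an illustration, the sketch below builds the topology as a set of non-overlapping triangular region sub-images; Delaunay triangulation over the feature points is assumed here as one concrete connection rule, whereas the embodiment itself relies on a preset identifier connection rule.

```python
# Sketch: form triangular region sub-images from a group of facial feature points.
import numpy as np
from scipy.spatial import Delaunay

def facial_topology(feature_points):
    pts = np.asarray(feature_points, dtype=np.float32)
    tri = Delaunay(pts)
    # Each row of tri.simplices holds the identifiers (indices) of the three
    # feature points bounding one non-overlapping triangular region sub-image.
    return pts, tri.simplices
```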
  • In step 203, the terminal adjusts the first facial topology image according to a first weight corresponding to the first facial topology image to obtain an adjusted first facial topology image, and adjusts the second facial topology image according to a second weight to obtain an adjusted second facial topology image.
  • The terminal first obtains the first weight corresponding to the first facial topology image and the second weight corresponding to the second facial topology image. The first weight may be the weight of the entire first facial topology image, and the second weight the weight of the entire second facial topology image.
  • Alternatively, the first weight may be the weight of a certain region sub-image in the first facial topology image, and the second weight the weight of the corresponding region sub-image in the second facial topology image; in this case, the first weight and the second weight may each include a plurality of different weight values.
  • The first weight may also be the weight of a facial feature image composed of multiple region sub-images in the first facial topology image, and the second weight the weight of the corresponding facial feature image in the second facial topology image.
  • Again, both the first weight and the second weight may include a plurality of different weight values. For example, if the first weight is the weight of the eyes in the first facial topology image, the second weight is the weight of the eyes in the second facial topology image. The present application does not specifically limit which of the above options is used.
  • The first weight and the second weight may be obtained in several ways. Method 1: the technician presets the value of the first weight and the value of the second weight.
  • When the terminal fuses the first facial topology image and the second facial topology image, it can directly read the value of the first weight and the value of the second weight from the corresponding storage areas.
  • Method 2: the user determines the value of the first weight and the value of the second weight. For example, the terminal may provide the user with an option to adjust the similarity between the generated baby image and the two facial images; the user manually adjusts the similarity, and the degree of similarity is converted into the value of the first weight and the value of the second weight.
  • Method 3: the terminal randomly sets the value of the first weight, and the second weight is then 1 minus the first weight. In this way, the user does not know which facial image the generated baby image will more closely resemble, which increases the uncertainty of the generated baby image and makes it more entertaining.
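  • A minimal sketch of the three ways of obtaining the weights is given below; the mapping from the user-adjusted similarity to the weights is an assumption.

```python
# Sketch: three possible ways to obtain the first weight w1 and second weight w2.
import random

def preset_weights():                  # Method 1: values preset by the technician
    return 0.5, 0.5

def weights_from_slider(similarity):   # Method 2: user-chosen similarity in [0, 1]
    w1 = similarity                    # assumed mapping: similarity to the first facial image
    return w1, 1.0 - w1

def random_weights():                  # Method 3: random w1, then w2 = 1 - w1
    w1 = random.random()
    return w1, 1.0 - w1
```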
  • After acquiring the first weight and the second weight through the above processing, the terminal adjusts the first facial topology image according to the first weight to obtain the adjusted first facial topology image, and adjusts the second facial topology image according to the second weight to obtain the adjusted second facial topology image.
  • When the first weight is the weight of the entire first facial topology image, the processing in this step is: determine the RGB (red-green-blue) value of each pixel of the first facial topology image, and multiply the RGB value of each pixel by the first weight; the resulting products are the adjusted RGB values of the first facial topology image.
  • The RGB values of the first facial topology image are then adjusted, that is, the RGB values of the first facial topology image are set to the adjusted RGB values.
  • Similarly, the second facial topology image is adjusted according to the second weight to obtain the adjusted second facial topology image.
  • For the processing steps of obtaining the adjusted second facial topology image, refer to the above processing of generating the adjusted first facial topology image, which is not repeated here.
  • When the first weight is the weight of a region sub-image, the processing in this step is: taking a region sub-image in the first facial topology image as an example, determine the RGB value of each pixel in the region sub-image and multiply each RGB value by the first weight; the products are the adjusted RGB values of the region sub-image, and the region sub-image is adjusted accordingly, that is, its RGB values are set to the adjusted values.
  • After each region sub-image is processed in this way, the adjusted first facial topology image is obtained.
  • Similarly, the second facial topology image is adjusted to obtain the adjusted second facial topology image.
  • When the first weight is the weight of a facial feature image, the processing in this step is: taking a facial feature image composed of multiple region sub-images in the first facial topology image as an example, determine the RGB value of each pixel in the facial feature image and multiply each RGB value by the first weight; the products are the adjusted RGB values of the facial feature image, and the facial feature image is adjusted accordingly, that is, its RGB values are set to the adjusted values.
  • The adjusted first facial topology image is thus obtained.
  • Similarly, the second facial topology image is adjusted to obtain the adjusted second facial topology image.
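  • A minimal sketch of this weighting step is given below; the optional mask used to restrict the adjustment to a single region sub-image or facial feature image is an assumption.

```python
# Sketch: multiply pixel RGB values of a facial topology image by a weight.
import numpy as np

def weight_image(image_rgb, weight, mask=None):
    out = image_rgb.astype(np.float32)
    if mask is None:           # the weight applies to the entire facial topology image
        out *= weight
    else:                      # the weight applies only to the masked region sub-image(s)
        out[mask > 0] *= weight
    return np.clip(out, 0, 255).astype(np.uint8)
```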
  • In step 204, the terminal fuses the first region sub-images in the adjusted first facial topology image with the corresponding second region sub-images in the adjusted second facial topology image to generate the target image.
  • A first region sub-image is a portion of the first facial topology image obtained by connecting preset facial feature points; similarly, a second region sub-image is a portion of the second facial topology image obtained by connecting preset facial feature points.
  • The correspondence between a first region sub-image and a second region sub-image means that the identifiers of the facial feature points defining the first region sub-image are the same as the identifiers of the facial feature points defining the second region sub-image.
  • After adjusting the first facial topology image and the second facial topology image according to the first weight and the second weight through the above steps, the terminal fuses, according to a preset image fusion algorithm, the first region sub-images of the adjusted first facial topology image with the corresponding second region sub-images of the adjusted second facial topology image to generate the target image.
  • The image fusion algorithm may be, for example, a pixel-level image fusion algorithm, a Poisson fusion algorithm, or a mean-value coordinates (MVC) image fusion algorithm; any algorithm capable of fusing the first facial topology image with the second facial topology image may be used, and this is not limited in the present application.
  • The following description takes a pixel-level image fusion algorithm as an example.
  • In this case, the above step 204 may include the following steps 2041-2042.
  • In step 2041, the terminal adjusts the shape and size of the corresponding plurality of second region sub-images in the second facial topology image according to the plurality of first region sub-images in the first facial topology image.
  • The terminal presets one facial topology image as the base image and the other as the material. Assuming the terminal uses the first facial topology image as the base image, and taking a second region sub-image in the second facial topology image as an example, the terminal first determines the first region sub-image corresponding to that second region sub-image in the first facial topology image, and then adjusts the shape and size of the second region sub-image based on the shape and size of the determined first region sub-image, so that the second region sub-image and the corresponding first region sub-image have the same shape and size.
  • In step 2042, the adjusted plurality of second region sub-images are attached to the corresponding plurality of first region sub-images, and, according to a pre-stored pixel fusion algorithm, each first region sub-image in the first facial topology image is fused with the corresponding second region sub-image in the second facial topology image to generate the target image.
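  • The sketch below illustrates steps 2041-2042 for one pair of corresponding triangular region sub-images, assuming OpenCV for the affine warp; it also assumes that the base image and the material image have already been weighted by the first weight and the second weight in step 203, so that pixel-level fusion reduces to adding the two images.

```python
# Sketch: warp one second region sub-image onto its first region sub-image and fuse.
import cv2
import numpy as np

def fuse_triangle(base_img, material_img, tri_base, tri_material):
    # tri_base / tri_material: 3x2 float32 arrays of corresponding triangle vertices.
    x, y, w, h = cv2.boundingRect(np.float32([tri_base]))
    warp = cv2.getAffineTransform(np.float32(tri_material),
                                  np.float32(tri_base - [x, y]))
    # Step 2041: give the second region sub-image the same shape and size as the first.
    patch = cv2.warpAffine(material_img, warp, (w, h),
                           flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(tri_base - [x, y]), 255)
    # Step 2042: attach the adjusted sub-image and fuse pixel by pixel.
    roi = base_img[y:y + h, x:x + w]
    fused = cv2.add(roi, patch)      # both inputs are assumed to be pre-weighted
    roi[mask > 0] = fused[mask > 0]
    return base_img
```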
  • In the above steps 203-204, the first facial topology image and the second facial topology image are adjusted by their weights and then fused to generate the target image.
  • Alternatively, the weights of the first facial topology image and the second facial topology image may be left unadjusted; after step 202, the two facial topology images are fused directly to generate the target image. This is not limited in the present application.
  • In order to make the generated baby image more attractive and more baby-like, the target image can be beautified accordingly. For example, the facial contour in the target image can be liquefied to make it rounder, the eye contours can be enlarged to make the eyes bigger and rounder, or the skin color values of the target image can be adjusted.
  • In step 205, a first skin color value is obtained at a preset position of the first facial topology image, and a second skin color value is obtained at the preset position of the second facial topology image.
  • The preset position is the position of a preset region sub-image, and the skin color value is the RGB value of a pixel in the facial topology image.
  • The skin color value of the baby image may be determined according to the skin color values of the parents' facial images; that is, the first skin color value is extracted at the preset position of the first facial topology image, and the second skin color value is extracted at the preset position of the second facial topology image.
  • In step 206, the target skin color value at the preset position of the target image is determined according to the first skin color value and the second skin color value.
  • The terminal may calculate the average of the first skin color value and the second skin color value and use it as the target skin color value at the preset position of the target image.
  • Alternatively, the skin color values of the parents' facial images can be appropriately adjusted; the corresponding processing may be as follows: acquire a preset skin color adjustment value, obtain the sum of the first skin color value and the second skin color value, and determine the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value at the preset position of the target image.
  • The technician can also pre-set the target skin color value at the preset position and store it in the terminal; when the terminal needs the target skin color value at the preset position, it simply reads it from the corresponding storage area.
  • In step 207, the terminal adjusts the skin color value at the preset position in the target image according to the target skin color value.
  • That is, the terminal adjusts the RGB value of each pixel at the preset position to the target skin color value.
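  • A minimal sketch of steps 206-207 is given below; representing the preset position as a boolean mask and the value of the skin color adjustment are assumptions.

```python
# Sketch: determine the target skin color value and write it into the preset position.
import numpy as np

def target_skin_color(c1, c2, adjustment=None):
    c1, c2 = np.float32(c1), np.float32(c2)
    if adjustment is None:
        target = (c1 + c2) / 2.0                      # average of the two skin color values
    else:
        target = (c1 + c2) - np.float32(adjustment)   # sum minus the preset adjustment value
    return np.clip(target, 0, 255).astype(np.uint8)

def apply_skin_color(target_image, preset_mask, target_rgb):
    # Set the RGB value of every pixel at the preset position to the target value.
    target_image[preset_mask > 0] = target_rgb
    return target_image
```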
  • In step 208, the terminal displays the target image with the adjusted skin color value on the target interface.
  • After adjusting the skin color value at the preset position of the target image, the terminal displays the target image, as shown in Fig. 5.
  • The terminal may first play pre-stored audio, animation effects, or the like and then display the target image, or display the target image with preset display special effects; the corresponding display settings can be made according to actual needs, and this is not limited in the present application.
  • In addition, when writing an update patch package for the above functions, the technician can write the graphics card driver script to the terminal in the form of a string.
  • The terminal obtains the update patch package, where the update patch package carries instructions for calling central processing unit (CPU) functions and instructions for calling graphics processing unit (GPU) functions.
  • The CPU functions include using sensors, detecting touch events, detecting trigger events, and determining facial feature points.
  • Technicians encapsulate high-level CPU logic using a dynamic scripting language; that is, OpenGL ES-related primitives are encapsulated into instructions that dynamic scripts can call, and client-side high-level logic such as sensors, touch events, trigger events, and facial key point recognition is encapsulated in the script driver. The terminal then loads the update patch package, so the functions can be updated without releasing a new version of the installation package, which speeds up iteration.
  • In the embodiments of the present application, the terminal acquires the first facial image and the second facial image, generates the first facial topology image and the second facial topology image from them respectively, adjusts the first facial topology image and the second facial topology image according to the first weight and the second weight respectively, and then fuses the adjusted first facial topology image with the adjusted second facial topology image to generate the target image.
  • Because the generated target image fuses the features of the first facial image and the second facial image, it has a high similarity to both images, which makes the simulation effect better.
  • In addition, the generated target image can be adjusted according to the target skin color value, so that its skin color better matches the characteristics of a baby and the generated target image is more attractive.
  • Fig. 6 is a block diagram of a device for displaying a target image according to an exemplary embodiment. Referring to Fig. 6, the device includes an obtaining unit 610, a generating unit 620, and a display unit 630.
  • the obtaining unit 610 is configured to obtain a first facial image and a second facial image
  • the generating unit 620 is configured to generate a first facial topology image based on the first facial image and generate a second facial topology image based on the second facial image;
  • the generating unit 620 is further configured to fuse the first facial topology image and the second facial topology image to generate a target image;
  • the display unit 630 is configured to display the target image on the target interface.
  • the generating unit 620 is configured to:
  • a plurality of first region sub-images in the first facial topology image and a plurality of corresponding second region sub-images in the second facial topology image are respectively fused to generate a target image.
  • the generating unit 620 is configured to: adjust the shape and size of the corresponding plurality of second region sub-images in the second facial topology image according to the plurality of first region sub-images in the first facial topology image; and attach the adjusted plurality of second region sub-images to the corresponding plurality of first region sub-images and fuse, according to a pre-stored pixel fusion algorithm, each first region sub-image with the corresponding second region sub-image to generate the target image.
  • the generating unit 620 is further configured to: adjust the first facial topology image according to a first weight corresponding to the first facial topology image to obtain an adjusted first facial topology image, and adjust the second facial topology image according to a second weight corresponding to the second facial topology image to obtain an adjusted second facial topology image;
  • the first region sub-image in the adjusted first facial topology image is fused with the corresponding second region sub-image in the adjusted second facial topology image to generate a target image.
  • the first weight is determined randomly, and the second weight is the difference between 1 and the first weight.
  • the obtaining unit 610 is further configured to, after the target image is generated, acquire a first skin color value at a preset position of the first facial topology image and acquire a second skin color value at the preset position of the second facial topology image;
  • the device also includes:
  • the determining unit is configured to determine the target skin color value at the preset position of the target image according to the first skin color value and the second skin color value;
  • the adjusting unit is configured to adjust the skin color value at the preset position in the target image according to the target skin color value.
  • the determination unit is configured to:
  • acquire a preset skin color adjustment value; and obtain the sum of the first skin color value and the second skin color value, and determine the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value at the preset position of the target facial topology image.
  • the obtaining unit 610 is configured to:
  • when a target image generation instruction is received, collect video frames through the camera, randomly select one frame from the collected video frames as a source image, and extract the first facial image and the second facial image from the source image;
  • the display unit 630 is configured to display the first facial image and the second facial image on the target interface after acquiring the first facial image and the second facial image.
  • the generating unit 620 is configured to: determine a first group of facial feature points in the first facial image and connect them according to a preset connection rule to generate the first facial topology image; and determine a second group of facial feature points in the second facial image and connect them according to the preset connection rule to generate the second facial topology image.
  • the obtaining unit 610 is configured to acquire an update patch package, where the update patch package carries instructions for calling central processing unit (CPU) functions and instructions for calling graphics processing unit (GPU) functions;
  • the CPU functions include using sensors, detecting touch events, detecting trigger events, and determining facial feature points;
  • the device also includes:
  • the loading unit is configured to load the update patch package.
  • Fig. 7 is a schematic structural diagram of a terminal according to an exemplary embodiment, the terminal including: a processor 71; and a memory 72 for storing instructions executable by the processor; where the processor 71 is configured to execute the method for displaying a target image described in any of the above embodiments.
  • Fig. 8 is a specific structural block diagram of a terminal 700 according to an exemplary embodiment.
  • the terminal 700 may be a mobile phone, a computer, a messaging device, a game console, a tablet device, or the like.
  • the terminal 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
  • the processing component 702 generally controls the overall operations of the device 700, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 702 may include one or more processors 720 to execute instructions to complete all or part of the steps in the above method.
  • the processing component 702 may include one or more modules to facilitate interaction between the processing component 702 and other components.
  • the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
  • the memory 704 is configured to store various types of data to support operations at the terminal 700. Examples of these data include instructions for any application or method operated on the terminal 700, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • the power supply component 706 provides power to various components of the terminal 700.
  • the power component 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 700.
  • the multimedia component 708 includes a screen that provides an output interface between the terminal 700 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation.
  • the multimedia component 708 includes a front camera and / or a rear camera. When the terminal 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and / or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 710 is configured to output and / or input audio signals.
  • the audio component 710 includes a microphone (MIC).
  • the microphone When the terminal 700 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 704 or sent via the communication component 716.
  • the audio component 710 further includes a speaker for outputting audio signals.
  • the I / O interface 712 provides an interface between the processing component 702 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 714 includes one or more sensors for providing the terminal 700 with status evaluation in various aspects.
  • the sensor component 714 can detect the on/off state of the terminal 700 and the relative positioning of components (for example, the display and keypad of the terminal 700), and can also detect a change in position of the terminal 700 or a component of the terminal 700, the presence or absence of user contact with the terminal 700, the orientation or acceleration/deceleration of the terminal 700, and a change in the temperature of the terminal 700.
  • the sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 714 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 716 is configured to facilitate wired or wireless communication between the terminal 700 and other devices.
  • the terminal 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof.
  • the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • the terminal 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • A computer-readable storage medium including instructions is also provided, for example, the memory 704 including instructions; the above instructions may be executed by the processor 720 of the terminal 700 to complete the above method of displaying a target image. The method includes: acquiring a first facial image and a second facial image; generating a first facial topology image based on the first facial image and a second facial topology image based on the second facial image; fusing the first facial topology image with the second facial topology image to generate a target image; and displaying the target image on the target interface.
  • the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • An application program is also provided, which includes one or more instructions that may be executed by the processor 720 of the terminal 700 to complete the above method of displaying a target image. The method includes: acquiring a first facial image and a second facial image; generating a first facial topology image based on the first facial image and a second facial topology image based on the second facial image; fusing the first facial topology image with the second facial topology image to generate a target image; and displaying the target image on the target interface.
  • the above instructions may also be executed by the processor 720 of the terminal 700 to complete other steps involved in the above exemplary embodiments.

Abstract

The present application relates to the field of image processing, and provides a method and apparatus for displaying a target image, a terminal, and a storage medium. The method comprises: obtaining a first face image and a second face image; generating a first facial topology image according to the first face image and a second facial topology image according to the second face image; fusing the first facial topology image and the second facial topology image to generate a target image; and displaying the target image on a target interface. By adopting the present application, a better simulation effect can be achieved.

Description

Method, device, terminal and storage medium for displaying a target image
This application claims priority to the Chinese patent application No. 201811334358.2, filed with the Chinese Patent Office on November 9, 2018 and titled "Method, device, terminal and storage medium for displaying a target image", the entire content of which is incorporated herein by reference.
技术领域Technical field
本申请涉及图像处理领域,尤其涉及一种显示目标图像的方法、装置、终端及存储介质。The present application relates to the field of image processing, and in particular, to a method, device, terminal, and storage medium for displaying a target image.
背景技术Background technique
目前,很多用户对模拟未来宝宝的长相很感兴趣,例如,通过两个人的图像来模拟未来宝宝的图像,很有趣味性,是一种比较受欢迎的娱乐方式。At present, many users are very interested in simulating the appearance of the future baby. For example, simulating the image of the future baby through the images of two people is very interesting and a popular way of entertainment.
相关技术中,在通过父母的图像模拟宝宝图像时,用户需要先将父亲的图像以及母亲的图像上传至终端,终端接收这两张图像后,在这两张图像中随机选取一张图像,然后获取图像库中预先存储的宝宝图像。将上述选取出的图像与每张宝宝图像分别进行匹配,得到选取的图像与每张宝宝图像的相似度,然后终端将相似度最大的宝宝图像作为模拟得到的结果,在终端的显示屏上进行显示。In the related art, when simulating a baby image through a parent ’s image, the user needs to first upload the father ’s image and the mother ’s image to the terminal. After receiving the two images, the terminal randomly selects one of the two images, and then Get pre-stored baby images in the image library. Match the selected image with each baby image separately to obtain the similarity between the selected image and each baby image, and then use the baby image with the highest similarity as the result of the simulation on the terminal's display screen display.
但是,发明人意识到,上述处理模拟出的宝宝图像,只是相对图像库的各宝宝图像来说与父母的图像的相似度最大,不一定真的与父母的图像很相似,也就是说,根据父母的长相模拟得到的宝宝长相有可能与父母不相似,这样,实际得到的模拟结果与理论上可以得到的模拟结果存在差别,使得模拟效果较差。However, the inventor realized that the baby images simulated by the above processing are only the most similar to the parents ’images with respect to each baby image in the image library, and may not necessarily be very similar to the parents’ images, that is, according to The appearance of the baby obtained by the simulation of the parents' appearance may not be similar to that of the parents. In this way, the actual simulation result is different from the theoretically available simulation result, which makes the simulation effect poor.
发明内容Summary of the invention
本申请提供一种显示目标图像的方法、装置、终端及存储介质,可以解决模拟效果较差的问题。The present application provides a method, device, terminal and storage medium for displaying a target image, which can solve the problem of poor simulation effect.
根据本申请实施例的第一方面,提供一种显示目标图像的方法,包括:According to a first aspect of the embodiments of the present application, a method for displaying a target image is provided, including:
获取第一面部图像以及第二面部图像;Obtain the first facial image and the second facial image;
根据所述第一面部图像生成第一面部拓扑图像,根据所述第二面部图像生成第二面部拓扑图像;Generating a first facial topology image based on the first facial image, and generating a second facial topology image based on the second facial image;
将所述第一面部拓扑图像与所述第二面部拓扑图像进行融合,生成目标图像;Fuse the first facial topology image with the second facial topology image to generate a target image;
在目标界面上显示所述目标图像。The target image is displayed on the target interface.
根据本申请实施例的第二方面,提供一种显示目标图像的装置,包括:According to a second aspect of the embodiments of the present application, there is provided an apparatus for displaying a target image, including:
获取单元,被配置为获取第一面部图像以及第二面部图像;An obtaining unit configured to obtain a first facial image and a second facial image;
生成单元,被配置为根据所述第一面部图像生成第一面部拓扑图像,根据所述第二面部图像生成第二面部拓扑图像;A generating unit configured to generate a first facial topology image based on the first facial image, and generate a second facial topology image based on the second facial image;
所述生成单元,还被配置为将所述第一面部拓扑图像与所述第二面部拓扑图像进行融 合,生成目标图像;The generating unit is further configured to fuse the first facial topology image and the second facial topology image to generate a target image;
显示单元,被配置为在目标界面上显示所述目标图像。The display unit is configured to display the target image on the target interface.
根据本申请实施例的第三方面,提供一种终端,包括:According to a third aspect of the embodiments of the present application, a terminal is provided, including:
处理器;processor;
用于存储处理器可执行指令的存储器;Memory for storing processor executable instructions;
其中,所述处理器被配置为执行一种显示目标图像的方法。Wherein, the processor is configured to perform a method of displaying the target image.
根据本申请实施例的第四方面,提供一种非临时性计算机可读存储介质,当所述存储介质中的指令由服务器的处理器执行时,使得服务器能够执行一种显示目标图像的方法。According to a fourth aspect of the embodiments of the present application, there is provided a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a server, the server is enabled to perform a method of displaying a target image.
根据本申请实施例的第五方面,提供一种应用程序,当应用程序在终端在运行时,使得终端执行一种显示目标图像的方法。According to a fifth aspect of the embodiments of the present application, there is provided an application program, which causes the terminal to execute a method of displaying a target image when the application program is running on the terminal.
本申请的实施例提供的技术方案可以包括以下有益效果:终端获取第一面部图像和第二面部图像,然后根据第一面部图像和第二面部图像的融合,来生成目标图像。而相关技术中,在预先存储的宝宝图像中选取与父母的图像相似度最大的宝宝图像作为目标图像,通过这种方法选取的目标图像,只是在有限的宝宝图像中选取的相对来说相似度最大的图像,但选取的图像并没有融合父母的图像的特征,因此目标图像与父母的图像的相似度较低,模拟效果较差。而本申请中,生成的目标图像融合了第一面部图像以及第二面部图像的特征,因此,相较于相关技术,本申请生成的目标图像与第一面部图像以及第二面部图像的相似度更大,因此,使得模拟效果更好。The technical solution provided by the embodiments of the present application may include the following beneficial effects: the terminal acquires the first facial image and the second facial image, and then generates the target image according to the fusion of the first facial image and the second facial image. In the related art, the baby image with the highest similarity to the parent's image is selected as the target image from the pre-stored baby images. The target image selected by this method is only the relatively similarity selected from the limited baby images. The largest image, but the selected image does not integrate the characteristics of the parent's image, so the similarity between the target image and the parent's image is low, and the simulation effect is poor. In this application, the generated target image merges the features of the first facial image and the second facial image. Therefore, compared with the related art, the target image generated by the application and the first facial image and the second facial image The similarity is greater, so the simulation effect is better.
附图说明BRIEF DESCRIPTION
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。The drawings herein are incorporated into the specification and constitute a part of the specification, show embodiments consistent with the application, and are used together with the specification to explain the principles of the application.
图1是根据一示例性实施例示出的一种显示目标图像的方法流程图。Fig. 1 is a flow chart of a method for displaying a target image according to an exemplary embodiment.
图2是根据一示例性实施例示出的一种显示目标图像的方法流程图。Fig. 2 is a flow chart of a method for displaying a target image according to an exemplary embodiment.
图3是根据一示例性实施例示出的一种显示目标图像的界面示意图。Fig. 3 is a schematic diagram showing an interface for displaying a target image according to an exemplary embodiment.
图4是根据一示例性实施例示出的一种显示目标图像的场景示意图。Fig. 4 is a schematic diagram showing a scene of displaying a target image according to an exemplary embodiment.
图5是根据一示例性实施例示出的一种显示目标图像的界面示意图。Fig. 5 is a schematic diagram showing an interface for displaying a target image according to an exemplary embodiment.
图6是根据一示例性实施例示出的一种显示目标图像的装置框图。Fig. 6 is a block diagram of a device for displaying a target image according to an exemplary embodiment.
图7是根据一示例性实施例示出的一种终端的结构示意框图;Fig. 7 is a schematic block diagram showing a structure of a terminal according to an exemplary embodiment;
图8是根据一示例性实施例示出的一种终端700的具体结构示意框图。Fig. 8 is a schematic block diagram of a specific structure of a terminal 700 according to an exemplary embodiment.
具体实施方式detailed description
图1是根据一示例性实施例示出的一种显示目标图像的方法的流程图,如图1所示,该方法用于终端中,包括以下步骤。Fig. 1 is a flowchart of a method for displaying a target image according to an exemplary embodiment. As shown in Fig. 1, the method is used in a terminal and includes the following steps.
在步骤101中,获取第一面部图像以及第二面部图像。In step 101, a first facial image and a second facial image are acquired.
在步骤102中,根据第一面部图像生成第一面部拓扑图像,根据第二面部图像生成第 二面部拓扑图像。In step 102, a first facial topology image is generated based on the first facial image, and a second facial topology image is generated based on the second facial image.
在步骤103中,将第一面部拓扑图像与第二面部拓扑图像进行融合,生成目标图像。In step 103, the first facial topology image and the second facial topology image are fused to generate a target image.
在步骤104中,在目标界面上显示目标图像。In step 104, a target image is displayed on the target interface.
可选地,将第一面部拓扑图像与第二面部拓扑图像进行融合,生成目标图像,包括:Optionally, fusing the first facial topology image with the second facial topology image to generate the target image includes:
将第一面部拓扑图像中的多个第一区域子图像与第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像。A plurality of first region sub-images in the first facial topology image and a plurality of corresponding second region sub-images in the second facial topology image are respectively fused to generate a target image.
可选地,将第一面部拓扑图像中的多个第一区域子图像与第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像,包括:Optionally, fusing a plurality of first region sub-images in the first facial topology image and a plurality of corresponding second-region sub images in the second facial topology image to generate a target image, including:
根据第一面部拓扑图像中的多个第一区域子图像,对第二面部拓扑图像中对应的多个第二区域子图像进行形状以及大小的调整;Adjusting the shape and size of the corresponding plurality of second area sub-images in the second facial topology image according to the plurality of first area sub-images in the first facial topology image;
将调整后的多个第二区域子图像贴合到对应的多个第一区域子图像上,根据预存的像素融合算法,对第一面部拓扑图像中的第一区域子图像与第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像。Fit the adjusted multiple second-region sub-images to the corresponding multiple first-region sub-images, and according to the pre-stored pixel fusion algorithm, the first-region sub-image and the second face in the first facial topology image The corresponding sub-images of the second region in the topological image are fused to generate the target image.
可选地,将第一面部拓扑图像中的第一区域子图像与第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像,包括:Optionally, fusing the first region sub-image in the first facial topology image with the corresponding second region sub-image in the second facial topology image to generate the target image includes:
根据第一面部拓扑图像对应的第一权重,调整第一面部拓扑图像,得到调整后的第一面部拓扑图像;Adjust the first facial topology image according to the first weight corresponding to the first facial topology image to obtain the adjusted first facial topology image;
根据第二面部拓扑图像对应的第二权重,调整第二面部拓扑图像,得到调整后的第二面部拓扑图像;Adjust the second facial topology image according to the second weight corresponding to the second facial topology image to obtain the adjusted second facial topology image;
将调整后的第一面部拓扑图像中的第一区域子图像,与调整后的第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像。The first region sub-image in the adjusted first facial topology image is fused with the corresponding second region sub-image in the adjusted second facial topology image to generate a target image.
可选地,第一权重为随机确定,第二权重为1与第一权重的差值。Optionally, the first weight is determined randomly, and the second weight is the difference between 1 and the first weight.
可选地,生成目标图像之后,还包括:Optionally, after generating the target image, the method further includes:
在第一面部拓扑图像的预设位置处获取第一肤色值,在第二面部拓扑图像的预设位置处获取第二肤色值;Acquiring a first skin color value at a preset position of the first facial topology image, and acquiring a second skin color value at a preset position of the second facial topology image;
根据第一肤色值与第二肤色值,确定目标图像的预设位置的目标肤色值;Determine the target skin color value at the preset position of the target image according to the first skin color value and the second skin color value;
根据目标肤色值,对目标图像中的预设位置的肤色值进行调整。According to the target skin color value, the skin color value at the preset position in the target image is adjusted.
可选地,根据第一肤色值与第二肤色值,确定目标图像的预设位置的目标肤色值,包括:Optionally, according to the first skin color value and the second skin color value, determining the target skin color value at the preset position of the target image includes:
获取预先设定的肤色调整值;Obtain preset skin tone adjustment values;
获取第一肤色值与第二肤色值的和值,将和值减去肤色调整值得到的差值,确定为目标面部拓扑图像的预设位置的目标肤色值。The sum of the first skin color value and the second skin color value is obtained, and the difference obtained by subtracting the skin color adjustment value from the sum value is determined as the target skin color value at the preset position of the target facial topology image.
Optionally, acquiring the first facial image and the second facial image includes:
when a target image generation instruction is received, capturing video frames through a camera, and randomly selecting one of the captured video frames as a source image;
extracting the first facial image and the second facial image from the source image;
after acquiring the first facial image and the second facial image, the method further includes:
displaying the first facial image and the second facial image on the target interface.
Optionally, generating the first facial topology image based on the first facial image and generating the second facial topology image based on the second facial image includes:
determining a first group of facial feature points in the first facial image;
connecting the facial feature points in the first group of facial feature points according to a preset connection rule to generate the first facial topology image;
determining a second group of facial feature points in the second facial image;
connecting the facial feature points in the second group of facial feature points according to the preset connection rule to generate the second facial topology image.
Optionally, the method further includes:
acquiring an update patch package, where the update patch package carries instructions for invoking central processing unit (CPU) functions and instructions for invoking graphics processing unit (GPU) functions, the CPU functions including using sensors, detecting touch events, detecting trigger events, and determining facial feature points;
loading the update patch package.
本申请实施例将结合具体的实施方式,对显示目标图像的方法进行介绍。该方法可以由终端实现,该终端可以是安装有显示目标图像的应用程序的终端。终端至少包括显卡以及CPU(Central Processing Unit,中央处理器)。如图2所示的生成目标图像的方法流程图,本实施例以第一面部图像和第二面部图像分别是父亲的面部图像和母亲的面部图像,生成的目标图像是宝宝图像为例,对生成目标图像的方法进行说明,该方法的处理流程可以包括如下的步骤:The embodiments of the present application will introduce a method of displaying a target image in combination with a specific implementation manner. This method may be implemented by a terminal, which may be a terminal installed with an application program that displays a target image. The terminal includes at least a graphics card and a CPU (Central Processing Unit, central processing unit). As shown in the flowchart of the method for generating a target image shown in FIG. 2, in this embodiment, the first facial image and the second facial image are the father's facial image and the mother's facial image, respectively, and the generated target image is the baby image. The method for generating the target image will be described. The processing flow of this method may include the following steps:
In step 201, when the terminal receives a target image generation instruction, it captures video frames through the camera, randomly selects one of the captured video frames as the source image, extracts the first facial image and the second facial image from the source image, and then displays the first facial image and the second facial image on the target interface of the terminal.
其中,源图像是指包含了待融合的至少两个面部图像的图像。The source image refers to an image containing at least two facial images to be fused.
When the user wants to generate a baby image, that is, when the user wants to generate a target image, the user can first open the application installed on the terminal and then tap the option for generating a baby image. The terminal receives the instruction corresponding to this option, which is the target image generation instruction. The terminal then starts the camera function and captures video frames through the camera. When the terminal receives a video capture stop instruction, or when the duration of capturing video frames reaches a preset duration, the terminal stops capturing and randomly selects at least one of the captured video frames as the source image.
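A minimal sketch of this capture-and-select step, assuming OpenCV is available on the terminal side and using a hypothetical preset capture duration; the actual capture pipeline and stop conditions are implementation details not fixed by this description:

    import random
    import time
    import cv2

    def capture_source_frame(max_duration_s=3.0):
        """Capture video frames for up to max_duration_s seconds and
        randomly pick one captured frame as the source image."""
        cap = cv2.VideoCapture(0)          # default camera
        frames = []
        start = time.time()
        while time.time() - start < max_duration_s:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        cap.release()
        if not frames:
            raise RuntimeError("no video frames were captured")
        return random.choice(frames)       # randomly selected source image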
Then, using a pre-stored face recognition algorithm, the terminal identifies two face images in the acquired source image to obtain the first facial image and the second facial image, and displays the first facial image and the second facial image on the target interface. The face recognition algorithm may be a local feature analysis method, an eigenface method, an elastic-model-based recognition algorithm, a neural network recognition algorithm, a hidden Markov model algorithm, or the like; any algorithm capable of extracting the first facial image and the second facial image from the source image may be used, and this application does not limit it.
If the terminal identifies more than two facial images in the source image, the terminal may crop the identified facial images and display them to the user, so that the user can manually select two of them as the first facial image and the second facial image, as shown in Figure 3. Alternatively, the terminal may issue an image error prompt asking the user to upload a new source image, which is not limited in this application. If the terminal cannot identify two facial images in the source image, it issues an image error prompt asking the user to upload a new source image. The prompt may take many forms, such as displaying prompt text, displaying a prompt picture, or playing a voice prompt; any form that prompts the user to re-upload the source image is acceptable, and this application does not limit how the image error prompt is issued.
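As an illustration of the face-extraction step, the sketch below uses OpenCV's bundled Haar cascade detector as a stand-in for the pre-stored face recognition algorithm; the cascade file name and the keep-two-largest rule are assumptions, not part of the original description:

    import cv2

    def extract_two_faces(source_image):
        """Detect faces in the source image and return two face crops,
        or raise so the caller can show an image error prompt."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(source_image, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) < 2:
            raise ValueError("fewer than two faces found; ask the user to re-upload")
        # If more than two faces are found, the real flow lets the user pick;
        # here we simply keep the two largest detections.
        boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)[:2]
        crops = [source_image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
        return crops[0], crops[1]   # first and second facial images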
终端确定源图像的处理方式,除了上述步骤中提供的处理方式,也可以是其它的处理方式。例如,终端可以接收用户上传的至少一个图像作为源图像。在这种方式中,用户可以选择上传预先存储在终端中的图像,也可以选择拍摄图像然后上传拍摄的图像。然后,用户可以上传至少一个图像,则终端将接收到的图像作为源图像。再例如,终端可以接收用户上传的至少一个图像以及视频帧,作为源图像。在这种方式中,用户可以既上传至少一个图像,又上传一段视频,终端在接收到的视频中选取至少一帧视频帧,将选取的视频帧以及接收到的图像一同作为源图像。终端确定源图像的处理方式多种多样,可以根据实际需求选取相应的处理方式,本申请对此不做限定。The terminal determines the processing method of the source image. In addition to the processing methods provided in the above steps, other processing methods may also be used. For example, the terminal may receive at least one image uploaded by the user as a source image. In this way, the user can choose to upload the image stored in the terminal in advance, or can choose to take the image and upload the taken image. Then, the user can upload at least one image, and the terminal uses the received image as the source image. For another example, the terminal may receive at least one image and video frame uploaded by the user as the source image. In this way, the user can upload at least one image and a piece of video. The terminal selects at least one video frame from the received video, and uses the selected video frame and the received image as the source image. The terminal determines the source image in various processing methods, and can select a corresponding processing method according to actual needs, which is not limited in this application.
在步骤202中,终端根据第一面部图像生成第一面部拓扑图像,根据第二面部图像生成第二面部拓扑图像。In step 202, the terminal generates a first facial topology image based on the first facial image, and generates a second facial topology image based on the second facial image.
In a feasible embodiment, the step of generating a facial topology image from a facial image in step 202 may be as follows: determining a group of facial feature points in the facial image, and connecting the facial feature points in this group according to a preset connection rule to generate the facial topology image, as shown in Figure 4. Taking the first facial image as an example, this becomes: determining a first group of facial feature points in the first facial image, and connecting the facial feature points in the first group according to the preset connection rule to generate the first facial topology image. Accordingly, step 202 includes the following steps 2021 and 2022.
In step 2021, after the first facial image and the second facial image are determined, the terminal may use a pre-stored facial feature point recognition algorithm to mark a group of facial feature points in the first facial image (that is, the first group of facial feature points). Based on the same algorithm, a second group of facial feature points is marked in the second facial image. The facial feature point recognition algorithm may be a CLM (Constrained Local Model) algorithm, a Cascaded Regression algorithm (a facial feature point localization method), a CNN (Convolutional Neural Network) algorithm, or the like; any algorithm capable of marking facial feature points in a facial image may be used, and this application does not limit it.
For any facial image, when the same facial feature point recognition algorithm is used, the number of marked facial feature points is the same, each facial feature point has an identifier, and the facial feature point corresponding to each identifier has a fixed meaning. For example, the facial feature points with identifiers 19 to 24 represent the upper edge of the left eyebrow in the facial image, and the facial feature point with identifier 49 represents the left corner of the left eye in the facial image.
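A sketch of this landmarking step, assuming the dlib 68-point shape predictor as one concrete choice of feature point recognition algorithm; the model file path is an assumption, and the description above only requires some algorithm that yields a fixed, indexed set of points:

    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    # Hypothetical path to the standard 68-point landmark model file.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def facial_feature_points(face_image_rgb):
        """Return an (N, 2) array of indexed facial feature points; index i
        always refers to the same anatomical location across images."""
        rects = detector(face_image_rgb, 1)
        if not rects:
            raise ValueError("no face found in the facial image")
        shape = predictor(face_image_rgb, rects[0])
        return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)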
In step 2022, the terminal acquires the preset connection rule and connects the facial feature points to generate the first facial topology image of the first facial image. The preset connection rule may be an identifier-based connection rule for the facial feature points. In this case, the facial feature points that the identifier connection rule specifies should be connected are connected, dividing the first facial image into multiple regions; each region may be called a region sub-image, and the region sub-images do not overlap one another. Typically, the identifier connection rule specifies that certain groups of three facial feature points are connected, so the resulting region sub-images are triangles. Of course, the rule may also specify connecting a different number of facial feature points according to actual needs, so that the region sub-images take other shapes, which is not limited in this application.
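The sketch below illustrates one way such an identifier connection rule could be represented: a fixed table of index triples applied to the landmark array from the previous step. The triple values are placeholders, not the actual rule used by the application; a Delaunay triangulation of the landmark indices would be one way to generate such a table.

    import numpy as np

    # Hypothetical identifier connection rule: each entry lists the identifiers
    # (landmark indices) of three feature points that form one region sub-image.
    CONNECTION_RULE = [
        (19, 20, 37),   # placeholder values around the left eyebrow / eye area
        (20, 21, 37),
        (37, 38, 40),
        # ... the full rule would cover the whole face without overlaps
    ]

    def region_sub_images(points):
        """Turn the indexed feature points of one facial image into a list of
        triangular region sub-images (each a (3, 2) array of vertices)."""
        return [np.array([points[a], points[b], points[c]], dtype=np.float32)
                for (a, b, c) in CONNECTION_RULE]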
Similarly, the second facial topology image may be generated by determining the second group of facial feature points in the second facial image and connecting the facial feature points in the second group according to the preset connection rule. The corresponding processing can refer to the generation of the first facial topology image described above and is not repeated here.
In step 203, the terminal adjusts the first facial topology image according to the first weight corresponding to the first facial topology image to obtain an adjusted first facial topology image, and adjusts the second facial topology image according to the second weight to obtain an adjusted second facial topology image.
The terminal first acquires the first weight corresponding to the first facial topology image and the second weight corresponding to the second facial topology image. The first weight may be a weight for the entire first facial topology image, in which case the second weight is a weight for the entire second facial topology image. Alternatively, the first weight may be a weight for a particular region sub-image in the first facial topology image, in which case the second weight is the weight for the corresponding region sub-image in the second facial topology image; both the first weight and the second weight may then include multiple different weight values. In addition, the first weight may be a weight for a facial-feature image composed of multiple region sub-images in the first facial topology image, and the second weight the weight for the corresponding facial-feature image in the second facial topology image; again both weights may include multiple different values. For example, if the first weight is the weight of the eyes in the first facial topology image, the second weight is the weight of the eyes in the second facial topology image. This application does not specifically limit which of these schemes is used.
获取第一权重以及第二权重的方式可以有很多种,下面列举几种方式。There may be many ways to obtain the first weight and the second weight, and several methods are listed below.
Way one: a technician presets the values of the first weight and the second weight; when the terminal fuses the first facial topology image and the second facial topology image, it simply reads the values of the first weight and the second weight from the storage areas corresponding to them.
Way two: the user determines the values of the first weight and the second weight. For example, the terminal provides the user with an option to adjust how similar the generated baby image should be to each of the two facial images; the user adjusts the similarity manually, and the terminal converts the similarity into the values of the first weight and the second weight.
Way three: the terminal sets the value of the first weight randomly, and the second weight is 1 minus the first weight. In this way the user does not know which facial image the generated baby image will resemble more, which increases both the uncertainty of the generated baby image and the fun.
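A small sketch of these three ways of obtaining the weights; the slider-to-weight mapping in way two is an assumption made only for illustration:

    import random

    def weights_preset(w1=0.5):
        """Way one: read a technician-preset first weight; w2 is its complement."""
        return w1, 1.0 - w1

    def weights_from_slider(similarity_to_first):
        """Way two: map a user similarity setting in [0, 1] to the two weights
        (assumed linear mapping)."""
        w1 = max(0.0, min(1.0, similarity_to_first))
        return w1, 1.0 - w1

    def weights_random():
        """Way three: pick the first weight at random; the second is 1 - w1."""
        w1 = random.random()
        return w1, 1.0 - w1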
After obtaining the first weight and the second weight through the above steps, the terminal adjusts the first facial topology image according to the first weight to obtain the adjusted first facial topology image, and adjusts the second facial topology image according to the second weight to obtain the adjusted second facial topology image.
For the case where the first weight is a weight for the entire first facial topology image and the second weight is a weight for the entire second facial topology image, and for way one, way two, and way three in step 203 above, this step is processed as follows: determine the RGB (red-green-blue) value of every pixel of the first facial topology image and multiply the RGB value of each pixel by the first weight; the product is the RGB value of the adjusted first facial topology image. The RGB values of the first facial topology image are then set to these adjusted values. Similarly, the second facial topology image is adjusted according to the second weight to obtain the adjusted second facial topology image; the processing refers to the generation of the adjusted first facial topology image above and is not repeated here.
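A minimal sketch of this whole-image weighting, assuming 8-bit images held as NumPy arrays; when the two weighted images are later summed during fusion, this amounts to a per-pixel weighted average:

    import numpy as np

    def apply_weight(topology_image, weight):
        """Scale every pixel's RGB value by the given weight (whole-image case)."""
        scaled = topology_image.astype(np.float32) * float(weight)
        return np.clip(scaled, 0, 255).astype(np.uint8)

    # Example: the two adjusted images together still span the full intensity range.
    # adjusted_first  = apply_weight(first_topology_image,  w1)
    # adjusted_second = apply_weight(second_topology_image, 1.0 - w1)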
For the case where the first weight is the weight of a particular region sub-image in the first facial topology image and the second weight is the weight of the corresponding region sub-image in the second facial topology image, and for way one, way two, and way three in step 203 above, this step is processed as follows: taking one region sub-image in the first facial topology image as an example, determine the RGB value of every pixel in that region sub-image and multiply each RGB value by the first weight; the product is the adjusted RGB value of that region sub-image, and the RGB values of the region sub-image are set to these adjusted values. After every region sub-image in the first facial topology image is adjusted in the same manner, the adjusted first facial topology image is obtained. Similarly, the second facial topology image is adjusted according to the second weight to obtain the adjusted second facial topology image; the processing refers to the generation of the adjusted first facial topology image above and is not repeated here.
For the case where the first weight is the weight of a facial-feature image composed of several region sub-images in the first facial topology image and the second weight is the weight of the corresponding facial-feature image in the second facial topology image, and for way one, way two, and way three above, this step is processed as follows: taking one facial-feature image composed of multiple region sub-images in the first facial topology image as an example, determine the RGB value of every pixel in that facial-feature image and multiply each RGB value by the first weight; the product is the adjusted RGB value of that facial-feature image, and the RGB values of the facial-feature image are set to these adjusted values. After every facial-feature image in the first facial topology image is adjusted in the same manner, the adjusted first facial topology image is obtained. Similarly, the second facial topology image is adjusted according to the second weight to obtain the adjusted second facial topology image; the processing refers to the generation of the adjusted first facial topology image above and is not repeated here.
在步骤204中,终端将调整后的第一面部拓扑图像中的第一区域子图像,与调整后的 第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像。In step 204, the terminal fuses the first region sub-image in the adjusted first facial topology image with the corresponding second region sub-image in the adjusted second facial topology image to generate a target image.
A first region sub-image is a part of the first facial topology image delimited by connecting preset facial feature points; likewise, a second region sub-image is a part of the second facial topology image delimited by connecting preset facial feature points. A first region sub-image corresponds to a second region sub-image when the identifiers of the facial feature points defining the first region sub-image are the same as the identifiers of the facial feature points defining the second region sub-image.
After the first facial topology image and the second facial topology image have been adjusted according to the first weight and the second weight through the above steps, the terminal fuses the first region sub-images in the adjusted first facial topology image with the corresponding second region sub-images in the adjusted second facial topology image according to a preset image fusion algorithm, to generate the target image. Many image fusion algorithms may be used, such as a pixel-level image fusion algorithm, a Poisson fusion algorithm, or an MVC (Mean Value Coordinates) image fusion algorithm; any algorithm capable of fusing the first facial topology image with the second facial topology image is acceptable, and this application does not limit it.
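As one illustration of the named alternatives, the sketch below uses OpenCV's Poisson (seamless) blending to fuse a warped region from the second topology image into the first; the mask construction is an assumption, and the pixel-level variant described in steps 2041-2042 below is a separate option:

    import cv2
    import numpy as np

    def poisson_fuse_region(first_img, second_img_warped, region_triangle):
        """Fuse one warped second-region sub-image into the first topology image
        using Poisson (seamless) blending over the triangle's area."""
        mask = np.zeros(first_img.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, region_triangle.astype(np.int32), 255)
        x, y, w, h = cv2.boundingRect(region_triangle.astype(np.int32))
        center = (x + w // 2, y + h // 2)
        return cv2.seamlessClone(second_img_warped, first_img, mask,
                                 center, cv2.NORMAL_CLONE)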
下面以图像融合算法为像素级图像融合算法进行举例说明,上述步骤204可以包括下述步骤2041-2042。The following uses the image fusion algorithm as a pixel-level image fusion algorithm for illustration. The above step 204 may include the following steps 2041-2042.
在步骤2041中,终端根据第一面部拓扑图像中的多个第一区域子图像,对第二面部拓扑图像中对应的多个第二区域子图像进行形状以及大小的调整。In step 2041, the terminal adjusts the shape and size of the corresponding multiple second-region sub-images in the second facial topology image according to the multiple first-region sub-images in the first facial topology image.
The terminal designates one facial topology image as the base image and the other as the material. Assuming the terminal designates the first facial topology image as the base image, then taking one second region sub-image in the second facial topology image as an example, the terminal first determines the first region sub-image in the first facial topology image that corresponds to this second region sub-image, and then adjusts the shape and size of the second region sub-image according to the shape and size of the determined first region sub-image, so that the second region sub-image has the same shape and size as the corresponding first region sub-image.
In step 2042, the adjusted second region sub-images are fitted onto the corresponding first region sub-images, and each first region sub-image in the first facial topology image is fused with the corresponding second region sub-image in the second facial topology image according to a pre-stored pixel fusion algorithm, to generate the target image.
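A sketch of steps 2041-2042 for one pair of triangular region sub-images, assuming the triangles come from the same identifier connection rule applied to both topology images; the affine warp handles the shape/size adjustment, and a per-pixel weighted sum stands in for the pre-stored pixel fusion algorithm:

    import cv2
    import numpy as np

    def fuse_triangle(base_img, material_img, tri_first, tri_second, w1, w2):
        """Warp the second-region sub-image onto the first-region sub-image
        (step 2041) and blend the pixels inside the triangle (step 2042)."""
        x, y, w, h = cv2.boundingRect(tri_first.astype(np.int32))
        # Affine transform mapping the second triangle onto the first one.
        M = cv2.getAffineTransform(tri_second.astype(np.float32),
                                   (tri_first - [x, y]).astype(np.float32))
        warped = cv2.warpAffine(material_img, M, (w, h),
                                flags=cv2.INTER_LINEAR,
                                borderMode=cv2.BORDER_REFLECT)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, (tri_first - [x, y]).astype(np.int32), 255)
        roi = base_img[y:y + h, x:x + w]
        blended = cv2.addWeighted(roi, w1, warped, w2, 0)
        roi[mask > 0] = blended[mask > 0]   # write back only inside the triangle
        return base_img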
In steps 203-204 above, the weights of the first facial topology image and the second facial topology image are adjusted and the adjusted images are then fused to generate the target image. Of course, the weights may also be left unadjusted, and after step 202 the first facial topology image and the second facial topology image may be fused directly to generate the target image; this application does not limit this.
Optionally, to make the generated baby image more attractive and more baby-like, the target image may be beautified accordingly. For example, the facial contour in the target image may be liquified so that it becomes rounder, the eye contours in the target image may be enlarged so that they become bigger and rounder, or the skin color values of the target image may be adjusted.
在步骤205中,在第一面部拓扑图像的预设位置处获取第一肤色值,在第二面部拓扑图像的预设位置处获取第二肤色值。In step 205, a first skin color value is obtained at a preset position of the first facial topology image, and a second skin color value is obtained at a preset position of the second facial topology image.
其中,预设位置是预设的区域子图像的位置,肤色值为面部拓扑图像中某个像素的 RGB值。Wherein, the preset position is the position of the preset area sub-image, and the skin color value is the RGB value of a pixel in the facial topology image.
To further reflect the association between the generated baby image and the parents' facial images, the skin color value of the baby image may be determined from the skin color values of the parents' facial images, that is, a first skin color value is extracted at the preset position of the first facial topology image and a second skin color value is extracted at the preset position of the second facial topology image.
在步骤206中,根据第一肤色值与第二肤色值,确定目标图像的预设位置的目标肤色值。In step 206, the target skin color value at the preset position of the target image is determined according to the first skin color value and the second skin color value.
确定第一肤色值以及第二肤色值后,终端可以计算第一肤色值以及第二肤色值的平均值,将其确定为目标图像的预设位置的目标肤色值。After determining the first skin color value and the second skin color value, the terminal may calculate the average value of the first skin color value and the second skin color value, and determine it as the target skin color value at the preset position of the target image.
In addition, considering that a baby's skin is generally fairer, the parents' skin color values may be adjusted appropriately when determining the target skin color value for the preset position. The processing may be as follows: acquire a preset skin color adjustment value; acquire the sum of the first skin color value and the second skin color value, and determine the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value for the preset position of the target facial topology image.
Apart from this, a technician may also preset the target skin color value for the preset position and store it in the terminal; when the terminal acquires the target skin color value for the preset position, it simply reads it from the corresponding storage area.
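A sketch of the two computed variants described above (the simple average of step 206, and the sum minus a preset adjustment value); the concrete adjustment numbers are placeholders, and values are treated per RGB channel:

    import numpy as np

    def target_skin_color_average(first_value, second_value):
        """Average of the two parents' skin color values at the preset position."""
        return (np.asarray(first_value, dtype=np.float32) +
                np.asarray(second_value, dtype=np.float32)) / 2.0

    def target_skin_color_adjusted(first_value, second_value,
                                   adjustment=(200, 170, 160)):
        """Sum of the two skin color values minus a preset skin color adjustment
        value (placeholder numbers), clipped to the valid 8-bit range."""
        total = (np.asarray(first_value, dtype=np.float32) +
                 np.asarray(second_value, dtype=np.float32))
        return np.clip(total - np.asarray(adjustment, dtype=np.float32), 0, 255)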
在步骤207中,根据目标肤色值,终端对目标图像中的预设位置的肤色值进行调整。In step 207, according to the target skin color value, the terminal adjusts the skin color value at the preset position in the target image.
当终端根据目标肤色值对目标图像中的预设位置的肤色值进行调整时,将预设位置的每个像素的RGB值调整为目标肤色值。When the terminal adjusts the skin color value at the preset position in the target image according to the target skin color value, the RGB value of each pixel at the preset position is adjusted to the target skin color value.
在步骤208中,终端在目标界面上显示调整肤色值后的目标图像。In step 208, the terminal displays the target image after adjusting the skin color value on the target interface.
After the skin color value at the preset position of the target image has been adjusted, the terminal displays the target image, as shown in Figure 5. In addition, to make the display more attractive and entertaining, the terminal may first play pre-stored audio or animation effects and then show the target image, or display the target image with a preset display effect; the display settings can be configured according to actual needs, and this application does not limit them.
In the related art, when an application is updated, a new version of the installation package may have to be installed on the terminal to add new functions. This wastes network resources, and the technicians have to build a new installation package for each update, wasting manpower and material resources and preventing fast updates. To achieve fast update iteration, when writing an update patch package for the above functions, the technicians can have the terminal's graphics driver script delivered from the server to the terminal as a string. When the user obtains the above functions through an update, the terminal acquires the update patch package, which carries instructions for invoking central processing unit (CPU) functions and instructions for invoking graphics processing unit (GPU) functions; the CPU functions include using sensors, detecting touch events, detecting trigger events, and determining facial feature points. The technicians use a dynamic scripting language to encapsulate the high-level CPU logic, that is, the OpenGL ES-related primitives are wrapped as instructions that the dynamic script can call, and the high-level client logic related to sensors, touch events, trigger events, and facial key point recognition is encapsulated in the script driver. The terminal then loads the update patch package, so that the update can be performed without delivering a new installation package, which speeds up iteration.
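Purely as an illustration of how such a string-delivered patch might be dispatched on the terminal, the sketch below maps script command names to wrapped CPU-side callbacks and GPU-side draw calls; the command names, patch format, and context objects are all hypothetical and are not defined by this description:

    # Hypothetical dispatcher for a string-delivered update patch: each line of
    # the patch script names a wrapped CPU or GPU capability to invoke.
    CPU_FUNCTIONS = {
        "read_sensor": lambda ctx: ctx["sensor"].read(),
        "detect_touch": lambda ctx: ctx["touch_queue"].poll(),
        "detect_trigger": lambda ctx: ctx["trigger_queue"].poll(),
        "facial_feature_points": lambda ctx: ctx["landmarker"](ctx["frame"]),
    }

    GPU_FUNCTIONS = {
        "draw_topology": lambda ctx: ctx["renderer"].draw(ctx["topology"]),
        "blend_textures": lambda ctx: ctx["renderer"].blend(ctx["textures"]),
    }

    def run_patch_script(script_text, ctx):
        """Execute a patch delivered as a string, one command per line."""
        for line in script_text.splitlines():
            name = line.strip()
            if not name:
                continue
            fn = CPU_FUNCTIONS.get(name) or GPU_FUNCTIONS.get(name)
            if fn is None:
                raise ValueError("unknown patch command: " + name)
            fn(ctx)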
With the method provided in the embodiments of this application, the terminal acquires a first facial image and a second facial image, generates a first facial topology image and a second facial topology image from them respectively, adjusts the two topology images according to the first weight and the second weight, and then fuses the adjusted first facial topology image with the adjusted second facial topology image to generate the target image. Because the generated target image combines features of both the first facial image and the second facial image, it has a high similarity to both, so the simulation effect is better. In addition, the generated target image can be adjusted according to the target skin color value, so that its skin color better matches a baby's characteristics and the generated target image looks better.
图6是根据一示例性实施例示出的一种显示目标图像的装置框图。参照图6,该装置包括获取单元610,生成单元620和显示单元630。Fig. 6 is a block diagram of a device for displaying a target image according to an exemplary embodiment. 6, the device includes an acquisition unit 610, a generation unit 620, and a display unit 630.
该获取单元610,被配置为获取第一面部图像以及第二面部图像;The obtaining unit 610 is configured to obtain a first facial image and a second facial image;
该生成单元620,被配置为根据第一面部图像生成第一面部拓扑图像,根据第二面部图像生成第二面部拓扑图像;The generating unit 620 is configured to generate a first facial topology image based on the first facial image and generate a second facial topology image based on the second facial image;
该生成单元620,还被配置为将第一面部拓扑图像与第二面部拓扑图像进行融合,生成目标图像;The generating unit 620 is further configured to fuse the first facial topology image and the second facial topology image to generate a target image;
该显示单元630,被配置为在目标界面上显示目标图像。The display unit 630 is configured to display the target image on the target interface.
可选地,该生成单元620,被配置为:Optionally, the generating unit 620 is configured to:
将第一面部拓扑图像中的多个第一区域子图像与第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像。A plurality of first region sub-images in the first facial topology image and a plurality of corresponding second region sub-images in the second facial topology image are respectively fused to generate a target image.
可选地,该生成单元620,被配置为:Optionally, the generating unit 620 is configured to:
根据第一面部拓扑图像中的多个第一区域子图像,对第二面部拓扑图像中对应的多个第二区域子图像进行形状以及大小的调整;Adjusting the shape and size of the corresponding plurality of second area sub-images in the second facial topology image according to the plurality of first area sub-images in the first facial topology image;
fit the adjusted plurality of second region sub-images onto the corresponding plurality of first region sub-images, and fuse each first region sub-image in the first facial topology image with the corresponding second region sub-image in the second facial topology image according to a pre-stored pixel fusion algorithm, to generate the target image.
可选地,该生成单元620被配置为:Optionally, the generating unit 620 is configured to:
根据第一面部拓扑图像对应的第一权重,调整第一面部拓扑图像,得到调整后的第一面部拓扑图像;Adjust the first facial topology image according to the first weight corresponding to the first facial topology image to obtain the adjusted first facial topology image;
根据第二面部拓扑图像对应的第二权重,调整第二面部拓扑图像,得到调整后的第二面部拓扑图像;Adjust the second facial topology image according to the second weight corresponding to the second facial topology image to obtain the adjusted second facial topology image;
将调整后的第一面部拓扑图像中的第一区域子图像,与调整后的第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像。The first region sub-image in the adjusted first facial topology image is fused with the corresponding second region sub-image in the adjusted second facial topology image to generate a target image.
可选地,第一权重为随机确定,第二权重为1与第一权重的差值。Optionally, the first weight is determined randomly, and the second weight is the difference between 1 and the first weight.
Optionally, the acquiring unit 610 is further configured to, after the target image is generated, acquire a first skin color value at a preset position of the first facial topology image and a second skin color value at a preset position of the second facial topology image;
该装置还包括:The device also includes:
确定单元,被配置为根据第一肤色值与第二肤色值,确定目标图像的预设位置的目标肤色值;The determining unit is configured to determine the target skin color value at the preset position of the target image according to the first skin color value and the second skin color value;
调整单元,被配置为根据目标肤色值,对目标图像中的预设位置的肤色值进行调整。The adjusting unit is configured to adjust the skin color value at the preset position in the target image according to the target skin color value.
可选地,该确定单元,被配置为:Optionally, the determination unit is configured to:
获取预先设定的肤色调整值;Obtain preset skin tone adjustment values;
获取第一肤色值与第二肤色值的和值,将和值减去肤色调整值得到的差值,确定为目标面部拓扑图像的预设位置的目标肤色值。The sum of the first skin color value and the second skin color value is obtained, and the difference obtained by subtracting the skin color adjustment value from the sum value is determined as the target skin color value at the preset position of the target facial topology image.
可选地,该获取单元610,被配置为:Optionally, the obtaining unit 610 is configured to:
当接收到目标图像生成指令时,通过摄像头采集视频帧,从已采集的多帧视频帧中,随机选取一帧视频帧作为源图像;When receiving the target image generation instruction, the video frame is collected through the camera, and one frame of the video frame is randomly selected as the source image from the collected multi-frame video frames;
在源图像中提取第一面部图像以及第二面部图像;Extract the first facial image and the second facial image from the source image;
该显示单元630,被配置为在获取第一面部图像以及第二面部图像之后,在目标界面上,显示第一面部图像以及第二面部图像。The display unit 630 is configured to display the first facial image and the second facial image on the target interface after acquiring the first facial image and the second facial image.
可选地,该生成单元620,被配置为:Optionally, the generating unit 620 is configured to:
在第一面部图像中,确定第一组面部特征点;In the first facial image, determine the first set of facial feature points;
按照预先设置的连接规则,将第一组面部特征点中的各面部特征点进行连接,生成第一面部拓扑图像;Connect the facial feature points in the first group of facial feature points according to the preset connection rule to generate the first facial topology image;
在第二面部图像中,确定第二组面部特征点;In the second facial image, determine the second set of facial feature points;
按照预先设置的连接规则,将第二组面部特征点中的各面部特征点进行连接,生成第二面部拓扑图像。Connect the facial feature points in the second group of facial feature points according to the preset connection rule to generate the second facial topology image.
Optionally, the acquiring unit 610 is configured to acquire an update patch package, where the update patch package carries instructions for invoking central processing unit (CPU) functions and instructions for invoking graphics processing unit (GPU) functions, the CPU functions including using sensors, detecting touch events, detecting trigger events, and determining facial feature points;
该装置还包括:The device also includes:
加载单元,被配置为加载该更新补丁包。The loading unit is configured to load the update patch package.
关于上述实施例中的装置,其中各个模块执行操作的具体方式已经在有关该方法的实施例中进行了详细描述,此处将不做详细阐述说明。Regarding the device in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment related to the method, and will not be elaborated here.
图7是根据一示例性实施例示出的一种终端的结构示意图,包括:处理器71;用于存储处理器可执行指令的存储器72;其中,所述处理器71被配置为执行上述任一实施例所述的显示目标图像的方法。Fig. 7 is a schematic structural diagram of a terminal according to an exemplary embodiment, including: a processor 71; a memory 72 for storing processor executable instructions; wherein, the processor 71 is configured to execute any of the above The method for displaying the target image described in the embodiment.
图8是根据一示例性实施例示出的一种终端700的具体结构框图。例如,终端700可以是移动电话,计算机,消息收发设备,游戏控制台,平板设备等终端。Fig. 8 is a specific structural block diagram of a terminal 700 according to an exemplary embodiment. For example, the terminal 700 may be a mobile phone, a computer, a messaging device, a game console, a tablet device, or the like.
Referring to Figure 8, the terminal 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
处理组件702通常控制装置700的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理组件702可以包括一个或多个处理器720来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件702可以包括一个或多个模块,便于处理组件702和其他组件之间的交互。例如,处理组件702可以包括多媒体模块,以方便多媒体组件708和处理组件702之间的交互。The processing component 702 generally controls the overall operations of the device 700, such as operations associated with display, phone calls, data communications, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to complete all or part of the steps in the above method. In addition, the processing component 702 may include one or more modules to facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
存储器704被配置为存储各种类型的数据以支持在终端700的操作。这些数据的示例包括用于在终端700上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器704可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。The memory 704 is configured to store various types of data to support operations at the terminal 700. Examples of these data include instructions for any application or method operated on the terminal 700, contact data, phone book data, messages, pictures, videos, and the like. The memory 704 may be implemented by any type of volatile or nonvolatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable and removable Programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
电源组件706为终端700的各种组件提供电力。电源组件706可以包括电源管理系统,一个或多个电源,及其他与为装置700生成、管理和分配电力相关联的组件。The power supply component 706 provides power to various components of the terminal 700. The power component 706 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 700.
多媒体组件708包括在所述终端700和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件708包括一个前置摄像头和/或后置摄像头。当终端700处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜系统或具有焦距和光学变焦能力。The multimedia component 708 includes a screen that provides an output interface between the terminal 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundary of the touch or sliding action, but also detect the duration and pressure related to the touch or sliding operation. In some embodiments, the multimedia component 708 includes a front camera and / or a rear camera. When the terminal 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and / or the rear camera may receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
音频组件710被配置为输出和/或输入音频信号。例如,音频组件710包括一个麦克风(MIC),当终端700处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器704或经由通信组件716发送。在一些实施例中,音频组件710还包括一个扬声器,用于输出音频信号。The audio component 710 is configured to output and / or input audio signals. For example, the audio component 710 includes a microphone (MIC). When the terminal 700 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal. The received audio signal may be further stored in the memory 704 or sent via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
I/O接口712为处理组件702和外围接口模块之间提供接口,上述外围接口模块可以是键盘,点击轮,按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。The I / O interface 712 provides an interface between the processing component 702 and a peripheral interface module. The peripheral interface module may be a keyboard, a click wheel, or a button. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
传感器组件714包括一个或多个传感器,用于为终端700提供各个方面的状态评估。例如,传感器组件714可以检测到设备700的打开/关闭状态,组件的相对定位,例如所述组件为装置700的显示器和小键盘,传感器组件714还可以检测终端700或终端700一个 组件的位置改变,用户与终端700接触的存在或不存在,终端700方位或加速/减速和终端700的温度变化。传感器组件714可以包括接近传感器,被配置用来在没有任何的物理接触时检测附近物体的存在。传感器组件714还可以包括光传感器,如CMOS或CCD图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组件714还可以包括加速度传感器,陀螺仪传感器,磁传感器,压力传感器或温度传感器。The sensor component 714 includes one or more sensors for providing the terminal 700 with status evaluation in various aspects. For example, the sensor component 714 can detect the on / off state of the device 700 and the relative positioning of the components, for example, the components are the display and keypad of the device 700, and the sensor component 714 can also detect the position change of the terminal 700 or a component of the terminal 700 The presence or absence of user contact with the terminal 700, the orientation or acceleration / deceleration of the terminal 700, and the temperature change of the terminal 700. The sensor assembly 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 714 may further include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
通信组件716被配置为便于终端700和其他设备之间有线或无线方式的通信。终端700可以接入基于通信标准的无线网络,如WiFi,运营商网络(如2G、3G、4G或5G),或它们的组合。在一个示例性实施例中,通信组件716经由广播信道接收来自外部广播管理系统的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件716还包括近场通信(NFC)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID)技术,红外数据协会(IrDA)技术,超宽带(UWB)技术,蓝牙(BT)技术和其他技术来实现。The communication component 716 is configured to facilitate wired or wireless communication between the terminal 700 and other devices. The terminal 700 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
在示例性实施例中,还提供了一种包括指令的计算机可读存储介质,例如包括指令的存储器704,上述指令可由终端700的处理器720执行以完成上述显示目标图像的方法,该方法包括:获取第一面部图像以及第二面部图像;根据第一面部图像生成第一面部拓扑图像,根据第二面部图像生成第二面部拓扑图像;将第一面部拓扑图像与第二面部拓扑图像进行融合,生成目标图像;在目标界面上显示目标图像。例如,所述非临时性计算机可读存储介质可以是ROM、随机存取存储器(RAM)、CD-ROM、磁带、软盘和光数据存储设备等。In an exemplary embodiment, there is also provided a computer-readable storage medium including instructions, for example, a memory 704 including instructions, the above instructions may be executed by the processor 720 of the terminal 700 to complete the above method of displaying a target image, the method includes : Acquire the first facial image and the second facial image; generate the first facial topology image based on the first facial image, generate the second facial topology image based on the second facial image; combine the first facial topology image with the second face The topological images are fused to generate the target image; the target image is displayed on the target interface. For example, the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, or the like.
在示例性实施例中,还提供了一种应用程序,包括一条或多条指令,该一条或多条指令可以由终端700的处理器720执行,以完成上述显示目标图像的方法,该方法包括:获取第一面部图像以及第二面部图像;根据该第一面部图像生成第一面部拓扑图像,根据该第二面部图像生成第二面部拓扑图像;将第一面部拓扑图像与第二面部拓扑图像进行融合,生成目标图像;在目标界面上显示该目标图像。可选地,上述指令还可以由终端700的处理器720执行以完成上述示例性实施例中所涉及的其他步骤。In an exemplary embodiment, an application program is also provided, which includes one or more instructions, and the one or more instructions may be executed by the processor 720 of the terminal 700 to complete the above method of displaying a target image, the method includes : Acquire a first facial image and a second facial image; generate a first facial topology image based on the first facial image, generate a second facial topology image based on the second facial image; compare the first facial topology image with the first The two facial topological images are fused to generate a target image; the target image is displayed on the target interface. Optionally, the above instructions may also be executed by the processor 720 of the terminal 700 to complete other steps involved in the above exemplary embodiments.

Claims (22)

  1. 一种显示目标图像的方法,包括:A method for displaying a target image includes:
    获取第一面部图像以及第二面部图像;Obtain the first facial image and the second facial image;
    根据所述第一面部图像生成第一面部拓扑图像,根据所述第二面部图像生成第二面部拓扑图像;Generating a first facial topology image based on the first facial image, and generating a second facial topology image based on the second facial image;
    将所述第一面部拓扑图像与所述第二面部拓扑图像进行融合,生成目标图像;Fuse the first facial topology image with the second facial topology image to generate a target image;
    在目标界面上显示所述目标图像。The target image is displayed on the target interface.
  2. 根据权利要求1所述的显示目标图像的方法,所述将所述第一面部拓扑图像与所述第二面部拓扑图像进行融合,生成目标图像,包括:The method of displaying a target image according to claim 1, the fusing the first facial topology image and the second facial topology image to generate a target image includes:
    将所述第一面部拓扑图像中的多个第一区域子图像与所述第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像。A plurality of first area sub-images in the first face topology image and a plurality of corresponding second area sub-images in the second face topology image are respectively fused to generate a target image.
  3. 根据权利要求2所述的显示目标图像的方法,所述将所述第一面部拓扑图像中的多个第一区域子图像与所述第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像,包括:The method for displaying a target image according to claim 2, wherein the plurality of first area sub-images in the first facial topology image and the corresponding second area sub-images in the second facial topology image The images are fused separately to generate the target image, including:
    根据所述第一面部拓扑图像中的多个第一区域子图像,对所述第二面部拓扑图像中对应的多个第二区域子图像进行形状以及大小的调整;Adjusting the shape and size of the corresponding second area sub-images in the second facial topology image according to the multiple first area sub-images in the first facial topology image;
    fitting the adjusted plurality of second region sub-images onto the corresponding plurality of first region sub-images, and fusing each first region sub-image in the first facial topology image with the corresponding second region sub-image in the second facial topology image according to a pre-stored pixel fusion algorithm, to generate the target image.
  4. 根据权利要求2所述的显示目标图像的方法,所述将所述第一面部拓扑图像中的多个第一区域子图像与所述第二面部拓扑图像中对应的多个第二区域子图像分别进行融合,生成目标图像,包括:The method for displaying a target image according to claim 2, wherein the plurality of first area sub-images in the first facial topology image and the corresponding second area sub-images in the second facial topology image The images are fused separately to generate the target image, including:
    根据所述第一面部拓扑图像对应的第一权重,调整所述第一面部拓扑图像,得到调整后的第一面部拓扑图像;Adjusting the first facial topology image according to the first weight corresponding to the first facial topology image to obtain the adjusted first facial topology image;
    根据所述第二面部拓扑图像对应的第二权重,调整所述第二面部拓扑图像,得到调整后的第二面部拓扑图像;Adjusting the second facial topology image according to the second weight corresponding to the second facial topology image to obtain the adjusted second facial topology image;
    将所述调整后的第一面部拓扑图像中的第一区域子图像,与所述调整后的第二面部拓扑图像中对应的第二区域子图像进行融合,生成目标图像。The first region sub-image in the adjusted first facial topology image is fused with the corresponding second region sub-image in the adjusted second facial topology image to generate a target image.
  5. 根据权利要求4所述的显示目标图像的方法,所述第一权重为随机确定,所述第二权重为1与所述第一权重的差值。The method for displaying a target image according to claim 4, wherein the first weight is determined randomly, and the second weight is a difference between 1 and the first weight.
  6. 根据权利要求1所述的显示目标图像的方法,所述生成目标图像之后,还包括:The method for displaying a target image according to claim 1, after generating the target image, further comprising:
    在所述第一面部拓扑图像的预设位置处获取第一肤色值,在所述第二面部拓扑图像的预设位置处获取第二肤色值;Acquiring a first skin color value at a preset position of the first facial topology image, and acquiring a second skin color value at a preset position of the second facial topology image;
    根据所述第一肤色值与所述第二肤色值,确定所述目标图像的预设位置的目标肤色 值;Determine the target skin color value at the preset position of the target image according to the first skin color value and the second skin color value;
    根据所述目标肤色值,对所述目标图像中的预设位置的肤色值进行调整。According to the target skin color value, the skin color value at a preset position in the target image is adjusted.
  7. 根据权利要求6所述的显示目标图像的方法,所述根据所述第一肤色值与所述第二肤色值,确定所述目标图像的预设位置的目标肤色值,包括:The method for displaying a target image according to claim 6, the determining the target skin color value at a preset position of the target image according to the first skin color value and the second skin color value includes:
    获取预先设定的肤色调整值;Obtain preset skin tone adjustment values;
    acquiring the sum of the first skin color value and the second skin color value, and determining the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value for the preset position of the target facial topology image.
  8. The method for displaying a target image according to claim 1, wherein acquiring the first facial image and the second facial image comprises:
    when a target image generation instruction is received, capturing video frames through a camera, and randomly selecting one frame from the captured video frames as a source image;
    extracting the first facial image and the second facial image from the source image;
    and after acquiring the first facial image and the second facial image, the method further comprises:
    displaying the first facial image and the second facial image on the target interface.
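    (Illustrative note, not part of the claims.) A sketch of the acquisition flow in claim 8, assuming OpenCV camera capture; the Haar-cascade face detector is only an assumed stand-in, since the claim does not say how the two facial images are extracted from the source image.

```python
import random
import cv2

def acquire_face_images(num_frames: int = 30):
    """Capture frames, pick one at random as the source image, and crop out
    up to two facial images (sketch only)."""
    capture = cv2.VideoCapture(0)
    frames = []
    for _ in range(num_frames):
        ok, frame = capture.read()
        if ok:
            frames.append(frame)
    capture.release()
    if not frames:
        raise RuntimeError("camera produced no frames")

    source = random.choice(frames)  # randomly selected source frame
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # crops[0] / crops[1] would play the role of the first / second facial image.
    crops = [source[y:y + h, x:x + w] for (x, y, w, h) in faces[:2]]
    return source, crops
```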
  9. The method for displaying a target image according to claim 1, wherein generating the first facial topology image based on the first facial image and generating the second facial topology image based on the second facial image comprises:
    determining a first group of facial feature points in the first facial image;
    connecting the facial feature points in the first group of facial feature points according to a preset connection rule, to generate the first facial topology image;
    determining a second group of facial feature points in the second facial image;
    connecting the facial feature points in the second group of facial feature points according to the preset connection rule, to generate the second facial topology image.
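    (Illustrative note, not part of the claims.) A sketch of claim 9's topology construction, assuming the feature points are already available as pixel coordinates; the PRESET_EDGES table is a hypothetical connection rule, since the actual rule and landmark count are not given in the claims.

```python
import cv2
import numpy as np

# Hypothetical preset connection rule: pairs of landmark indices to join.
PRESET_EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]

def build_topology_image(face_image: np.ndarray, feature_points) -> np.ndarray:
    """Draw the facial topology by connecting feature points per the preset rule."""
    topology = face_image.copy()
    for i, j in PRESET_EDGES:
        # Each edge of the preset rule becomes a line segment in the topology image.
        cv2.line(topology, feature_points[i], feature_points[j],
                 color=(0, 255, 0), thickness=1)
    return topology
```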
  10. The method for displaying a target image according to claim 1, further comprising:
    acquiring an update patch package, wherein the update patch package carries instructions for calling central processing unit (CPU) functions and instructions for calling graphics processing unit (GPU) functions, and the CPU functions include using sensors, detecting touch events, detecting trigger events, and determining facial feature points;
    loading the update patch package.
  11. An apparatus for displaying a target image, comprising:
    an obtaining unit configured to obtain a first facial image and a second facial image;
    a generating unit configured to generate a first facial topology image based on the first facial image, and generate a second facial topology image based on the second facial image;
    the generating unit being further configured to fuse the first facial topology image with the second facial topology image to generate a target image; and
    a display unit configured to display the target image on a target interface.
  12. The apparatus for displaying a target image according to claim 11, wherein the generating unit is configured to:
    fuse a plurality of first region sub-images in the first facial topology image with a corresponding plurality of second region sub-images in the second facial topology image, respectively, to generate the target image.
  13. The apparatus for displaying a target image according to claim 12, wherein the generating unit is configured to:
    adjust the shape and size of the corresponding plurality of second region sub-images in the second facial topology image according to the plurality of first region sub-images in the first facial topology image;
    attach the plurality of adjusted second region sub-images to the corresponding plurality of first region sub-images, and fuse, according to a pre-stored pixel fusion algorithm, each first region sub-image in the first facial topology image with the corresponding second region sub-image in the second facial topology image, to generate the target image.
  14. The apparatus for displaying a target image according to claim 12, wherein the generating unit is configured to:
    adjust the first facial topology image according to a first weight corresponding to the first facial topology image, to obtain an adjusted first facial topology image;
    adjust the second facial topology image according to a second weight corresponding to the second facial topology image, to obtain an adjusted second facial topology image;
    fuse each first region sub-image in the adjusted first facial topology image with the corresponding second region sub-image in the adjusted second facial topology image, to generate the target image.
  15. The apparatus for displaying a target image according to claim 14, wherein the first weight is determined randomly, and the second weight is the difference between 1 and the first weight.
  16. The apparatus for displaying a target image according to claim 11, wherein the apparatus further comprises:
    the obtaining unit being further configured to, after the target image is generated, acquire a first skin color value at a preset position of the first facial topology image and acquire a second skin color value at a preset position of the second facial topology image;
    a determining unit configured to determine a target skin color value at a preset position of the target image according to the first skin color value and the second skin color value; and
    an adjusting unit configured to adjust the skin color value at the preset position of the target image according to the target skin color value.
  17. The apparatus for displaying a target image according to claim 16, wherein the determining unit is configured to:
    acquire a preset skin color adjustment value;
    acquire the sum of the first skin color value and the second skin color value, and determine the difference obtained by subtracting the skin color adjustment value from the sum as the target skin color value at the preset position of the target facial topology image.
  18. The apparatus for displaying a target image according to claim 11, wherein the obtaining unit is configured to:
    when a target image generation instruction is received, capture video frames through a camera, and randomly select one frame from the captured video frames as a source image;
    extract the first facial image and the second facial image from the source image;
    and the display unit is configured to display the first facial image and the second facial image on the target interface after the first facial image and the second facial image are acquired.
  19. The apparatus for displaying a target image according to claim 11, wherein the generating unit is configured to:
    determine a first group of facial feature points in the first facial image;
    connect the facial feature points in the first group of facial feature points according to a preset connection rule, to generate the first facial topology image;
    determine a second group of facial feature points in the second facial image;
    connect the facial feature points in the second group of facial feature points according to the preset connection rule, to generate the second facial topology image.
  20. The apparatus for displaying a target image according to claim 11, wherein the apparatus further comprises:
    the obtaining unit being configured to acquire an update patch package, wherein the update patch package carries instructions for calling central processing unit (CPU) functions and instructions for calling graphics processing unit (GPU) functions, and the CPU functions include using sensors, detecting touch events, detecting trigger events, and determining facial feature points; and
    a loading unit configured to load the update patch package.
  21. A terminal, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    obtain a first facial image and a second facial image;
    generate a first facial topology image based on the first facial image, and generate a second facial topology image based on the second facial image;
    fuse the first facial topology image with the second facial topology image to generate a target image; and
    display the target image on a target interface.
  22. A computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of a server, the server is enabled to perform a method of displaying a target image, the method comprising:
    obtaining a first facial image and a second facial image;
    generating a first facial topology image based on the first facial image, and generating a second facial topology image based on the second facial image;
    fusing the first facial topology image with the second facial topology image to generate a target image; and
    displaying the target image on a target interface.
PCT/CN2019/107085 2018-11-09 2019-09-20 Method and apparatus for displaying target image, terminal, and storage medium WO2020093798A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811334358.2A CN109523461A (en) 2018-11-09 2018-11-09 Method, apparatus, terminal and the storage medium of displaying target image
CN201811334358.2 2018-11-09

Publications (1)

Publication Number Publication Date
WO2020093798A1

Family

ID=65773650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/107085 WO2020093798A1 (en) 2018-11-09 2019-09-20 Method and apparatus for displaying target image, terminal, and storage medium

Country Status (2)

Country Link
CN (1) CN109523461A (en)
WO (1) WO2020093798A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109523461A (en) * 2018-11-09 2019-03-26 北京达佳互联信息技术有限公司 Method, apparatus, terminal and the storage medium of displaying target image
CN111339833B (en) * 2020-02-03 2022-10-28 重庆特斯联智慧科技股份有限公司 Identity verification method, system and equipment based on face edge calculation
CN111340690A (en) * 2020-03-23 2020-06-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111784604B (en) * 2020-06-29 2022-02-18 北京字节跳动网络技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN112991248A (en) * 2021-03-10 2021-06-18 维沃移动通信有限公司 Image processing method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012074878A (en) * 2010-09-28 2012-04-12 Nintendo Co Ltd Image generation program, imaging device, imaging system, and image generation method
CN103295210A (en) * 2012-03-01 2013-09-11 汉王科技股份有限公司 Infant image composition method and device
CN107609506A (en) * 2017-09-08 2018-01-19 百度在线网络技术(北京)有限公司 Method and apparatus for generating image
CN107767335A (en) * 2017-11-14 2018-03-06 上海易络客网络技术有限公司 A kind of image interfusion method and system based on face recognition features' point location
CN107852443A (en) * 2015-07-21 2018-03-27 索尼公司 Message processing device, information processing method and program
CN108229330A (en) * 2017-12-07 2018-06-29 深圳市商汤科技有限公司 Face fusion recognition methods and device, electronic equipment and storage medium
CN109523461A (en) * 2018-11-09 2019-03-26 北京达佳互联信息技术有限公司 Method, apparatus, terminal and the storage medium of displaying target image

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPR259301A0 (en) * 2001-01-18 2001-02-15 Polymerat Pty Ltd Polymers having co-continuous architecture
JP2004005265A (en) * 2002-05-31 2004-01-08 Omron Corp Image composing method, device and system
US9019300B2 (en) * 2006-08-04 2015-04-28 Apple Inc. Framework for graphics animation and compositing operations
CN103489011A (en) * 2013-09-16 2014-01-01 广东工业大学 Three-dimensional face identification method with topology robustness
CN103927531B (en) * 2014-05-13 2017-04-05 江苏科技大学 It is a kind of based on local binary and the face identification method of particle group optimizing BP neural network
CN105447864B (en) * 2015-11-20 2018-07-27 小米科技有限责任公司 Processing method, device and the terminal of image

Also Published As

Publication number Publication date
CN109523461A (en) 2019-03-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19881996; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19881996; Country of ref document: EP; Kind code of ref document: A1