WO2021169736A1 - Beauty processing method and device - Google Patents

Beauty processing method and device

Info

Publication number
WO2021169736A1
Authority
WO
WIPO (PCT)
Prior art keywords
beauty
parameter information
original
parameters
original parameter
Prior art date
Application number
PCT/CN2021/074638
Other languages
English (en)
French (fr)
Inventor
张兴华
王学智
陈凯
尚凤仪
伍凡
马鸿浩
陶柳西
Original Assignee
Beijing ByteDance Network Technology Co., Ltd. (北京字节跳动网络技术有限公司)
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co., Ltd.
Priority to JP2022550940A (granted as JP7516535B2)
Priority to EP21761290.2A (published as EP4113430A4)
Publication of WO2021169736A1
Priority to US17/885,942 (granted as US11769286B2)

Classifications

    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T5/20 Image enhancement or restoration using local operators
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/172 Classification, e.g. identification
    • G06V40/50 Maintenance of biometric data or enrolment thereof
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the technical field of beauty treatment, and in particular, to a beauty treatment method and device.
  • beauty parameters are usually set for a preset face shape for the user to use for beauty.
  • Current beauty solutions cannot meet the user's personalized beauty needs, and the beauty effect needs to be improved.
  • the present disclosure provides a beauty treatment method and device, which can solve the problem that the beauty effect in the related art needs to be improved.
  • the first purpose of the present disclosure is to propose a beauty processing method that can generate personalized beauty parameters, meet the user's personalized beauty needs, and improve the beauty effect.
  • the second purpose of the present disclosure is to propose a beauty processing device.
  • the third purpose of the present disclosure is to propose an electronic device.
  • the fourth purpose of the present disclosure is to propose a computer-readable storage medium.
  • An embodiment of the first aspect of the present disclosure proposes a beauty processing method, including: obtaining an original face image, and extracting original parameter information corresponding to the part features of a beauty part; determining a part beauty parameter corresponding to the beauty part according to the original parameter information; and processing the beauty part according to the part beauty parameter to generate a target face image.
  • An embodiment of the second aspect of the present disclosure proposes a beauty processing device, including:
  • the extraction module is used to obtain the original face image, and extract the original parameter information corresponding to the feature of the beauty part;
  • a determining module configured to determine a part beauty parameter corresponding to the beauty part according to the original parameter information
  • the processing module is configured to process the beauty part according to the beauty parameter of the part to generate a target face image.
  • An embodiment of the third aspect of the present disclosure proposes an electronic device, including: a processor and a memory, wherein the memory is used to store executable program code, and the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to execute the beauty processing method described in the embodiment of the first aspect.
  • An embodiment of the fourth aspect of the present disclosure proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the beauty processing method as described in the embodiment of the first aspect is implemented.
  • An embodiment in the above publication has the following advantages or beneficial effects: the original face image is acquired and the original parameter information corresponding to the part features of the beauty part is extracted; the part beauty parameter corresponding to the beauty part is determined according to the original parameter information; and the beauty part is processed according to the part beauty parameter to generate the target face image. In this way, personalized beauty parameters can be generated according to features such as the position and proportion of the various beauty parts in the face, so as to meet the user's personalized beauty needs, improve the accuracy of the beauty parameters, and improve the beauty effect.
  • FIG. 1 is a schematic flowchart of a beauty processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of another beauty processing method provided by an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a beauty processing device provided by an embodiment of the disclosure.
  • FIG. 4 is a schematic structural diagram of another beauty processing device provided by an embodiment of the disclosure.
  • FIG. 5 shows a schematic structural diagram of an electronic device suitable for implementing the embodiments of the present disclosure
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a beauty processing method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method includes:
  • Step 101 Obtain an original face image, and extract original parameter information corresponding to the part features of the beauty part.
  • the execution subject is an electronic device, including but not limited to devices with camera processing functions such as smart phones and wearable devices.
  • an original face image can be obtained, and then the original face image is processed through a related face recognition technology to extract original parameter information corresponding to the feature of the beauty site.
  • the user's face image can be captured when the user turns on the camera function, and the face image can be recognized, so as to extract the original parameter information corresponding to the feature of the beauty part.
  • the face image to be processed can be acquired, and the face image to be processed can be recognized, so as to extract the original parameter information corresponding to the feature of the beauty part.
  • the beauty parts may include face shape, eyes, eyebrows, nose, mouth, skin, etc.
  • Each beauty part corresponds to one or more part features.
  • the part features include but are not limited to the size (length, width), position, distance from other parts, etc. For example, the part features corresponding to the eyes include eye size, the distance between the eyes, the distance between the eyes and the eyebrows, etc.
  • the original parameter information is a relative value, for example, relative to the size of the face.
  • extracting the original parameter information corresponding to the part features of the beauty part includes: querying a preset local beauty database, acquiring the local features corresponding to the beauty part, and extracting from the original face image The first original parameter information corresponding to the local feature.
  • a local beauty database is preset, in which each beauty part and the local features corresponding to that beauty part are stored. For example, the local features corresponding to the eyes include the size of the eyes, the distance between the eyes, the distance between the eyes and the eyebrows, and so on.
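As a concrete illustration of the local beauty database described above, the following sketch stores each beauty part with its local features and pulls the matching measurements from a set of pre-computed, face-size-relative measurements of the original face image. All names and values here (`LOCAL_BEAUTY_DB`, `extract_first_original_params`, the feature keys) are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical local beauty database: beauty part -> local features to measure.
LOCAL_BEAUTY_DB = {
    "eyes": ["eye_size", "eye_distance", "eye_eyebrow_distance"],
    "mouth": ["mouth_width", "lip_thickness"],
}

def extract_first_original_params(face_measurements, beauty_part):
    """Query the database for the beauty part's local features and extract
    the corresponding first original parameter information."""
    features = LOCAL_BEAUTY_DB[beauty_part]
    return {f: face_measurements[f] for f in features}

# Measurements are relative values (e.g. relative to face size), as the text notes.
measurements = {"eye_size": 0.12, "eye_distance": 0.32,
                "eye_eyebrow_distance": 0.08, "mouth_width": 0.35,
                "lip_thickness": 0.06}
params = extract_first_original_params(measurements, "eyes")
```

A real implementation would obtain `face_measurements` from a face-landmark model rather than a literal dictionary.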
  • Step 102 Determine a part beauty parameter corresponding to the beauty part according to the original parameter information.
  • determining the part beauty parameter corresponding to the beauty part according to the original parameter information includes: querying a local beauty database to determine the part beauty parameter corresponding to the first original parameter information.
  • the mapping relationship between the first original parameter information and the part beauty parameters is stored in the local beauty database, and the part beauty parameter corresponding to the first original parameter information is determined by querying this mapping relationship. Different first original parameter information can correspond to different part beauty parameters; for example, when the distance between the eyes differs, the determined eye beauty parameters differ. The specific mapping relationship between the first original parameter information and the part beauty parameters can be determined through a large amount of experimental data, or can be set as needed, for example through AI (Artificial Intelligence) modeling; no specific limitation is made here.
  • alternatively, the first original parameter information can be processed by a related algorithm to generate the part beauty parameters; multiple pieces of first original parameter information can also be calculated together to generate the beauty parameters.
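The patent leaves the concrete mapping from first original parameter information to part beauty parameters open (experimental data or AI modeling). A minimal stand-in is a binned lookup, here on eye distance; the thresholds and adjustment values below are invented for the sketch.

```python
def eye_beauty_parameter(eye_distance):
    """Map the measured eye distance (a relative value) to an eye beauty
    parameter. Different eye distances yield different parameters, as the
    text describes; the bins are illustrative placeholders."""
    if eye_distance < 0.30:      # close-set eyes: widen the spacing slightly
        return {"eye_spacing_adjust": 0.05}
    elif eye_distance > 0.36:    # wide-set eyes: narrow the spacing slightly
        return {"eye_spacing_adjust": -0.05}
    return {"eye_spacing_adjust": 0.0}
```

In practice this lookup table would be replaced by the experimentally or AI-derived mapping stored in the local beauty database.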
  • Step 103 Process the beauty parts according to the beauty parameters of the parts to generate a target face image.
  • the image to be beautified is acquired, and after the part beauty parameters are determined for each beauty part, each beauty part in the image is processed using its part beauty parameters to generate the target face image after the beautification process.
  • each beauty site may include multiple adjustment items.
  • face shape may include adjustment items such as size, cheekbones, mandible, forehead, chin, hairline, etc.
  • skin may include adjustment items such as skin rejuvenation, skin color, and the like; there can be multiple beauty parameters for each beauty part.
  • after the matched part beauty parameters are determined, they are sent to the rendering engine, and beauty rendering is performed for each adjustment item, so as to generate the target face image after the beauty processing.
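The per-adjustment-item dispatch to the rendering engine can be sketched as below. `render_adjustment` is an invented stand-in for the rendering engine call; here it simply records each applied adjustment instead of modifying pixels.

```python
def render_adjustment(image, part, item, value):
    """Placeholder rendering step: record the (part, item, value) triple.
    A real engine would warp or recolor the image region instead."""
    return image + [(part, item, value)]

def beautify(image, part_params):
    """Apply every adjustment item of every beauty part in turn."""
    for part, adjustments in part_params.items():
        for item, value in adjustments.items():
            image = render_adjustment(image, part, item, value)
    return image

target = beautify([], {"face_shape": {"chin": 0.1, "forehead": -0.05}})
```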
  • default beauty parameters are usually set for a preset face shape for the user to use for beauty.
  • the default beauty parameters are set for a standard face, where the standard face refers to a model face generated by averaging facial features. Individual faces differ: for the same long face, some people's faces are closer to a rectangle and some closer to an ellipse; and for the same face shape, the distribution and characteristics of the facial features also differ. For example, among users with long elliptical faces, mouth size and eye size differ from user to user. Because of these individual differences, a scheme that sets default beauty parameters may not be suitable for all users. Related solutions therefore cannot meet the user's personalized beauty needs, the accuracy of the beauty parameters is not high, and the beauty effect needs to be improved.
  • the beauty processing method of the embodiment of the present disclosure acquires the original face image and extracts the original parameter information corresponding to the part features of the beauty part; determines the part beauty parameters corresponding to the beauty part according to the original parameter information; and processes the beauty part according to the part beauty parameters to generate the target face image.
  • personalized beauty parameters can be generated according to features such as the position and proportion of various beauty parts in the face, so as to meet the user's personalized beauty needs, improve the accuracy of the beauty parameters, and improve the beauty effect.
  • FIG. 2 is a schematic flowchart of another beauty processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes:
  • Step 201 Obtain an original face image.
  • Step 202 Query a preset global beauty database to obtain global features corresponding to the beauty parts.
  • a global beauty database is preset, in which each beauty part and the global features corresponding to that beauty part are stored, where the global features may include features of other beauty parts associated with the current beauty part. For example, the global features corresponding to the eyes include face width, face length, forehead height, and so on.
  • Step 203 Extract second original parameter information corresponding to the global feature from the original face image.
  • Step 204 Query the global beauty database, and determine the global beauty parameters corresponding to the second original parameter information.
  • mapping relationship between the second original parameter information and the global beauty parameters is stored in the global beauty database, where different second original parameter information may correspond to different global beauty parameters.
  • the quantified value of the specific relationship between the second original parameter information and the global beauty parameter can be determined according to needs, for example, realized by AI modeling.
  • for example, if the beauty part is the mouth and the corresponding adjustment item is lip thickness, the corresponding global features include face length and face width, and the extracted second original parameter information includes face length X and face width Y. By querying the global beauty database, the beauty parameter A corresponding to face length X and face width Y is determined as the global beauty parameter of this adjustment item.
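The lip-thickness example above can be sketched as a query against a hypothetical global beauty database keyed by bucketed face length X and face width Y. The bucket size, keys, and parameter values are invented assumptions.

```python
def bucket(v):
    """Coarsen a relative measurement to one decimal place so that nearby
    faces fall into the same database key."""
    return round(v, 1)

# Hypothetical global beauty database:
# (face_length_bucket, face_width_bucket) -> global beauty parameter A.
GLOBAL_BEAUTY_DB = {
    (1.3, 0.9): 0.04,
    (1.5, 1.0): 0.02,
}

def global_lip_thickness_param(face_length, face_width):
    """Determine the global beauty parameter A for the lip-thickness
    adjustment item from face length X and face width Y."""
    key = (bucket(face_length), bucket(face_width))
    return GLOBAL_BEAUTY_DB.get(key, 0.0)
```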
  • a global beauty database can be set corresponding to different user attributes.
  • Different user attributes have different mapping relationships in the global beauty database, where user attributes include but are not limited to gender, race, country, etc.
  • in an embodiment, the default beauty parameters of the preset face shape can be adjusted according to the global beauty parameters to generate the part beauty parameters. For example, the global beauty parameters can be added to the default beauty parameters to obtain the part beauty parameters. It should be noted that adjusting the default beauty parameters according to the global beauty parameters to generate the part beauty parameters is not limited to this implementation; no specific restriction is made here.
  • Step 205 Adjust the beauty parameters of the part corresponding to the first original parameter information according to the global beauty parameters.
  • in this embodiment, the local features corresponding to the beauty part are acquired by querying the preset local beauty database, and the first original parameter information corresponding to the local features is extracted from the original face image. The local beauty database is then queried to determine the part beauty parameters corresponding to the first original parameter information. Furthermore, the part beauty parameters corresponding to the first original parameter information are adjusted according to the global beauty parameters. For example, if part beauty parameter 1 is determined according to the first original parameter information, the global beauty parameter and part beauty parameter 1 can be added to obtain part beauty parameter 2, and part beauty parameter 2 is used as the final part beauty parameter.
  • a confidence interval may be preset, and the adjusted part beauty parameters can be matched with the confidence interval.
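The confidence-interval check above amounts to clamping the adjusted part beauty parameter into a preset interval before use. The interval bounds below are illustrative, not values from the patent.

```python
def apply_confidence_interval(value, low, high):
    """Match the adjusted part beauty parameter against the preset
    confidence interval [low, high], clamping it if it falls outside."""
    return max(low, min(high, value))

# Example: an adjusted parameter of 0.9 is pulled back to the interval edge.
clamped = apply_confidence_interval(0.9, -0.2, 0.5)
```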
  • the beauty part can be processed according to the part beauty parameters.
  • querying the preset local beauty database includes: acquiring user attributes, and querying the local beauty database corresponding to the user attributes to acquire the part beauty parameters.
  • a local beauty database is set corresponding to different user attributes, and the mapping relationship between the first original parameter information and the part beauty parameters is stored in the local beauty database, and the mapping relationships corresponding to different user attributes are different.
  • user attributes include, but are not limited to, gender, race, country, etc.
  • implementation methods for obtaining user attributes include, but are not limited to, face recognition. As a result, it can be applied to users of different genders and nationalities, which is conducive to product promotion.
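The per-attribute databases described above can be sketched as one mapping table per user-attribute combination, selected before the parameter lookup. The attribute keys and table contents are invented for illustration.

```python
# Hypothetical local beauty databases keyed by user attributes
# (gender, region); each holds its own first-original-parameter mapping.
LOCAL_DBS = {
    ("female", "asia"): {"eye_distance_wide": -0.05},
    ("male", "europe"): {"eye_distance_wide": -0.02},
}

def select_local_db(gender, region):
    """Obtain user attributes (e.g. via face recognition upstream) and
    query the local beauty database corresponding to those attributes."""
    return LOCAL_DBS[(gender, region)]
```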
  • otherwise, the target face image after the beautification process is not necessarily the most "beautiful", and there may even be uncoordinated negative optimization.
  • the beauty processing method of the embodiment of the present disclosure obtains the global features corresponding to the beauty parts by querying the preset global beauty database. Furthermore, the second original parameter information corresponding to the global feature is extracted from the original face image. Further, query the global beauty database, determine the global beauty parameters corresponding to the second original parameter information, and adjust the beauty parameters of the part corresponding to the first original parameter information according to the global beauty parameters. In this way, it is possible to meet the user's personalized beauty needs, and to realize the associated adjustment of the beauty mode, so that the overall face image after beautification is more coordinated, negative optimization is avoided, and the beauty effect is improved.
  • in an embodiment of the present disclosure, the method further includes: storing the part beauty parameters and assigning a user identifier to them, where the user identifier corresponds to the original parameter information. Furthermore, the image to be beautified is acquired, the original parameter information corresponding to the part features of the beauty part is extracted from the image to be beautified, and this original parameter information is matched against the pre-stored original parameter information to determine the corresponding target user identifier. Further, the part beauty parameters corresponding to the target user identifier are called to process the beauty part and generate the target face image.
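The store-and-match flow just described can be sketched as a small cache keyed by user identifier: parameters are stored alongside the original parameter information, and later images are matched against the stored information instead of recomputing. The matching tolerance is an invented assumption.

```python
stored = {}  # user_id -> (original_params, part_beauty_params)

def store_params(user_id, original_params, beauty_params):
    """Store the part beauty parameters under a user identifier that
    corresponds to the original parameter information."""
    stored[user_id] = (original_params, beauty_params)

def match_user(original_params, tol=0.02):
    """Match extracted original parameter information against the
    pre-stored information to find the target user identifier."""
    for uid, (ref, beauty) in stored.items():
        if all(abs(original_params[k] - ref[k]) <= tol for k in ref):
            return uid, beauty
    return None, None

store_params("u1", {"eye_distance": 0.32}, {"eye_spacing_adjust": 0.0})
uid, params = match_user({"eye_distance": 0.33})
```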
  • beauty parts include face shape, eyes, nose, mouth, etc.
  • each beauty part includes multiple adjustment items.
  • a corresponding reference item is preset, wherein there may be one or more reference items corresponding to each adjustment item.
  • The correspondence between beauty parts, adjustment items, and reference items can be shown as follows:

    | Beauty part | Adjustment item | Reference items |
    |---|---|---|
    | Face shape | Size | Face length, face width, cheekbones, mandible, forehead, chin, lip width |
    | Face shape | Cheekbones | Face length, face width, chin |
    | Face shape | Mandible | Face length, face width, distance between chin and mouth |
    | Face shape | Forehead | Face length, forehead height (distance between eyebrows and hairline), nose length, mandible height |
    | Eyes | Eye distance | Face width, eye distance |
    | Eyes | Eye size | Face width, face length, eye distance |
    | Eyes | Eye position | Face length, distance between eyes and eyebrows, forehead height |
    | Nose | Nose length | Face length, distance between nose tip and eyebrows, distance between nose tip and chin |
    | Nose | Nose width | Face width, eye distance |
    | Nose | Nose height | Eye distance, eye size, mouth size |
    | Mouth | Mouth width | Face width, lip thickness |
    | Mouth | Lip thickness | Face length, face width |
    | Mouth | Mouth position | Distance between mouth and eyebrows, distance between mouth and chin |
  • for the beautification processing, a beautification priority is preset, and the adjustment items of each beauty part are processed in the order corresponding to the beautification priority; for example, beautification can be performed in the order of the adjustment items in the table from top to bottom. As an example, beautification is first performed on the size of the face: the original parameter information of features such as face length, face width, cheekbones, mandible, forehead, chin, and lip width is extracted from the face image, a beautification parameter 1 corresponding to the size of the face is generated according to this original parameter information, and beautification is then performed on the size of the face according to beautification parameter 1.
  • next, the parameter information of the face length and face width is updated and recorded, so that the face length and face width after the beautification are obtained. According to the post-beautification face length and face width and the extracted chin parameter information, beauty parameter 2 corresponding to the cheekbones is determined, and the cheekbones are beautified according to beauty parameter 2.
  • the beautification processing of the subsequent adjustment items can be implemented with reference to the above example, which will not be repeated here. It should be noted that when processing the adjustment items, the beauty parameters can be determined according to one of the reference items, or the beauty parameters can be determined according to multiple items in the reference items, and other reference items can be added as needed. There are no restrictions.
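The priority-ordered processing above can be sketched as follows: adjustment items run top to bottom, and the reference measurements (face length and width) are re-recorded after each step so that later items use post-beautification values. The priority list and update rules are invented placeholders.

```python
PRIORITY = ["face_size", "cheekbones"]  # illustrative priority order

def process_in_priority_order(measurements):
    """Process adjustment items in priority order, updating the recorded
    reference measurements after each beautification step."""
    m = dict(measurements)
    applied = []
    for item in PRIORITY:
        if item == "face_size":
            # Beauty parameter 1 is derived from the current measurements;
            # the face size step then shrinks face length/width slightly.
            m["face_length"] *= 0.98   # placeholder adjustment factor
            m["face_width"] *= 0.98
            applied.append(("face_size", 1))
        elif item == "cheekbones":
            # Beauty parameter 2 uses the *updated* face length and width,
            # i.e. the post-beautification reference values.
            applied.append(("cheekbones", 2))
    return m, applied

m, applied = process_in_priority_order({"face_length": 1.0, "face_width": 1.0})
```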
  • personalized beauty parameters can be generated according to features such as the position and proportion of various beauty parts in the face, so as to meet the user's personalized beauty needs, improve the accuracy of the beauty parameters, and improve the beauty effect.
  • the beautification mode of associated adjustment is realized, so that the overall face image after beautification is more coordinated, negative optimization is avoided, and the beauty effect is improved.
  • the present disclosure also proposes a beauty processing device.
  • FIG. 3 is a schematic structural diagram of a beauty processing device provided by an embodiment of the disclosure. As shown in FIG. 3, the device includes: an extraction module 10, a determination module 20, and a processing module 30.
  • the extraction module 10 is used to obtain an original face image, and extract original parameter information corresponding to the part feature of the beauty part.
  • the determining module 20 is configured to determine the part beauty parameters corresponding to the beauty part according to the original parameter information.
  • the processing module 30 is configured to process the beauty parts according to the beauty parameters of the parts to generate a target face image.
  • the device shown in FIG. 4 further includes: an adjustment module 40, a storage module 50, and a calling module 60.
  • the adjustment module 40 is configured to query the global beauty database to determine the global beauty parameters corresponding to the second original parameter information; according to the global beauty parameters, adjust the part beauty parameters corresponding to the first original parameter information.
  • in an embodiment, the extraction module 10 includes: a second query unit, configured to query a preset global beauty database to obtain global features corresponding to the beauty parts; and a second extraction unit, configured to extract the second original parameter information corresponding to the global features from the original face image.
  • the extraction module 10 includes: a first query unit, configured to query a preset partial beauty database, and obtain the local features corresponding to the beauty parts.
  • the first extraction unit is configured to extract the first original parameter information corresponding to the local feature from the original face image;
  • the determining module 20 is specifically configured to query the local beauty database, and determine the part beauty parameters corresponding to the first original parameter information.
  • the first query unit is specifically configured to obtain user attributes and query the local beauty database corresponding to the user attributes, wherein the local beauty database stores the mapping relationship between the first original parameter information and the part beauty parameters, and the mapping relationships corresponding to different user attributes are different.
  • the storage module 50 is used to store the beauty parameters of the part and assign a user identifier to the beauty parameters of the part, where the user identifier corresponds to the original parameter information.
  • the calling module 60 is configured to determine the corresponding target user ID according to the original parameter information; call the part beauty parameters corresponding to the target user ID to process the beauty part, and generate the target face image.
  • the beauty processing device of the embodiment of the present disclosure acquires the original face image and extracts the original parameter information corresponding to the part features of the beauty part; determines the part beauty parameters corresponding to the beauty part according to the original parameter information; and processes the beauty part according to the part beauty parameters to generate the target face image.
  • personalized beauty parameters can be generated according to features such as the position and proportion of various beauty parts in the face, so as to meet the user's personalized beauty needs, improve the accuracy of the beauty parameters, and improve the beauty effect.
  • the present disclosure also proposes an electronic device.
  • Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablets), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g. car navigation terminals), as well as fixed terminals such as digital TVs, desktop computers, etc.
  • the electronic device shown in FIG. 5 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 800 may include a processing device (such as a central processing unit, a graphics processor, etc.) 801, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803.
  • in the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored.
  • the processing device 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804.
  • the following devices can be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 809.
  • the communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 5 shows an electronic device 800 with various devices, it should be understood that it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802.
  • when the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiment of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • When the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two Internet Protocol addresses; send, to a node evaluation device, a node evaluation request including the at least two Internet Protocol addresses, wherein the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receive the Internet Protocol address returned by the node evaluation device; wherein the obtained Internet Protocol address indicates an edge node in a content distribution network.
  • the aforementioned computer-readable medium carries one or more programs, and when the aforementioned one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet Protocol addresses; select an Internet Protocol address from the at least two Internet Protocol addresses; and return the selected Internet Protocol address; wherein the received Internet Protocol address indicates an edge node in the content distribution network.
  • the computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof.
  • the above-mentioned programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and any combination of the blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be realized by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware. Wherein, the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • the first obtaining unit can also be described as "a unit for obtaining at least two Internet Protocol addresses.”
  • the present disclosure also proposes a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the method for generating a slideshow as described in the above-mentioned embodiment is implemented.
  • FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
  • a computer-readable storage medium 300 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 310 stored thereon.
  • when the non-transitory computer-readable instructions 310 are executed by the processor, all or part of the steps of the slideshow generation method of the foregoing embodiments of the present disclosure are executed.
  • the present disclosure also proposes a computer program product.
  • when the instructions in the computer program product are executed by a processor, the method for generating slides as described in the preceding embodiments is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure provides a beauty processing method and apparatus. The method includes: obtaining an original face image, and extracting original parameter information corresponding to part features of a beauty part; determining, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and processing the beauty part according to the part beauty parameter to generate a target face image. In this way, personalized beauty parameters can be generated, which satisfies users' personalized beauty needs, improves the accuracy of beauty parameters, and improves the beauty effect.

Description

Beauty Processing Method and Apparatus
Priority Information
This application claims priority to Chinese Patent Application No. 202010115136.2, titled "Beauty Processing Method and Apparatus", filed by Beijing ByteDance Network Technology Co., Ltd. on February 25, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of beauty processing, and in particular to a beauty processing method and apparatus.
Background
To meet users' needs in daily life, it has become very common to add a beauty function to electronic devices. For example, many camera applications on the market provide a beauty function, with which users can perform beauty processing on face images to achieve a beautification effect.
In the related art, beauty parameters are usually set for a preset face shape for users to apply. Current beauty solutions can hardly satisfy users' personalized beauty needs, and the beauty effect needs to be improved.
Summary
The present disclosure provides a beauty processing method and apparatus, which can solve the problem in the related art that the beauty effect needs to be improved.
To this end, a first objective of the present disclosure is to provide a beauty processing method that can generate personalized beauty parameters, satisfy users' personalized beauty needs, and improve the beauty effect.
A second objective of the present disclosure is to provide a beauty processing apparatus.
A third objective of the present disclosure is to provide an electronic device.
A fourth objective of the present disclosure is to provide a computer-readable storage medium.
An embodiment of the first aspect of the present disclosure provides a beauty processing method, including:
obtaining an original face image, and extracting original parameter information corresponding to part features of a beauty part;
determining, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and
processing the beauty part according to the part beauty parameter to generate a target face image.
An embodiment of the second aspect of the present disclosure provides a beauty processing apparatus, including:
an extraction module configured to obtain an original face image and extract original parameter information corresponding to part features of a beauty part;
a determination module configured to determine, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and
a processing module configured to process the beauty part according to the part beauty parameter to generate a target face image.
An embodiment of the third aspect of the present disclosure provides an electronic device, including a processor and a memory, wherein the memory is configured to store executable program code, and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the beauty processing method described in the embodiment of the first aspect.
An embodiment of the fourth aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the beauty processing method described in the embodiment of the first aspect is implemented.
An embodiment of the above disclosure has the following advantages or beneficial effects: an original face image is obtained, and original parameter information corresponding to part features of a beauty part is extracted; a part beauty parameter corresponding to the beauty part is determined according to the original parameter information; and the beauty part is processed according to the part beauty parameter to generate a target face image. In this way, personalized beauty parameters can be generated according to features such as the position and proportion of each beauty part in the face, which satisfies users' personalized beauty needs, improves the accuracy of beauty parameters, and improves the beauty effect.
Additional aspects and advantages of the present disclosure will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present disclosure.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a beauty processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another beauty processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a beauty processing apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of another beauty processing apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing the embodiments of the present disclosure;
FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary and are intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
The beauty processing method and apparatus of the embodiments of the present disclosure are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a beauty processing method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps.
Step 101: obtain an original face image, and extract original parameter information corresponding to part features of a beauty part.
In the embodiments of the present disclosure, the execution subject is an electronic device, including but not limited to devices with photographing and processing functions, such as smartphones and wearable devices.
In this embodiment, when performing beauty processing, an original face image may be obtained, and then the original face image is processed by a related face recognition technology to extract the original parameter information corresponding to the part features of the beauty part. As an example, a face image of a user may be captured when the user turns on the camera function, and the face image is recognized to extract the original parameter information corresponding to the part features of the beauty part. As another example, a face image to be processed may be obtained when the user uses the beauty function, and the face image to be processed is recognized to extract the original parameter information corresponding to the part features of the beauty part.
The beauty part may include the face shape, eyes, eyebrows, nose, mouth, skin, and the like. Each beauty part corresponds to one or more part features, which include but are not limited to the size (length, width) of each part, its position, its distance from other parts, and so on. For example, the part features corresponding to the eyes include the eye size, the eye spacing, the distance between the eyes and the eyebrows, etc. By extracting the original parameter information corresponding to the part features, the feature value of each part feature is obtained, for example, an eye spacing X. Optionally, the original parameter information is a relative value, for example, relative to the size of the face.
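As an illustration of such relative part features, the following sketch computes eye-related feature values normalized by face width from a handful of landmark points. It is a minimal example, not the patent's implementation; the landmark names and the choice to normalize by face width are assumptions.

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def extract_part_features(landmarks):
    """Compute part features as values relative to face size.

    `landmarks` maps hypothetical landmark names to (x, y) pixel
    coordinates; a real system would obtain them from a face detector.
    """
    face_width = distance(landmarks["face_left"], landmarks["face_right"])
    return {
        # eye spacing X, expressed relative to face width
        "eye_spacing": distance(landmarks["left_eye"], landmarks["right_eye"]) / face_width,
        # distance between the eyes and the eyebrows, also relative
        "eye_brow_dist": distance(landmarks["left_eye"], landmarks["left_brow"]) / face_width,
    }

landmarks = {
    "face_left": (0.0, 50.0), "face_right": (100.0, 50.0),
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "left_brow": (30.0, 30.0),
}
features = extract_part_features(landmarks)
print(features["eye_spacing"])  # 0.4
```

Expressing features relative to the face, rather than in pixels, keeps the extracted parameters comparable across images of different resolutions and shooting distances.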
In an embodiment of the present disclosure, extracting the original parameter information corresponding to the part features of the beauty part includes: querying a preset local beauty database to obtain local features corresponding to the beauty part, and extracting first original parameter information corresponding to the local features from the original face image.
As an example, a local beauty database is set in advance, in which beauty parts and the local features corresponding to the beauty parts are stored. For example, the local features corresponding to the eyes include the eye size, the eye spacing, the distance between the eyes and the eyebrows, etc. By querying the local beauty database, the local features corresponding to the beauty part are obtained; then, feature extraction is performed on the local features based on the original face image, so as to obtain the first original parameter information corresponding to the local features.
Step 102: determine, according to the original parameter information, a part beauty parameter corresponding to the beauty part.
In an embodiment of the present disclosure, determining the part beauty parameter corresponding to the beauty part according to the original parameter information includes: querying the local beauty database to determine the part beauty parameter corresponding to the first original parameter information.
As an example, a mapping relationship between first original parameter information and part beauty parameters is stored in the local beauty database, and the part beauty parameter corresponding to the first original parameter information is determined by querying the mapping relationship in the local beauty database. Different first original parameter information may correspond to different part beauty parameters; for example, different eye spacings lead to different eye beauty parameters. The specific mapping relationship between the first original parameter information and the part beauty parameters may be determined through a large amount of experimental data, or may be set as needed, for example, implemented through AI (Artificial Intelligence) modeling, which is not specifically limited here.
As another example, the first original parameter information is calculated by a related algorithm to generate the part beauty parameter. For example, multiple pieces of first original parameter information may be calculated together to generate the beauty parameter.
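A minimal sketch of such a database lookup is shown below. The bucket boundaries and parameter values are invented for illustration; a real mapping would come from experimental data or AI modeling as described above.

```python
import bisect

# Hypothetical local beauty database entry for the "eyes" part:
# eye-spacing buckets (relative to face width) mapped to an eye
# beauty parameter, one parameter per bucket.
EYE_SPACING_BUCKETS = [0.30, 0.38, 0.46]       # bucket upper bounds
EYE_BEAUTY_PARAMS   = [0.25, 0.15, 0.05, 0.0]  # one parameter per bucket

def lookup_eye_beauty_param(eye_spacing):
    """Return the part beauty parameter for a given first original
    parameter (eye spacing); different inputs map to different outputs."""
    i = bisect.bisect_left(EYE_SPACING_BUCKETS, eye_spacing)
    return EYE_BEAUTY_PARAMS[i]

print(lookup_eye_beauty_param(0.33))  # 0.15
print(lookup_eye_beauty_param(0.50))  # 0.0
```

A bucketed lookup like this is only one possible realization; the mapping could equally be a continuous function or a learned model.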
Step 103: process the beauty part according to the part beauty parameter to generate a target face image.
In this embodiment, an image to be beautified is obtained; after the part beauty parameter is determined for each beauty part, each beauty part in the image to be beautified is processed with the part beauty parameters to generate a beautified target face image.
As an example, each beauty part may include multiple adjustment items. For example, the face shape may include adjustment items such as size, cheekbones, mandible, forehead, chin, and hairline; as another example, the skin may include adjustment items such as smoothing, rejuvenation, and skin tone. Thus each beauty part may correspond to multiple part beauty parameters. For each adjustment item, a matching part beauty parameter is determined, the part beauty parameters are sent to a rendering engine, and beauty rendering is performed on each adjustment item, thereby generating the beautified target face image.
In the related art, when performing beauty processing, default beauty parameters are usually set for a preset face shape for users to apply. For example, default beauty parameters are set for a standard face, where the standard face refers to a model face shape generated according to users' average facial features. Individual faces differ: for example, among long faces, some are closer to a rectangle while others are closer to an oval; and for the same face shape, the distribution and features of the facial parts also differ — for two oval long faces, different users' mouth sizes and eye sizes differ. Due to these individual differences, the solution of setting default beauty parameters is not necessarily suitable for all users. Therefore, related solutions can hardly satisfy users' personalized beauty needs, the accuracy of the beauty parameters is not high, and the beauty effect needs to be improved.
In the beauty processing method of the embodiments of the present disclosure, an original face image is obtained, and original parameter information corresponding to part features of a beauty part is extracted; a part beauty parameter corresponding to the beauty part is determined according to the original parameter information; and the beauty part is processed according to the part beauty parameter to generate a target face image. In this way, personalized beauty parameters can be generated according to features such as the position and proportion of each beauty part in the face, which satisfies users' personalized beauty needs, improves the accuracy of beauty parameters, and improves the beauty effect.
Based on the above embodiments, global features corresponding to the beauty part may also be obtained, a global beauty parameter may be determined from second original parameter information corresponding to the global features, and the part beauty parameter may be adjusted according to the global beauty parameter, thereby realizing correlated beauty adjustment and improving the beauty effect.
FIG. 2 is a schematic flowchart of another beauty processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes the following steps.
Step 201: obtain an original face image.
Step 202: query a preset global beauty database to obtain global features corresponding to a beauty part.
In this embodiment, a global beauty database is set in advance, in which beauty parts and the global features corresponding to the beauty parts are stored. The global features may include local features of beauty parts associated with the current beauty part. For example, the global features corresponding to the eyes include the face width, the face length, the forehead height, etc.
Step 203: extract second original parameter information corresponding to the global features from the original face image.
In this embodiment, after the global beauty database is queried to obtain the global features corresponding to the beauty part, feature extraction is performed on the global features based on the original face image, so as to obtain the second original parameter information corresponding to the global features.
Step 204: query the global beauty database to determine a global beauty parameter corresponding to the second original parameter information.
In this embodiment, a mapping relationship between second original parameter information and global beauty parameters is stored in the global beauty database, where different second original parameter information may correspond to different global beauty parameters. The specific quantified relationship between the second original parameter information and the global beauty parameters may be determined as needed, for example, implemented through AI modeling.
As an example, the beauty part is the mouth, the corresponding adjustment item is the lip thickness, and the corresponding global features include the face length and the face width. The extracted second original parameter information includes a face length X and a face width Y; then, a beauty parameter A corresponding to the face length X and the face width Y is determined according to the global beauty database as the global beauty parameter of this adjustment item.
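The lip-thickness example above could be sketched as a function of the face length X and face width Y. The linear form and the coefficients here are illustrative stand-ins for the mapping a real system would learn from data.

```python
def lip_thickness_global_param(face_length, face_width):
    """Determine the global beauty parameter A for the lip-thickness
    adjustment item from face length X and face width Y.

    The formula is a toy assumption: longer, narrower faces receive a
    slightly stronger lip adjustment.
    """
    aspect = face_length / face_width
    return round(0.1 * (aspect - 1.3), 3)

print(lip_thickness_global_param(1.5, 1.0))  # 0.02
```

Because the parameter depends on whole-face measurements rather than on the mouth alone, the resulting adjustment stays proportioned to the face as a whole.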
Optionally, global beauty databases may be set correspondingly for different user attributes, with different user attributes corresponding to different mapping relationships in the global beauty database, where the user attributes include but are not limited to gender, ethnicity, country, etc.
In an embodiment of the present disclosure, for a beauty processing scenario with a preset face shape, after the global beauty parameter corresponding to the second original parameter information is determined, the default beauty parameter of the preset face shape may be adjusted according to the global beauty parameter to generate the part beauty parameter. For example, the global beauty parameter may be added to the default beauty parameter to obtain the part beauty parameter. It should be noted that the implementation of adjusting the default beauty parameter according to the global beauty parameter to generate the part beauty parameter is not limited to addition, and no specific limitation is made here.
Step 205: adjust, according to the global beauty parameter, the part beauty parameter corresponding to the first original parameter information.
In this embodiment, after the original face image is obtained, a preset local beauty database is queried to obtain the local features corresponding to the beauty part, first original parameter information corresponding to the local features is extracted from the original face image, and the local beauty database is queried to determine the part beauty parameter corresponding to the first original parameter information. Then, the part beauty parameter corresponding to the first original parameter information is adjusted according to the global beauty parameter. For example, a part beauty parameter 1 is determined according to the first original parameter information; the global beauty parameter may be added to the part beauty parameter 1 to obtain a part beauty parameter 2, and the part beauty parameter 2 is taken as the final part beauty parameter.
Optionally, a confidence interval may be preset, and the adjusted part beauty parameter is matched against the confidence interval; when the adjusted parameter falls within the confidence interval, processing the beauty part according to that part beauty parameter is allowed.
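Combining the two preceding steps, a hedged sketch of adding the global beauty parameter to part beauty parameter 1 and validating the result against a preset confidence interval might look like this (the interval bounds and the fallback behavior are assumptions of this sketch):

```python
def adjust_part_param(part_param, global_param, interval=(0.0, 1.0)):
    """Add the global beauty parameter to part beauty parameter 1 to get
    part beauty parameter 2, then accept it only if it falls within the
    preset confidence interval."""
    adjusted = part_param + global_param
    low, high = interval
    if low <= adjusted <= high:
        return adjusted   # within the confidence interval: allowed
    return part_param     # out of range: keep the unadjusted parameter

print(adjust_part_param(0.4, 0.1))   # 0.5
print(adjust_part_param(0.95, 0.2))  # 0.95 (1.15 falls outside the interval)
```

The confidence-interval check acts as a guard against extreme combined parameters that could produce the "negative optimization" discussed later.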
Optionally, querying the preset local beauty database includes: obtaining a user attribute, and querying a local beauty database corresponding to the user attribute to obtain the part beauty parameter. Local beauty databases are set correspondingly for different user attributes; a local beauty database stores a mapping relationship between first original parameter information and part beauty parameters, and the mapping relationships corresponding to different user attributes differ. The user attributes include but are not limited to gender, ethnicity, country, etc., and implementations of obtaining the user attribute include but are not limited to face recognition and the like. In this way, the method can be applied to users of different genders and ethnicities, which is conducive to product promotion.
In practical applications, since a face is usually perceived as a whole, if each beauty part is separately beautified to an average or optimal value, the beautified target face image is not necessarily the most "beautiful", and inharmonious negative optimization may even occur.
In the beauty processing method of the embodiments of the present disclosure, a preset global beauty database is queried to obtain the global features corresponding to the beauty part; second original parameter information corresponding to the global features is then extracted from the original face image; further, the global beauty database is queried to determine the global beauty parameter corresponding to the second original parameter information, and the part beauty parameter corresponding to the first original parameter information is adjusted according to the global beauty parameter. In this way, users' personalized beauty needs can be satisfied, and a correlated-adjustment beauty approach is realized, making the beautified face image more harmonious as a whole, avoiding negative optimization, and improving the beauty effect.
In an embodiment of the present disclosure, after the part beauty parameter corresponding to the beauty part is determined according to the original parameter information, the method further includes: storing the part beauty parameter and assigning a user identifier to the part beauty parameter, where the user identifier corresponds to the original parameter information. Then, an image to be beautified is obtained, original parameter information corresponding to the part features of the beauty part is extracted from the image to be beautified, the extracted original parameter information is matched against the pre-stored original parameter information to determine a corresponding target user identifier, and the part beauty parameter corresponding to the target user identifier is called to process the beauty part and generate the target face image. In this way, by storing the generated part beauty parameters under user identifiers, and by recalculating new beauty parameters and saving them uniformly under the user identifier whenever a face change is detected, the pre-stored beauty parameters corresponding to the current user can be retrieved during beauty processing without recalculation, which improves the processing efficiency.
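The store-and-reuse flow described above can be sketched as a small cache keyed by user identifier; matching newly extracted original parameter information against the stored information with a fixed tolerance is an assumption of this sketch.

```python
class BeautyParamStore:
    """Store part beauty parameters under a user identifier and retrieve
    them by matching extracted original parameter information against
    the stored information."""

    def __init__(self, tolerance=0.05):
        self.tolerance = tolerance
        self.records = {}  # user_id -> (original_params, beauty_params)

    def save(self, user_id, original_params, beauty_params):
        self.records[user_id] = (dict(original_params), dict(beauty_params))

    def match(self, original_params):
        """Return the target user identifier whose stored original
        parameters all lie within the tolerance, or None if no user matches."""
        for user_id, (stored, _) in self.records.items():
            if all(abs(stored[k] - original_params.get(k, float("inf")))
                   <= self.tolerance for k in stored):
                return user_id
        return None

store = BeautyParamStore()
store.save("user-1", {"eye_spacing": 0.40}, {"eye": 0.15})
print(store.match({"eye_spacing": 0.42}))  # user-1
print(store.match({"eye_spacing": 0.60}))  # None
```

On a match, the stored beauty parameters can be applied directly instead of being recomputed, which is the efficiency gain described above.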
Based on the above embodiments, a description is given below in conjunction with a practical application scenario.
Referring to the table below, the beauty parts include the face shape, eyes, nose, mouth, and so on, and each beauty part includes multiple adjustment items. For each adjustment item, corresponding reference items are set in advance, where each adjustment item may correspond to one or more reference items.
| Beauty part | Adjustment item | Reference items |
| --- | --- | --- |
| Face shape | Size | Face length, face width, cheekbones, mandible, forehead, chin, lip width |
| Face shape | Cheekbones | Face length, face width, chin |
| Face shape | Mandible | Face length, face width, distance between chin and mouth |
| Face shape | Forehead | Face length, forehead height (distance between eyebrows and hairline), nose length, mandible height |
| Eyes | Eye spacing | Face width, eye spacing |
| Eyes | Size | Face width, face length, eye spacing |
| Eyes | Eye position | Face length, distance between eyes and eyebrows, forehead height |
| Nose | Nose length | Face length, distance between nose tip and eyebrows, distance between nose tip and chin |
| Nose | Nose width | Face width, eye spacing |
| Nose | Nose height | Eye spacing, eye size, mouth size |
| Mouth | Width | Face width, lip thickness |
| Mouth | Lip thickness | Face length, face width |
| Mouth | Mouth position | Distance between mouth and eyebrows, distance between mouth and chin |
| Mouth | Lip color | Facial skin tone |
| Mouth | Teeth | Whitening |
| Mouth | Mouth shape | Smiling face |
When performing beauty processing, beauty priorities are set in advance, and the adjustment items of each beauty part are beautified in the order corresponding to the beauty priorities; for example, beauty processing may be performed on the adjustment items in the table from top to bottom. As an example, beauty processing is first performed on the face-shape size: original parameter information of part features such as the face length, face width, cheekbones, mandible, forehead, chin, and lip width is extracted from the face image, a beauty parameter 1 corresponding to the face-shape size is generated according to the above original parameter information, and beauty processing is then performed on the face-shape size according to the beauty parameter 1. Further, after the face-shape size is beautified, the recorded parameter information of the face length and face width is updated to obtain the beautified face length and face width; according to the beautified face length and face width and the extracted chin parameter information, a beauty parameter 2 corresponding to the cheekbones is determined, and the cheekbones are beautified according to the beauty parameter 2. Beauty processing of the subsequent adjustment items can be implemented with reference to the above example, and details are not repeated here. It should be noted that, when processing an adjustment item, the beauty parameter may be determined according to one or more of the reference items, and other reference items may be added as needed, which is not limited here. In this way, personalized beauty parameters can be generated according to features such as the position and proportion of each beauty part in the face, which satisfies users' personalized beauty needs, improves the accuracy of the beauty parameters, and improves the beauty effect. Moreover, a correlated-adjustment beauty approach is realized, making the beautified face image more harmonious as a whole, avoiding negative optimization, and improving the beauty effect.
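The priority-ordered, correlated adjustment just described can be sketched as a pipeline in which each step reads the current (possibly already beautified) reference values and writes its updates back. The formulas and constants below are invented purely for illustration.

```python
# Beauty priority pipeline: each adjustment item, in priority order,
# computes its parameter from the current reference values and updates
# the shared record so later items see the beautified measurements.

def beautify_face_size(params):
    # beauty parameter 1 from face length/width (toy formula)
    p = 0.1 * (params["face_length"] / params["face_width"] - 1.3)
    # update the recorded face length/width after processing
    params["face_length"] *= (1 - p)
    params["face_width"] *= (1 - p)
    return p

def beautify_cheekbones(params):
    # beauty parameter 2 uses the *beautified* face length/width plus the chin
    return 0.05 * params["face_length"] / params["face_width"] + 0.01 * params["chin"]

PIPELINE = [beautify_face_size, beautify_cheekbones]  # priority order

params = {"face_length": 1.5, "face_width": 1.0, "chin": 0.2}
beauty_params = [step(params) for step in PIPELINE]
print([round(p, 4) for p in beauty_params])  # [0.02, 0.077]
```

Because each step consumes the references written by the steps before it, the later adjustments stay consistent with the face as already modified, which is the essence of the correlated adjustment.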
The present disclosure further provides a beauty processing apparatus.
FIG. 3 is a schematic structural diagram of a beauty processing apparatus provided by an embodiment of the present disclosure. As shown in FIG. 3, the apparatus includes: an extraction module 10, a determination module 20, and a processing module 30.
The extraction module 10 is configured to obtain an original face image and extract original parameter information corresponding to part features of a beauty part.
The determination module 20 is configured to determine, according to the original parameter information, a part beauty parameter corresponding to the beauty part.
The processing module 30 is configured to process the beauty part according to the part beauty parameter to generate a target face image.
On the basis of FIG. 3, the apparatus shown in FIG. 4 further includes: an adjustment module 40, a storage module 50, and a calling module 60.
The adjustment module 40 is configured to query the global beauty database to determine a global beauty parameter corresponding to the second original parameter information, and to adjust, according to the global beauty parameter, the part beauty parameter corresponding to the first original parameter information.
The extraction module 10 includes: a second query unit configured to query a preset global beauty database to obtain global features corresponding to the beauty part; and a second extraction unit configured to extract second original parameter information corresponding to the global features from the original face image.
In an embodiment of the present disclosure, the extraction module 10 includes: a first query unit configured to query a preset local beauty database to obtain local features corresponding to the beauty part; and a first extraction unit configured to extract first original parameter information corresponding to the local features from the original face image.
The determination module 20 is specifically configured to query the local beauty database to determine the part beauty parameter corresponding to the first original parameter information.
In an embodiment of the present disclosure, the first query unit is specifically configured to obtain a user attribute and query a local beauty database corresponding to the user attribute, where a local beauty database stores a mapping relationship between first original parameter information and part beauty parameters, and the mapping relationships corresponding to different user attributes differ.
The storage module 50 is configured to store the part beauty parameter and assign a user identifier to the part beauty parameter, where the user identifier corresponds to the original parameter information.
The calling module 60 is configured to determine a corresponding target user identifier according to the original parameter information, and to call the part beauty parameter corresponding to the target user identifier to process the beauty part and generate the target face image.
It should be noted that the explanation of the beauty processing method in the foregoing embodiments also applies to the beauty processing apparatus of this embodiment, and details are not repeated here.
The beauty processing apparatus of the embodiments of the present disclosure obtains an original face image and extracts original parameter information corresponding to part features of a beauty part; determines, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and processes the beauty part according to the part beauty parameter to generate a target face image. In this way, personalized beauty parameters can be generated according to features such as the position and proportion of each beauty part in the face, which satisfies users' personalized beauty needs, improves the accuracy of beauty parameters, and improves the beauty effect.
The present disclosure further provides an electronic device.
Referring now to FIG. 5, it shows a schematic structural diagram of an electronic device 800 suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 5, the electronic device 800 may include a processing device (for example, a central processing unit, a graphics processing unit, etc.) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following devices may be connected to the I/O interface 805: an input device 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 809. The communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data. Although FIG. 5 shows the electronic device 800 having various devices, it should be understood that it is not required to implement or include all of the illustrated devices. More or fewer devices may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802. When the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or it may exist alone without being assembled into the electronic device.
The above-mentioned computer-readable medium carries one or more programs; when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: obtain at least two Internet Protocol addresses; send, to a node evaluation device, a node evaluation request including the at least two Internet Protocol addresses, wherein the node evaluation device selects an Internet Protocol address from the at least two Internet Protocol addresses and returns it; and receive the Internet Protocol address returned by the node evaluation device, wherein the obtained Internet Protocol address indicates an edge node in a content distribution network.
Alternatively, the above-mentioned computer-readable medium carries one or more programs; when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to: receive a node evaluation request including at least two Internet Protocol addresses; select an Internet Protocol address from the at least two Internet Protocol addresses; and return the selected Internet Protocol address, wherein the received Internet Protocol address indicates an edge node in a content distribution network.
The computer program code used to perform the operations of the present disclosure may be written in one or more programming languages or a combination thereof, the above-mentioned programming languages including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram and/or flowchart, and any combination of the blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be realized by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure can be implemented in software or hardware. The name of a unit does not constitute a limitation on the unit itself under certain circumstances; for example, the first obtaining unit can also be described as "a unit for obtaining at least two Internet Protocol addresses".
To implement the above embodiments, the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the slideshow generation method described in the foregoing embodiments is implemented.
FIG. 6 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in FIG. 6, a computer-readable storage medium 300 according to an embodiment of the present disclosure has non-transitory computer-readable instructions 310 stored thereon. When the non-transitory computer-readable instructions 310 are executed by a processor, all or part of the steps of the slideshow generation method of the foregoing embodiments of the present disclosure are executed.
To implement the above embodiments, the present disclosure further provides a computer program product; when instructions in the computer program product are executed by a processor, the slideshow generation method described in the foregoing embodiments is implemented.
Although the embodiments of the present disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present disclosure, and those of ordinary skill in the art can make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present disclosure.

Claims (12)

  1. A beauty processing method, wherein the method comprises:
    obtaining an original face image, and extracting original parameter information corresponding to part features of a beauty part;
    determining, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and
    processing the beauty part according to the part beauty parameter to generate a target face image.
  2. The method according to claim 1, wherein the extracting original parameter information corresponding to part features of a beauty part comprises:
    querying a preset local beauty database to obtain local features corresponding to the beauty part; and
    extracting first original parameter information corresponding to the local features from the original face image;
    wherein the determining, according to the original parameter information, a part beauty parameter corresponding to the beauty part comprises:
    querying the local beauty database to determine the part beauty parameter corresponding to the first original parameter information.
  3. The method according to claim 2, wherein the extracting original parameter information corresponding to part features of a beauty part further comprises:
    querying a preset global beauty database to obtain global features corresponding to the beauty part; and
    extracting second original parameter information corresponding to the global features from the original face image;
    wherein before the processing the beauty part according to the part beauty parameter, the method further comprises:
    querying the global beauty database to determine a global beauty parameter corresponding to the second original parameter information; and
    adjusting, according to the global beauty parameter, the part beauty parameter corresponding to the first original parameter information.
  4. The method according to claim 2, wherein the querying a preset local beauty database comprises:
    obtaining a user attribute, and querying a local beauty database corresponding to the user attribute, wherein a local beauty database stores a mapping relationship between first original parameter information and part beauty parameters, and the mapping relationships corresponding to different user attributes differ.
  5. The method according to claim 1, further comprising:
    storing the part beauty parameter and assigning a user identifier to the part beauty parameter, wherein the user identifier corresponds to the original parameter information;
    wherein after the extracting original parameter information corresponding to part features of a beauty part, the method further comprises:
    determining a corresponding target user identifier according to the original parameter information; and
    calling the part beauty parameter corresponding to the target user identifier to process the beauty part and generate the target face image.
  6. A beauty processing apparatus, wherein the apparatus comprises:
    an extraction module configured to obtain an original face image and extract original parameter information corresponding to part features of a beauty part;
    a determination module configured to determine, according to the original parameter information, a part beauty parameter corresponding to the beauty part; and
    a processing module configured to process the beauty part according to the part beauty parameter to generate a target face image.
  7. The apparatus according to claim 6, wherein the extraction module comprises:
    a first query unit configured to query a preset local beauty database to obtain local features corresponding to the beauty part; and
    a first extraction unit configured to extract first original parameter information corresponding to the local features from the original face image;
    wherein the determination module is specifically configured to:
    query the local beauty database to determine the part beauty parameter corresponding to the first original parameter information.
  8. The apparatus according to claim 7, wherein the extraction module comprises:
    a second query unit configured to query a preset global beauty database to obtain global features corresponding to the beauty part; and
    a second extraction unit configured to extract second original parameter information corresponding to the global features from the original face image;
    wherein the apparatus further comprises:
    an adjustment module configured to query the global beauty database to determine a global beauty parameter corresponding to the second original parameter information, and
    to adjust, according to the global beauty parameter, the part beauty parameter corresponding to the first original parameter information.
  9. The apparatus according to claim 7, wherein the first query unit is specifically configured to:
    obtain a user attribute and query a local beauty database corresponding to the user attribute, wherein a local beauty database stores a mapping relationship between first original parameter information and part beauty parameters, and the mapping relationships corresponding to different user attributes differ.
  10. The apparatus according to claim 6, further comprising:
    a storage module configured to store the part beauty parameter and assign a user identifier to the part beauty parameter, wherein the user identifier corresponds to the original parameter information; and
    a calling module configured to determine a corresponding target user identifier according to the original parameter information, and
    to call the part beauty parameter corresponding to the target user identifier to process the beauty part and generate the target face image.
  11. An electronic device, comprising a processor and a memory;
    wherein the processor runs a program corresponding to executable program code stored in the memory by reading the executable program code, so as to implement the beauty processing method according to any one of claims 1 to 5.
  12. A computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the beauty processing method according to any one of claims 1 to 5 is implemented.
PCT/CN2021/074638 2020-02-25 2021-02-01 Beauty processing method and apparatus WO2021169736A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022550940A JP7516535B2 (ja) 2020-02-25 2021-02-01 美容処理方法及び装置
EP21761290.2A EP4113430A4 (en) 2020-02-25 2021-02-01 BEAUTY TREATMENT METHOD AND DEVICE
US17/885,942 US11769286B2 (en) 2020-02-25 2022-08-11 Beauty processing method, electronic device, and computer-readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010115136.2 2020-02-25
CN202010115136.2A CN111275650B (zh) 2020-02-25 2020-02-25 Beauty processing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/885,942 Continuation US11769286B2 (en) 2020-02-25 2022-08-11 Beauty processing method, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2021169736A1 true WO2021169736A1 (zh) 2021-09-02

Family

ID=71002324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074638 WO2021169736A1 (zh) 2020-02-25 2021-02-01 Beauty processing method and apparatus

Country Status (5)

Country Link
US (1) US11769286B2 (zh)
EP (1) EP4113430A4 (zh)
JP (1) JP7516535B2 (zh)
CN (1) CN111275650B (zh)
WO (1) WO2021169736A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275650B (zh) * 2020-02-25 2023-10-17 抖音视界有限公司 Beauty processing method and apparatus
CN112669233A (zh) * 2020-12-25 2021-04-16 北京达佳互联信息技术有限公司 Image processing method and apparatus, electronic device, storage medium, and program product
CN113222841A (zh) * 2021-05-08 2021-08-06 北京字跳网络技术有限公司 Image processing method, apparatus, device, and medium
CN113427486B (zh) * 2021-06-18 2022-10-28 上海非夕机器人科技有限公司 Robotic arm control method and apparatus, computer device, storage medium, and robotic arm
CN117995356B (zh) * 2024-04-03 2024-07-19 西弥斯医疗科技(湖南)有限公司 Automatic electrotherapy system based on image recognition

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632165A * 2013-11-28 2014-03-12 小米科技有限责任公司 Image processing method and apparatus, and terminal device
CN105825486A * 2016-04-05 2016-08-03 北京小米移动软件有限公司 Beauty processing method and apparatus
CN107766831A * 2017-10-31 2018-03-06 广东欧珀移动通信有限公司 Image processing method and apparatus, mobile terminal, and computer-readable storage medium
CN107886484A * 2017-11-30 2018-04-06 广东欧珀移动通信有限公司 Beauty method and apparatus, computer-readable storage medium, and electronic device
CN107993209A * 2017-11-30 2018-05-04 广东欧珀移动通信有限公司 Image processing method and apparatus, computer-readable storage medium, and electronic device
CN109584151A * 2018-11-30 2019-04-05 腾讯科技(深圳)有限公司 Face beautification method and apparatus, terminal, and storage medium
WO2019190142A1 * 2018-03-29 2019-10-03 Samsung Electronics Co., Ltd. Method and device for processing image
CN111275650A * 2020-02-25 2020-06-12 北京字节跳动网络技术有限公司 Beauty processing method and apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008102440A1 (ja) * 2007-02-21 2008-08-28 Tadashi Goino Makeup face image generation device and method
JP2014149678A (ja) 2013-02-01 2014-08-21 Panasonic Corp Beauty support device, beauty support system, beauty support method, and beauty support program
CN103605975B (zh) * 2013-11-28 2018-10-19 小米科技有限责任公司 Image processing method and apparatus, and terminal device
WO2017177259A1 (en) * 2016-04-12 2017-10-19 Phi Technologies Pty Ltd System and method for processing photographic images
JP6859611B2 (ja) * 2016-06-09 2021-04-14 カシオ計算機株式会社 Image processing device, image processing method, and program
JP6897036B2 (ja) * 2016-09-13 2021-06-30 カシオ計算機株式会社 Image processing device, image processing method, and program
JP6753276B2 (ja) 2016-11-11 2020-09-09 ソニー株式会社 Information processing device, information processing method, and program
CN107274355A (zh) * 2017-05-22 2017-10-20 奇酷互联网络科技(深圳)有限公司 Image processing method and apparatus, and mobile terminal
CN107862274A (zh) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 Beauty method and apparatus, electronic device, and computer-readable storage medium
CN107680128B (zh) * 2017-10-31 2020-03-27 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN107730444B (zh) * 2017-10-31 2022-02-01 Oppo广东移动通信有限公司 Image processing method and apparatus, readable storage medium, and computer device
CN108257097A (zh) * 2017-12-29 2018-07-06 努比亚技术有限公司 Beauty effect adjustment method, terminal, and computer-readable storage medium
GB2572435B (en) * 2018-03-29 2022-10-05 Samsung Electronics Co Ltd Manipulating a face in an image
CN110472489B (zh) * 2019-07-05 2023-05-05 五邑大学 Face beauty grade prediction method and apparatus, and storage medium


Also Published As

Publication number Publication date
EP4113430A1 (en) 2023-01-04
US20220392128A1 (en) 2022-12-08
JP2023515144A (ja) 2023-04-12
CN111275650B (zh) 2023-10-17
US11769286B2 (en) 2023-09-26
JP7516535B2 (ja) 2024-07-16
EP4113430A4 (en) 2023-08-09
CN111275650A (zh) 2020-06-12

Similar Documents

Publication Publication Date Title
WO2021169736A1 (zh) Beauty processing method and apparatus
US20240078838A1 (en) Face reenactment
WO2019223421A1 (zh) Method and apparatus for generating a cartoon face image, and computer storage medium
US10938725B2 (en) Load balancing multimedia conferencing system, device, and methods
WO2016145830A1 (zh) Image processing method, terminal, and computer storage medium
WO2021083069A1 (zh) Method and device for training a face-swapping model
WO2019233256A1 (zh) Face sticker generation method and apparatus, readable storage medium, and mobile terminal
KR102045575B1 (ko) Smart mirror display device
CN105096353B (zh) Image processing method and apparatus
WO2020077914A1 (zh) Image processing method and apparatus, and hardware apparatus
JP7209851B2 (ja) Image deformation control method and apparatus, and hardware device
WO2020244074A1 (zh) Expression interaction method and apparatus, computer device, and readable storage medium
WO2021004113A1 (zh) Speech synthesis method and apparatus, computer device, and storage medium
WO2022100680A1 (zh) Mixed-race face image generation method, model training method, apparatus, and device
WO2022193910A1 (zh) Data processing method, apparatus, and system, electronic device, and readable storage medium
WO2021135286A1 (zh) Video processing method, video search method, terminal device, and computer-readable storage medium
JP2018538608A (ja) Face verification method and electronic device
WO2021093595A1 (zh) Method for verifying user identity, and electronic device
CN111429338B (zh) Method, apparatus, device, and computer-readable storage medium for processing video
WO2021120626A1 (zh) Image processing method, terminal, and computer storage medium
US20240046538A1 (en) Method for generating face shape adjustment image, model training method, apparatus and device
WO2021223724A1 (zh) Information processing method and apparatus, and electronic device
CN108021905A (zh) Picture processing method and apparatus, terminal device, and storage medium
WO2022166908A1 (zh) Style image generation method, model training method, apparatus, and device
WO2022237633A1 (zh) Image processing method, apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21761290

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022550940

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021761290

Country of ref document: EP

Effective date: 20220926