WO2019241981A1 - Beauty treatment method and device, unmanned aerial vehicle, and handheld platform - Google Patents

Beauty treatment method and device, unmanned aerial vehicle, and handheld platform

Info

Publication number
WO2019241981A1
WO2019241981A1 (PCT application PCT/CN2018/092310)
Authority
WO
WIPO (PCT)
Prior art keywords
target object
image
processed
information
tracking
Prior art date
Application number
PCT/CN2018/092310
Other languages
English (en)
French (fr)
Inventor
钟承群 (Zhong Chengqun)
白高平 (Bai Gaoping)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2018/092310 priority Critical patent/WO2019241981A1/zh
Priority to CN201880031569.0A priority patent/CN110678904A/zh
Publication of WO2019241981A1 publication Critical patent/WO2019241981A1/zh
Priority to US17/116,600 priority patent/US20210287394A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/40Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/62Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/628Memory colours, e.g. skin or sky
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/643Hue control means, e.g. flesh tone control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Definitions

  • the present application relates to the field of image processing technology, and in particular, to a method and device for beauty treatment, a drone, and a handheld platform.
  • the image can be beautified, and the image after the beautification process is displayed to the user for the user to enjoy.
  • the present application provides a beauty treatment method, device, drone and handheld platform to improve the efficiency of beauty treatment.
  • an embodiment of the present application provides a beauty treatment method.
  • the beauty treatment method may include:
  • the tracking information includes characteristic information of the target object
  • Beauty treatment is performed on the target object according to the feature information corresponding to the target object, to achieve beautification of the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
  • the characteristic information of the target object includes at least one of position information of the target object in the image to be processed, feature information of facial features, and face contour information.
  • the beautifying of the target object according to the feature information corresponding to the target object includes:
  • the beauty treatment includes at least one of the following:
  • Dermabrasion treatment, skin tone conversion treatment, and/or whitening treatment.
  • before the acquiring of the tracking information of the target object in the image to be processed, the method further includes:
  • the acquiring tracking information of a target object in an image to be processed includes:
  • the tracking information of the target object is obtained through the tracking technology.
  • the acquiring the tracking information of the target object by using the tracking technology includes:
  • Feature information of the target object in the image to be processed is calculated by a tracking technique.
  • the acquiring the tracking information of the target object in the image to be processed includes:
  • tracking information of a target object in the image to be processed is acquired.
  • the method further includes:
  • storing the feature information of the target object in the image to be processed according to a user configuration instruction includes:
  • feature information of the target object is stored in the Extensible Metadata Platform (XMP) metadata of the image to be processed.
  • the storing the characteristic information of the target object in a video to which the image to be processed belongs according to a user configuration instruction includes:
  • feature information of the target object is stored in a subtitle text of a video to which the image to be processed belongs or metadata of the video.
  • an embodiment of the present application provides a beauty treatment device, which may include a processor and a memory;
  • the memory is used to store program instructions
  • the processor is configured to obtain tracking information of a target object in an image to be processed; the tracking information includes characteristic information of the target object;
  • the processor is further configured to perform a beautification process on the target object according to the feature information corresponding to the target object, so as to implement a beautification process on the target object.
  • the feature information of the target object includes at least one of position information of the target object in the to-be-processed image, facial feature information, and face contour information.
  • the processor is specifically configured to perform a beauty treatment on the target object according to the characteristic information of the target object;
  • the beauty treatment includes at least one of the following:
  • Dermabrasion treatment, skin tone conversion treatment, and/or whitening treatment.
  • the processor is further configured to determine a target object in the image to be processed, and enable a tracking technology
  • the processor is specifically configured to obtain tracking information of the target object through the tracking technology.
  • the processor is specifically configured to calculate feature information of the target object in the image to be processed by using a tracking technology.
  • the processor is specifically configured to acquire tracking information of a target object in the image to be processed during a video shooting process.
  • the processor is further configured to store feature information of the target object in the image to be processed according to a user configuration instruction; or to store feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
  • the processor is specifically configured to store feature information of the target object in the Extensible Metadata Platform (XMP) metadata of the image to be processed according to the user configuration instruction.
  • the processor is specifically configured to store the feature information of the target object in the subtitle text of the video to which the image to be processed belongs, or in the metadata of the video, according to the user configuration instruction.
  • an embodiment of the present application further provides a drone, and the drone may include: the beauty treatment device shown in the second aspect.
  • an embodiment of the present application further provides a handheld platform, and the handheld platform may include the beauty treatment device shown in the second aspect.
  • an embodiment of the present application further provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the beauty treatment method shown in the first aspect is performed.
  • the beauty treatment method, device, drone, and handheld platform provided in the embodiments of the present application can directly obtain the tracking information of the target object when performing beauty treatment on the target object; because the tracking information includes the characteristic information of the target object, once the feature information of the target object is obtained, the target object can be directly subjected to beauty treatment according to that feature information, so as to realize the beautification of the target object.
  • FIG. 1 is a schematic flowchart of a beauty treatment method according to an embodiment of the present application
  • FIG. 2 is a schematic flowchart of another beauty treatment method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of determining a target object according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a beauty treatment device according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a drone according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a handheld platform according to an embodiment of the present application.
  • the beauty treatment method provided in the embodiments of the present application can be applied to devices such as a drone, a handheld platform, and a remote controller.
  • the tracking information of the target object can be obtained directly through the tracking technology in the drone. Because the tracking information includes the characteristic information of the target object, the drone can directly perform the beautification processing on the target object according to the characteristic information of the target object, to achieve the beautification of the target object.
  • the drone can also send the tracking information of the target object to a terminal device connected to it, so that the terminal device can directly perform beauty treatment on the target object according to the characteristic information of the target object, thereby achieving beautification of the target object.
  • It can be seen that, compared with the prior art, the beauty treatment method provided in the embodiments of the present application does not need to perform face recognition on the image to be processed, and does not need to extract facial feature information from the image to be processed, thereby improving the efficiency of the beauty treatment.
  • FIG. 1 is a schematic flowchart of a beauty treatment method according to an embodiment of the present application.
  • the beauty treatment method may include:
  • the tracking information includes characteristic information of the target object.
  • the target object may be a person, an animal, or a scene.
  • the characteristic information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and facial contour information.
  • the facial feature information may include eyebrow characteristics, eye characteristics, nose characteristics, mouth characteristics, and ear characteristics of the target object.
  • the position information of the target object in the image to be processed may be represented by coordinate points.
  • the center point of the shooting screen may be used as the coordinate origin; of course, other points may also be used as the coordinate origin.
  • the embodiment of the present application only uses the center point of the shooting screen as the coordinate origin for illustration, but it does not mean that the embodiment of the present application is limited to this.
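The coordinate convention described above can be illustrated with a short sketch; the frame size and pixel values here are illustrative assumptions, not values from the application:

```python
# Hypothetical sketch: expressing a target's position relative to the
# center point of the shooting screen, which serves as the coordinate origin.

def to_center_origin(x_px, y_px, frame_w, frame_h):
    """Convert pixel coordinates (origin at top-left) to coordinates
    with the origin at the center of the shooting screen."""
    return x_px - frame_w / 2.0, y_px - frame_h / 2.0

# A target whose bounding-box center is at pixel (960, 540) in a
# 1920x1080 frame sits exactly at the coordinate origin.
print(to_center_origin(960, 540, 1920, 1080))  # (0.0, 0.0)
```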
  • the feature information of the target object may include only any one of the position information, facial feature information, and face contour information of the target object in the image to be processed; it may also include any two of them, or all three.
  • the feature information of the target object also includes information such as the expression of the target object.
  • the embodiment of the present application only uses the feature information of the target object including at least one of the position information of the target object in the image to be processed, facial feature information, and face contour information as an example for description, but it does not mean that the embodiments of the present application are limited to this.
  • the beauty treatment includes at least one of the following: a microdermabrasion treatment, a skin color conversion treatment, and/or a whitening treatment.
  • the microdermabrasion treatment can be understood as a sanding treatment on the skin to make rough skin smoother;
  • the skin color transformation treatment can be understood as a transformation treatment on the skin to brighten the skin tone;
  • the whitening treatment can be understood as a fairness treatment on the skin, which makes the skin fairer.
  • the beauty treatment may also include beauty treatments such as thin face treatment, heightening treatment, wrinkle removal treatment, brightening treatment, and eye enlargement treatment.
  • the embodiments of the present application only use these beauty treatments as an example for description, but it does not mean that the embodiments of the present application are limited to this.
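As an illustration of how such treatments might operate on the tracked face region, here is a minimal pure-Python sketch: grayscale values, a 3x3 mean filter standing in for a real dermabrasion filter, and a linear push toward white for whitening. None of this is the application's actual implementation; it only shows treatments confined to the region given by the feature information.

```python
# Illustrative beauty treatments applied only inside a region of interest
# (ROI) supplied by the tracking information, so no face detection runs.

def smooth_region(img, x0, y0, x1, y1):
    """'Dermabrasion' stand-in: replace each interior ROI pixel with its
    3x3 neighborhood mean, averaging away blemishes."""
    out = [row[:] for row in img]
    for y in range(max(y0, 1), min(y1, len(img) - 1)):
        for x in range(max(x0, 1), min(x1, len(img[0]) - 1)):
            s = sum(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

def whiten_region(img, x0, y0, x1, y1, strength=0.3):
    """Whitening stand-in: move each ROI pixel a fraction of the way
    toward white (255)."""
    out = [row[:] for row in img]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = int(img[y][x] + (255 - img[y][x]) * strength)
    return out

img = [[100] * 5 for _ in range(5)]
img[2][2] = 190  # a "blemish"
smoothed = smooth_region(img, 0, 0, 5, 5)
print(smoothed[2][2])  # 110: the blemish is averaged away
bright = whiten_region(img, 0, 0, 5, 5)
print(bright[2][2])  # 209: 190 + (255 - 190) * 0.3 = 209.5 -> 209
```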
  • the embodiment of the present application only uses the target object as a person to perform the beauty treatment on the person in the image as an example.
  • the target object may also be a scene.
  • the beauty processing may include beauty modes such as brightness processing, contrast processing, saturation processing, sharpening processing, color temperature processing, or light-shading adjustment processing.
  • After the feature information of the target object is obtained through S101, the target object can be directly subjected to beauty treatment according to that feature information. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
  • the beauty treatment method provided in the embodiment of the present application can directly obtain the tracking information of the target object when performing beauty treatment on the target object. Because the tracking information includes the characteristic information of the target object, once that characteristic information is obtained, the target object can be directly processed according to it to realize beautification. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
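The efficiency claim above can be sketched schematically: the prior-art pipeline must run face detection once per image, while the tracking-based pipeline reuses the feature information already carried by the tracking info. All names and structures below are hypothetical stand-ins, not the application's implementation:

```python
# Schematic contrast of the two pipelines described in the text.

def beautify(image, box):
    """Stand-in for any beauty treatment applied inside the box."""
    return {"image": image, "beautified_box": box}

def prior_art_pipeline(images, detect_face):
    # Face detection must run once per image -- the costly step.
    return [beautify(img, detect_face(img)) for img in images]

def tracking_pipeline(images, tracking_info):
    # The box comes straight from the tracking information; no detection.
    return [beautify(img, info["box"]) for img, info in zip(images, tracking_info)]

calls = []
def fake_detector(img):
    calls.append(img)  # count how often detection runs
    return (0, 0, 10, 10)

images = ["f0", "f1", "f2"]
prior_art_pipeline(images, fake_detector)
print(len(calls))  # 3: one face detection per frame

tracked = [{"box": (0, 0, 10, 10)} for _ in images]
out = tracking_pipeline(images, tracked)
print(len(calls), len(out))  # still only 3 detections total; 3 frames beautified
```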
  • FIG. 2 is a schematic flowchart of another beauty treatment method provided by an embodiment of the present application.
  • the beauty treatment method may further include:
  • the tracking technology is a technology based on visual tracking.
  • the tracking technology can be implemented by software code.
  • the software code can be integrated into the processor to implement tracking of the target object.
  • FIG. 3 is a schematic diagram of determining a target object according to an embodiment of the present application.
  • the image of Zhang San can be placed in the target frame on the shooting screen, so as to determine Zhang San in the target frame as the target object to be tracked; after the target object Zhang San is determined, the tracking technology can be started to realize the tracking of Zhang San.
  • when the tracking technology is enabled, it may be started by clicking an on button of the tracking technology, triggered by voice input, or, of course, started by a gesture (without touching the display screen).
  • the embodiments of the present application only use these methods as examples for illustration, but it does not mean that the embodiments of the present application are limited to this.
  • the tracking technology is used to calculate the feature information of the target object in the image to be processed.
  • the tracking information includes characteristic information of the target object.
  • the feature information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
  • the feature information of the target object also includes information such as the expression of the target object.
  • the embodiment of the present application only uses the feature information of the target object including at least one of position information of the target object in the image to be processed, facial feature information, and face contour information as an example, but it does not mean that the embodiments of the present application are limited to this.
  • When the tracking information of the target object in the image to be processed is obtained through the tracking technology, the drone will track the movement of the target object and organize the composition so that the position of the target object relative to the target frame of the shooting screen remains unchanged.
  • the tracking technology calculates the feature information of the target object in each frame of the image, thereby obtaining the feature information of the target object in the image to be processed.
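A minimal sketch of this per-frame flow, with a stubbed tracker in place of the real vision-based tracking technology (the tracker step, box format, and feature fields are assumptions for illustration):

```python
# Per-frame collection of tracking information: the tracker follows the
# target frame's bounding box from frame to frame, and the feature
# information it reports can be reused for beauty treatment without
# ever re-running face recognition.

def track_and_collect(frames, init_box, tracker_step):
    """For each frame, update the box via the tracker and record the
    tracking information (here just the box and its center position)."""
    box = init_box
    tracking_info = []
    for idx, frame in enumerate(frames):
        box = tracker_step(frame, box)  # stand-in for a real vision tracker
        x, y, w, h = box
        tracking_info.append({
            "frame": idx,
            "box": box,
            "center": (x + w // 2, y + h // 2),  # position feature
        })
    return tracking_info

# Stub tracker: the target drifts 2 px to the right per frame.
def drift_right(frame, box):
    x, y, w, h = box
    return (x + 2, y, w, h)

info = track_and_collect(range(3), (100, 50, 40, 60), drift_right)
print(info[-1]["box"])     # (106, 50, 40, 60)
print(info[-1]["center"])  # (126, 80)
```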
  • the tracking information of the target object in the image to be processed may be acquired during the video shooting process.
  • the following S203 may be performed: beauty treatment is performed directly based on the feature information of the target object included in the tracking information, thereby outputting the image after the beautification processing.
  • the drone may also not perform the beauty treatment according to the characteristic information of the target object included in the tracking information, but instead execute the following S204: the acquired tracking information is stored. If the image to be processed needs beauty treatment later, the pre-stored tracking information is retrieved, and the beauty treatment is performed according to the feature information of the target object included in it.
  • S203 Perform beauty treatment on the target object according to the characteristic information of the target object.
  • the beautification treatment includes at least one of the following: microdermabrasion treatment, skin color conversion treatment, and/or whitening treatment; it can also include facial treatments such as face-lifting, heightening, wrinkle removal, brightening, and eye enlargement.
  • the embodiments of the present application only use these beauty treatments as an example for description, but it does not mean that the embodiments of the present application are limited to this.
  • After the tracking information of the target object in the image to be processed is obtained through the tracking technology in S202, the target object can be directly subjected to beauty treatment according to the characteristic information of the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
  • S204 Store the characteristic information of the target object in the image to be processed according to the user configuration instruction; or store the characteristic information of the target object in the video to which the image to be processed belongs according to the user configuration instruction.
  • the characteristic information of the target object may be stored in the Extensible Metadata Platform (XMP) metadata of the image to be processed; if the user configuration instruction instructs storing the characteristic information of the target object and the image to be processed belongs to a video, the characteristic information of the target object may also be stored in the subtitle text of the video or in the metadata of the video; of course, the feature information of the target object can also be stored separately in a file, and a one-to-one mapping relationship between the feature information of the target object and the corresponding image to be processed can be established in that file.
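One possible shape for the storage option above, sketched as a JSON sidecar next to the image file. A real implementation would embed the data in the image's XMP packet or in the video's subtitle/metadata track; the file-naming convention and field names here are invented for illustration:

```python
# Hypothetical sidecar storage for a target's feature information.
import json
import os
import tempfile

features = {
    "position": [126, 80],
    "face_contour": [[110, 60], [150, 60], [150, 120], [110, 120]],
}

def save_sidecar(image_path, feature_info):
    """Serialize feature info next to the image as an XMP-style sidecar."""
    sidecar = image_path + ".xmp.json"  # illustrative naming convention
    with open(sidecar, "w") as f:
        json.dump(feature_info, f)
    return sidecar

def load_sidecar(sidecar_path):
    """Read the feature info back for a later beauty-treatment pass."""
    with open(sidecar_path) as f:
        return json.load(f)

tmp = os.path.join(tempfile.gettempdir(), "frame_0001.jpg")
path = save_sidecar(tmp, features)
print(load_sidecar(path)["position"])  # [126, 80]
```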
  • the stored feature information of the target object may also be transmitted, via Wi-Fi, Universal Serial Bus (USB), or other transmission methods, to an application (Application, APP) on the terminal device, so that the terminal device can also perform beautification processing on the target object according to the characteristic information of the target object.
  • it is also possible to directly perform beauty treatment on the target object according to the characteristic information of the target object, to realize the beautification processing of the target object.
  • compared with the beauty treatment method of the terminal device in the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
  • when taking a beauty photo, the target object in the image to be processed can first be determined and the tracking technology enabled via its start button; after the tracking technology is started, the tracking information of the target object in the image to be processed is obtained through the tracking technology during the video shooting process. Since the tracking information includes the characteristic information of the target object, once that characteristic information is obtained, the target object can be directly subjected to beauty treatment according to it, to achieve beautification of the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract facial feature information from it, thereby improving the efficiency of the beauty treatment.
  • FIG. 4 is a schematic structural diagram of a beauty treatment device 40 according to an embodiment of the present application.
  • the beauty treatment device 40 may include a processor 401 and a memory 402.
  • the memory 402 is configured to store program instructions.
  • the processor 401 is configured to acquire tracking information of a target object in an image to be processed; the tracking information includes characteristic information of the target object.
  • the processor 401 is further configured to perform a beauty treatment on the target object according to the feature information corresponding to the target object.
  • the feature information of the target object includes at least one of position information of the target object in the image to be processed, feature information of the facial features, and face contour information.
  • the processor 401 is specifically configured to perform beauty treatment on the target object according to the characteristic information of the target object; the beauty treatment includes at least one of the following:
  • Dermabrasion treatment, skin tone conversion treatment, and/or whitening treatment.
  • the processor 401 is further configured to determine a target object in an image to be processed and enable a tracking technology.
  • the processor 401 is specifically configured to obtain tracking information of a target object through a tracking technology.
  • the processor 401 is specifically configured to calculate the feature information of the target object in the image to be processed by using a tracking technology.
  • the processor 401 is specifically configured to acquire tracking information of a target object in an image to be processed during a video shooting process.
  • the processor 401 is further configured to store the characteristic information of the target object in the image to be processed according to the user configuration instruction; or store the characteristic information of the target object in the video to which the image to be processed belongs according to the user configuration instruction.
  • the processor 401 is specifically configured to store the characteristic information of the target object in the Extensible Metadata Platform (XMP) metadata of the image to be processed according to a user configuration instruction.
  • the processor 401 is specifically configured to store feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video according to a user configuration instruction.
  • the above-mentioned beauty treatment device 40 may correspondingly execute the technical solution of the beauty treatment method of any embodiment, and its implementation principles and technical effects are similar, and details are not described herein again.
  • FIG. 5 is a schematic structural diagram of a drone 50 according to an embodiment of the present application.
  • the drone 50 may include the beauty treatment device 40 shown in any one of the foregoing embodiments; its implementation principles and technical effects are similar to those of the beauty treatment method and are not repeated here.
  • FIG. 6 is a schematic structural diagram of a handheld platform 60 according to an embodiment of the present application.
  • the handheld platform 60 may include the beauty treatment device 40 shown in any of the foregoing embodiments; its implementation principles and technical effects are similar to those of the beauty treatment method and are not repeated here.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the beauty processing method shown in any of the foregoing embodiments is performed; its implementation principles and technical effects are respectively similar to those of the beauty processing method and are not repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a beauty processing method and device, an unmanned aerial vehicle, and a handheld platform. The method includes: acquiring tracking information of a target object in an image to be processed, where the tracking information includes feature information of the target object; and performing beauty processing on the target object according to the feature information of the target object. The beauty processing method and device, unmanned aerial vehicle, and handheld platform provided by the embodiments improve the efficiency of beauty processing.

Description

Beauty Processing Method and Device, Unmanned Aerial Vehicle, and Handheld Platform

Technical Field
The present application relates to the field of image processing technology, and in particular, to a beauty processing method and device, an unmanned aerial vehicle, and a handheld platform.
Background
With the continuous development of image processing technology, a device with a photo or video capture function can first perform beauty processing on an image before displaying it, and then display the processed image to the user for the user to enjoy.
In the prior art, performing beauty processing on an image requires first running face recognition on the image to determine that face information is present and extracting feature information such as the person's eyes, nose, and mouth, and then applying post-processing to the face such as edge preservation, blemish blurring, and bilateral filtering, so as to achieve the beautification effect.
However, the existing beauty processing methods make beauty processing inefficient.
Summary
The present application provides a beauty processing method and device, an unmanned aerial vehicle, and a handheld platform, so as to improve the efficiency of beauty processing.
In a first aspect, an embodiment of the present application provides a beauty processing method, which may include:
acquiring tracking information of a target object in an image to be processed, where the tracking information includes feature information of the target object; and
performing beauty processing on the target object according to the feature information of the target object, so as to beautify the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
In a possible implementation, the feature information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
In a possible implementation, performing beauty processing on the target object according to the feature information of the target object includes:
performing beauty processing on the target object according to the feature information of the target object, where the beauty processing includes at least one of the following:
skin smoothing processing, skin tone transformation processing, and/or whitening processing.
In a possible implementation, before acquiring the tracking information of the target object in the image to be processed, the method further includes:
determining the target object in the image to be processed, and enabling a tracking technology; and
acquiring the tracking information of the target object in the image to be processed includes:
acquiring the tracking information of the target object through the tracking technology.
In a possible implementation, acquiring the tracking information of the target object through the tracking technology includes:
computing the feature information of the target object in the image to be processed through the tracking technology.
In a possible implementation, acquiring the tracking information of the target object in the image to be processed includes:
acquiring the tracking information of the target object in the image to be processed during video shooting.
In a possible implementation, after acquiring the tracking information of the target object in the image to be processed, the method further includes:
storing the feature information of the target object in the image to be processed according to a user configuration instruction; or storing the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
In a possible implementation, storing the feature information of the target object in the image to be processed according to the user configuration instruction includes:
storing, according to the user configuration instruction, the feature information of the target object in the XMP (Extensible Metadata Platform) metadata of the image to be processed.
In a possible implementation, storing the feature information of the target object in the video to which the image to be processed belongs according to the user configuration instruction includes:
storing, according to the user configuration instruction, the feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video.
In a second aspect, an embodiment of the present application provides a beauty processing device, which may include a processor and a memory;
where the memory is configured to store program instructions;
the processor is configured to acquire tracking information of a target object in an image to be processed, where the tracking information includes feature information of the target object; and
the processor is further configured to perform beauty processing on the target object according to the feature information of the target object, so as to beautify the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
In a possible implementation, the feature information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
In a possible implementation, the processor is specifically configured to perform beauty processing on the target object according to the feature information of the target object, where the beauty processing includes at least one of the following:
skin smoothing processing, skin tone transformation processing, and/or whitening processing.
In a possible implementation, the processor is further configured to determine the target object in the image to be processed and enable a tracking technology; and
the processor is specifically configured to acquire the tracking information of the target object through the tracking technology.
In a possible implementation, the processor is specifically configured to compute the feature information of the target object in the image to be processed through the tracking technology.
In a possible implementation, the processor is specifically configured to acquire the tracking information of the target object in the image to be processed during video shooting.
In a possible implementation, the processor is further configured to store the feature information of the target object in the image to be processed according to a user configuration instruction, or store the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
In a possible implementation, the processor is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the XMP (Extensible Metadata Platform) metadata of the image to be processed.
In a possible implementation, the processor is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video.
In a third aspect, an embodiment of the present application further provides an unmanned aerial vehicle, which may include the beauty processing device described in the second aspect.
In a fourth aspect, an embodiment of the present application further provides a handheld platform, which may include the beauty processing device described in the second aspect.
In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the beauty processing method described in the first aspect is performed.
With the beauty processing method and device, unmanned aerial vehicle, and handheld platform provided by the embodiments of the present application, when beauty processing is performed on a target object, the tracking information of the target object can be obtained directly. Since the tracking information includes the feature information of the target object, once the feature information is obtained, beauty processing can be performed on the target object directly according to it, so as to beautify the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show some embodiments of the present application, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a beauty processing method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of another beauty processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of determining a target object according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a beauty processing device according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a handheld platform according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application. Where no conflict arises, the features in the following embodiments and implementations may be combined with one another.
The beauty processing method provided by the embodiments of the present application can be applied to devices such as unmanned aerial vehicles, handheld platforms, and remote controls. Taking an unmanned aerial vehicle as an example, when a user shoots a video of a target object with the drone, the tracking information of the target object can be obtained directly through the tracking technology in the drone. Since the tracking information includes the feature information of the target object, the drone can perform beauty processing on the target object directly according to that feature information, so as to beautify the target object. Of course, after obtaining the tracking information of the target object through the tracking technology, the drone may also send the tracking information to a connected terminal device, so that the terminal device can perform beauty processing on the target object directly according to the feature information, so as to beautify the target object. It can thus be seen that, compared with the prior art, the beauty processing method provided by the embodiments of the present application does not require face recognition on the image to be processed or extraction of face feature information from it, thereby improving the efficiency of beauty processing.
The technical solutions of the present application, and how they solve the above technical problems, are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and identical or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present application are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of a beauty processing method according to an embodiment of the present application. As shown in FIG. 1, the beauty processing method may include:
S101: Acquire tracking information of a target object in an image to be processed.
The tracking information includes feature information of the target object. Optionally, the target object may be a person, an animal, or scenery. In the embodiments of the present application, when the target object is a person, the feature information of the target object optionally includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information. Further, the facial feature information may include the eyebrow, eye, nose, mouth, and ear features of the target object.
For example, in the embodiments of the present application, the position information of the target object in the image to be processed may be expressed as coordinate points. When doing so, the center of the shooting screen may be taken as the coordinate origin; of course, another point may also serve as the origin. Here the embodiments merely take the center of the shooting screen as the origin for illustration, which does not mean the embodiments are limited thereto.
It should be noted that in the embodiments of the present application, the feature information of the target object may include any one, any two, or all three of the position information of the target object in the image to be processed, the facial feature information, and the face contour information. Of course, the feature information of the target object may also include information such as the expression of the target object. Here the embodiments merely take the case where the feature information includes at least one of the position information, the facial feature information, and the face contour information as an example, which does not mean the embodiments are limited thereto.
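To make the data involved concrete, one might carry the per-frame feature information in a structure like the following. This is an illustration invented for this summary, not part of the disclosure; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical container for the feature information that a tracking
# technology produces for one frame; field names are illustrative only.
@dataclass
class TargetFeatureInfo:
    # Position of the target in the frame, as (x, y) with the origin at
    # the center of the shooting screen (per the example in the text).
    position: Tuple[float, float]
    # Facial-feature landmarks: eyebrows, eyes, nose, mouth, ears.
    landmarks: dict = field(default_factory=dict)
    # Face contour as a polygon of (x, y) points, if available.
    contour: Optional[List[Tuple[float, float]]] = None

info = TargetFeatureInfo(position=(12.0, -8.5),
                         landmarks={"nose": (12.0, -6.0)})
print(info.position)  # (12.0, -8.5)
```

Any subset of the three kinds of feature information can then be populated per frame, matching the "any one, any two, or all three" wording above.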
After the feature information of the target object in the image to be processed is acquired in S101, the following S102 may be performed:
S102: Perform beauty processing on the target object according to the feature information of the target object.
Optionally, the beauty processing includes at least one of the following: skin smoothing processing, skin tone transformation processing, and/or whitening processing.
Skin smoothing processing can be understood as polishing the skin so that rough skin becomes smoother; skin tone transformation processing can be understood as transforming the skin tone so that it becomes brighter; and whitening processing can be understood as processing the fairness of the skin so that it becomes fairer. Of course, in addition to the above skin smoothing, skin tone transformation, and/or whitening processing, the beauty processing may also include face slimming, height enhancement, wrinkle removal, eye brightening, eye enlargement, and other beauty processing. Of course, the embodiments of the present application merely take these kinds of beauty processing as examples, which does not mean the embodiments are limited thereto.
It should be noted that the embodiments of the present application merely take a person as the target object, performing beauty processing on the person in the image, as an example. Of course, the target object may also be scenery; when beauty processing is performed on scenery, it may include beautification modes such as brightness, contrast, saturation, sharpening, color temperature, or light/shade adjustment.
After the feature information of the target object is acquired in S101, beauty processing can be performed on the target object directly according to that feature information. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
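As a minimal sketch of this idea (not the patent's implementation), the snippet below brightens only the region that the tracking information marks as the face, so no face detection runs on the image itself. The `(x0, y0, x1, y1)` box format and the blend strength are assumptions made for the example.

```python
# Whitening sketch on a grayscale image held as nested lists (0-255):
# lighten only the pixels inside the face region reported by tracking.
def whiten_region(image, box, strength=0.5):
    """image: list of rows of ints; box: (x0, y0, x1, y1) from tracking."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in image]  # copy so the input stays intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            # Blend each pixel toward white by `strength`.
            out[y][x] = int(out[y][x] + (255 - out[y][x]) * strength)
    return out

img = [[100] * 4 for _ in range(4)]
result = whiten_region(img, (1, 1, 3, 3))
print(result[1][1], result[0][0])  # 177 100
```

The key point the sketch mirrors is that the face region comes in from outside (the tracking information), so the beauty step itself contains no recognition logic.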
With the beauty processing method provided by the embodiments of the present application, when beauty processing is performed on a target object, the tracking information of the target object can be obtained directly. Since the tracking information includes the feature information of the target object, once the feature information is obtained, beauty processing can be performed on the target object directly according to it, so as to beautify the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
Based on the embodiment shown in FIG. 1, and to explain the beauty processing method of the embodiments of the present application more clearly, refer to FIG. 2, which is a schematic flowchart of another beauty processing method according to an embodiment of the present application. The beauty processing method may further include:
S201: Determine the target object in the image to be processed, and enable the tracking technology.
The tracking technology is a technology based on visual tracking, which can be implemented in software code; in a specific implementation, the software code can be integrated into a processor, thereby realizing the tracking of the target object.
For example, refer to FIG. 3, which is a schematic diagram of determining a target object according to an embodiment of the present application. When the target object is determined to be Zhang San, the image of Zhang San can be placed in the target frame of the shooting screen, so as to determine Zhang San in the target frame as the target object to be tracked; after the target object Zhang San is determined, the tracking technology can be started so as to track Zhang San through it. For example, in the embodiments of the present application, the tracking technology can be started by tapping its start button, triggered by voice input, or started by a gesture (without touching the display screen). Of course, the embodiments of the present application merely take these several ways as examples, which does not mean the embodiments are limited thereto.
After the target object in the image to be processed is determined and the tracking technology is enabled in S201, the following S202 may be performed:
S202: Acquire the tracking information of the target object in the image to be processed through the tracking technology.
Optionally, the feature information of the target object in the image to be processed is computed through the tracking technology.
The tracking information includes the feature information of the target object. Optionally, the feature information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information; of course, the feature information may also include information such as the expression of the target object. Here the embodiments merely take the case where the feature information includes at least one of the position information, the facial feature information, and the face contour information as an example, which does not mean the embodiments are limited thereto.
When acquiring the tracking information of the target object in the image to be processed through the tracking technology, the drone tracks the movement of the target object and arranges the composition so that the position of the target object remains relatively fixed with respect to the target frame of the shooting screen, and computes the feature information of the target object in each frame through the tracking technology, thereby obtaining the feature information of the target object in the image to be processed.
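A rough sketch of this per-frame flow is shown below. The tracker here is a stand-in written for illustration, not the patent's tracking technology; a real visual tracker would update the box by following the target across frames.

```python
# Per-frame sketch: a stand-in tracker reports the target's box for
# each frame, and the feature information is collected per frame, so
# no face recognition is performed on the frames themselves.
def fake_tracker(frames, initial_box):
    """Yield (frame_index, box) pairs; a stand-in for visual tracking."""
    box = initial_box
    for i, _frame in enumerate(frames):
        yield i, box  # stand-in: target stays in the target frame

def process_video(frames, initial_box):
    results = {}
    for i, box in fake_tracker(frames, initial_box):
        # Feature info per frame; a real tracker would also report
        # landmarks and the face contour alongside the position.
        results[i] = {"position": box}
    return results

frames = ["frame0", "frame1", "frame2"]
summary = process_video(frames, (1, 1, 3, 3))
print(len(summary))  # 3
```

The per-frame dictionary mirrors the text's point that feature information is computed for every frame while the composition keeps the target near the target frame.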
It should be noted that the tracking information of the target object in the image to be processed may be acquired during video shooting. For example, for a drone, during video shooting such as beautified capture, after the tracking information of the target object in the image to be processed is acquired, the following S203 may be performed: perform beauty processing directly according to the feature information of the target object included in the tracking information, thereby outputting the processed image. Of course, after acquiring the tracking information of the target object in the image to be processed, the drone may also skip the immediate beauty processing and instead perform the following S204: store the acquired tracking information; if the image needs beauty processing later, look up the pre-stored tracking information and perform beauty processing according to the feature information of the target object included in it.
S203: Perform beauty processing on the target object according to the feature information of the target object.
Optionally, the beauty processing includes at least one of the following: skin smoothing processing, skin tone transformation processing, and/or whitening processing. Of course, in addition, the beauty processing may also include face slimming, height enhancement, wrinkle removal, eye brightening, eye enlargement, and other beauty processing. Of course, the embodiments of the present application merely take these kinds of beauty processing as examples, which does not mean the embodiments are limited thereto.
After the tracking information of the target object in the image to be processed is acquired through the tracking technology in S202, beauty processing can be performed on the target object directly according to the feature information. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
In addition, it should be noted that after the feature information of the target object in the image to be processed is acquired, beauty processing may not be performed immediately according to the feature information included in the tracking information; instead, the feature information of the target object may be stored, that is, the following S204 is performed:
S204: Store the feature information of the target object in the image to be processed according to a user configuration instruction; or store the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
For example, when a user configuration instruction is received, if it indicates storing the feature information of the target object and the image to be processed is a picture, the feature information can be stored in the XMP (Extensible Metadata Platform) metadata of the image to be processed; if it indicates storing the feature information and the image to be processed belongs to a video, the feature information can be stored in the subtitle text of that video or in the metadata of the video. Of course, the feature information of the target object can also be stored separately in a file, in which a one-to-one mapping between the feature information of the target object and the corresponding image to be processed is established.
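Since XMP is an XML-based metadata format, one way to picture the storage step is serializing the feature information into an XMP-style RDF packet, as sketched below. The namespace URI and property names are invented for illustration and are not part of the disclosure.

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
BEAUTY_NS = "http://example.com/beauty/1.0/"  # hypothetical namespace

# Build a minimal XMP-style XML packet holding the tracked feature
# information; the property names are hypothetical.
def feature_info_to_xmp(position, contour=None):
    ET.register_namespace("rdf", RDF_NS)
    rdf = ET.Element("{%s}RDF" % RDF_NS)
    desc = ET.SubElement(rdf, "{%s}Description" % RDF_NS)
    desc.set("{%s}position" % BEAUTY_NS, "{},{}".format(*position))
    if contour is not None:
        desc.set("{%s}contour" % BEAUTY_NS,
                 ";".join("{},{}".format(x, y) for x, y in contour))
    return ET.tostring(rdf, encoding="unicode")

packet = feature_info_to_xmp((12.0, -8.5))
print("12.0,-8.5" in packet)  # True
```

A real implementation would embed such a packet in the image file's XMP segment rather than keep it as a free-standing string.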
It should be noted that in the embodiments of the present application, after the feature information of the target object is stored, the stored feature information can also be sent to an application (APP) on a terminal device via Wi-Fi, Universal Serial Bus (USB), or other online transmission methods, so that the terminal device can also perform beauty processing on the target object according to that feature information. In this way, when the terminal device performs beauty processing on the target object, it can likewise do so directly according to the feature information of the target object, so as to beautify the target object. Compared with the beauty processing methods of terminal devices in the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
It should be noted that in the embodiments of the present application, there is no fixed order between S203 and S204: S203 may be performed first and then S204, or S204 first and then S203; of course, S203 and S204 may also be performed simultaneously. Here the embodiments merely take performing S203 first and then S204 as an example, which does not mean the embodiments are limited thereto.
In practical applications, when taking a beautified photo, the target object in the image to be processed can be determined first, and the tracking technology enabled by tapping its start key; after the tracking technology is started, the tracking information of the target object in the image to be processed can be acquired through the tracking technology during video shooting. Since the tracking information includes the feature information of the target object, once the feature information is obtained, beauty processing can be performed on the target object directly according to it, so as to beautify the target object. Compared with the prior art, there is no need to perform face recognition on the image to be processed or to extract face feature information from it, thereby improving the efficiency of beauty processing.
FIG. 4 is a schematic structural diagram of a beauty processing device 40 according to an embodiment of the present application. As shown in FIG. 4, the beauty processing device 40 may include a processor 401 and a memory 402.
The memory 402 is configured to store program instructions.
The processor 401 is configured to acquire tracking information of a target object in an image to be processed, where the tracking information includes feature information of the target object.
The processor 401 is further configured to perform beauty processing on the target object according to the feature information of the target object.
Optionally, the feature information of the target object includes at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
Optionally, the processor 401 is specifically configured to perform beauty processing on the target object according to the feature information of the target object, where the beauty processing includes at least one of the following:
skin smoothing processing, skin tone transformation processing, and/or whitening processing.
Optionally, the processor 401 is further configured to determine the target object in the image to be processed and enable the tracking technology.
The processor 401 is specifically configured to acquire the tracking information of the target object through the tracking technology.
Optionally, the processor 401 is specifically configured to compute the feature information of the target object in the image to be processed through the tracking technology.
Optionally, the processor 401 is specifically configured to acquire the tracking information of the target object in the image to be processed during video shooting.
Optionally, the processor 401 is further configured to store the feature information of the target object in the image to be processed according to a user configuration instruction, or store the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
Optionally, the processor 401 is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the XMP (Extensible Metadata Platform) metadata of the image to be processed.
Optionally, the processor 401 is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video.
The above beauty processing device 40 can correspondingly execute the technical solution of the beauty processing method of any embodiment; its implementation principles and technical effects are similar and are not repeated here.
FIG. 5 is a schematic structural diagram of an unmanned aerial vehicle 50 according to an embodiment of the present application. As shown in FIG. 5, the unmanned aerial vehicle 50 may include the beauty processing device 40 of any of the above embodiments; its implementation principles and technical effects are respectively similar to those of the beauty processing method and are not repeated here.
FIG. 6 is a schematic structural diagram of a handheld platform 60 according to an embodiment of the present application. As shown in FIG. 6, the handheld platform 60 may include the beauty processing device 40 of any of the above embodiments; its implementation principles and technical effects are respectively similar to those of the beauty processing method and are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the beauty processing method of any of the above embodiments is performed. Its implementation principles and technical effects are respectively similar to those of the beauty processing method and are not repeated here.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some or all of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (21)

  1. A beauty processing method, comprising:
    acquiring tracking information of a target object in an image to be processed, wherein the tracking information comprises feature information of the target object; and
    performing beauty processing on the target object according to the feature information of the target object.
  2. The method according to claim 1, wherein
    the feature information of the target object comprises at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
  3. The method according to claim 2, wherein performing beauty processing on the target object according to the feature information of the target object comprises:
    performing beauty processing on the target object according to the feature information of the target object, wherein the beauty processing comprises at least one of the following:
    skin smoothing processing, skin tone transformation processing, and/or whitening processing.
  4. The method according to claim 2 or 3, wherein before acquiring the tracking information of the target object in the image to be processed, the method further comprises:
    determining the target object in the image to be processed, and enabling a tracking technology; and
    acquiring the tracking information of the target object in the image to be processed comprises:
    acquiring the tracking information of the target object through the tracking technology.
  5. The method according to claim 4, wherein acquiring the tracking information of the target object through the tracking technology comprises:
    computing the feature information of the target object in the image to be processed through the tracking technology.
  6. The method according to any one of claims 1-5, wherein acquiring the tracking information of the target object in the image to be processed comprises:
    acquiring the tracking information of the target object in the image to be processed during video shooting.
  7. The method according to any one of claims 1-6, wherein after acquiring the tracking information of the target object in the image to be processed, the method further comprises:
    storing the feature information of the target object in the image to be processed according to a user configuration instruction; or storing the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
  8. The method according to claim 7, wherein storing the feature information of the target object in the image to be processed according to the user configuration instruction comprises:
    storing, according to the user configuration instruction, the feature information of the target object in the XMP (Extensible Metadata Platform) metadata of the image to be processed.
  9. The method according to claim 7, wherein storing the feature information of the target object in the video to which the image to be processed belongs according to the user configuration instruction comprises:
    storing, according to the user configuration instruction, the feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video.
  10. A beauty processing device, comprising a processor and a memory;
    wherein the memory is configured to store program instructions;
    the processor is configured to acquire tracking information of a target object in an image to be processed, wherein the tracking information comprises feature information of the target object; and
    the processor is further configured to perform beauty processing on the target object according to the feature information of the target object.
  11. The device according to claim 10, wherein
    the feature information of the target object comprises at least one of position information of the target object in the image to be processed, facial feature information, and face contour information.
  12. The device according to claim 11, wherein
    the processor is specifically configured to perform beauty processing on the target object according to the feature information of the target object, wherein the beauty processing comprises at least one of the following:
    skin smoothing processing, skin tone transformation processing, and/or whitening processing.
  13. The device according to claim 11 or 12, wherein
    the processor is further configured to determine the target object in the image to be processed and enable a tracking technology; and
    the processor is specifically configured to acquire the tracking information of the target object through the tracking technology.
  14. The device according to claim 13, wherein
    the processor is specifically configured to compute the feature information of the target object in the image to be processed through the tracking technology.
  15. The device according to any one of claims 10-14, wherein
    the processor is specifically configured to acquire the tracking information of the target object in the image to be processed during video shooting.
  16. The device according to any one of claims 10-15, wherein
    the processor is further configured to store the feature information of the target object in the image to be processed according to a user configuration instruction, or store the feature information of the target object in the video to which the image to be processed belongs according to a user configuration instruction.
  17. The device according to claim 16, wherein
    the processor is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the XMP (Extensible Metadata Platform) metadata of the image to be processed.
  18. The device according to claim 16, wherein
    the processor is specifically configured to store, according to the user configuration instruction, the feature information of the target object in the subtitle text of the video to which the image to be processed belongs or in the metadata of the video.
  19. An unmanned aerial vehicle, comprising the beauty processing device according to any one of claims 10-18.
  20. A handheld platform, comprising the beauty processing device according to any one of claims 10-18.
  21. A computer-readable storage medium, wherein
    a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the beauty processing method according to any one of claims 1-9 is performed.
PCT/CN2018/092310 2018-06-22 2018-06-22 Beauty processing method and device, unmanned aerial vehicle, and handheld platform WO2019241981A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/092310 WO2019241981A1 (zh) 2018-06-22 2018-06-22 Beauty processing method and device, unmanned aerial vehicle, and handheld platform
CN201880031569.0A CN110678904A (zh) 2018-06-22 2018-06-22 Beauty processing method and device, unmanned aerial vehicle, and handheld platform
US17/116,600 US20210287394A1 (en) 2018-06-22 2020-12-09 Beauty processing method, device, unmanned aerial vehicle, and handheld gimbal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/092310 WO2019241981A1 (zh) 2018-06-22 2018-06-22 Beauty processing method and device, unmanned aerial vehicle, and handheld platform

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/116,600 Continuation US20210287394A1 (en) 2018-06-22 2020-12-09 Beauty processing method, device, unmanned aerial vehicle, and handheld gimbal

Publications (1)

Publication Number Publication Date
WO2019241981A1 true WO2019241981A1 (zh) 2019-12-26

Family

ID=68983150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/092310 WO2019241981A1 (zh) 2018-06-22 2018-06-22 Beauty processing method and device, unmanned aerial vehicle, and handheld platform

Country Status (3)

Country Link
US (1) US20210287394A1 (zh)
CN (1) CN110678904A (zh)
WO (1) WO2019241981A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188085A (zh) * 2020-09-04 2021-01-05 Shanghai Moxiang Network Technology Co., Ltd. Image processing method and handheld gimbal camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100024264A (ko) * 2008-08-25 2010-03-05 Samsung Digital Imaging Co., Ltd. Apparatus and method for beauty processing in a digital image processor
CN106776619A (zh) * 2015-11-20 2017-05-31 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for determining attribute information of a target object
CN107995415A (zh) * 2017-11-09 2018-05-04 Shenzhen Gionee Communication Equipment Co., Ltd. Image processing method, terminal, and computer-readable medium


Also Published As

Publication number Publication date
CN110678904A (zh) 2020-01-10
US20210287394A1 (en) 2021-09-16

Similar Documents

Publication Publication Date Title
US10438329B2 (en) Image processing method and image processing apparatus
WO2016110199A1 (zh) Expression migration method, electronic device, and system
WO2015074476A1 (zh) Image processing method, device, and storage medium
WO2016145830A1 (zh) Image processing method, terminal, and computer storage medium
WO2016180126A1 (zh) Image processing method and device
CN107730448B (zh) Beauty processing method and device based on image processing
WO2022179025A1 (zh) Image processing method and device, electronic device, and storage medium
JP5949331B2 (ja) Image generation device, image generation method, and program
KR20160118001A (ko) Photographing apparatus, control method thereof, and computer-readable recording medium
WO2019237745A1 (zh) Face image processing method and device, electronic device, and computer-readable storage medium
CN110599410B (zh) Image processing method, device, terminal, and storage medium
KR102193638B1 (ko) Method, system, and non-transitory computer-readable recording medium for providing a hair style simulation service
US9437026B2 (en) Image creating device, image creating method and recording medium
US11769286B2 (en) Beauty processing method, electronic device, and computer-readable storage medium
JP7159357B2 (ja) Image processing method and device, electronic device, and storage medium
CN109325924A (zh) Image processing method, device, terminal, and storage medium
US11284020B2 (en) Apparatus and method for displaying graphic elements according to object
WO2016086479A1 (zh) Image processing method and device
WO2019241981A1 (zh) Beauty processing method and device, unmanned aerial vehicle, and handheld platform
KR101507410B1 (ko) Live makeup photographing method and apparatus for a mobile terminal
CN116684394A (zh) Media content processing method, apparatus, device, readable storage medium, and product
CN110770742B (zh) Shaking action recognition system and method based on facial feature points
CN109685741A (zh) Image processing method, device, and computer storage medium
CN111373409B (zh) Method and terminal for acquiring changes in facial attractiveness
WO2022042502A1 (zh) Method and device for enabling a beauty function, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18923659

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18923659

Country of ref document: EP

Kind code of ref document: A1