CN107480615A - Beauty processing method and apparatus, and mobile device - Google Patents

Beauty processing method and apparatus, and mobile device Download PDF

Info

Publication number
CN107480615A
CN107480615A (application CN201710643844.1A)
Authority
CN
China
Prior art keywords
face
models
depth information
speckle pattern
prestore
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710643844.1A
Other languages
Chinese (zh)
Other versions
CN107480615B (en)
Inventor
唐城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710643844.1A priority Critical patent/CN107480615B/en
Publication of CN107480615A publication Critical patent/CN107480615A/en
Application granted granted Critical
Publication of CN107480615B publication Critical patent/CN107480615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes a beauty processing method and apparatus, and a mobile device. The method includes: collecting a speckle pattern corresponding to the face based on structured light projected onto the face; comparing the depth information of the speckle pattern with the depth information of a prestored 3D face model to obtain a comparison result; and performing beauty processing on a plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.

Description

Beauty processing method and apparatus, and mobile device
Technical field
The present invention relates to the field of mobile device technology, and in particular to a beauty processing method and apparatus, and a mobile device.
Background art
With the development of mobile devices, users can take photos with them, and during photographing there is a demand to beautify the captured pictures.
Summary of the invention
The present invention is intended to solve, at least to some extent, one of the technical problems in the related art.
Therefore, the present invention proposes a beauty processing method and apparatus, and a mobile device. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
The beauty processing method proposed by the embodiment of the first aspect of the present invention includes: collecting a speckle pattern corresponding to the face based on structured light projected onto the face; comparing the depth information of the speckle pattern with the depth information of a prestored 3D face model to obtain a comparison result; and performing beauty processing on a plane image corresponding to the face according to the comparison result.
In the beauty processing method proposed by the embodiment of the first aspect of the present invention, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
The beauty processing apparatus proposed by the embodiment of the second aspect of the present invention includes: an acquisition module configured to collect a speckle pattern corresponding to the face based on structured light projected onto the face; a comparison module configured to compare the depth information of the speckle pattern with the depth information of a prestored 3D face model to obtain a comparison result; and a beauty module configured to perform beauty processing on a plane image corresponding to the face according to the comparison result.
In the beauty processing apparatus proposed by the embodiment of the second aspect of the present invention, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
The beauty processing apparatus proposed by the embodiment of the third aspect of the present invention includes: a processor; and a memory for storing processor-executable instructions, wherein the processor is configured to: collect a speckle pattern corresponding to the face based on structured light projected onto the face; compare the depth information of the speckle pattern with the depth information of a prestored 3D face model to obtain a comparison result; and perform beauty processing on a plane image corresponding to the face according to the comparison result.
In the beauty processing apparatus proposed by the embodiment of the third aspect of the present invention, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
The embodiment of the fourth aspect of the present invention proposes a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a beauty processing method, the method including: collecting a speckle pattern corresponding to the face based on structured light projected onto the face; comparing the depth information of the speckle pattern with the depth information of a prestored 3D face model to obtain a comparison result; and performing beauty processing on a plane image corresponding to the face according to the comparison result.
In the non-transitory computer-readable storage medium proposed by the embodiment of the fourth aspect of the present invention, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
The fifth aspect of the present invention further proposes a mobile device including a memory and a processor. Computer-readable instructions are stored in the memory, and when the instructions are executed by the processor, the processor performs the beauty processing method proposed by the embodiment of the first aspect of the present invention.
In the mobile device proposed by the embodiment of the fifth aspect of the present invention, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
Additional aspects and advantages of the present invention will be set forth in part in the following description, will in part become apparent from that description, or will be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a beauty processing method proposed by an embodiment of the present invention;
Fig. 2 is a schematic diagram of structured light in the related art;
Fig. 3 is a schematic diagram of the projection set of structured light in an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a beauty processing method proposed by another embodiment of the present invention;
Fig. 5 is a schematic diagram of an apparatus for projecting structured light;
Fig. 6 is a schematic flowchart of a beauty processing method proposed by yet another embodiment of the present invention;
Fig. 7a is a schematic diagram of a sub-image region in an embodiment of the present invention;
Fig. 7b is a schematic diagram of another sub-image region in an embodiment of the present invention;
Fig. 8 is a schematic flowchart of a beauty processing method proposed by still another embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a beauty processing apparatus proposed by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a beauty processing apparatus proposed by another embodiment of the present invention;
Fig. 11 is a schematic diagram of an image processing circuit in one embodiment.
Embodiment
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting it. On the contrary, embodiments of the invention include all changes, modifications and equivalents falling within the spirit and scope of the appended claims.
Fig. 1 is a schematic flowchart of a beauty processing method proposed by an embodiment of the present invention.
Embodiments of the present invention can be applied while a user takes photos with a mobile device, or while a user takes photos with a camera apparatus; no limitation is imposed here.
Further, the user can perform beauty processing during photographing through a camera-type application program installed in the mobile device; no limitation is imposed here.
An application program can refer to a software program running on an electronic device. The electronic device is, for example, a personal computer (PC), a cloud device, or a mobile device such as a smartphone or a tablet computer. A terminal can be a hardware device with various operating systems, such as a smartphone, tablet computer, personal digital assistant or e-book reader; no limitation is imposed here.
It should be noted that the executing body of the embodiment of the present invention can be, in hardware, for example the central processing unit (CPU) of the mobile device, and in software, for example a camera-related service in the mobile device; no limitation is imposed here.
Referring to Fig. 1, the method includes:
Step 101: Collect a speckle pattern corresponding to the face based on structured light projected onto the face.
It is known that the set of light beams projected in given spatial directions is collectively referred to as structured light. As shown in Fig. 2, a schematic diagram of structured light in the related art, the device generating structured light can be a projector or instrument that projects light spots, lines, gratings, grids or speckles onto the measured object, or a laser that generates a laser beam.
Optionally, referring to Fig. 3, a schematic diagram of the projection set of structured light in an embodiment of the present invention, the projection set of structured light is exemplified here as a set of points, which can be referred to as a speckle set.
In the embodiment of the present invention, the projection set corresponding to the structured light is specifically a speckle set; that is, the device for projecting structured light projects light spots onto the measured object, thereby generating a speckle set of the measured object under structured light, rather than projecting lines, gratings or grids. Because the storage space required for a speckle set is smaller, the operating efficiency of the mobile device is not affected and the storage space of the device is saved.
In an embodiment of the present invention, structured light can be projected onto the face, and image data related to the face under structured light can be collected. Owing to the physical characteristics of structured light, the image data collected through structured light can reflect the depth information of the face, for example the 3D information of the face. By performing beauty processing during photographing based on this depth information, the beauty effect is improved.
Optionally, in some embodiments, referring to Fig. 4, before step 101 the method further includes:
Step 100: Project structured light when a face is identified in the shooting region.
In an embodiment of the present invention, a device capable of projecting structured light can be configured in the mobile device in advance; then, when a face is identified in the shooting region, the device for projecting structured light is turned on to project structured light.
Referring to Fig. 5, a schematic diagram of an apparatus for projecting structured light, exemplified with a projection set in the form of lines (the principle for a speckle projection set is similar): the apparatus can include a projector and a camera. The projector projects structured light of a certain pattern onto the surface of the measured object, forming on that surface a three-dimensional image of lines modulated by the surface shape of the measured object. The three-dimensional image is detected by the camera at another position to obtain a two-dimensional distorted image of the lines. The degree of distortion of the lines depends on the relative position of the projector and the camera and on the surface profile of the measured object. Intuitively, the displacement (or offset) shown along the lines is proportional to the surface height of the measured object, distortions of the lines indicate changes in the surface, and discontinuities show physical gaps in the surface. When the relative position of the projector and the camera is fixed, the three-dimensional surface profile of the measured object can be reproduced from the coordinates of the two-dimensional distorted line image.
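The proportionality between line displacement and surface height described above can be sketched numerically. This is a minimal illustration under a simplified parallel-axis geometry; the function name, the small-height approximation and all calibration values are assumptions chosen for illustration, not parameters given in the patent.

```python
# Illustrative sketch of the triangulation principle: under a simplified
# parallel-axis geometry, the displacement d (pixels) of a projected line
# observed by the camera is roughly proportional to the surface height h,
# so h can be recovered once baseline, focal length and reference distance
# are calibrated. All parameter values here are assumed, not from the patent.

def height_from_displacement(d_px, baseline_mm=50.0,
                             focal_px=800.0, z0_mm=400.0):
    """Recover surface height (mm) from observed line displacement (px).

    Uses the small-height approximation d ~ f * b * h / z0^2,
    inverted to h ~ d * z0^2 / (f * b).
    """
    return d_px * z0_mm ** 2 / (focal_px * baseline_mm)

# A line shifted twice as far maps to a surface point twice as high:
print(height_from_displacement(1.0))  # 4.0 (mm)
print(height_from_displacement(2.0))  # 8.0 (mm)
```

The linear relation only holds for heights small relative to the reference distance, which is why calibrated systems refine it with a full triangulation model.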
The shooting region can be, for example, the focusing region for photographing after a camera-type application program is opened on the mobile device. The face to be photographed can be identified based on the focusing region, and when a face is identified, the device for projecting structured light is triggered to turn on and project structured light.
By projecting structured light only when a face is identified in the shooting region, the energy consumption of the mobile device can be saved.
Step 102: Compare the depth information of the speckle pattern with the depth information of the prestored 3D face model to obtain a comparison result.
The depth information can specifically be, for example, the contour of the face and distances on the face. The contour can be, for example, the coordinate values of each point on the face in a spatial rectangular coordinate system, and a distance can be, for example, the distance of each point on the face relative to a reference position, which can be some position on the mobile device; no limitation is imposed here.
Specifically, the depth information can be obtained from the distortion of the speckle image.
According to the physical characteristics of structured light, if it is projected onto a three-dimensional measured object, the speckles in the projected set are distorted in the speckle image; that is, the arrangement of certain speckles is offset relative to other speckles.
Therefore, in an embodiment of the present invention, the coordinates of the distorted two-dimensional speckle image can be determined from these offsets as the corresponding depth information, and the 3D information of the face can be restored directly from this depth information.
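The step of turning speckle offsets into depth values can be sketched as follows, assuming the reference (undistorted) speckle positions are known from calibration and a linear offset-to-depth conversion; both the function and the calibration factor are illustrative, not taken from the patent.

```python
# Hypothetical sketch: each speckle's horizontal shift between the observed
# pattern and a calibrated reference pattern is converted into a relative
# depth value via an assumed linear calibration factor (not from the patent).

def depths_from_offsets(ref_pts, obs_pts, mm_per_px=4.0):
    """Map per-speckle horizontal offsets (px) to relative depths (mm)."""
    return [(ox - rx) * mm_per_px
            for (rx, _ry), (ox, _oy) in zip(ref_pts, obs_pts)]

ref = [(10.0, 5.0), (20.0, 5.0), (30.0, 5.0)]  # calibrated reference positions
obs = [(10.5, 5.0), (21.0, 5.0), (30.0, 5.0)]  # positions observed on the face
print(depths_from_offsets(ref, obs))  # [2.0, 4.0, 0.0]
```

A speckle with a larger shift relative to its reference position maps to a larger relative depth, which is the offset-to-depth idea the paragraph describes.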
In an embodiment of the present invention, the prestored 3D face model and its corresponding depth information are predetermined. The prestored 3D face model is a reference 3D face model, and the corresponding depth information is the depth information of that reference model, for example the 3D face model of a fashion model or a celebrity; no limitation is imposed here.
The prestored 3D face model can be one chosen by the user from multiple first 3D face models.
Optionally, in some embodiments, the comparison result is the difference information between the depth information of the speckle pattern and the depth information of the prestored 3D face model.
In an embodiment of the present invention, because the prestored 3D face model is a reference 3D face model whose corresponding depth information is reference depth information, comparing the depth information of the speckle pattern with the depth information of the prestored 3D face model to obtain a comparison result supports subsequent beauty processing of the plane image corresponding to the face based on that result, so that targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
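The comparison producing the difference information can be sketched as follows; the per-point depth representation, tolerance and depth values are illustrative assumptions, not details specified by the patent.

```python
# Sketch of the comparison step: per-point depths measured from the speckle
# pattern are compared against the prestored reference model, and points
# differing beyond a tolerance form the difference information.

def compare_depth(face_depth, model_depth, tol=0.5):
    """Return (index, difference) pairs where depths differ beyond tol (mm)."""
    return [(i, f - m)
            for i, (f, m) in enumerate(zip(face_depth, model_depth))
            if abs(f - m) > tol]

face = [10.0, 12.0, 9.8, 15.0]    # depths measured from the speckle pattern
model = [10.0, 11.0, 10.0, 15.0]  # depths of the prestored reference model
print(compare_depth(face, model))  # [(1, 1.0)] — only point 1 differs
```

Only the points carrying difference information need to be touched later, which is what makes the subsequent beauty processing targeted.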
Step 103: Perform beauty processing on the plane image corresponding to the face according to the comparison result.
The plane image corresponding to the face can be collected by the camera of the mobile device; for example, it can be collected at the same time as the speckle pattern corresponding to the face.
It can be understood that, based on the structural features of the face, the depth information corresponding to different pixels in the plane image corresponding to the face may be different or the same.
Therefore, embodiments of the present invention can perform beauty processing on the plane image corresponding to the face according to the difference information between the depth information of the speckle pattern of the face and the depth information of the prestored 3D face model. For example, if the contour of the face in the depth information of the prestored 3D face model is oval, the plane image can be beautified based on that oval face contour; no limitation is imposed here.
In this embodiment, a speckle pattern corresponding to the face is collected based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the prestored 3D face model to obtain a comparison result; and beauty processing is performed on the plane image corresponding to the face according to the comparison result. Because the depth information is derived from the speckle pattern collected under structured light, the data source is accurate; and by comparing the depth information of the speckle pattern corresponding to the face with the depth information of the prestored 3D face model, targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
Fig. 6 is a schematic flowchart of a beauty processing method proposed by another embodiment of the present invention.
This embodiment is exemplified with the comparison result being the difference information between the depth information of the speckle pattern and the depth information of the prestored 3D face model; no limitation is imposed here.
The difference information can be, for example, a luminance difference, a skin-color difference, or a shadow-effect difference; no limitation is imposed here.
Referring to Fig. 6, the method includes:
Step 601: Collect a speckle pattern corresponding to the face based on structured light projected onto the face.
In an embodiment of the present invention, structured light can be projected onto the face, and image data related to the face under structured light can be collected. Owing to the physical characteristics of structured light, the image data collected through structured light can reflect the depth information of the face, for example the 3D information of the face. By performing beauty processing during photographing based on this depth information, the beauty effect is improved.
Step 602: Compare the depth information of the speckle pattern with the depth information of the prestored 3D face model to obtain a comparison result.
The depth information can specifically be, for example, the contour of the face and distances on the face. The contour can be, for example, the coordinate values of each point on the face in a spatial rectangular coordinate system, and a distance can be, for example, the distance of each point on the face relative to a reference position, which can be some position on the mobile device; no limitation is imposed here.
Specifically, the depth information can be obtained from the distortion of the speckle image.
According to the physical characteristics of structured light, if it is projected onto a three-dimensional measured object, the speckles in the projected set are distorted in the speckle image; that is, the arrangement of certain speckles is offset relative to other speckles.
Therefore, in an embodiment of the present invention, the coordinates of the distorted two-dimensional speckle image can be determined from these offsets as the corresponding depth information, and the 3D information of the face can be restored directly from this depth information.
In an embodiment of the present invention, the prestored 3D face model and its corresponding depth information are predetermined. The prestored 3D face model is a reference 3D face model, and the corresponding depth information is the depth information of that reference model, for example the 3D face model of a fashion model or a celebrity; no limitation is imposed here.
The prestored 3D face model can be one chosen by the user from multiple first 3D face models.
In an embodiment of the present invention, because the prestored 3D face model is a reference 3D face model whose corresponding depth information is reference depth information, comparing the depth information of the speckle pattern with the depth information of the prestored 3D face model to obtain a comparison result supports subsequent beauty processing of the plane image corresponding to the face based on that result, so that targeted beauty processing is performed, improving the efficiency and effect of beauty processing.
Step 603: Determine, according to the depth information of the speckle pattern, multiple sub-image regions in the plane image corresponding to different depth information.
The plane image corresponding to the face can be collected by the camera of the mobile device; for example, it can be collected at the same time as the speckle pattern corresponding to the face.
It can be understood that, based on the structural features of the face, the depth information corresponding to different pixels in the plane image corresponding to the face may be different or the same.
Therefore, in an embodiment of the present invention, multiple sub-image regions corresponding to different depth information in the plane image can first be determined. For example, multiple pixels in the plane image whose depth information is identical and whose positions are pairwise adjacent can be determined as one sub-image region, and multiple sub-image regions can be determined by the same method. Within a sub-image region, the depth information of every pixel is consistent, and the depth information of pixels in different sub-image regions is inconsistent.
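The grouping just described (pixels with identical depth that are pairwise adjacent forming one sub-image region) can be sketched as a 4-connected flood fill over a depth map; the toy depth values are illustrative.

```python
def sub_image_regions(depth):
    """Label 4-connected regions of pixels sharing an identical depth value."""
    h, w = len(depth), len(depth[0])
    labels = [[-1] * w for _ in range(h)]
    n_regions = 0
    for i in range(h):
        for j in range(w):
            if labels[i][j] != -1:
                continue
            # flood-fill one region of equal-depth, adjacent pixels
            stack = [(i, j)]
            labels[i][j] = n_regions
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and depth[ny][nx] == depth[y][x]):
                        labels[ny][nx] = n_regions
                        stack.append((ny, nx))
            n_regions += 1
    return labels, n_regions

# Toy 2x3 depth map containing two regions of constant depth:
depth = [[1, 1, 2],
         [1, 2, 2]]
labels, n = sub_image_regions(depth)
print(labels)  # [[0, 0, 1], [0, 1, 1]]
print(n)       # 2
```

Each resulting label corresponds to one sub-image region in the sense of step 603: constant depth inside a region, differing depth between regions.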
Referring to Fig. 7a and Fig. 7b: Fig. 7a is a schematic diagram of a sub-image region in an embodiment of the present invention, in which the sub-image region is located on the forehead at the brow peak; Fig. 7b is a schematic diagram of another sub-image region in an embodiment of the present invention, in which the sub-image region is located on the forehead at the hairline. It can be understood that each plane image can include one or more sub-image regions.
Step 604: Among the multiple sub-image regions, determine the sub-image region corresponding to the depth information to which the difference information belongs as the target sub-image region.
In an embodiment of the present invention, the sub-image region corresponding to the depth information to which the difference information between the depth information of the speckle pattern in step 602 and the depth information of the pre-stored face 3D model belongs may be determined as the target sub-image region; that is, the sub-image region whose depth information shows a difference in the planar image is taken as the target sub-image region.
Step 605: Adjust the target sub-image region according to the depth information of the pre-stored face 3D model.
In an embodiment of the present invention, because the pre-stored face 3D model is a reference face 3D model and its corresponding depth information is the depth information of that reference model, adjusting the target sub-image region according to the depth information of the pre-stored face 3D model performs targeted beauty processing, improving beauty-processing efficiency and effect. Moreover, only the target sub-image region is adjusted rather than the whole planar image, which further ensures targeted processing and improves processing efficiency.
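One minimal way to realise step 605 — adjusting only the target sub-image regions while leaving matching regions untouched — is sketched below. The concrete adjustment rule (pulling differing depth values to the reference model's values) is an assumption for illustration; the patent does not specify it.

```python
import numpy as np

def adjust_target_regions(captured_depth, reference_depth, tolerance=0.0):
    """Return an adjusted depth map in which only the pixels whose depth
    differs from the pre-stored reference model (the target sub-image
    regions) are pulled to the reference values."""
    captured = np.asarray(captured_depth, dtype=float)
    reference = np.asarray(reference_depth, dtype=float)
    diff_mask = np.abs(captured - reference) > tolerance  # the difference information
    adjusted = captured.copy()
    adjusted[diff_mask] = reference[diff_mask]  # adjust only target regions
    return adjusted, diff_mask
```

Pixels outside the mask are left exactly as captured, which is the point of restricting the adjustment to the target sub-image region instead of the whole planar image.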
In this embodiment, the speckle pattern corresponding to the face is captured based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the pre-stored face 3D model to obtain a comparison result; and beauty processing is performed on the planar image corresponding to the face according to the comparison result. Because the depth information of the speckle pattern is captured based on structured light, the data source is accurate, and comparing it with the depth information of the pre-stored face 3D model performs targeted beauty processing, improving beauty-processing efficiency and effect. Because the pre-stored face 3D model is a reference face 3D model whose corresponding depth information is that of the reference model, adjusting the target sub-image region according to this depth information performs targeted beauty processing. Determining multiple sub-image regions corresponding to different pieces of depth information in the planar image is simple to implement and makes the beauty effect more accurate, and adjusting the target sub-image region instead of the whole planar image further ensures targeted processing and improves processing efficiency.
Fig. 8 is a schematic flowchart of a beauty processing method proposed by another embodiment of the present invention.
Referring to Fig. 8, this method includes:
Step 801: Obtain multiple face 3D models, and, based on structured light projected onto each face 3D model, capture the speckle pattern corresponding to each face 3D model.
Step 802: Take the depth information of the speckle pattern corresponding to each face 3D model, together with the face 3D model it belongs to, as a first face 3D model, obtaining multiple first face 3D models.
In an embodiment of the present invention, a face 3D model may be, for example, the face 3D model of a fashion model or of a celebrity; no restriction is imposed here.
By taking multiple face 3D models as reference face 3D models and subsequently comparing the depth information of the face speckle pattern directly with the depth information of a reference face 3D model, the beauty effect and the degree of automation of beauty processing can be improved.
The multiple face 3D models may be obtained from experimental data of beauty-related applications, or from web pages using web techniques such as crawlers. Based on structured light projected onto each face 3D model, the speckle pattern corresponding to each model is captured, and the depth information of each speckle pattern, together with the face 3D model it belongs to, is taken as a first face 3D model, obtaining multiple first face 3D models. By determining multiple first face 3D models, the embodiment of the present invention can meet the personalized beauty needs of the photographing user and improve user stickiness.
Further, after the multiple first face 3D models are obtained in this way, they may be saved in local storage so that they can subsequently be called directly from local storage, improving beauty-processing efficiency.
Step 803: Capture the speckle pattern corresponding to the face based on structured light projected onto the face.
In an embodiment of the present invention, structured light may be projected onto the face and image data related to the face may be captured based on the structured light. Owing to the physical characteristics of structured light, the image data captured through it can reflect the depth information of the face, which may be, for example, the 3D information of the face; performing beauty processing during photographing based on this depth information improves the beauty effect.
Step 804: Receive a user's selection instruction for the multiple first face 3D models.
Step 805: Take the first face 3D model corresponding to the selection instruction as the pre-stored face 3D model.
After the multiple first face 3D models are obtained, an interface for the user to choose a first face 3D model may be configured, and a corresponding icon may be displayed for the interface, so that the user can directly choose the needed first face 3D model as the pre-stored face 3D model according to individual needs, improving the user experience.
Step 806: Compare the depth information of the speckle pattern with the depth information of the pre-stored face 3D model to obtain a comparison result.
The depth information may specifically include, for example, the contour of the face and distances on the face. The contour may be, for example, the coordinate values of each point on the face in a spatial rectangular coordinate system; the distance may be, for example, the distance of each point on the face relative to a reference position, which may be some position on the mobile device; no restriction is imposed here.
Specifically, the depth information may be obtained from the distortion of the speckle image.
According to the physical characteristics of structured light, when it is projected onto a three-dimensional measured object, the speckles in the projected speckle image are distorted; that is, the arrangement of certain speckles is offset relative to other speckles.
Therefore, in an embodiment of the present invention, the depth information corresponding to the coordinates of the distorted two-dimensional speckle image may be determined from these offsets, and the 3D information of the face may be restored directly from that depth information.
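The relationship between a speckle's offset and depth is the standard triangulation used by structured-light systems: the displacement of a speckle relative to the reference pattern is inversely proportional to the depth of the surface it lands on. A hedged sketch (the parameter names and the calibration values in the test are assumptions, not figures from the patent):

```python
def depth_from_offset(offset_px, focal_length_px, baseline_mm):
    """Triangulation: depth = f * b / disparity. A larger speckle offset
    means the surface point is closer to the camera."""
    if offset_px <= 0:
        raise ValueError("offset must be positive for a finite depth")
    return focal_length_px * baseline_mm / offset_px
```

Applying this per speckle point yields the depth map from which the 3D information of the face is restored.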
In an embodiment of the present invention, the pre-stored face 3D model and its corresponding depth information are predetermined. The pre-stored face 3D model is a reference face 3D model, and its corresponding depth information is the depth information of that reference model — for example, the face 3D model of a fashion model or of a celebrity; no restriction is imposed here.
The pre-stored face 3D model may be one chosen by the user from the multiple first face 3D models.
In an embodiment of the present invention, because the pre-stored face 3D model is a reference face 3D model whose corresponding depth information is that of the reference model, comparing the depth information of the speckle pattern with the depth information of the pre-stored face 3D model to obtain a comparison result can support subsequent beauty processing of the planar image corresponding to the face based on that result, performing targeted beauty processing and improving beauty-processing efficiency and effect.
Step 807: Perform beauty processing on the planar image corresponding to the face according to the comparison result.
The planar image corresponding to the face may be captured by the camera of the mobile device; for example, it may be captured at the same time as the speckle pattern corresponding to the face.
It can be understood that, owing to the structural features of a face, the depth information corresponding to different pixels in the planar image corresponding to the face may be the same or different.
Therefore, in embodiments of the present invention, beauty processing may be performed on the planar image corresponding to the face according to the difference information between the depth information of the speckle pattern of the face and the depth information of the pre-stored face 3D model. For example, if in the depth information of the pre-stored face 3D model the contour of the face is oval, the planar image may be beautified based on that oval face contour; no restriction is imposed here.
In this embodiment, determining multiple first face 3D models can meet the personalized beauty needs of the photographing user and improve user stickiness. By taking multiple face 3D models as reference face 3D models and subsequently comparing the depth information of the face speckle pattern directly with the depth information of a reference face 3D model, the beauty effect and the degree of automation can be improved. The speckle pattern corresponding to the face is captured based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the pre-stored face 3D model to obtain a comparison result; and beauty processing is performed on the planar image corresponding to the face according to the comparison result. Because the depth information of the speckle pattern is captured based on structured light, the data source is accurate, and comparing it with the depth information of the pre-stored face 3D model performs targeted beauty processing, improving efficiency and effect. Receiving the user's selection instruction for the multiple first face 3D models and taking the first face 3D model corresponding to that instruction as the pre-stored face 3D model improves the user experience.
Fig. 9 is a schematic structural diagram of a beauty processing apparatus proposed by one embodiment of the present invention.
Referring to Fig. 9, the apparatus 900 includes:
an acquisition module 901, configured to capture the speckle pattern corresponding to the face based on structured light projected onto the face;
a comparing module 902, configured to compare the depth information of the speckle pattern with the depth information of the pre-stored face 3D model to obtain a comparison result; and
a beauty module 903, configured to perform beauty processing on the planar image corresponding to the face according to the comparison result.
Optionally, in some embodiments, the comparison result is the difference information between the depth information of the speckle pattern and the depth information of the pre-stored face 3D model. Referring to Fig. 10, the beauty module 903 includes:
a first determination sub-module 9031, configured to determine, according to the depth information of the speckle pattern, multiple sub-image regions corresponding to different pieces of depth information in the planar image;
a second determination sub-module 9032, configured to determine, among the multiple sub-image regions, the sub-image region corresponding to the depth information to which the difference information belongs as the target sub-image region; and
an adjustment sub-module 9033, configured to adjust the target sub-image region according to the depth information of the pre-stored face 3D model.
Optionally, in some embodiments, the apparatus 900 further includes:
a projection module 904, configured to project structured light when a face is identified in the shooting region;
an obtaining module 905, configured to obtain multiple face 3D models and, based on structured light projected onto each face 3D model, capture the speckle pattern corresponding to each face 3D model;
a determining module 906, configured to take the depth information of the speckle pattern corresponding to each face 3D model, together with the face 3D model it belongs to, as a first face 3D model, obtaining multiple first face 3D models;
a receiving module 907, configured to receive a user's selection instruction for the multiple first face 3D models; and
a choosing module 908, configured to take the first face 3D model corresponding to the selection instruction as the pre-stored face 3D model.
It should be noted that the description of the beauty processing method embodiments in Figs. 1-8 above also applies to the beauty processing apparatus 900 of this embodiment; the implementation principle is similar and is not repeated here.
In this embodiment, the speckle pattern corresponding to the face is captured based on structured light projected onto the face; the depth information of the speckle pattern is compared with the depth information of the pre-stored face 3D model to obtain a comparison result; and beauty processing is performed on the planar image corresponding to the face according to the comparison result. Because the depth information of the speckle pattern is captured based on structured light, the data source is accurate, and comparing it with the depth information of the pre-stored face 3D model performs targeted beauty processing, improving efficiency and effect.
An embodiment of the present invention also provides a mobile device. The mobile device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Fig. 11 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 11, for ease of description, only the aspects of the image processing technique related to the embodiment of the present invention are shown.
As shown in Fig. 11, the image processing circuit includes an imaging device 910, an ISP processor 930 and control logic 940. The imaging device 910 may include a camera with one or more lenses 912 and an image sensor 914, and a structured-light projector 916. The structured-light projector 916 projects structured light onto the measured object; the structured-light pattern may be laser stripes, a Gray code, sinusoidal stripes, a randomly arranged speckle pattern, or the like. The image sensor 914 captures the structured-light image formed by the projection onto the measured object and sends it to the ISP processor 930, which demodulates the structured-light image to obtain the depth information of the measured object. Meanwhile, the image sensor 914 may also capture the colour information of the measured object. Of course, the structured-light image and the colour information of the measured object may also be captured by two separate image sensors 914.
Taking speckle structured light as an example, the ISP processor 930 demodulates the structured-light image as follows: the speckle image of the measured object is extracted from the structured-light image; image-data calculation is performed on the speckle image of the measured object and a reference speckle image according to a predetermined algorithm, obtaining the displacement of each speckle point of the speckle image on the measured object relative to the corresponding reference speckle in the reference speckle image; the depth value of each speckle point of the speckle image is calculated using triangulation; and the depth information of the measured object is obtained from these depth values.
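The per-speckle displacement this demodulation step needs can be found by sliding a captured speckle patch along the reference speckle image and taking the offset with the lowest matching cost. The sketch below uses a 1-D sum-of-absolute-differences search purely for illustration; the "predetermined algorithm" in the text is not disclosed, so this is an assumed stand-in.

```python
import numpy as np

def match_offset(patch, reference_row):
    """Return the position in `reference_row` where `patch` matches best
    (minimum sum of absolute differences). Comparing this position with
    the patch's position in the captured image gives the displacement
    fed into the triangulation step."""
    patch = np.asarray(patch, dtype=float)
    ref = np.asarray(reference_row, dtype=float)
    w = len(patch)
    costs = [np.abs(ref[i:i + w] - patch).sum() for i in range(len(ref) - w + 1)]
    return int(np.argmin(costs))
```

Real systems search 2-D windows with normalized correlation, but the principle — best match against the reference pattern yields the displacement — is the same.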
Of course, the depth image information may also be obtained by a binocular-vision method or by a time-of-flight (TOF) method; this is not limited here — any method by which the depth information of the measured object can be obtained or calculated falls within the scope of this embodiment.
After the ISP processor 930 receives the colour information of the measured object captured by the image sensor 914, it may process the image data corresponding to that colour information. The ISP processor 930 analyses the image data to obtain image statistics that can be used to determine one or more control parameters of the imaging device 910. The image sensor 914 may include a colour filter array (such as a Bayer filter); it obtains the light intensity and wavelength information captured by each imaging pixel and provides a set of raw image data that can be processed by the ISP processor 930.
The ISP processor 930 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 930 may perform one or more image processing operations on the raw image data and collect image statistics about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 930 may also receive pixel data from an image memory 920. The image memory 920 may be a part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving the raw image data, the ISP processor 930 may perform one or more image processing operations.
After the ISP processor 930 obtains the colour information and the depth information of the measured object, they may be fused to obtain a three-dimensional image. The features of the measured object may be extracted by at least one of an appearance-contour extraction method or a contour-feature extraction method — for example, the active shape model (ASM) method, the active appearance model (AAM) method, principal component analysis (PCA), or the discrete cosine transform (DCT) method; no restriction is imposed here. The features of the measured object extracted from the depth information and those extracted from the colour information are then registered and fused. The fusion here may be a direct combination of the features extracted from the depth information and the colour information, or a combination of identical features from different images after weights are set; other fusion manners are also possible. Finally, the three-dimensional image is generated according to the fused features.
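The weighted combination of registered depth and colour features described above can be written in one line; the equal default weights and the function name are assumptions for illustration, since the text leaves the weight setting open.

```python
import numpy as np

def fuse_features(depth_features, color_features, w_depth=0.5, w_color=0.5):
    """Fuse registered feature vectors from the depth map and the colour
    image by per-source weights before generating the 3D image."""
    depth_features = np.asarray(depth_features, dtype=float)
    color_features = np.asarray(color_features, dtype=float)
    assert depth_features.shape == color_features.shape, "features must be registered first"
    return w_depth * depth_features + w_color * color_features
```

Setting `w_depth=1.0, w_color=1.0` recovers the "direct combination" variant the text also mentions.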
The image data of the three-dimensional image may be sent to the image memory 920 for additional processing before being displayed. The ISP processor 930 receives the processed data from the image memory 920 and performs image-data processing on it in the raw domain and in the RGB and YCbCr colour spaces. The image data of the three-dimensional image may be output to a display 960 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 930 may also be sent to the image memory 920, and the display 960 may read image data from the image memory 920. In one embodiment, the image memory 920 may be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 930 may be sent to an encoder/decoder 950 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 960. The encoder/decoder 950 may be implemented by a CPU, a GPU, or a coprocessor.
The image statistics determined by the ISP processor 930 may be transmitted to the control logic 940. The control logic 940 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), which may determine control parameters of the imaging device 910 according to the received image statistics.
In the embodiment of the present invention, for the steps of implementing the beauty processing method with the image processing technique of Fig. 11, reference may be made to the above embodiments, which will not be repeated here.
In order to implement the above embodiments, the present invention also proposes a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a beauty processing method, the method including: capturing, based on structured light projected onto a face, a speckle pattern corresponding to the face; comparing the depth information of the speckle pattern with the depth information of a pre-stored face 3D model to obtain a comparison result; and performing beauty processing on the planar image corresponding to the face according to the comparison result.
With the non-transitory computer-readable storage medium of this embodiment, the speckle pattern corresponding to the face is captured based on structured light projected onto the face, its depth information is compared with the depth information of the pre-stored face 3D model to obtain a comparison result, and beauty processing is performed on the planar image corresponding to the face according to that result. Because the depth information of the speckle pattern is captured based on structured light, the data source is accurate, and comparing it with the depth information of the pre-stored face 3D model performs targeted beauty processing, improving efficiency and effect.
In order to implement the above embodiments, the present invention also proposes a computer program product. When the instructions in the computer program product are executed by a processor, a beauty processing method is performed, the method including: capturing, based on structured light projected onto a face, a speckle pattern corresponding to the face; comparing the depth information of the speckle pattern with the depth information of a pre-stored face 3D model to obtain a comparison result; and performing beauty processing on the planar image corresponding to the face according to the comparison result.
With the computer program product of this embodiment, beauty processing is likewise performed based on the accurately sourced depth information of the structured-light speckle pattern compared with the depth information of the pre-stored face 3D model, performing targeted beauty processing and improving efficiency and effect.
It should be noted that, in the description of the present invention, the terms "first", "second", etc. are used for description purposes only and shall not be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise specified, "multiple" means two or more.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, fragment or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one or a combination of the following techniques known in the art: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium and, when executed, includes one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace and vary the above embodiments within the scope of the present invention.

Claims (12)

1. A beauty processing method, characterised by comprising the following steps:
capturing, based on structured light projected onto a face, a speckle pattern corresponding to the face;
comparing depth information of the speckle pattern with depth information of a pre-stored face 3D model to obtain a comparison result; and
performing beauty processing on a planar image corresponding to the face according to the comparison result.
2. The beauty processing method according to claim 1, characterised in that the comparison result is difference information between the depth information of the speckle pattern and the depth information of the pre-stored face 3D model, and performing beauty processing on the planar image corresponding to the face according to the comparison result comprises:
determining, according to the depth information of the speckle pattern, multiple sub-image regions corresponding to different pieces of depth information in the planar image;
determining, among the multiple sub-image regions, the sub-image region corresponding to the depth information to which the difference information belongs as a target sub-image region; and
adjusting the target sub-image region according to the depth information of the pre-stored face 3D model.
3. The beauty processing method according to claim 1, characterised in that, before capturing, based on structured light projected onto the face, the speckle pattern corresponding to the face, the method further comprises:
projecting the structured light when the face is identified in a shooting region.
4. The beauty processing method according to any one of claims 1-3, characterised in that, before capturing, based on structured light projected onto the face, the speckle pattern corresponding to the face, the method further comprises:
obtaining multiple face 3D models, and capturing, based on structured light projected onto each face 3D model, a speckle pattern corresponding to each face 3D model; and
taking the depth information of the speckle pattern corresponding to each face 3D model, together with the face 3D model it belongs to, as a first face 3D model, obtaining multiple first face 3D models.
5. The beauty processing method according to claim 4, characterised in that, before comparing the depth information of the speckle pattern with the depth information of the pre-stored face 3D model, the method further comprises:
receiving a user's selection instruction for the multiple first face 3D models; and
taking the first face 3D model corresponding to the selection instruction as the pre-stored face 3D model.
6. A beauty processing apparatus, characterised by comprising:
an acquisition module, configured to capture, based on structured light projected onto a face, a speckle pattern corresponding to the face;
a comparing module, configured to compare depth information of the speckle pattern with depth information of a pre-stored face 3D model to obtain a comparison result; and
a beauty module, configured to perform beauty processing on a planar image corresponding to the face according to the comparison result.
7. The beauty processing apparatus according to claim 6, characterised in that the comparison result is difference information between the depth information of the speckle pattern and the depth information of the pre-stored face 3D model, and the beauty module comprises:
a first determination sub-module, configured to determine, according to the depth information of the speckle pattern, multiple sub-image regions corresponding to different pieces of depth information in the planar image;
a second determination sub-module, configured to determine, among the multiple sub-image regions, the sub-image region corresponding to the depth information to which the difference information belongs as a target sub-image region; and
an adjustment sub-module, configured to adjust the target sub-image region according to the depth information of the pre-stored face 3D model.
8. The beauty processing apparatus according to claim 6, characterised by further comprising:
a projection module, configured to project the structured light when the face is identified in a shooting region.
9. The beauty processing apparatus according to any one of claims 6-8, characterised by further comprising:
an obtaining module, configured to obtain multiple face 3D models and capture, based on structured light projected onto each face 3D model, a speckle pattern corresponding to each face 3D model; and
a determining module, configured to take the depth information of the speckle pattern corresponding to each face 3D model, together with the face 3D model it belongs to, as a first face 3D model, obtaining multiple first face 3D models.
10. The beautification processing apparatus according to claim 9, characterized by further comprising:
a receiving module, configured to receive a selection instruction of a user for the plurality of first face 3D models;
a selection module, configured to select the first face 3D model corresponding to the selection instruction as the prestored face 3D model.
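Claims 9 and 10 describe collecting speckle depth for several candidate face 3D models and letting the user pick the one to prestore. A compact sketch of that registry-and-selection flow, with all names hypothetical:

```python
class FaceModelRegistry:
    """Holds the 'first face 3D models': each candidate paired with its speckle depth."""

    def __init__(self):
        self._entries = []

    def register(self, model, speckle_depth):
        """Obtaining/determination modules: store a model with its depth information."""
        self._entries.append({"model": model, "depth": speckle_depth})
        return len(self._entries) - 1  # index usable in a selection instruction

    def models(self):
        """Expose the candidates so the user can issue a selection instruction."""
        return [entry["model"] for entry in self._entries]

    def choose(self, selection_index):
        """Selection module: the chosen entry becomes the prestored face 3D model."""
        entry = self._entries[selection_index]
        return entry["model"], entry["depth"]
```

The index returned by `register` plays the role of the user's selection instruction; a real device would present the candidates in a UI instead.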
11. A non-transitory computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the beautification processing method according to any one of claims 1-5.
12. A mobile device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the beautification processing method according to any one of claims 1 to 5.
CN201710643844.1A 2017-07-31 2017-07-31 Beauty treatment method and device and mobile equipment Active CN107480615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710643844.1A CN107480615B (en) 2017-07-31 2017-07-31 Beauty treatment method and device and mobile equipment

Publications (2)

Publication Number Publication Date
CN107480615A true CN107480615A (en) 2017-12-15
CN107480615B CN107480615B (en) 2020-01-10

Family

ID=60598156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710643844.1A Active CN107480615B (en) 2017-07-31 2017-07-31 Beauty treatment method and device and mobile equipment

Country Status (1)

Country Link
CN (1) CN107480615B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268928A * 2014-08-29 2015-01-07 小米科技有限责任公司 Picture processing method and device
CN105513007A * 2015-12-11 2016-04-20 惠州Tcl移动通信有限公司 Photographing beautification method and system based on a mobile terminal, and mobile terminal
CN106778524A * 2016-11-25 2017-05-31 努比亚技术有限公司 Facial attractiveness estimation apparatus based on dual-camera ranging, and method therefor

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647636A * 2018-05-09 2018-10-12 深圳阜时科技有限公司 Identity authentication method, identity authentication apparatus, and electronic device
CN108647636B * 2018-05-09 2024-03-05 深圳阜时科技有限公司 Identity authentication method, identity authentication apparatus, and electronic device
CN108550185A * 2018-05-31 2018-09-18 Oppo广东移动通信有限公司 Facial beautification processing method and apparatus
CN108682050A * 2018-08-16 2018-10-19 Oppo广东移动通信有限公司 Beautification method and apparatus based on a three-dimensional model
CN109194943A * 2018-08-29 2019-01-11 维沃移动通信有限公司 Image processing method and terminal device
CN109544445A * 2018-12-11 2019-03-29 维沃移动通信有限公司 Image processing method and apparatus, and mobile terminal
CN113379817A * 2021-01-12 2021-09-10 四川深瑞视科技有限公司 Speckle-based depth information acquisition method, apparatus, and system

Also Published As

Publication number Publication date
CN107480615B (en) 2020-01-10

Similar Documents

Publication Publication Date Title
CN107480615A Beautification processing method and apparatus, and mobile device
US9317970B2 Coupled reconstruction of hair and skin
JP6270157B2 Image processing system and image processing method
CN107517346A Photographing method and apparatus based on structured light, and mobile device
CN107452034B Image processing method and device
CN107392874A Beautification processing method and apparatus, and mobile device
CN107480613A Face recognition method and apparatus, mobile terminal, and computer-readable storage medium
CN107209007A Method, circuit, device, accessory, system and functionally associated computer-executable code for image acquisition with depth estimation
CN107481317A Facial adjustment method and apparatus for a face 3D model
CN107483845B Photographing method and apparatus
CN107370950B Focusing processing method and apparatus, and mobile terminal
CN107465906A Panoramic photographing method and apparatus for a scene, and terminal device
CN107481304A Method and apparatus for constructing a virtual image in a game scene
CN107610077A Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
CN107610171B Image processing method and device
CN107507269A Personalized three-dimensional model generation method and apparatus, and terminal device
CN107707831A Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
WO2017214735A1 Systems and methods for obtaining a structured light reconstruction of a 3d surface
US8633926B2 Mesoscopic geometry modulation
CN107463659A Object search method and apparatus
CN107438161A Captured picture processing method and apparatus, and terminal
CN107330974A Merchandise display method and apparatus, and mobile device
CN109191393A Beautification method based on a three-dimensional model
CN107705356A Image processing method and device
CN107437268A Photographing method and apparatus, mobile terminal, and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: Guangdong Opel Mobile Communications Co., Ltd.

GR01 Patent grant
GR01 Patent grant