CN109190533A - Image processing method and device, electronic equipment, computer readable storage medium - Google Patents


Info

Publication number
CN109190533A
CN109190533A (application CN201810961977.8A)
Authority
CN
China
Prior art keywords
image
face
target
dimensional model
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810961977.8A
Other languages
Chinese (zh)
Other versions
CN109190533B (en)
Inventor
黄杰文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810961977.8A priority Critical patent/CN109190533B/en
Publication of CN109190533A publication Critical patent/CN109190533A/en
Application granted granted Critical
Publication of CN109190533B publication Critical patent/CN109190533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

This application relates to an image processing method, an apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining a first image and a corresponding second image, the second image including depth information corresponding to the first image; performing three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model; extracting facial features from the target face three-dimensional model and obtaining facial adjustment parameters corresponding to the facial features; and adjusting the facial features in the target face three-dimensional model according to the facial adjustment parameters. The above image processing method, apparatus, electronic device, and computer-readable storage medium allow images to be processed more accurately.

Description

Image processing method and device, electronic equipment, computer readable storage medium
Technical field
This application relates to the field of computer technology, and in particular to an image processing method, an apparatus, an electronic device, and a computer-readable storage medium.
Background technique
The image a camera captures of an object is usually composed of a two-dimensional pixel matrix, while a real object has three-dimensional spatial characteristics. Therefore, to express the features of an object more accurately, three-dimensional modeling can be performed on the object; the resulting three-dimensional model reflects the spatial characteristics of the object more realistically.
Summary of the invention
Embodiments of the present application provide an image processing method, an apparatus, an electronic device, and a computer-readable storage medium that can process images more accurately.
An image processing method, comprising:
Obtaining a first image and a corresponding second image, the second image including depth information corresponding to the first image;
Performing three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
Extracting facial features from the target face three-dimensional model, and obtaining facial adjustment parameters corresponding to the facial features;
Adjusting the facial features in the target face three-dimensional model according to the facial adjustment parameters.
An image processing apparatus, comprising:
An image acquisition module, configured to obtain a first image and a corresponding second image, the second image including depth information corresponding to the first image;
A model generation module, configured to perform three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
A parameter acquisition module, configured to extract facial features from the target face three-dimensional model and obtain facial adjustment parameters corresponding to the facial features;
A facial-feature adjustment module, configured to adjust the facial features in the target face three-dimensional model according to the facial adjustment parameters.
An electronic device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the following steps:
Obtaining a first image and a corresponding second image, the second image including depth information corresponding to the first image;
Performing three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
Extracting facial features from the target face three-dimensional model, and obtaining facial adjustment parameters corresponding to the facial features;
Adjusting the facial features in the target face three-dimensional model according to the facial adjustment parameters.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
Obtaining a first image and a corresponding second image, the second image including depth information corresponding to the first image;
Performing three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
Extracting facial features from the target face three-dimensional model, and obtaining facial adjustment parameters corresponding to the facial features;
Adjusting the facial features in the target face three-dimensional model according to the facial adjustment parameters.
With the above image processing method, apparatus, electronic device, and computer-readable storage medium, a first image and a second image can be obtained, and three-dimensional modeling is performed on a target face according to the first image and the second image to obtain a target face three-dimensional model. The facial features in the target face three-dimensional model are then extracted and adjusted according to the obtained facial adjustment parameters. The target face three-dimensional model contains not only the texture information of the target face but also its depth information, so when the facial features are adjusted, more complete information can be taken into account, making the image processing more accurate.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a diagram of an application environment of an image processing method in one embodiment;
Fig. 2 is a flowchart of an image processing method in one embodiment;
Fig. 3 is a flowchart of an image processing method in another embodiment;
Fig. 4 is a schematic diagram of calculating depth information by time of flight (TOF) in one embodiment;
Fig. 5 is a flowchart of an image processing method in yet another embodiment;
Fig. 6 is a flowchart of an image processing method in still another embodiment;
Fig. 7 is a schematic diagram showing a face three-dimensional model in one embodiment;
Fig. 8 is a software framework diagram for implementing an image processing method in one embodiment;
Fig. 9 is a schematic diagram of implementing an image processing method in one embodiment;
Fig. 10 is a structural diagram of an image processing apparatus in one embodiment;
Fig. 11 is a schematic diagram of an image processing circuit in one embodiment.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the application, the first image may be referred to as the second image, and similarly, the second image may be referred to as the first image. The first image and the second image are both images, but they are not the same image.
Fig. 1 is a diagram of an application environment of an image processing method in one embodiment. As shown in Fig. 1, a camera 102 and a camera 104 may be installed on an electronic device 10; camera 102 and camera 104 may be used to shoot a target face simultaneously to obtain a first image and a second image. It can be understood that only one camera may be installed on the electronic device, in which case the first image and the second image may be obtained by different electronic devices. After the first image and the second image are obtained, three-dimensional modeling may be performed on the target face according to the first image and the second image to obtain a target face three-dimensional model. The facial features in the target face three-dimensional model are extracted, and the facial adjustment parameters corresponding to the facial features are obtained. Finally, the facial features in the target face three-dimensional model are adjusted according to the facial adjustment parameters.
Fig. 2 is a flowchart of an image processing method in one embodiment. As shown in Fig. 2, the image processing method includes steps 202 to 208. Wherein:
Step 202: obtain a first image and a corresponding second image, the second image including depth information corresponding to the first image.
A camera may be installed on the electronic device, and images are obtained through the installed camera. Cameras can be divided into types such as laser cameras and visible-light cameras according to the images they obtain: a laser camera obtains the image formed by laser light irradiating an object, and a visible-light camera obtains the image formed by visible light irradiating an object. Several cameras may be installed on the electronic device, and the installation positions are not limited.
For example, one camera may be installed on the front panel of the electronic device and two cameras on the back panel; cameras may also be installed inside the electronic device in an embedded manner and then opened by rotating or sliding. Specifically, a front camera and a rear camera may be installed on the electronic device and obtain images from different viewing angles: in general, the front camera obtains images from the front-side viewing angle of the electronic device, and the rear camera obtains images from the back-side viewing angle.
The first image and the second image may be obtained by cameras installed on the same electronic device, or by cameras installed on different electronic devices. When the first image and the second image are obtained by different electronic devices, after each device obtains its image, the first image and the second image may be transmitted to the same electronic device for processing.
Specifically, the first image and the second image may be generated by the electronic device capturing a picture of the current scene in real time through a camera, or they may be images stored locally on the electronic device; this is not limited here. The first image and the second image correspond to the same shooting scene. The first image may be a two-dimensional image carrying texture information, that is, a flat image that does not contain depth information; the second image refers to an image corresponding to the first image that contains the depth distribution of the subject in the shooting scene, that is, the depth information. Depth information refers to the distance between the camera and the subject in the shooting scene; for example, the depth information may be 1 meter, 1.2 meters, 3.4 meters, and so on.
Step 204: perform three-dimensional modeling on the target face according to the first image and the second image to obtain a target face three-dimensional model.
In the embodiments provided by this application, the first image and the second image are collected when shooting the target face: texture, color, and similar information of the target face are obtained to generate the first image, and the depth information of the target face is obtained to generate the second image. The types of the first image and the second image are not limited here; for example, the first image may be an RGB (Red Green Blue) image, an infrared image, or the like, and the second image may be a depth (Depth) image, without being limited thereto.
Three-dimensional modeling may be performed on the target face according to the first image and the second image to obtain a target face three-dimensional model. Specifically, a three-dimensional model can be used to represent the polygonal spatial structure of an object. A three-dimensional model is generally represented by a three-dimensional mesh (3D mesh) structure, and the mesh is composed of point cloud data of the object. Point cloud data generally include three-dimensional coordinates (XYZ), laser reflection intensity (Intensity), and color information (RGB), and the three-dimensional mesh is finally drawn from the point cloud data.
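As a concrete illustration of the point-cloud representation described above, the following sketch stores XYZ coordinates, laser reflection intensity, and RGB color per point, with a triangle mesh given as faces indexing into the point array. The use of NumPy and the exact array layout are assumptions for illustration; the patent does not specify a data format.

```python
import numpy as np

# Each point: 3D coordinate (XYZ), laser reflection intensity, RGB color
points = np.array([
    # x,   y,    z,    intensity, r,   g,   b
    [0.1, 0.2, 1.00,  0.8,       200, 180, 170],
    [0.1, 0.3, 1.02,  0.7,       198, 178, 168],
    [0.2, 0.2, 0.98,  0.9,       205, 182, 172],
])

xyz = points[:, :3]                    # three-dimensional coordinates
intensity = points[:, 3]               # laser reflection intensity
rgb = points[:, 4:].astype(np.uint8)   # color information

# A mesh is then a set of triangular patches indexing into the points.
faces = np.array([[0, 1, 2]])          # one triangle over the three points
```

A real pipeline would build many such triangles over the registered point cloud to draw the three-dimensional mesh.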
Step 206: extract the facial features in the target face three-dimensional model, and obtain the facial adjustment parameters corresponding to the facial features.
In one embodiment, the facial features are feature parameters representing the facial parts in the target face three-dimensional model, such as the size of the nose, the spacing of the eyes, the width of the eyebrows, and so on. The facial adjustment parameters are parameters used to adjust the facial features; they may be obtained automatically by the electronic device or entered manually by the user, which is not limited here.
For example, facial adjustment parameters corresponding to different faces may be stored in advance in the electronic device. After the target face three-dimensional model is established according to the first image and the second image, the target face is recognized from the first image, the target face is then matched against the pre-stored faces, and the facial adjustment parameters corresponding to the matching face are looked up.
Step 208: adjust the facial features in the target face three-dimensional model according to the facial adjustment parameters.
In the embodiments provided by this application, the target face three-dimensional model contains both information such as the texture and color of the target face and the depth information of the target face; information such as the size, color, texture, and depth of the facial features in the target face three-dimensional model can be adjusted according to the facial adjustment parameters.
For example, processing such as face slimming, eye enlargement, chin-length adjustment, and blemish removal may be performed on the target face three-dimensional model. The face-slimming processing may narrow the face contour of the target face three-dimensional model, the eye-enlargement processing may enlarge the eyes of the model, the chin-length adjustment may reduce the size of the chin of the model, and the blemish-removal processing may smooth the texture of the blemish regions of the model.
In the embodiments provided by this application, after the target face three-dimensional model is generated, the electronic device may generate reference facial adjustment parameters according to the model. The reference facial adjustment parameters are adjustment parameters provided for reference; for example, the target face three-dimensional model may be analyzed by artificial intelligence to obtain them. After the reference facial adjustment parameters are obtained, the target face three-dimensional model and the reference facial adjustment parameters may be displayed; the user may input facial adjustment parameters according to the displayed model and reference parameters, and the electronic device then processes the facial features in the target face three-dimensional model according to the facial adjustment parameters input by the user.
After processing according to the facial adjustment parameters input by the user, the processed target face three-dimensional model may also be stored as a benchmark face three-dimensional model; the next time the three-dimensional model corresponding to the target face is processed, the benchmark face three-dimensional model can be obtained and used as a reference. Specifically: obtain and display the reference facial adjustment parameters corresponding to the facial features, and obtain the facial adjustment parameters input according to the reference facial adjustment parameters; store the target face three-dimensional model adjusted according to the facial adjustment parameters as a benchmark face three-dimensional model.
With the image processing method provided by the above embodiments, a first image and a second image can be obtained, and three-dimensional modeling is performed on the target face according to the first image and the second image to obtain a target face three-dimensional model. The facial features in the target face three-dimensional model are then extracted and adjusted according to the obtained facial adjustment parameters. The target face three-dimensional model contains not only the texture information of the target face but also its depth information, so when the facial features are adjusted, more complete information can be taken into account, making the image processing more accurate.
Fig. 3 is a flowchart of an image processing method in another embodiment. As shown in Fig. 3, the image processing method includes steps 302 to 312. Wherein:
Step 302: when shooting the target face, obtain the first original image collected by the first camera at a first frame rate, and obtain at least two second original images collected by the second camera at a second frame rate; wherein the first frame rate is lower than the second frame rate.
In the embodiments of this application, the electronic device has at least two cameras installed: a first camera and a second camera. When shooting the target face, the electronic device can control the first camera and the second camera to expose simultaneously, obtaining the first original image through the first camera and the second original images through the second camera.
It can be understood that the first camera and the second camera obtain images of the same scene: the first camera collects first original images at the first frame rate, and the second camera collects second original images at the second frame rate. The first frame rate is lower than the second frame rate, which ensures that within the same exposure period the second camera can collect multiple second original images.
Specifically, the at least two second original images collected by the second camera can be used to synthesize one depth image, which avoids the hole artifacts that the second camera might otherwise produce when collecting a depth image, and improves the accuracy of the image. For example, the first camera may obtain first original images at a rate of 30 frames per second, and the second camera may obtain second original images at a rate of 120 frames per second; within the same exposure period, the first camera collects one first original image while the second camera collects four second original images.
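The frame-rate relationship in the example above can be checked with a trivial calculation. The 30 fps and 120 fps figures come from the text; the integer division assumes, as in the example, that the second rate is an exact multiple of the first.

```python
# Frame rates from the example in the text
first_fps = 30    # first camera (visible light)
second_fps = 120  # second camera (laser/depth)

# Number of second-camera frames collected during one first-camera
# exposure period; these frames are then synthesized into one depth image.
second_frames_per_first = second_fps // first_fps
print(second_frames_per_first)  # → 4
```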
Step 304: generate the first image according to the first original image, and generate the second image according to the at least two second original images.
Specifically, the image sensor in a camera converts optical signals into electrical signals; the original image formed after this conversion cannot be processed by the processor directly and can only be handled after a certain format conversion. The first original image refers to the original image collected by the first camera, and the second original image refers to the original image collected by the second camera.
In one embodiment, the first camera may be a visible-light camera, the second camera may be a laser camera, and a laser emitter corresponding to the second camera may be installed on the electronic device. When the laser from the laser emitter irradiates an object, the second original image generated by the laser irradiating the object is obtained through the second camera, and the second original image is used to generate the depth information corresponding to the first original image.
From the first original image collected by the first camera, a corresponding first image that the processor can handle is generated. For example, the obtained first original image may be in RAW format, and the first image may be converted from the RAW-format image into YUV (luma and chrominance) format after the format conversion; the resulting YUV image is the generated first image, which is then processed. The second original images collected by the second camera may also be in RAW format; since at least two second original images are collected, they can be synthesized into one Depth image, which serves as the second image.
In one embodiment, the step of generating the first image from the first original image specifically includes: performing a first format conversion on the first original image to generate the first image. For example, if the first camera is a visible-light camera, the first original image may be in RAW format and the first image in YUV format; performing the first format conversion on the RAW-format first original image yields the YUV-format first image.
Generating the second image from the second original images specifically includes: packing the at least two second original images, and performing a second format conversion on the packed second original images to generate the second image. Specifically, after the second original images are obtained, they may be packed to prevent loss during transmission; packing makes the second original images form a whole in memory for transmission, preventing frame loss. The packed second original images can then undergo the second format conversion to generate the second image.
For example, the second camera may be a laser camera, and the electronic device may also install a laser emitter that emits laser waves at a certain frequency; by calculating the time of flight of the laser wave, the distance from the object to the second camera can be calculated. Specifically, the second camera collects the second original images formed by the laser wave after it is reflected by the object, and the second image is then obtained from the second original images.
Fig. 4 is a schematic diagram of calculating depth information by TOF in one embodiment. As shown in Fig. 4, the laser emitter emits a laser wave, which forms a reflected laser wave after being reflected by the object; the depth information of the object can be calculated from the phase difference between the emitted laser wave and the received laser wave. When the laser camera actually collects images, it can control different shutters to switch at different times, forming different received signals, so that different images are collected through multiple shutter switchings and the depth image is calculated. In one embodiment, assume the laser camera controls the reception of the laser wave signal with four shutters, and the laser wave signals received by shutter 1, shutter 2, shutter 3, and shutter 4 are Q1, Q2, Q3, Q4 respectively; then the depth information can be calculated as:

d = (C / (4 * π * f)) * arctan((Q3 - Q4) / (Q1 - Q2))

where C is the speed of light and f is the emission frequency of the laser wave. The four second original images can be converted by the above formula into the second image in the corresponding Depth format. It can be understood that when the number of obtained second original images differs, the formula for performing the second format conversion on the second original images may also differ. Specifically, the corresponding second format-conversion formula can be obtained according to the number of second original images, the packed second original images are converted according to that formula, and the second image is obtained.
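The four-shutter depth calculation described above can be sketched as follows. This uses the standard four-phase continuous-wave ToF formula; since the original text does not reproduce the exact equation, the phase pairing and the function name are assumptions for illustration.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(q1, q2, q3, q4, f_mod):
    """Depth from the four shutter signals Q1..Q4 and the laser-wave
    modulation frequency f_mod (Hz), using the common four-phase
    continuous-wave ToF formula: d = C * phase / (4 * pi * f_mod)."""
    phase = math.atan2(q3 - q4, q1 - q2)  # phase shift of reflected wave
    if phase < 0:
        phase += 2 * math.pi              # keep phase in [0, 2*pi)
    return C * phase / (4 * math.pi * f_mod)

# Example: equal in-phase components give a phase of pi/4
print(round(tof_depth(1, 0, 1, 0, 20e6), 3))  # → 0.937
```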
When shooting the target face, the target face and the electronic device may be kept relatively still, so that the first image and the second image corresponding to the target face are obtained from only one angle. The target face or the electronic device may also be moved, so that the first images and the second images corresponding to the target face are obtained from multiple angles. For example, the electronic device may be rotated 90° left and right around the target face and then rotated 45° up and down, obtaining multiple first images and multiple second images during the rotation. A target face three-dimensional model generated from multiple first images and second images is more accurate.
Step 306: obtain a benchmark face three-dimensional model.
In one embodiment, the benchmark face three-dimensional model is a model fitted when reconstructing the target face three-dimensional model; one or more benchmark face three-dimensional models may be stored in the electronic device. After the first image and the second image are obtained, they can be fitted with the benchmark face three-dimensional model to generate the target face three-dimensional model.
For example, the face three-dimensional models of different ethnic groups, genders, and ages often have different characteristics, so the electronic device may set different benchmark face three-dimensional models for different ethnic groups, genders, and ages. Specifically, ethnic group, gender, age, and the like can serve as attribute features of the target face; after the first image and the second image are obtained, the attribute features of the target face are recognized from the first image and the second image, and the corresponding benchmark face three-dimensional model is then obtained according to the attribute features of the target face.
Step 308: fit the first image and the second image with the benchmark face three-dimensional model to generate the target face three-dimensional model corresponding to the target face.
The target face three-dimensional model can be established according to the first image and the second image; establishing it may specifically include point cloud computation, point cloud registration, data fusion, surface generation, and the like, without being limited thereto. Point cloud computation refers to the process of establishing a world coordinate system at the camera and converting the depth information in the second image into three-dimensional coordinates in that world coordinate system. When the three-dimensional model is constructed from multiple second images shot from different angles, common portions may exist between the collected second images. Point cloud registration is the process of superimposing and fitting multiple second images obtained at different times, angles, and illuminations into a unified world coordinate system. The depth information after point cloud registration is still scattered, unordered point cloud data in space that can only show partial information of the scene, so the point cloud data must be fused to obtain a finer reconstructed model. Specifically, the data-fusion process may construct a volume mesh with the camera as the origin; the volume mesh divides the point cloud space into a large number of voxels (Voxel), and the surface is modeled by assigning an SDF (Signed Distance Field) value to every voxel. Finally, triangular patches are constructed according to the voxels in the constructed volume mesh, and all constructed triangular patches are connected to generate the surface of the three-dimensional model. Features such as texture and color in the first image are then fused with the constructed surface to generate the final three-dimensional model.
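The voxel/SDF surface-modeling idea in the fusion step can be illustrated with a one-dimensional toy example: voxels along a single camera ray store a signed distance to the measured surface, and the surface lies where the SDF changes sign. The voxel layout and values here are invented for illustration and are not the patent's data structures.

```python
import numpy as np

# Voxel centres along one camera ray (metres), and the depth measured
# for that ray from the registered point cloud.
voxel_centers = np.linspace(0.0, 2.0, 9)
surface_depth = 1.1

# Signed distance: positive in front of the surface, negative behind it.
sdf = surface_depth - voxel_centers

# The surface is extracted at the zero crossing of the SDF; in 3D this
# crossing is where the triangular patches are constructed.
crossing = np.where(np.diff(np.sign(sdf)) != 0)[0]
print(float(voxel_centers[crossing][0]))  # → 1.0
```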
After the benchmark face three-dimensional model is obtained, it can serve as the constructed volume mesh described above: the first image and the second image are fitted with the benchmark face three-dimensional model to generate the target face three-dimensional model.
Step 310, the five features in target human face three-dimensional model is extracted, and obtains the face of benchmark face threedimensional model Face adjusting parameter corresponding to feature.
A correspondence between reference face three-dimensional models and facial feature adjustment parameters can be pre-established in the electronic device. After the reference face three-dimensional model is obtained, the facial feature adjustment parameters corresponding to that reference face three-dimensional model can be obtained. For example, the reference face three-dimensional models may include "model_01", "model_02" and "model_03"; the adjustment parameter corresponding to "model_01" is enlarging the eyes by 0.5 times, the adjustment parameter corresponding to "model_02" is thinning the face by 0.2 times, and the adjustment parameter corresponding to "model_03" is raising the nose by 0.1 times.
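The pre-established correspondence between reference models and adjustment parameters can be sketched as a simple lookup table. The table below mirrors the example values in the text; the tuple encoding of each parameter is an assumption made for illustration.

```python
# Correspondence between reference face 3D models and their facial
# feature adjustment parameters, mirroring the example in the text.
# Each entry is (feature, relative change): e.g. ("eyes", 0.5) means
# enlarging the eyes by 0.5 times.
ADJUSTMENT_TABLE = {
    "model_01": ("eyes", 0.5),    # enlarge eyes by 0.5 times
    "model_02": ("face", -0.2),   # thin the face by 0.2 times
    "model_03": ("nose", 0.1),    # raise the nose by 0.1 times
}

def get_adjustment(reference_model_id):
    """Look up the adjustment parameters stored for a reference model."""
    return ADJUSTMENT_TABLE[reference_model_id]
```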
Step 312: adjust the facial features in the target face three-dimensional model according to the facial feature adjustment parameters.

The facial feature adjustment parameters may include parameters for adjusting one or more facial features, which is not limited herein. For example, if the original facial feature is an eye size of X, and the obtained adjustment parameter is enlarging the eyes by 0.5 times, then the facial feature adjusted according to the adjustment parameter is an eye size of 1.5X.

Specifically, the facial feature adjustment parameters include an adjustment direction and an adjustment intensity, where the adjustment direction refers to the direction in which a facial feature is adjusted, and the adjustment intensity refers to the intensity value by which the facial feature is adjusted. For example, the adjustment direction may be enlarging the eyes, and the adjustment intensity may refer to the magnification factor. Specifically, the facial features in the target face three-dimensional model can be adjusted according to the adjustment intensity and the adjustment direction.
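The direction-plus-intensity adjustment described above can be sketched as a single function. The direction names and the multiplicative interpretation of intensity follow the eye-enlargement example in the text; they are illustrative assumptions, not a definitive encoding.

```python
def adjust_feature(size, direction, intensity):
    """Apply an adjustment parameter to one facial feature dimension.
    direction: "enlarge" or "shrink" (assumed names); intensity: the
    relative factor, e.g. enlarging by 0.5 makes the new size 1.5x
    the original, as in the eye-size example."""
    if direction == "enlarge":
        return size * (1.0 + intensity)
    if direction == "shrink":
        return size * (1.0 - intensity)
    raise ValueError("unknown adjustment direction: " + direction)
```

Enlarging an eye size of 10.0 by 0.5 times yields 15.0, matching the 1.5X relationship stated in the text.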
In one embodiment, the step of obtaining the first image and the second image may specifically include:

Step 502: obtain a first moment at which the first original image is acquired, and a second moment at which the at least two second original images are acquired.

The first moment refers to the moment at which the first original image is acquired, and the second moment refers to the moment at which the at least two second original images are acquired. When the electronic device acquires the first original image, it can read the clock of the electronic device to generate the first moment. When the electronic device acquires a second original image, it can read the clock of the electronic device to generate the second moment.
It can be understood that at least two second original images are acquired; for example, four second original images, eight second original images, nine second original images and so on may be acquired. Each time the second camera generates one frame of second original image, the electronic device reads the moment at which that second original image was generated. Specifically, the acquisition moment of any one of the at least two second original images may be taken as the second moment corresponding to the at least two second original images; or the average of the acquisition moments of the second original images may be taken as the second moment corresponding to the at least two second original images.

For example, suppose the second camera obtains five second original images whose acquisition moments, expressed as "minute:second:millisecond", are "14:25:256" → "14:25:364" → "14:25:485" → "14:25:569" → "14:25:691". The second moment may then be the acquisition moment of the first second original image, i.e. "14:25:256"; it may also be the acquisition moment of the third second original image, i.e. "14:25:485"; or it may be the average of the acquisition moments of the five second original images, i.e. "14:25:473", which is not limited herein.
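The averaging option above can be sketched with the example's millisecond values. This is a minimal sketch that assumes all five acquisition moments fall within the same minute and second, so only the millisecond parts need averaging.

```python
def second_moment_ms(acquisition_ms):
    """Take the second moment of a burst of second original images as
    the average of their acquisition moments (millisecond parts, as in
    the "14:25:xxx" example where minute and second are shared)."""
    return round(sum(acquisition_ms) / len(acquisition_ms))

# The five example acquisition moments (millisecond parts).
moments = [256, 364, 485, 569, 691]
```

Averaging the five example moments gives 473 ms, i.e. the "14:25:473" value in the text.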
Step 504: when the time interval between the first moment and the second moment is less than an interval threshold, generate the first image according to the first original image, and generate the second image according to the at least two second original images.

It can be understood that the first camera and the second camera may shoot the same scene, so that the first image and the second image obtained by shooting correspond to each other. Since the electronic device may shake while obtaining the first image and the second image, in order to guarantee that the first image and the second image correspond, the first image and the second image need to be acquired simultaneously. When the time interval between the first timestamp and the second timestamp is less than the first interval threshold, the first image and the second image can be considered to have been obtained for the same scene, and processing can then be performed according to the first image and the second image. When the time interval between the first timestamp and the second timestamp is greater than the first interval threshold, the first image and the second image are considered not to have been obtained for the same scene, and the acquired first image and second image can be discarded directly.
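The same-scene decision in Step 504 reduces to a timestamp comparison. The sketch below assumes timestamps in milliseconds; the threshold value is hypothetical.

```python
def pair_frames(first_moment_ms, second_moment_ms, interval_threshold_ms):
    """Decide whether a first image and a second image were captured
    for the same scene: keep the pair when the time interval between
    the two timestamps is less than the threshold, otherwise the pair
    is discarded."""
    return abs(first_moment_ms - second_moment_ms) < interval_threshold_ms
```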
In one embodiment, the step of obtaining the reference face three-dimensional model may specifically include:

Step 602: obtain target face information corresponding to the target face in the first image, and compare the target face information with the face information in a reference information set.

Specifically, a first face three-dimensional model and a second face three-dimensional model can be stored in advance in the electronic device. The first face three-dimensional model is stored in correspondence with face information, and the second face three-dimensional model is a default general face three-dimensional model. When obtaining the reference face three-dimensional model, the target face information in the first image can be compared with the face information stored in the electronic device, and the reference face three-dimensional model is obtained according to the comparison result.

In one embodiment, face information is used to indicate the unique features of a face, and may be, for example, one or more of a face image, a face skin color, facial features and the like. The face information stored in the electronic device forms a reference information set. The target face information is compared with the face information in the reference information set, so as to judge whether there is face information in the electronic device that matches the target face information.

Step 604: when there is face information in the reference information set that matches the target face information, obtain the first face three-dimensional model corresponding to the face information that matches the target face information, as the reference face three-dimensional model.

The target face information is matched against the face information. If the matching degree between the target face information and a piece of face information is greater than a matching threshold, it is determined that the target face information matches that face information. For example, if the matching degree between the target face information and certain face information reaches 90%, and 90% is greater than the matching threshold of 80%, it is determined that the target face information matches that face information.
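The matching logic of Steps 602 through 606 can be sketched as follows. The comparison that produces a matching degree for each entry is abstracted away here (this sketch takes precomputed degrees); the 0.8 threshold and the model identifiers are taken from the examples in the text, while the fallback identifier is a hypothetical name.

```python
def select_reference_model(matches, matching_threshold=0.8):
    """matches: list of (model_id, matching_degree) pairs produced by
    comparing the target face information against each face-information
    entry in the reference information set. Returns the first stored
    first face 3D model whose matching degree exceeds the threshold;
    otherwise falls back to the default general (second) model."""
    for model_id, matching_degree in matches:
        if matching_degree > matching_threshold:
            return model_id
    return "general_model"   # the pre-stored second face 3D model
```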
The electronic device pre-establishes a correspondence among face information, first face three-dimensional models and facial feature adjustment parameters. When there is face information in the reference information set that matches the target face information, the electronic device can obtain the first face three-dimensional model corresponding to the matched face information as the reference face three-dimensional model, and obtain the facial feature adjustment parameters corresponding to this first face three-dimensional model to process the facial features of the target face three-dimensional model.

Step 606: when there is no face information in the reference information set that matches the target face information, obtain the pre-stored second face three-dimensional model as the reference face three-dimensional model.

The second face three-dimensional model stored in the electronic device is a general face three-dimensional model, and the facial feature adjustment parameters corresponding to the second face three-dimensional model are default processing parameters. When there is no face information in the reference information set that matches the target face information, the general second face three-dimensional model is used as the reference face three-dimensional model, and the default facial feature adjustment parameters are obtained to process the facial features of the target face three-dimensional model.

Fig. 7 is a schematic diagram showing a face three-dimensional model in one embodiment. As shown in Fig. 7, a face three-dimensional model 702 is displayed in an established three-dimensional coordinate system. The face three-dimensional model 702 is a stereoscopic model that can display the face from multiple angles. In this embodiment, rotating the face three-dimensional model 702 by 135° to the left and then by a further 25° downward yields the face three-dimensional model 704.
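The rotations in the Fig. 7 example (turning the model left, then tilting it downward) can be sketched as rotations of model vertices about the Y and X axes. The sign conventions below (which direction counts as "left" and "down") are assumptions for illustration.

```python
import math

def rotate_y(point, degrees):
    """Rotate a 3D point about the vertical (Y) axis, e.g. turning the
    face model to one side."""
    a = math.radians(degrees)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def rotate_x(point, degrees):
    """Rotate a 3D point about the horizontal (X) axis, e.g. tilting
    the face model up or down."""
    a = math.radians(degrees)
    x, y, z = point
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

# Turning 135 degrees about Y and then 25 degrees about X, applied to
# one vertex, as in the Fig. 7 example.
p = rotate_x(rotate_y((1.0, 0.0, 0.0), 135.0), 25.0)
```

In practice each vertex of the model would be transformed by the same pair of rotations, leaving the model's shape unchanged while changing its viewing angle.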
With the image processing method provided by the above embodiments, a first image and a second image can be obtained, and three-dimensional modeling is performed on the target face according to the first image and the second image to obtain a target face three-dimensional model. The facial features in the target face three-dimensional model are then extracted and adjusted according to the obtained facial feature adjustment parameters. The target face three-dimensional model contains not only the texture information of the target face but also the depth information of the target face, so that when the facial features are adjusted, the adjustment can be made in combination with more comprehensive information, making the image processing more accurate.

It should be understood that, although the steps in the flowcharts of Figs. 2, 3, 5 and 6 are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2, 3, 5 and 6 may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with at least part of other steps, or of the sub-steps or stages of other steps.
Fig. 8 is a software framework diagram for implementing the image processing method in one embodiment. As shown in Fig. 8, the software framework includes an application layer 80, a hardware abstraction layer (HAL) 82, a kernel layer 84 and a hardware layer 86. The application layer 80 includes an application 802. The hardware abstraction layer 82 includes an interface 822, an image synchronization module 824, an image algorithm module 826 and an application algorithm module 828. The kernel layer 84 includes a camera driver 842, a camera calibration module 844 and a camera synchronization module 846. The hardware layer 86 includes a first camera 862, a second camera 864 and an image signal processor (ISP) 866.

In one embodiment, the application 802 may be used to initiate an image acquisition instruction and send the image acquisition instruction to the interface 822. For example, the application 802 may initiate an image acquisition instruction when three-dimensional modeling needs to be performed. After parsing the image acquisition instruction, the interface 822 can configure the camera parameters through the camera driver 842, send the configuration parameters to the image signal processor 866, and control the first camera 862 and the second camera 864 to turn on through the image signal processor 866. After the first camera 862 and the second camera 864 are turned on, the camera synchronization module 846 can control the first camera 862 and the second camera 864 to acquire images synchronously. The electronic device can shoot the target face, acquiring the first original image through the first camera 862 and the second original image through the second camera 864. The first image and the second image are then generated according to the first original image and the second original image respectively, and the first image and the second image are returned to the application 802.

The process of acquiring the first image and the second image is specifically as follows. The first original image acquired by the first camera 862 and the second original image acquired by the second camera 864 can be sent to the image signal processor 866, and then sent by the image signal processor 866 to the camera calibration module 844. The camera calibration module 844 can align the first original image with the second original image, and then send the aligned first original image and second original image to the hardware abstraction layer 82. The image synchronization module 824 in the hardware abstraction layer 82 can judge, according to the first moment at which the first original image was acquired and the second moment at which the second original image was acquired, whether the first original image and the second original image were obtained simultaneously. If so, the image algorithm module 826 can calculate the first image from the first original image and the second image from the second original image. The first image and the second image can be subjected to processing such as packaging by the application algorithm module 828; the packaged first image and second image are then sent to the application 802 through the interface 822. After the application 802 obtains the first image and the second image, three-dimensional modeling can be performed according to the first image and the second image.
Fig. 9 is a schematic diagram of implementing the image processing method in one embodiment. As shown in Fig. 9, the first camera and the second camera need to be synchronized during image acquisition. The first camera can acquire the first original image at a first frame rate, and the second camera can acquire at least two second original images at a second frame rate. The first original image acquired by the first camera is sent, together with a corresponding first timestamp, to a first buffer; the second original images acquired by the second camera are packaged with corresponding flag information, and the packaged second original images, flag information and corresponding second timestamps are sent to a second buffer. The first timestamp is used to indicate the first moment at which the first original image was acquired, and the second timestamp is used to indicate the second moment at which the second original images were acquired. When the time interval between the first timestamp and the second timestamp is less than the first interval threshold, the first original image in the first buffer is read and converted into the first image in a first format conversion, and the first image is sent to a third buffer; the second original images and the corresponding flag information in the second buffer are read, the second original images are converted into the second image in a second format conversion according to the flag information, and the second image is sent to a fourth buffer. Before the first image and the second image are sent to the application, they can be packaged, and the packaged first image and second image are sent to a fifth buffer. The application can read the packaged first image and second image from the fifth buffer, and perform processing such as three-dimensional modeling according to the read first image and second image.
Fig. 10 is a structural schematic diagram of an image processing apparatus in one embodiment. As shown in Fig. 10, the image processing apparatus 1000 includes an image obtaining module 1002, a model generation module 1004, a parameter obtaining module 1006 and a facial feature adjustment module 1008. Wherein:

The image obtaining module 1002 is configured to obtain a first image and a corresponding second image, where the second image includes depth information corresponding to the first image.

The model generation module 1004 is configured to perform three-dimensional modeling on the target face according to the first image and the second image to obtain a target face three-dimensional model.

The parameter obtaining module 1006 is configured to extract the facial features in the target face three-dimensional model, and obtain the facial feature adjustment parameters corresponding to the facial features.

The facial feature adjustment module 1008 is configured to adjust the facial features in the target face three-dimensional model according to the facial feature adjustment parameters.

With the image processing apparatus provided by the above embodiment, a first image and a second image can be obtained, and three-dimensional modeling is performed on the target face according to the first image and the second image to obtain a target face three-dimensional model. The facial features in the target face three-dimensional model are then extracted and adjusted according to the obtained facial feature adjustment parameters. The target face three-dimensional model contains not only the texture information of the target face but also the depth information of the target face, so that when the facial features are adjusted, the adjustment can be made in combination with more comprehensive information, making the image processing more accurate.
In one embodiment, the image obtaining module 1002 is further configured to: when shooting the target face, obtain a first original image acquired by the first camera at a first frame rate, and obtain at least two second original images acquired by the second camera at a second frame rate, where the first frame rate is less than the second frame rate; generate the first image according to the first original image, and generate the second image according to the at least two second original images.

In one embodiment, the image obtaining module 1002 is further configured to obtain the first moment at which the first original image is acquired and the second moment at which the at least two second original images are acquired; and, when the time interval between the first moment and the second moment is less than the interval threshold, generate the first image according to the first original image and the second image according to the at least two second original images.

In one embodiment, the model generation module 1004 is further configured to obtain a reference face three-dimensional model, and fit the first image and the second image to the reference face three-dimensional model to generate the target face three-dimensional model corresponding to the target face.

In one embodiment, the model generation module 1004 is further configured to obtain target face information corresponding to the target face in the first image, and compare the target face information with the face information in the reference information set; when there is face information in the reference information set that matches the target face information, obtain the first face three-dimensional model corresponding to the matched face information as the reference face three-dimensional model; and when there is no face information in the reference information set that matches the target face information, obtain the second face three-dimensional model as the reference face three-dimensional model.

In one embodiment, the parameter obtaining module 1006 is further configured to obtain the facial feature adjustment parameters corresponding to the facial features of the reference face three-dimensional model.

In one embodiment, the parameter obtaining module 1006 is further configured to obtain and display reference facial feature adjustment parameters corresponding to the facial features, obtain the facial feature adjustment parameters input according to the reference facial feature adjustment parameters, and store the target face three-dimensional model adjusted according to the facial feature adjustment parameters as a reference face three-dimensional model.

In one embodiment, the facial feature adjustment parameters include an adjustment direction and an adjustment intensity; the facial feature adjustment module 1008 is further configured to adjust the facial features in the target face three-dimensional model according to the adjustment intensity and the adjustment direction.
The division of the modules in the above image processing apparatus is only for illustration; in other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of the functions of the above image processing apparatus.

For specific limitations on the image processing apparatus, reference may be made to the limitations on the image processing method above, which will not be repeated here. Each module in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, the processor in the computer device, or may be stored in software form in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.

The implementation of each module in the image processing apparatus provided in the embodiments of the present application may be in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are implemented.
The embodiments of the present application also provide an electronic device. The above electronic device includes an image processing circuit, which may be implemented using hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 11, for ease of illustration, only the aspects of the image processing technology related to the embodiments of the present application are shown.

As shown in Fig. 11, the image processing circuit includes a first ISP processor 1130, a second ISP processor 1140 and a control logic 1150. The first camera 1110 includes one or more first lenses 1112 and a first image sensor 1114. The first image sensor 1114 may include a color filter array (such as a Bayer filter); the first image sensor 1114 can obtain the light intensity and wavelength information captured by each imaging pixel of the first image sensor 1114, and provide a set of image data that can be processed by the first ISP processor 1130. The second camera 1120 includes one or more second lenses 1122 and a second image sensor 1124. The second image sensor 1124 may include a color filter array (such as a Bayer filter); the second image sensor 1124 can obtain the light intensity and wavelength information captured by each imaging pixel of the second image sensor 1124, and provide a set of image data that can be processed by the second ISP processor 1140.
The first image acquired by the first camera 1110 is transmitted to the first ISP processor 1130 for processing. After processing the first image, the first ISP processor 1130 can send statistical data of the first image (such as image brightness, image contrast, image color, etc.) to the control logic 1150. The control logic 1150 can determine the control parameters of the first camera 1110 according to the statistical data, so that the first camera 1110 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image, after being processed by the first ISP processor 1130, can be stored in the image memory 1160, and the first ISP processor 1130 can also read the images stored in the image memory 1160 for processing. In addition, the first image, after being processed by the ISP processor 1130, can be sent directly to the display 1170 for display; the display 1170 can also read the images in the image memory 1160 for display.

The first ISP processor 1130 processes the image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the first ISP processor 1130 can perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
The image memory 1160 may be a part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include DMA (Direct Memory Access) features.

Upon receiving image data from the interface of the first image sensor 1114, the first ISP processor 1130 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1160 for additional processing before being displayed. The first ISP processor 1130 receives the processed data from the image memory 1160, and performs image data processing on the processed data in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1130 may be output to the display 1170 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 1130 can also be sent to the image memory 1160, and the display 1170 can read the image data from the image memory 1160. In one embodiment, the image memory 1160 may be configured to implement one or more frame buffers.

The statistical data determined by the first ISP processor 1130 can be sent to the control logic 1150. For example, the statistical data may include first image sensor 1114 statistics such as auto-exposure, auto-white-balance, auto-focus, flicker detection, black level compensation, first lens 1112 shading correction and the like. The control logic 1150 may include a processor and/or microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine, based on the received statistical data, the control parameters of the first camera 1110 and the control parameters of the first ISP processor 1130. For example, the control parameters of the first camera 1110 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 1112 control parameters (such as focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto-white-balance and color adjustment (e.g., during RGB processing), and first lens 1112 shading correction parameters.
Similarly, the second image acquired by the second camera 1120 is transmitted to the second ISP processor 1140 for processing. After processing the second image, the second ISP processor 1140 can send statistical data of the second image (such as image brightness, image contrast, image color, etc.) to the control logic 1150. The control logic 1150 can determine the control parameters of the second camera 1120 according to the statistical data, so that the second camera 1120 can perform operations such as auto-focus and auto-exposure according to the control parameters. The second image, after being processed by the second ISP processor 1140, can be stored in the image memory 1160, and the second ISP processor 1140 can also read the images stored in the image memory 1160 for processing. In addition, the second image, after being processed by the ISP processor 1140, can be sent directly to the display 1170 for display; the display 1170 can also read the images in the image memory 1160 for display. The second camera 1120 and the second ISP processor 1140 can also implement the processing described for the first camera 1110 and the first ISP processor 1130.
The image processing method described above can be implemented using the image processing technology in Fig. 11.
The embodiments of the present application also provide a computer readable storage medium: one or more non-volatile computer readable storage media containing computer executable instructions which, when executed by one or more processors, cause the processors to perform the steps of the image processing method provided by the above embodiments.

A computer program product containing instructions which, when run on a computer, causes the computer to perform the image processing method provided by the above embodiments.

Any reference to memory, storage, databases or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).

The above embodiments only express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
obtaining a first image and a corresponding second image, the second image comprising depth information corresponding to the first image;
performing three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
extracting facial features from the target face three-dimensional model, and obtaining facial feature adjustment parameters corresponding to the facial features; and
adjusting the facial features in the target face three-dimensional model according to the facial feature adjustment parameters.
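To make the claimed flow concrete, the four steps of claim 1 can be sketched as a minimal Python pipeline. Everything here is a hypothetical illustration, not part of the patent: `build_face_model` and `extract_features` are stand-in stubs for the modeling and feature-extraction procedures the claims leave abstract.

```python
import numpy as np

def build_face_model(rgb, depth):
    """Stub for step 2: back-project each pixel to a 3-D point (x, y, depth)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([xs, ys, depth], axis=-1).reshape(-1, 3).astype(float)

def extract_features(model):
    """Stub for step 3: pretend the first few vertices form the 'nose' feature."""
    return {"nose": np.arange(4)}

def process_image(rgb, depth, adjust_params):
    # Step 1: the second image supplies per-pixel depth for the first image.
    assert rgb.shape[:2] == depth.shape
    model = build_face_model(rgb, depth)      # step 2: three-dimensional modeling
    features = extract_features(model)        # step 3: extract facial features
    for name, idx in features.items():        # step 4: apply adjustment parameters
        direction, intensity = adjust_params[name]
        model[idx] += intensity * np.asarray(direction, dtype=float)
    return model

# Toy input: a 2x2 RGB frame, a flat depth map, and one adjustment parameter.
rgb = np.zeros((2, 2, 3))
depth = np.ones((2, 2))
out = process_image(rgb, depth, {"nose": ((0.0, 0.0, 1.0), 2.0)})
print(out[0])  # first 'nose' vertex pushed 2 units along +z
```

The stubs only fix the data flow between the four claimed steps; any real implementation would substitute actual depth-based mesh fitting and landmark extraction.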
2. The method according to claim 1, wherein obtaining the first image and the corresponding second image comprises:
when photographing the target face, obtaining a first raw image captured by a first camera at a first frame rate, and obtaining at least two second raw images captured by a second camera at a second frame rate, wherein the first frame rate is lower than the second frame rate; and
generating the first image from the first raw image, and generating the second image from the at least two second raw images.
3. The method according to claim 2, wherein generating the first image from the first raw image and generating the second image from the at least two second raw images comprises:
obtaining a first moment at which the first raw image was captured and a second moment at which the at least two second raw images were captured; and
when a time interval between the first moment and the second moment is less than an interval threshold, generating the first image from the first raw image and generating the second image from the at least two second raw images.
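The timestamp check of claim 3 amounts to pairing an RGB frame with depth frames only when their capture moments fall within the interval threshold. A minimal sketch, with the simplifying assumption (not stated in the claim) that the gap is measured against the farthest depth-frame timestamp:

```python
def frames_synchronized(t_first, t_seconds, interval_threshold):
    """Accept the frame pair only if the capture moment of the first raw
    image is within `interval_threshold` of every second raw image."""
    gap = max(abs(t_first - t) for t in t_seconds)
    return gap < interval_threshold

# An RGB frame at t=0.040 s pairs with depth frames at 0.042/0.045 s under
# a 10 ms threshold; a straggler depth frame at 0.070 s would be rejected.
print(frames_synchronized(0.040, [0.042, 0.045], 0.010))  # True
print(frames_synchronized(0.040, [0.042, 0.070], 0.010))  # False
```

The check mirrors the dual-camera setup of claim 2: the lower-frame-rate RGB camera and the higher-frame-rate depth camera are only combined when their captures are close enough in time.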
4. The method according to claim 1, wherein performing three-dimensional modeling on the target face according to the first image and the second image to obtain the target face three-dimensional model comprises:
obtaining a reference face three-dimensional model; and
fitting the first image and the second image to the reference face three-dimensional model to generate the target face three-dimensional model corresponding to the target face;
and wherein obtaining the facial feature adjustment parameters corresponding to the facial features comprises:
obtaining the facial feature adjustment parameters corresponding to the facial features of the reference face three-dimensional model.
5. The method according to claim 4, wherein obtaining the reference face three-dimensional model comprises:
obtaining target face information corresponding to the target face in the first image, and comparing the target face information with face information in a reference information set;
when face information matching the target face information exists in the reference information set, obtaining a first face three-dimensional model corresponding to the matching face information as the reference face three-dimensional model; and
when no face information matching the target face information exists in the reference information set, obtaining a pre-stored second face three-dimensional model as the reference face three-dimensional model.
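Claim 5's fallback logic — use a stored per-person model when the target face matches a known face, otherwise fall back to a generic pre-stored model — can be sketched as a simple lookup. All names here are hypothetical, and `similarity` is a toy stand-in for a real face-information comparison:

```python
def similarity(a, b):
    # Toy stand-in: a real system would score face descriptors in [0, 1].
    return 1.0 if a == b else 0.0

def get_reference_model(target_info, reference_set, default_model,
                        match_threshold=0.9):
    """Return the first stored model whose face information matches the
    target (the 'first face three-dimensional model'); otherwise return
    the pre-stored default (the 'second face three-dimensional model')."""
    for info, model in reference_set:
        if similarity(target_info, info) >= match_threshold:
            return model
    return default_model

refs = [("alice", "alice-model"), ("bob", "bob-model")]
generic = "generic-model"
print(get_reference_model("bob", refs, generic))    # matched: bob-model
print(get_reference_model("carol", refs, generic))  # no match: generic-model
```

Starting the fit from a per-person reference model presumably gives a closer initialization than the generic model, which is why the claim distinguishes the two cases.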
6. The method according to claim 1, wherein obtaining the facial feature adjustment parameters corresponding to the facial features comprises:
obtaining and displaying reference facial feature adjustment parameters corresponding to the facial features, and obtaining the facial feature adjustment parameters input according to the reference facial feature adjustment parameters;
the method further comprising:
storing the target face three-dimensional model adjusted according to the facial feature adjustment parameters as a reference face three-dimensional model.
7. The method according to claim 1, wherein the facial feature adjustment parameters comprise a facial feature adjustment direction and a facial feature adjustment intensity; and
wherein adjusting the facial features in the target face three-dimensional model according to the facial feature adjustment parameters comprises:
adjusting the facial features in the target face three-dimensional model according to the facial feature adjustment intensity and the facial feature adjustment direction.
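Claim 7's direction-plus-intensity pair can be read as displacing a feature's vertices along a direction vector scaled by an intensity value. A minimal sketch under that assumption (the patent does not fix this particular vector interpretation):

```python
import numpy as np

def adjust_feature(vertices, direction, intensity):
    """Displace feature vertices along a unit adjustment direction,
    scaled by the adjustment intensity."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)  # normalize so intensity alone sets magnitude
    return np.asarray(vertices, dtype=float) + intensity * d

# Two hypothetical nose vertices, pushed 2 units along -z (toward the face).
nose = [[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]]
moved = adjust_feature(nose, direction=[0, 0, -1], intensity=2.0)
print(moved)
```

Separating direction from intensity, as the claim does, lets a UI expose them independently, e.g. a fixed per-feature direction with a user-controlled intensity slider.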
8. An image processing apparatus, comprising:
an image acquisition module, configured to obtain a first image and a corresponding second image, the second image comprising depth information corresponding to the first image;
a model generation module, configured to perform three-dimensional modeling on a target face according to the first image and the second image to obtain a target face three-dimensional model;
a parameter acquisition module, configured to extract facial features from the target face three-dimensional model and obtain facial feature adjustment parameters corresponding to the facial features; and
a facial feature adjustment module, configured to adjust the facial features in the target face three-dimensional model according to the facial feature adjustment parameters.
9. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN201810961977.8A 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium Active CN109190533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810961977.8A CN109190533B (en) 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109190533A true CN109190533A (en) 2019-01-11
CN109190533B CN109190533B (en) 2021-07-09

Family

ID=64919499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810961977.8A Active CN109190533B (en) 2018-08-22 2018-08-22 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109190533B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1790421A * 2001-11-27 2006-06-21 三星电子株式会社 Apparatus and method for depth image-based representation of 3-dimensional object
CN102959941A * 2010-07-02 2013-03-06 索尼电脑娱乐公司 Information processing system, information processing device, and information processing method
CN103797790A * 2011-07-25 2014-05-14 索尼电脑娱乐公司 Moving image capture device, information processing system, information processing device, and image data processing method
CN105938627A * 2016-04-12 2016-09-14 湖南拓视觉信息技术有限公司 Method and system for virtual plastic-surgery processing of a face
CN106580301A * 2016-12-21 2017-04-26 广州心与潮信息科技有限公司 Physiological parameter monitoring method, device, and handheld device
US9746369B2 * 2012-02-15 2017-08-29 Apple Inc. Integrated optoelectronic modules based on arrays of emitters and microlenses
CN107124604A * 2017-06-29 2017-09-01 诚迈科技(南京)股份有限公司 Method and device for realizing three-dimensional images using dual cameras
CN107333047A * 2017-08-24 2017-11-07 维沃移动通信有限公司 Image capturing method, mobile terminal, and computer-readable storage medium
CN107404362A * 2017-09-15 2017-11-28 青岛海信移动通信技术股份有限公司 Method and device for synchronizing dual-camera data frames
CN107592449A * 2017-08-09 2018-01-16 广东欧珀移动通信有限公司 Three-dimensional modeling method, apparatus, and mobile terminal
CN107689073A * 2016-08-05 2018-02-13 阿里巴巴集团控股有限公司 Image set generation method and device, and image recognition model training method and system
CN107730445A * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium, and electronic device
CN107808137A * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic device, and computer-readable storage medium
CN107833177A * 2017-10-31 2018-03-23 维沃移动通信有限公司 Image processing method and mobile terminal
CN107845057A * 2017-09-25 2018-03-27 维沃移动通信有限公司 Photographing preview method and mobile terminal
CN108062791A * 2018-01-12 2018-05-22 北京奇虎科技有限公司 Method and apparatus for reconstructing a three-dimensional face model
CN108140235A * 2015-10-14 2018-06-08 高通股份有限公司 System and method for generating an image visualization


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CONRAD SANDERSON et al.: "Information Fusion and Person Verification Using Speech & Face Information", Digital Signal Processing *
YANG XIN et al.: "Adaptive Resolution and Frame Rate Adjustment Method", China Science and Technology Information *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109903375A (en) * 2019-02-21 2019-06-18 Oppo广东移动通信有限公司 Model generating method, device, storage medium and electronic equipment
CN109903375B (en) * 2019-02-21 2023-06-06 Oppo广东移动通信有限公司 Model generation method and device, storage medium and electronic equipment
CN110944112A (en) * 2019-11-22 2020-03-31 维沃移动通信有限公司 Image processing method and electronic equipment
CN111415397A (en) * 2020-03-20 2020-07-14 广州虎牙科技有限公司 Face reconstruction and live broadcast method, device, equipment and storage medium
CN111415397B (en) * 2020-03-20 2024-03-08 广州虎牙科技有限公司 Face reconstruction and live broadcast method, device, equipment and storage medium
CN113427486A (en) * 2021-06-18 2021-09-24 上海非夕机器人科技有限公司 Mechanical arm control method and device, computer equipment, storage medium and mechanical arm

Also Published As

Publication number Publication date
CN109190533B (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN109118581A (en) Image processing method and device, electronic equipment, computer readable storage medium
JP7139452B2 (en) Image processing method, computer readable storage medium, and electronic device
CN107730445B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN109040591B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN109190533A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN108989606B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108419017B Shooting control method and apparatus, electronic device, and computer-readable storage medium
CN104885125B (en) Message processing device, information processing system and information processing method
CN108447017A (en) Face virtual face-lifting method and device
CN107800965B (en) Image processing method, device, computer readable storage medium and computer equipment
CN108055452A (en) Image processing method, device and equipment
CN107509031A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN109151303B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109712192A Camera module calibration method, device, electronic device, and computer-readable storage medium
CN109146906A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN108616700B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108024054A (en) Image processing method, device and equipment
CN106296789B Method and terminal for virtually implanting an object that moves through a real scene
CN109040746B (en) Camera calibration method and apparatus, electronic equipment, computer readable storage medium
JP2018163648A (en) Image processor, method for image processing, and program
CN109587466A Method and apparatus for color shading correction
CN107948618A (en) Image processing method, device, computer-readable recording medium and computer equipment
CN116912393A (en) Face reconstruction method and device, electronic equipment and readable storage medium
CN109166082A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN108629329B (en) Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant