CN109167935A - Video processing method and apparatus, electronic device, and computer-readable storage medium - Google Patents
- Publication number
- CN109167935A (application number CN201811197920.1A)
- Authority
- CN
- China
- Prior art keywords
- light effect
- face image
- image
- model
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/74—Circuits for processing colour signals for obtaining special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/77—Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
Abstract
This application relates to a video processing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: obtaining an image frame in a video to be processed and detecting a face image in the image frame; obtaining the position coordinates and deflection angle of the face image in the image frame, and generating a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation; and performing light-effect processing on the image frame according to the light-effect model. The above method, apparatus, electronic device, and computer-readable storage medium can improve the accuracy of video processing.
Description
Technical field
This application relates to the field of computer technology, and in particular to a video processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
An electronic device can obtain a video by shooting, downloading, transmission, and other means, and can then apply post-processing to the video: for example, increasing the brightness of its image frames, adjusting their saturation or color temperature, or adding a light effect. An added light effect simulates light variation, so that the objects in the image frames appear to be lit.
Summary of the invention
Embodiments of the present application provide a video processing method and apparatus, an electronic device, and a computer-readable storage medium that can improve the accuracy of video processing.
A video processing method includes:
obtaining an image frame in a video to be processed, and detecting a face image in the image frame;
obtaining the position coordinates and deflection angle of the face image in the image frame, and generating a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation; and
performing light-effect processing on the image frame according to the light-effect model.
A video processing apparatus includes:
a face detection module, configured to obtain an image frame in a video to be processed and detect a face image in the image frame;
a model obtaining module, configured to obtain the position coordinates and deflection angle of the face image in the image frame, and generate a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation; and
a light-effect processing module, configured to perform light-effect processing on the image frame according to the light-effect model.
An electronic device includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to perform the following steps:
obtaining an image frame in a video to be processed, and detecting a face image in the image frame;
obtaining the position coordinates and deflection angle of the face image in the image frame, and generating a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation; and
performing light-effect processing on the image frame according to the light-effect model.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the following steps:
obtaining an image frame in a video to be processed, and detecting a face image in the image frame;
obtaining the position coordinates and deflection angle of the face image in the image frame, and generating a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation; and
performing light-effect processing on the image frame according to the light-effect model.
The above video processing method and apparatus, electronic device, and computer-readable storage medium can read the image frames in a video to be processed and detect the face images in the frames that are read. A light-effect model is then generated according to the position coordinates and deflection angle of each face image, and light-effect processing is performed on the image frames according to the generated model. After the image frames have been processed, the resulting light effect changes with the position coordinates and deflection angle of the face, so the video is processed more accurately.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the application environment of a video processing method in one embodiment;
Fig. 2 is a flowchart of a video processing method in one embodiment;
Fig. 3 is a flowchart of a video processing method in another embodiment;
Fig. 4 is a schematic diagram of the position coordinates of a face image in one embodiment;
Fig. 5 is a schematic diagram of the deflection angle of a face image in one embodiment;
Fig. 6 is a flowchart of a video processing method in another embodiment;
Fig. 7 is a schematic diagram of a face image sequence in one embodiment;
Fig. 8 is a flowchart of a video processing method in another embodiment;
Fig. 9 is a structural block diagram of a video processing apparatus in one embodiment;
Fig. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of this application clearer, the application is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
Fig. 1 is a schematic diagram of the application environment of the video processing method in one embodiment. As shown in Fig. 1, the application environment includes an electronic device 10, which can capture a video to be processed through an installed camera 102. The electronic device 10 can then obtain an image frame in the video to be processed and detect a face image in the image frame; obtain the position coordinates and deflection angle of the face image in the image frame; and generate a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation. Finally, light-effect processing is performed on the image frame according to the light-effect model. In one embodiment, the electronic device 10 may be, without limitation, a personal computer, a mobile terminal, a personal digital assistant, a wearable electronic device, or the like.
Fig. 2 is a flowchart of the video processing method in one embodiment. As shown in Fig. 2, the method includes steps 202 to 206.
Step 202: obtain an image frame in a video to be processed, and detect a face image in the image frame.
In one embodiment, the video to be processed may be shot directly by the camera of the electronic device, downloaded over a network, or stored locally on the device, without limitation. For example, the electronic device can turn on its camera and capture one image of the scene every 0.3 seconds; over the course of shooting it thus collects several images and generates a video from them.
It can be understood that the video to be processed consists of at least two consecutive image frames, which form a continuous picture that records the dynamic change of objects. For example, if the position of a vehicle differs in each frame of the video, the frames record the vehicle's change of position and thereby the dynamic process of that change. The video to be processed may also include an audio signal corresponding to each image frame, without limitation.
Each image frame in the video carries a timing mark, from which the order of the frames in the video can be obtained. The timing mark may be a digital number or a specific moment in time, without limitation. The shorter the time interval between the generation moments of two consecutive frames, the better the video's continuity in recording the dynamic change of objects.
In one embodiment, after the video to be processed is obtained, its image frames may be read frame by frame, or only part of them may be read, and the face images in the frames that are read are then detected; this is not limited here. After reading an image frame, the electronic device performs face detection on it. The specific face detection algorithm is not limited: for example, local feature analysis, principal component analysis (PCA), or a neural network may be used to detect the faces in the image frame.
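The per-frame detection loop described above can be sketched as follows. The detector itself (local feature analysis, PCA, a neural network, etc.) is abstracted behind a `detect_faces()` function; the stub below is a hypothetical stand-in for illustration, not the patent's actual detector.

```python
def detect_faces(frame):
    """Hypothetical detector: returns a list of (x, y, w, h) face boxes."""
    # A real implementation would run e.g. PCA matching or a CNN here.
    return frame.get("faces", [])

def detect_in_video(frames, every_nth=1):
    """Read frames (all of them, or every n-th) and collect detected faces."""
    results = {}
    for i, frame in enumerate(frames):
        if i % every_nth != 0:
            continue  # optionally read only part of the frames
        results[frame["id"]] = detect_faces(frame)
    return results

video = [{"id": "Pic01", "faces": [(40, 60, 32, 32)]},
         {"id": "Pic02", "faces": []}]
print(detect_in_video(video))
```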
Step 204: obtain the position coordinates and deflection angle of the face image in the image frame, and generate a light-effect model corresponding to the face image according to the position coordinates and deflection angle, where the light-effect model is a model for simulating light variation.
The face image is the region of the image frame where a face is located. After the face image in the image frame is detected, its position coordinates and deflection angle in the frame can be obtained. Specifically, an image frame is a two-dimensional matrix of pixels, from which a two-dimensional coordinate system can be established, with unit coordinates defined by the pixel grid. For example, the coordinate system may take the bottom-left pixel of the frame as the origin, with the abscissa increasing by 1 for every pixel moved to the right and the ordinate increasing by 1 for every pixel moved up.
The position coordinates are the coordinates of the face image in the two-dimensional coordinate system established from the image frame. Specifically, they may be the coordinates of any pixel of the face image in that system: for example, the coordinates of the center pixel of the face image, or of its bottom-left pixel.
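A minimal sketch of the position-coordinate convention just described, assuming the face is reported as a bounding box in ordinary row/column terms: the origin is the bottom-left pixel, moving right adds 1 to x, and moving up adds 1 to y. The center pixel is used here (one of the options mentioned; the bottom-left corner of the face region would work equally well).

```python
def face_position(box, frame_height):
    """box = (left, top, width, height), with the top image row at index 0."""
    left, top, w, h = box
    cx = left + w // 2                        # column index = x coordinate
    cy = (frame_height - 1) - (top + h // 2)  # flip rows: origin at bottom-left
    return (cx, cy)

print(face_position((10, 20, 40, 40), frame_height=480))  # -> (30, 439)
```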
Specifically, a three-dimensional spatial coordinate system can also be established from the scene presented in the image frame; the deflection angle of the face image can then be expressed as the deflection of the corresponding face relative to each coordinate axis in that system, without limitation.
In one embodiment, the light-effect model is a model for simulating light variation, and light-effect processing can be applied to image frames through it. For example, the model can simulate the light variation of sunlight, a tungsten lamp, a fluorescent lamp, and so on. Specifically, different light-effect models can be generated according to the position coordinates and deflection angle of the face image in the image frame, and light-effect processing is then performed on the frame with the generated model.
In one embodiment, generating the light-effect model may include the following steps: obtain a preset reference light-effect model; adjust the light-source center parameter of the reference model according to the position coordinates, and adjust its illumination-angle parameter according to the deflection angle, thereby generating the light-effect model corresponding to the face image. A general reference light-effect model can be established in advance and stored in the electronic device. The light-source center parameter indicates the position of the light source of the light simulated by the reference model, and the illumination-angle parameter indicates the irradiation angle of that light. After these two parameters of the reference model are adjusted, the generated light-effect model can simulate light from different positions and angles.
For example, the position of the light-source center point of the simulated light can be adjusted according to the position coordinates and deflection angle of the face image: when the position coordinates and deflection angle differ, the position of the light-source center point differs as well, without limitation.
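The reference-model adjustment described above can be sketched as follows: a stored reference model whose light-source center parameter follows the face position and whose illumination-angle parameter follows the face deflection angle. The field names and the tuple representation are illustrative assumptions, not the patent's actual data structure.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class LightEffectModel:
    source_center: tuple  # (x, y) light-source center point
    angle: tuple          # (ax, ay, az) illumination angles in degrees

# Preset, general-purpose reference model stored on the device.
REFERENCE = LightEffectModel(source_center=(0, 0), angle=(0.0, 0.0, 0.0))

def model_for_face(position, deflection):
    """Adjust the reference model's parameters for one detected face."""
    return replace(REFERENCE, source_center=position, angle=deflection)

m = model_for_face(position=(30, 439), deflection=(0.0, 25.0, 0.0))
print(m.source_center, m.angle)
```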
Step 206: perform light-effect processing on the image frame according to the light-effect model.
Light-effect processing adds a lighting effect to an image. Specifically, the light-effect model can simulate how the direction, intensity, and color of light change, and through it light of different directions, intensities, and colors can be added to the image frame. For example, the model can simulate the light produced by an incandescent lamp, whose color is bluish, or by a tungsten lamp, whose color is yellowish.
Specifically, the light-effect model may apply light-effect processing to only part of the image frame or to the whole frame, without limitation. For example, only the region of the frame where a portrait is located may be processed, or the background region may be processed as well. After the image frames of the video have been processed, the resulting light effect changes with the position coordinates and deflection angle of the face, so that the processed video presents an effect of dynamically changing light.
The video processing method provided by the above embodiment can read the image frames in a video to be processed and detect the face images in the frames that are read. A light-effect model is then generated according to the position coordinates and deflection angle of the face image, and light-effect processing is performed on the image frames according to the generated model. After processing, the resulting light effect changes with the position coordinates and deflection angle of the face, so the video is processed more accurately.
Fig. 3 is a flowchart of the video processing method in another embodiment. As shown in Fig. 3, the method includes steps 302 to 310.
Step 302: obtain an image frame in a video to be processed, and detect a face image in the image frame.
The image frames in the video have a temporal order: they are arranged in the order in which they were generated. Each frame can be uniquely labeled by a timing mark, which also records its place in that order. After detecting the face images, the electronic device can assign each detected face image a face mark that labels it uniquely.
For example, the mark can combine the frame's timing mark with a face image number. If the timing mark of the frame is Pic01 and the face image number is Face02, the resulting face mark is "Pic01+Face02". The timing mark uniquely identifies the image frame, and the face image number distinguishes the face images within the same frame, so the combined mark uniquely labels every face image in every image frame.
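The unique face mark described above ("frame timing mark + face image number") amounts to a simple string join; the exact format below is illustrative.

```python
def face_label(frame_id, face_index):
    """Combine a frame's timing mark with a per-frame face number."""
    return f"{frame_id}+Face{face_index:02d}"

print(face_label("Pic01", 2))  # -> "Pic01+Face02"
```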
Step 304: classify all detected face images to obtain at least one face class, where the face images in each class correspond to the same face.
In one embodiment, after face detection is performed on the obtained image frames, the detected face images can be classified into at least one face class, where the images in the same class correspond to the same face.
Specifically, the face images detected in the obtained frames form a face image set. The electronic device traverses this set, and each face image it reads is matched against the face images in the classes established so far. If the image matches a face image in an established class, it is assigned to that class; if it matches none, a new face class is created for it.
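The greedy grouping just described can be sketched as follows: traverse the detected faces, compare each to the classes built so far, and either join a matching class or start a new one. The `same_person()` similarity test is a hypothetical stand-in for a real face-matching model.

```python
def same_person(a, b):
    """Hypothetical matcher: here, faces 'match' if they share a person tag."""
    return a["person"] == b["person"]

def group_faces(faces):
    classes = []  # each class is a list of face records for one person
    for face in faces:
        for cls in classes:
            if same_person(face, cls[0]):
                cls.append(face)  # matches an established class: join it
                break
        else:
            classes.append([face])  # no match: establish a new face class
    return classes

faces = [{"id": "f1", "person": "A"}, {"id": "f2", "person": "B"},
         {"id": "f3", "person": "A"}]
print([len(c) for c in group_faces(faces)])  # -> [2, 1]
```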
Step 306: count the number of face images in each face class as its target number, and take the face images in the classes whose target number exceeds a number threshold as target face images.
After the face images have been classified, the electronic device counts the number of images in each class as the class's target number. The larger the target number, the more face images the class contains, and hence the more often the corresponding face appears. Specifically, the face images in the classes whose target number exceeds the number threshold can be taken as target face images, and the target face images are then processed.
Further, the total count of classes whose target number exceeds the number threshold can be computed. When that total exceeds a total threshold, the average face area of the images in each such class is computed, the classes whose average face area exceeds an area threshold are taken as target face classes, and finally the face images in the target face classes are taken as the target face images.
Here, the average face area is the mean of the areas of all face images in a class. The area may be expressed as the number of pixels in the face image or in other ways, without limitation. For example, if a class contains 3 face images, the area of each is obtained, the three areas are summed, and the sum is divided by 3 to give the average face area.
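The two-stage target selection above can be sketched as follows: classes whose face count exceeds the number threshold are candidates; if more candidates remain than the total threshold allows, fall back to keeping only those whose average face area (pixel count here) exceeds the area threshold. All threshold values are illustrative.

```python
def target_faces(classes, count_thresh, total_thresh, area_thresh):
    """Select target face images from grouped face classes."""
    frequent = [c for c in classes if len(c) > count_thresh]
    if len(frequent) <= total_thresh:
        return [f for c in frequent for f in c]
    # Too many frequent classes: keep those with a large average face area.
    kept = [c for c in frequent
            if sum(f["area"] for f in c) / len(c) > area_thresh]
    return [f for c in kept for f in c]

classes = [[{"area": 900}, {"area": 1100}],  # avg area 1000, count 2
           [{"area": 100}, {"area": 140}],   # avg area 120,  count 2
           [{"area": 5000}]]                 # count 1: below count threshold
print(len(target_faces(classes, count_thresh=1, total_thresh=1, area_thresh=500)))
```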
Step 308: obtain the position coordinates and deflection angle of the target face image in its image frame, and generate the light-effect model corresponding to the target face image according to the position coordinates and deflection angle.
When the electronic device detects the face images in an image frame, it can simultaneously obtain their position coordinates and deflection angles and associate them with the corresponding face marks. Once the target face images are determined, their position coordinates and deflection angles in the corresponding image frames can be looked up through this association.
Fig. 4 is a schematic diagram of the position coordinates of a face image in one embodiment. As shown in Fig. 4, a two-dimensional coordinate system oxy can be established from the image frame 40, which contains a face image 402. The position of face image 402 in frame 40 can be expressed by the coordinates of its pixel 404 in the coordinate system oxy.
Fig. 5 is a schematic diagram of the deflection angle of a face image in one embodiment. As shown in Fig. 5, a three-dimensional coordinate system o'x'y'z' can be established from the scene in the image frame 50: the bottom-left pixel of frame 50 is the origin o', the directions extending from o' along the two edges of the frame are the positive x' and y' axes, and the direction perpendicular to the frame is the positive z' axis. In this coordinate system, the deflection angles of face image 502 relative to the x', y', and z' axes are 0°, 0°, and 0°, while those of face image 504 are 0°, α, and 0°, respectively.
It can be understood that there may be one or more target face images. After the position coordinates and deflection angle of each target face image are obtained, the light-effect model corresponding to each face image can be generated from them, and light-effect processing is then performed with the generated models.
Step 310: perform light-effect processing on the image frame according to the light-effect model.
In the embodiments provided by this application, a light-effect parameter can be obtained from the light-effect model for each pixel in the image frame, and light-effect processing is then applied to the pixels according to these parameters. Specifically, the image frame is a two-dimensional matrix of pixels, each with a pixel value. After the light-effect model is obtained, it can therefore be used to compute the light-effect parameter of each pixel in the frame, and each pixel is then processed according to its parameter, for example by superposition or by product, without limitation. It can be understood that pixel values generally range over [0, 255], so the pixel values of the processed frame cannot exceed 255.
For example, if the image frame is H0(x, y) and the light-effect model is P(x, y), the frame after superposition processing is H(x, y) = (1 + P(x, y))·H0(x, y), and the frame after product processing is H(x, y) = P(x, y)·H0(x, y). It can be understood that light-effect processing may also be realized in other ways, without limitation.
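The two composition modes given above can be sketched on a tiny grayscale "frame": superposition H = (1 + P)·H0 and product H = P·H0, with results clamped to the [0, 255] pixel range as the text notes. The nested-list representation is illustrative.

```python
def apply_light_effect(frame, params, mode="superpose"):
    """Apply per-pixel light-effect parameters P to frame H0."""
    out = []
    for row_f, row_p in zip(frame, params):
        out_row = []
        for h0, p in zip(row_f, row_p):
            h = (1 + p) * h0 if mode == "superpose" else p * h0
            out_row.append(min(255, max(0, round(h))))  # clamp to [0, 255]
        out.append(out_row)
    return out

frame = [[100, 200], [250, 50]]
gain = [[0.5, 0.5], [0.5, 0.0]]
print(apply_light_effect(frame, gain))             # -> [[150, 255], [255, 50]]
print(apply_light_effect(frame, gain, "product"))  # -> [[50, 100], [125, 0]]
```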
In one embodiment, when performing light-effect processing on an image frame, each color channel of the frame can also be processed differently. Specifically, each pixel in the frame may correspond to one or more color channel values; a light-effect parameter for each channel value of each pixel can then be computed from the obtained light-effect model, and light-effect processing is applied to each channel value separately according to its parameter.
For example, if the image frame has four color channels, the obtained light-effect model may include four light-effect submodels, each corresponding to one channel. The light-effect parameter of each corresponding channel is then computed from its submodel, and the channel values are processed according to the computed parameters.
It can be understood that applying different light-effect processing to each channel value changes the resulting enhancement effect. For example, if, among the light-effect parameters for the RGB channels, the R-channel parameter is larger than the G-channel and B-channel parameters, then the light-effect-enhanced image obtained after processing is shifted toward red relative to the original frame.
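A minimal sketch of the per-channel processing just described: one superposition gain per color channel. With a larger R gain than G/B gains, the processed pixel shifts toward red, matching the example above. The values are illustrative.

```python
def apply_per_channel(pixel, gains):
    """pixel = (r, g, b); gains = per-channel superposition parameters."""
    return tuple(min(255, round((1 + p) * v)) for v, p in zip(pixel, gains))

# A neutral gray pixel becomes reddish when the R gain dominates.
print(apply_per_channel((100, 100, 100), gains=(0.6, 0.2, 0.2)))  # -> (160, 120, 120)
```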
The video processing method provided by the above embodiment can read the image frames in a video to be processed and detect the face images in the frames that are read. The detected face images are then classified, the target face images are determined from the classification results, a light-effect model is generated according to the position coordinates and deflection angle of each target face image, and the frames are processed with the generated models. After processing, the resulting light effect changes with the position coordinates and deflection angle of the face, so the video is processed more accurately. Because the light-effect model is determined from the faces that appear most often in the video, the light-effect processing is more targeted, which further improves the accuracy of video processing and reduces the power consumed by the electronic device while processing the video.
In one embodiment, as shown in Fig. 6, the step of determining the target number of a face class in the above video processing method may further include:
Step 602: divide the face images in each face class into at least one face image sequence, where the time interval between the generation moments of two adjacent face images in the same sequence is less than or equal to a time threshold, and the interval between the generation moments of any two face images in different sequences is greater than the time threshold.
In the embodiments provided by this application, the face images in a classified face class can be further divided into one or more subclasses, and the target number of the class is then counted from the divided subclasses. When the video to be processed is generated, the interval between the generation moments of two consecutive frames is usually fixed, so whether two frames are consecutive can be judged from the interval between their generation moments.
For example, suppose the electronic device captures one image frame per second to generate the video, and the video contains three frames "Pic01" → "Pic02" → "Pic03" whose generation moments, in "minute:second" form, are "12:00" → "12:01" → "12:02". The interval between the generation moments of "Pic01" and "Pic02" is 1 second, and that between "Pic01" and "Pic03" is 2 seconds; thus "Pic01" and "Pic02" are consecutive frames, while "Pic01" and "Pic03" are not.
Specifically, each facial image corresponds to an image frame, and the image frames in the video to be processed are ordered in time, so the facial images are ordered in time as well. The generation moment of an image frame is the generation moment of the facial images it contains, so the facial images in a face classification can be divided into one or more facial image sequences according to the generation moments of their image frames. Within the same facial image sequence, the time interval between the generation moments of two adjacent facial images is less than or equal to the time threshold; between any two facial images in different facial image sequences, it is greater than the time threshold. The time threshold can be the time interval between the generation moments of two consecutive image frames in the video to be processed, or any other value, which is not limited here.
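The sequence-splitting rule above can be sketched as follows; this is a minimal illustration assuming the generation moments are expressed as timestamps in seconds, which is not fixed by the original text:

```python
def split_into_sequences(timestamps, time_threshold):
    """Split generation moments into facial image sequences.

    Adjacent moments at most time_threshold apart stay in one sequence;
    a larger gap starts a new sequence.
    """
    sequences, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > time_threshold:
            sequences.append(current)
            current = []
        current.append(t)
    if current:
        sequences.append(current)
    return sequences
```

For instance, with a 0.5-second threshold, moments at 0.0, 0.5 and 1.0 seconds form one sequence while moments at 5.0 and 5.5 seconds form another.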
Fig. 7 is a schematic diagram of facial image sequences in one embodiment. As shown in Fig. 7, the generation moments of the image frames corresponding to the facial images contained in each face classification can be marked on a time axis 70. The time interval between the generation moments of two consecutive image frames in the video to be processed is 0.5 seconds, and the generation moment of each facial image's image frame is marked on the time axis 70 in the form "second:millisecond". The facial images can then be divided into two facial image sequences, namely facial image sequence 702 and facial image sequence 704.
Step 604: take the facial image sequence containing the most facial images in each face classification as the target facial image sequence.
Specifically, after the facial images contained in each face classification have been divided into at least one facial image sequence, the number of facial images in each facial image sequence can be counted, and the facial image sequence containing the most facial images in each face classification is taken as the target facial image sequence.
As shown in Fig. 7, the facial images in one face classification can be divided into facial image sequence 702, which contains 5 facial images, and facial image sequence 704, which contains 7 facial images; facial image sequence 704 is then taken as the target facial image sequence of this face classification.
Step 606: take the number of facial images contained in the target facial image sequence as the target number of the face classification corresponding to the target facial image sequence.
After the target facial image sequence has been determined, the number of facial images it contains can be counted and taken as the target number of the corresponding face classification. The target facial image is then obtained according to the target number.
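Steps 604 and 606 amount to taking the largest sequence in each face classification and using its size as the classification's target number; a brief sketch (the list-of-sequences representation is an assumption for illustration):

```python
def target_number(sequences):
    """Return the size of the largest facial image sequence; this size is
    the target number of the corresponding face classification."""
    target_sequence = max(sequences, key=len)  # step 604: pick the largest
    return len(target_sequence)                # step 606: count its images
```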
In one embodiment, as shown in Fig. 8, after the light efficiency model is generated, it can also be adjusted according to the facial image to generate a target light efficiency model, and light efficiency processing is then performed according to the target light efficiency model. Specifically:
Step 802: obtain the depth information corresponding to the facial image.
Specifically, when the image frame in the video to be processed is obtained, the depth image corresponding to the image frame can also be obtained; after the facial image in the image frame is detected, the depth information corresponding to the facial image can be obtained from the depth image. In one embodiment, the depth image can be obtained by methods such as binocular ranging, structured light, or time of flight (ToF), without being limited thereto. The pixels in the obtained depth image correspond to the pixels in the image frame; the pixel values in the image frame represent information such as the texture and color of objects, while the pixel values in the depth image represent the depth between the objects and the image acquisition device. Therefore, once the face is detected in the image frame, the depth information of the facial image can be obtained from the corresponding region of the depth image.
Step 804: adjust the light efficiency intensity parameter of the light efficiency model according to the depth information, generating the target light efficiency model.
After the depth information of the facial image is obtained, the light efficiency intensity parameter of the facial image's light efficiency model can be adjusted according to it, generating the target light efficiency model. The light efficiency intensity parameter is the strength coefficient of the light efficiency processing applied to the image: the larger the parameter, the stronger the light efficiency processing.
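The text does not fix a concrete mapping from depth to intensity. As one hypothetical sketch, an inverse-square falloff (the farther the face, the weaker the light) could be used; both the mapping and the reference depth are illustrative assumptions:

```python
def adjust_intensity(base_intensity, depth, reference_depth=1.0):
    """Hypothetical depth adjustment of the light efficiency intensity
    parameter: intensity falls off with the square of the subject's
    distance, mimicking a point light source."""
    depth = max(depth, 1e-6)  # guard against zero or negative depth
    return base_intensity * (reference_depth / depth) ** 2
```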
In one embodiment, the step of generating the target light efficiency model may include: dividing the facial image into different feature regions; and adjusting, according to the depth information corresponding to each feature region, the light efficiency intensity parameter corresponding to that region in the light efficiency model, generating the target light efficiency model.
Specifically, feature points in the facial image can be detected, and the facial image divided into different feature regions according to the detected feature points. A feature point is a point where the pixel values in the facial image change sharply. The pixel values around the edges of the facial features (eyes, nose, mouth, and so on) usually vary strongly, so the facial features can be located from the detected feature points. After the feature points are detected, the facial image can be divided into different feature regions accordingly.
In general, different feature regions of a facial image are affected by light differently. For example, the nose is relatively prominent, so under light a highlight usually appears on the nose wing while a shadow appears beside the nose; the eyeballs may also produce highlights. Therefore, after the facial image is divided into different feature regions, the final target light efficiency model can be generated from the divided regions. Specifically, after the facial image has been divided into feature regions, the light efficiency intensity parameter of the light efficiency model can be adjusted according to parameters such as the position and size of each region, to obtain the target light efficiency model.
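The per-region adjustment described above might be sketched as follows; the region names, the per-region mean depth, and the inverse-square mapping are illustrative assumptions rather than part of the disclosure:

```python
def build_target_model(light_model, region_depths):
    """Adjust the light efficiency intensity parameter of each feature
    region according to that region's depth, yielding the target model.

    light_model: region name -> base intensity parameter.
    region_depths: region name -> mean depth of the region (assumed input).
    """
    target_model = {}
    for region, base_intensity in light_model.items():
        depth = max(region_depths.get(region, 1.0), 1e-6)
        target_model[region] = base_intensity / depth ** 2  # assumed falloff
    return target_model
```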
Step 806: perform light efficiency processing on the image frame according to the target light efficiency model.
It should be understood that although the steps in the flowcharts of Figs. 2, 3, 6 and 8 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2, 3, 6 and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential: they may be executed in turn or alternately with at least some of the sub-steps or stages of other steps.
In the embodiments provided by the present application, the video processing method may specifically include the following steps:
(1) obtain the image frames in the video to be processed, and detect the facial images in the image frames;
(2) classify all the detected facial images to obtain at least one face classification, wherein the facial images contained in each face classification correspond to the same face;
(3) divide the facial images contained in each face classification into at least one facial image sequence, wherein within the same facial image sequence the time interval between the generation moments of two adjacent facial images is less than or equal to a time threshold, while the time interval between the generation moments of any two facial images in different facial image sequences is greater than the time threshold;
(4) take the facial image sequence containing the most facial images in each face classification as the target facial image sequence;
(5) take the number of facial images contained in the target facial image sequence as the target number of the face classification corresponding to the target facial image sequence;
(6) take the facial images contained in a face classification whose target number is greater than a number threshold as target facial images;
(7) obtain the position coordinates and deflection angle of the target facial image in the corresponding image frame;
(8) obtain a preset reference light efficiency model;
(9) adjust the light source center parameter of the reference light efficiency model according to the position coordinates, and adjust the light efficiency angle parameter of the reference light efficiency model according to the deflection angle, generating the light efficiency model corresponding to the target facial image;
(10) obtain the depth information corresponding to the target facial image;
(11) divide the target facial image into different feature regions, and adjust the light efficiency intensity parameter corresponding to each feature region in the light efficiency model according to the depth information corresponding to that region, generating the target light efficiency model;
(12) obtain the light efficiency parameters of the pixels contained in the image frame according to the target light efficiency model, and perform light efficiency processing on those pixels according to the light efficiency parameters.
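Steps (1) through (6) above can be sketched end to end as follows. Here `detect` and `classify` stand in for an unspecified face detector and face matcher, so this is an illustrative flow under those assumptions rather than the claimed implementation:

```python
def select_target_classifications(frames, detect, classify,
                                  time_threshold, number_threshold):
    """Steps (1)-(6): return the face classifications whose target number
    (the size of their largest facial image sequence) exceeds the threshold.

    frames: iterable of (generation_moment, frame) pairs.
    detect(frame) -> list of facial images; classify(face) -> identity label.
    """
    # (1)-(2) detect facial images and group them by identity
    classes = {}
    for moment, frame in frames:
        for face in detect(frame):
            classes.setdefault(classify(face), []).append(moment)
    targets = []
    for label, moments in classes.items():
        # (3) split the classification into sequences by the time threshold
        moments.sort()
        sequences, current = [], []
        for t in moments:
            if current and t - current[-1] > time_threshold:
                sequences.append(current)
                current = []
            current.append(t)
        sequences.append(current)
        # (4)-(5) the largest sequence's size is the classification's target number
        target_number = max(len(s) for s in sequences)
        # (6) keep classifications whose target number exceeds the threshold
        if target_number > number_threshold:
            targets.append(label)
    return targets
```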
The video processing method provided by the above embodiments can read the image frames in the video to be processed and detect the facial images in the read image frames. A light efficiency model is then generated according to the position coordinates and deflection angle of the facial image, and light efficiency processing is performed on the image frames according to the obtained light efficiency model. After the image frames in the video to be processed have undergone light efficiency processing, the generated light efficiency changes with the position coordinates and deflection angle of the face, so the video is processed more accurately.
Fig. 9 is a structural block diagram of the video processing apparatus of one embodiment. As shown in Fig. 9, the video processing apparatus 900 includes a face detection module 902, a model obtaining module 904 and a light efficiency processing module 906, wherein:
the face detection module 902 is configured to obtain the image frames in the video to be processed and detect the facial images in the image frames;
the model obtaining module 904 is configured to obtain the position coordinates and deflection angle of the facial image in the image frame, and to generate the light efficiency model corresponding to the facial image according to the position coordinates and deflection angle, wherein the light efficiency model is a model for simulating light variation;
the light efficiency processing module 906 is configured to perform light efficiency processing on the image frame according to the light efficiency model.
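The three modules of Fig. 9 cooperate roughly as sketched below; the callback signatures are assumptions made for illustration only:

```python
class VideoProcessingApparatus:
    """Sketch of apparatus 900: face detection, model obtaining, and light
    efficiency processing wired together per Fig. 9."""

    def __init__(self, face_detector, model_builder, light_renderer):
        self.face_detection_module = face_detector      # module 902
        self.model_obtaining_module = model_builder     # module 904
        self.light_efficiency_module = light_renderer   # module 906

    def process(self, image_frame):
        faces = self.face_detection_module(image_frame)
        light_model = self.model_obtaining_module(image_frame, faces)
        return self.light_efficiency_module(image_frame, light_model)
```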
The video processing apparatus provided by the above embodiment can read the image frames in the video to be processed and detect the facial images in the read image frames. A light efficiency model is then generated according to the position coordinates and deflection angle of the facial image, and light efficiency processing is performed on the image frames according to the obtained light efficiency model. After the image frames in the video to be processed have undergone light efficiency processing, the generated light efficiency changes with the position coordinates and deflection angle of the face, so the video is processed more accurately.
In one embodiment, the model obtaining module 904 is further configured to classify all the detected facial images to obtain at least one face classification, wherein the facial images contained in each face classification correspond to the same face; count the number of facial images contained in each face classification as its target number; take the facial images contained in a face classification whose target number is greater than a number threshold as target facial images; and obtain the position coordinates and deflection angle of the target facial image in the corresponding image frame, generating the light efficiency model corresponding to the target facial image according to the position coordinates and deflection angle.
In one embodiment, the model obtaining module 904 is further configured to divide the facial images contained in each face classification into at least one facial image sequence, wherein within the same facial image sequence the time interval between the generation moments of two adjacent facial images is less than or equal to a time threshold, while the time interval between the generation moments of any two facial images in different facial image sequences is greater than the time threshold; take the facial image sequence containing the most facial images in each face classification as the target facial image sequence; and take the number of facial images contained in the target facial image sequence as the target number of the face classification corresponding to the target facial image sequence.
In one embodiment, the model obtaining module 904 is further configured to obtain a preset reference light efficiency model, adjust the light source center parameter of the reference light efficiency model according to the position coordinates, and adjust the light efficiency angle parameter of the reference light efficiency model according to the deflection angle, generating the light efficiency model corresponding to the facial image.
In one embodiment, the model obtaining module 904 is further configured to obtain the depth information corresponding to the facial image and adjust the light efficiency intensity parameter of the light efficiency model according to the depth information, generating a target light efficiency model.
In one embodiment, the model obtaining module 904 is further configured to divide the facial image into different feature regions and adjust, according to the depth information corresponding to each feature region, the light efficiency intensity parameter corresponding to that region in the light efficiency model, generating the target light efficiency model.
In one embodiment, the light efficiency processing module 906 is further configured to obtain the light efficiency parameters of the pixels contained in the image frame according to the light efficiency model, and to perform light efficiency processing on those pixels according to the light efficiency parameters.
In one embodiment, the light efficiency processing module 906 is further configured to perform light efficiency processing on the image frame according to the target light efficiency model.
The division of the modules in the above video processing apparatus is only used as an example; in other embodiments, the video processing apparatus may be divided into different modules as required to complete all or part of its functions.
For the specific limitations of the video processing apparatus, reference may be made to the limitations of the video processing method above, which are not repeated here. The modules of the above video processing apparatus may be implemented fully or partially through software, hardware, or a combination thereof. The modules may be embedded, in hardware form, in or independent of a processor in a computer device, or stored, in software form, in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to each module.
The modules in the video processing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored on the memory of the terminal or server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are realized.
The embodiments of the present application also provide an electronic device. The electronic device includes an image processing circuit, which may be realized with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 10, for ease of illustration, only the aspects of the image processing technique relevant to the embodiments of the present application are shown.
As shown in Fig. 10, the image processing circuit includes an ISP processor 1040 and a control logic device 1050. Image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 1010. The imaging device 1010 may include a camera with one or more lenses 1012 and an image sensor 1014. The image sensor 1014 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 1040. The sensor 1020 (such as a gyroscope) can supply image processing parameters (such as stabilization parameters) to the ISP processor 1040 based on the sensor 1020 interface type. The sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above interfaces.
In addition, the image sensor 1014 may also send the raw image data to the sensor 1020, which can supply it to the ISP processor 1040 for processing based on the sensor 1020 interface type, or store it in the image memory 1030.
The ISP processor 1040 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 1040 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be carried out at the same or different bit-depth precision.
The ISP processor 1040 can also receive pixel data from the image memory 1030. For example, the sensor 1020 interface sends raw image data to the image memory 1030, and the raw image data in the image memory 1030 is then supplied to the ISP processor 1040 for processing. The image memory 1030 may be a part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 1014 interface, from the sensor 1020 interface, or from the image memory 1030, the ISP processor 1040 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1030 for further processing before being displayed. The ISP processor 1040 receives the data from the image memory 1030 and processes it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1080 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 can also be sent to the image memory 1030, from which the display 1080 can read image data. In one embodiment, the image memory 1030 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1040 can be sent to the encoder/decoder 1070 to encode/decode the image data. The encoded image data can be saved and decompressed before being shown on the display 1080.
The image data processed by the ISP can be sent to the light efficiency module 1060 to perform light efficiency processing on the image before it is displayed. The light efficiency processing performed by the light efficiency module 1060 may include obtaining the light efficiency parameter of each pixel in the image frame and processing the image frame according to the light efficiency parameters. After the light efficiency module 1060 has processed the image data, the processed data can be sent to the encoder/decoder 1070 to encode/decode it; the encoded image data can be saved and decompressed before being shown on the display 1080. It should be understood that the image data processed by the light efficiency module 1060 can also be sent directly to the display 1080 without passing through the encoder/decoder 1070. The image data processed by the ISP processor 1040 may also first pass through the encoder/decoder 1070 and then be processed by the light efficiency module 1060. The light efficiency module 1060 or the encoder/decoder 1070 may be the CPU (Central Processing Unit) or GPU (Graphics Processing Unit) of a mobile terminal, and so on.
The statistical data determined by the ISP processor 1040 can be sent to the control logic device 1050. For example, the statistical data may include image sensor 1014 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 1012 shadow correction. The control logic device 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware), which can determine, based on the received statistical data, the control parameters of the imaging device 1010 and of the ISP processor 1040. For example, the control parameters of the imaging device 1010 may include sensor 1020 control parameters (such as gain, integration time of exposure control, and stabilization parameters), camera flash control parameters, lens 1012 control parameters (such as focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 1012 shadow correction parameters.
The video processing method provided by the above embodiments can be realized with the image processing technique of Fig. 10.
The embodiments of the present application also provide a computer readable storage medium: one or more non-volatile computer readable storage media containing computer executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the video processing method provided by the above embodiments.
Also provided is a computer program product containing instructions which, when run on a computer, cause the computer to execute the video processing method provided by the above embodiments.
Any reference to memory, storage, a database, or another medium used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they cannot therefore be interpreted as limiting the patent scope of the present application. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of the present application patent shall be subject to the appended claims.
Claims (10)
1. A video processing method, characterized by comprising:
obtaining an image frame in a video to be processed, and detecting a facial image in the image frame;
obtaining position coordinates and a deflection angle of the facial image in the image frame, and generating a light efficiency model corresponding to the facial image according to the position coordinates and deflection angle, wherein the light efficiency model is a model for simulating light variation; and
performing light efficiency processing on the image frame according to the light efficiency model.
2. The method according to claim 1, characterized in that obtaining the position coordinates and deflection angle of the facial image in the image frame and generating the light efficiency model corresponding to the facial image according to the position coordinates and deflection angle comprises:
classifying all detected facial images to obtain at least one face classification, wherein the facial images contained in each face classification correspond to the same face;
counting the number of facial images contained in each face classification as a target number, and taking the facial images contained in a face classification whose target number is greater than a number threshold as target facial images; and
obtaining the position coordinates and deflection angle of the target facial image in the corresponding image frame, and generating the light efficiency model corresponding to the target facial image according to the position coordinates and deflection angle.
3. The method according to claim 2, characterized in that counting the number of facial images contained in each face classification as the target number comprises:
dividing the facial images contained in each face classification into at least one facial image sequence, wherein within the same facial image sequence the time interval between the generation moments of two adjacent facial images is less than or equal to a time threshold, while the time interval between the generation moments of any two facial images in different facial image sequences is greater than the time threshold;
taking the facial image sequence containing the most facial images in each face classification as a target facial image sequence; and
taking the number of facial images contained in the target facial image sequence as the target number of the face classification corresponding to the target facial image sequence.
4. The method according to claim 1, characterized in that generating the light efficiency model corresponding to the facial image according to the position coordinates and deflection angle comprises:
obtaining a preset reference light efficiency model; and
adjusting a light source center parameter of the reference light efficiency model according to the position coordinates, and adjusting a light efficiency angle parameter of the reference light efficiency model according to the deflection angle, generating the light efficiency model corresponding to the facial image.
5. The method according to any one of claims 1 to 4, characterized in that performing light efficiency processing on the image frame according to the light efficiency model comprises:
obtaining light efficiency parameters of pixels contained in the image frame according to the light efficiency model, and performing light efficiency processing on the pixels contained in the image frame according to the light efficiency parameters.
6. The method according to any one of claims 1 to 4, characterized in that, before performing light efficiency processing on the image frame according to the light efficiency model, the method further comprises:
obtaining depth information corresponding to the facial image; and
adjusting a light efficiency intensity parameter of the light efficiency model according to the depth information, generating a target light efficiency model;
and in that performing light efficiency processing on the image frame according to the light efficiency model comprises:
performing light efficiency processing on the image frame according to the target light efficiency model.
7. The method according to claim 6, characterized in that adjusting the light efficiency intensity parameter of the light efficiency model according to the depth information and generating the target light efficiency model comprises:
dividing the facial image into different feature regions; and
adjusting, according to the depth information corresponding to each feature region, the light efficiency intensity parameter corresponding to that feature region in the light efficiency model, generating the target light efficiency model.
8. A video processing apparatus, characterized by comprising:
a face detection module, configured to obtain an image frame in a video to be processed and detect a facial image in the image frame;
a model obtaining module, configured to obtain position coordinates and a deflection angle of the facial image in the image frame, and to generate a light efficiency model corresponding to the facial image according to the position coordinates and the deflection angle, wherein the light efficiency model is a model for simulating light variation;
a light efficiency processing module, configured to perform light efficiency processing on the image frame according to the light efficiency model.
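The three-module apparatus of claim 8 can be sketched as a pipeline that chains the modules per frame. The `Face` container and all callable names are assumptions for illustration; the patent only fixes the division of responsibilities, not any interface.

```python
from collections import namedtuple

# Hypothetical detection result: position coordinates and deflection angle
# of the facial image in the frame (field names are assumptions).
Face = namedtuple("Face", ["position", "deflection_angle"])

class VideoLightEffectPipeline:
    """Chains the three claimed modules: face detection, model
    obtaining, and light efficiency processing."""

    def __init__(self, detect_face, build_model, apply_effect):
        self.detect_face = detect_face    # face detection module
        self.build_model = build_model    # model obtaining module
        self.apply_effect = apply_effect  # light efficiency processing module

    def process_frame(self, frame):
        face = self.detect_face(frame)
        if face is None:
            return frame  # no facial image detected: leave frame unchanged
        model = self.build_model(face.position, face.deflection_angle)
        return self.apply_effect(frame, model)
```

Running every frame of the video to be processed through `process_frame` realizes the claimed method with this module split.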
9. An electronic device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811197920.1A CN109167935A (en) | 2018-10-15 | 2018-10-15 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109167935A true CN109167935A (en) | 2019-01-08 |
Family
ID=64877957
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811197920.1A Pending CN109167935A (en) | 2018-10-15 | 2018-10-15 | Method for processing video frequency and device, electronic equipment, computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109167935A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111932442A (en) * | 2020-07-15 | 2020-11-13 | 厦门真景科技有限公司 | Video beautifying method, device and equipment based on face recognition technology and computer readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170195581A1 (en) * | 2016-01-06 | 2017-07-06 | Canon Kabushiki Kaisha | Image processing apparatus, control method for the same and image capturing apparatus |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108154171A (en) * | 2017-12-20 | 2018-06-12 | 北京奇艺世纪科技有限公司 | A kind of character recognition method, device and electronic equipment |
CN108229322A (en) * | 2017-11-30 | 2018-06-29 | 北京市商汤科技开发有限公司 | Face identification method, device, electronic equipment and storage medium based on video |
CN108537870A (en) * | 2018-04-16 | 2018-09-14 | 太平洋未来科技(深圳)有限公司 | Image processing method, device and electronic equipment |
CN108537155A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN108616700A (en) * | 2018-05-21 | 2018-10-02 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
2018-10-15: Application CN201811197920.1A filed in China (CN); legal status: Pending.
Non-Patent Citations (1)
Title |
---|
苹果汇 (Pingguohui): "《搜狐网》" (Sohu.com), 1 June 2018 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191403A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108734676A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN110149482A (en) | Focusing method, device, electronic equipment and computer readable storage medium | |
US20210192698A1 (en) | Image Processing Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium | |
CN107730445A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110334635A (en) | Main body method for tracing, device, electronic equipment and computer readable storage medium | |
CN108805103A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN107945135A (en) | Image processing method, device, storage medium and electronic equipment | |
CN107730444A (en) | Image processing method, device, readable storage medium storing program for executing and computer equipment | |
CN107862663A (en) | Image processing method, device, readable storage medium storing program for executing and computer equipment | |
CN107800965B (en) | Image processing method, device, computer readable storage medium and computer equipment | |
CN108717530B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN108055452A (en) | Image processing method, device and equipment | |
CN107742274A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN108616700B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN108846807A (en) | Light efficiency processing method, device, terminal and computer readable storage medium | |
CN107396079B (en) | White balance adjustment method and device | |
CN109242794B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN109685853A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN109360254A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN110191287A (en) | Focusing method and device, electronic equipment, computer readable storage medium | |
CN109712177A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN108111768A (en) | Control method, apparatus, electronic equipment and the computer readable storage medium of focusing | |
CN109963080A (en) | Image-pickup method, device, electronic equipment and computer storage medium | |
CN109325905B (en) | Image processing method, image processing device, computer readable storage medium and electronic apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2019-01-08 | PB01 | Publication | Application publication date: 20190108 |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |