CN107146196A - Image beautification method and terminal - Google Patents

Image beautification method and terminal

Info

Publication number
CN107146196A
CN107146196A
Authority
CN
China
Prior art keywords
image
face
key point
facial
current image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710167285.1A
Other languages
Chinese (zh)
Inventor
辛浩然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201710167285.1A
Publication of CN107146196A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image beautification method and terminal. The method includes: obtaining a current image; processing the current image to obtain facial key points; and performing beautification on the current image according to the facial key points to obtain a target image. In the embodiment of the invention, the current image is first obtained and processed to obtain facial key points, and beautification is then performed on the current image according to the facial key points to obtain the target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.

Description

Image beautification method and terminal
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image beautification method and terminal.
Background art
With the continued development of intelligent terminals, more and more of them, such as smartphones and tablet computers, can be used for taking photos. To help users take satisfactory photos, more and more intelligent terminal manufacturers build a beautification function into their devices. Existing beautification methods usually apply blurring and tone adjustment to the whole picture to achieve a visual whitening and skin-smoothing effect. However, such full-image blurring loses the key point information of the face in the image, so that the processed image is not sharp enough and the effect is poor.
Summary of the invention
The embodiment of the present invention provides an image beautification method and terminal, which can improve the beautification effect.
An embodiment of the present invention provides an image beautification method, including:
obtaining a current image;
processing the current image to obtain facial key points;
performing beautification on the current image according to the facial key points to obtain a target image.
An embodiment of the present invention further provides a terminal, including:
an acquiring unit, configured to obtain a current image;
a first processing unit, configured to process the current image to obtain facial key points;
a second processing unit, configured to perform beautification on the current image according to the facial key points to obtain a target image.
In the embodiment of the present invention, the current image is first obtained and processed to obtain facial key points, and beautification is then performed on the current image according to the facial key points to obtain a target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a flow diagram of an image beautification method provided by the first embodiment of the invention;
Fig. 2 is a flow diagram of an image beautification method provided by the second embodiment of the invention;
Fig. 3 is a sub-flow diagram of step S202 in Fig. 2;
Fig. 4 is a structural diagram of a strong classifier provided by an embodiment of the invention;
Fig. 5 is a diagram of feature templates provided by an embodiment of the invention;
Fig. 6 is a structural diagram of subwindows provided by an embodiment of the invention;
Fig. 7 is a structural diagram of other subwindows provided by an embodiment of the invention;
Fig. 8 is a diagram of the face image to be measured provided by an embodiment of the invention;
Fig. 9 is a diagram of the region image obtained after saliency processing;
Fig. 10 is a diagram of facial features;
Fig. 11 is a diagram describing shape-index features;
Fig. 12 is a classification diagram;
Fig. 13 is a structural diagram of a terminal provided by the first embodiment of the invention;
Fig. 14 is a structural diagram of a terminal provided by the second embodiment of the invention;
Fig. 15 is a structural diagram of a terminal provided by the third embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that when used in this specification and the appended claims, the terms "comprising" and "including" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terms used in this description of the invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, a mobile phone, a laptop computer or a tablet computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad), or other portable devices. It should be further understood that in certain embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).
In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital camera application, a digital video camera application, a web-browsing application, a digital music player application and/or a video player application.
The various applications executable on the terminal may use at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Referring to Fig. 1, which is a flow diagram of the image beautification method provided by the first embodiment of the invention, as shown in the figure, the method may include the following steps:
S101: obtain a current image.
A user may send an instruction to open the photographing application to the terminal by touch or by voice, and when the terminal receives the opening instruction sent by the user, it may open the photographing application to obtain an initial image. The initial image is stored in the terminal in the form of a cache and may be in YUV format or RGB format. If the initial image is in YUV format, the Y channel image may be extracted to serve as the current image.
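For illustration only, the following is a minimal NumPy sketch of extracting the Y channel, assuming a planar YUV420 frame buffer; the buffer layout and all names are assumptions, not taken from the patent:

```python
import numpy as np

def y_channel_from_yuv420(buf: np.ndarray, width: int, height: int) -> np.ndarray:
    """Extract the luma (Y) plane from a planar YUV420 frame buffer.

    In planar YUV420 the first width*height bytes are the Y plane, so
    the luma image can serve directly as the grayscale current image.
    """
    y_plane = buf[: width * height].reshape(height, width)
    return y_plane.copy()
```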
S102: process the current image to obtain facial key points.
The terminal may first apply a face detection method to the current image to obtain a face image to be measured, then extract the face region information and face edge region information of the face image to be measured, optimize an initial shape according to the face region information and face edge region information, and finally detect the facial key points of the current image according to the optimized initial shape. The detailed process of this part is described in the next embodiment and is not repeated here.
S103: perform beautification on the current image according to the facial key points to obtain a target image.
The facial key points include the eyes, nose, mouth corners, eyebrows, facial contour and so on. After detecting the facial key points, the terminal can apply beautification such as whitening and skin smoothing to the regions of the current image other than the facial key points, obtaining the image desired by the user, i.e., the target image.
In the embodiment of the present invention, the current image is first obtained and processed to obtain facial key points, and beautification is then performed on the current image according to the facial key points to obtain the target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.
Referring to Fig. 2, which is a flow diagram of the image beautification method provided by the second embodiment of the invention, as shown in the figure, the method may include the following steps:
S201: obtain a current image.
A user may send an instruction to open the photographing application to the terminal by touch or by voice, and when the terminal receives the opening instruction sent by the user, it may open the photographing application to obtain an initial image. The initial image is stored in the terminal in the form of a cache and may be in YUV format or RGB format. If the initial image is in YUV format, the Y channel image may be extracted to serve as the current image.
S202: perform face detection on the current image to obtain a face image to be measured.
Referring to Fig. 3, step S202 may include the following steps:
S2021: train a strong classifier.
As an optional embodiment, the detailed process of training the strong classifier is as follows:
(1) Select training samples T = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_N, y_N)} and store them in a specified location, such as a sample database, where x_i denotes the i-th sample, y_i = 0 indicates a negative sample (non-face), y_i = 1 indicates a positive sample (face), and N is the number of training samples.
(2) Initialize the weight distribution D_1 of the training samples, i.e., assign every training sample the same weight, which can be expressed as:
D_1 = (w_11, w_12, ..., w_1i, ..., w_1N), w_1i = 1/N, i = 1, 2, ..., N
where w_1i denotes the weight of the i-th sample.
(3) Set the iteration index t, t = 1, 2, ..., N, N being a natural number.
(4) Normalize the weights:
q_t(i) = D_t(i) / Σ_{j=1}^{N} D_t(j)
where D_t(i) is the weight of the i-th sample in the t-th round and q_t(i) is the normalized weight of the i-th sample in the t-th round.
(5) Learn weak classifiers from the training samples and compute the classification error rate of each weak classifier on the training set: using the samples with weight distribution D_t, learn a weak classifier h(x_i, f_i, p_i, θ_i) and compute its classification error rate:
ε_t = Σ_i q_t(i) |h(x_i, f_i, p_i, θ_i) - y_i|
A weak classifier h(x_i, f_i, p_i, θ_i) is composed of a feature f_i, a threshold θ_i and a polarity p_i:
h(x_i, f_i, p_i, θ_i) = 1 if p_i f_i(x_i) < p_i θ_i, and 0 otherwise.
Here x_i is a training sample; a feature f_i corresponds one-to-one with a weak classifier h_i(x_i, f_i, p_i, θ_i); the polarity p_i controls the direction of the inequality so that the inequality sign is "less than or equal to"; and training a weak classifier is the process of finding the optimal threshold θ_i.
(6) Among the weak classifiers determined in (5), find the weak classifier h_t with the lowest classification error rate ε_t.
(7) Compute the coefficient of the weak classifier from the classification error rate:
β_t = ε_t / (1 - ε_t)
This coefficient represents the weight each weak classifier takes in the strong classifier. The weights of all training samples are then updated with this coefficient:
w_{t+1,i} = w_{t,i} β_t^{1 - e_i}
where e_i takes the value 0 when x_i is classified correctly and 1 when x_i is classified incorrectly.
(8) After the weights of all training samples are updated, steps (4) to (7) are performed in a loop; after N iterations the loop ends and the strong classifier H(x) is obtained:
H(x) = 1 if Σ_{t=1}^{N} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{N} α_t, and 0 otherwise
where α_t = log(1/β_t).
The strong classifier obtained by the above method can be as shown in Fig. 4, in which the strong classifier is composed of 3 cascaded weak classifiers.
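For illustration only, the following is a minimal Python sketch of the training loop of steps (1) to (8), with decision stumps over precomputed scalar feature values standing in for the weak classifiers h(x, f, p, θ); T denotes the iteration count, and all function and variable names are illustrative assumptions:

```python
import numpy as np

def train_strong_classifier(features, labels, T):
    """AdaBoost per steps (1)-(8). features: (n_samples, n_features) scalar
    feature values; labels: 0 (non-face) / 1 (face); T: iteration count."""
    n, m = features.shape
    w = np.full(n, 1.0 / n)                  # step (2): uniform initial weights
    stumps, alphas = [], []
    for t in range(T):
        q = w / w.sum()                      # step (4): normalize weights
        best = None
        for f in range(m):                   # step (5): one stump per feature
            for theta in np.unique(features[:, f]):
                for p in (+1, -1):           # polarity flips the inequality
                    pred = (p * features[:, f] < p * theta).astype(int)
                    err = np.sum(q * np.abs(pred - labels))
                    if best is None or err < best[0]:
                        best = (err, f, theta, p, pred)
        err, f, theta, p, pred = best        # step (6): lowest error rate
        beta = max(err, 1e-10) / (1.0 - err) # step (7): beta_t = eps/(1-eps)
        e = np.abs(pred - labels)            # e_i = 0 if correct, 1 if wrong
        w = w * beta ** (1 - e)              # down-weight correct samples
        stumps.append((f, theta, p))
        alphas.append(np.log(1.0 / beta))    # alpha_t = log(1/beta_t)
    return stumps, np.array(alphas)

def strong_classify(x, stumps, alphas):
    """H(x) = 1 iff the weighted vote reaches half the total vote weight."""
    votes = np.array([p * x[f] < p * theta for f, theta, p in stumps], dtype=float)
    return int(alphas @ votes >= 0.5 * alphas.sum())
```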
S2022: reduce the current image according to a preset reduction ratio to obtain a first image.
The terminal may reduce the current image according to the preset reduction ratio to obtain the first image, so as to improve the efficiency of detecting the target face image. For example, a terminal takes 20 ms to process a 13-megapixel image; if the 13-megapixel image is reduced by a factor of 10, the corresponding processing time decreases accordingly. The preset reduction ratio may be determined according to the terminal's image-processing performance.
S2023: divide the first image multiple times to obtain multiple second images, each second image including multiple subwindows.
The terminal may divide the first image multiple times to obtain multiple second images, each including multiple subwindows. The more subwindows a division produces, the more Haar feature values are computed and the more accurately the face image is detected; however, more subwindows per division also increase the time taken to compute the Haar feature values. In addition, the maximum number of subwindows cannot exceed the maximum subwindow count the strong classifier detects. The number of subwindows per division may therefore be chosen by weighing factors such as the face-detection accuracy, the time to compute the Haar feature values, and the strong classifier's subwindow count. A Haar feature value is derived from the pixel values of the subwindows of an image and describes the grayscale variation of the image.
For example, the terminal may first divide the first image into 20*20 subwindows and then enlarge the number of divided subwindows in equal proportion, for example by a factor of 3, so that the first image is divided into 60*60 subwindows, 180*180 subwindows, 540*540 subwindows, and so on.
S2024: compute the Haar feature values of each subwindow in each second image according to an integral image.
Since computing Haar feature values requires knowing the pixel value of each subwindow, and the pixel value of each subwindow can be computed from the integrals at the subwindow's endpoints, the Haar feature values of each second image can be computed according to the integral image.
As an optional embodiment, computing the Haar feature values of each subwindow according to the integral image may include: computing the pixel value corresponding to each subwindow according to the integral image; and computing the Haar feature values of each subwindow according to the pixel values of each subwindow.
It should be noted that the integral at any point of the current image is the sum of the pixel values of all points within the rectangular region formed between the top-left corner of the image and that point. Likewise, for a second image containing multiple subwindows, the integral at a subwindow endpoint is the sum of the pixel values of all subwindows contained between that endpoint and the top-left corner of the image. Hence, once the integral at each subwindow endpoint has been computed, the pixel value of each subwindow can be computed from the integral image, and the Haar feature values of each subwindow can be computed from its pixel values.
Further, when computing Haar feature values, a suitable feature template must first be selected. A feature template is a combination of two or more rectangles, containing both black and white rectangles; common feature templates are shown in Fig. 5. Each template corresponds to only one kind of feature, but each kind of feature can correspond to multiple templates; common features include edge features, linear features, point features and diagonal features. The feature template is then placed in the corresponding subwindow of the grayscale image according to a preset rule, and the Haar feature value of the region where the template is placed is computed as the sum of the pixels in the white rectangular regions minus the sum of the pixels in the black rectangular regions. The preset rule includes the size of the feature template and the position where the template is placed within the subwindow, and is determined according to the number of subwindows into which the grayscale image is divided.
For a selected feature template, templates differ in size and in the position where they are placed within the subwindows of each second image, so one feature template corresponds to multiple Haar features in each second image; meanwhile, multiple feature templates may be selected to compute the Haar features of each second image. Moreover, because each second image is divided into a different number of subwindows, the number of Haar feature values differs between second images.
For example, the terminal may reduce the grayscale image by a factor of 1000 and divide the reduced grayscale image into 20*20 subwindows; the pixel value of each subwindow is then computed from the integral image in the following steps:
1. Compute the integral at each subwindow endpoint. Taking the integral at the endpoint (i, j) of subwindow D in Fig. 6 as an example, the integral at endpoint (i, j) is the sum of the pixel values of all subwindows contained between that point and the top-left corner of the grayscale image, and can be expressed as:
Integral(i, j) = pixel value of subwindow D + pixel value of subwindow C + pixel value of subwindow B + pixel value of subwindow A;
Since Integral(i-1, j-1) = pixel value of subwindow A;
Integral(i-1, j) = pixel value of subwindow A + pixel value of subwindow C;
Integral(i, j-1) = pixel value of subwindow B + pixel value of subwindow A;
Integral(i, j) can be further expressed as:
Integral(i, j) = Integral(i, j-1) + Integral(i-1, j) - Integral(i-1, j-1) + pixel value of subwindow D;
where Integral(·) denotes the integral of a point. Further observation shows that the integral at point (i, j) can also be obtained from the integral Integral(i, j-1) at point (i, j-1) plus the column sum ColumnSum(j) of column j, i.e., the integral at (i, j) can be expressed as:
Integral(i, j) = Integral(i, j-1) + ColumnSum(j);
With ColumnSum(0) = 0 and Integral(0, j) = 0, the integrals at all subwindow endpoints of the grayscale image divided into 20*20 subwindows can be obtained in 19 + 19 + 2*19*19 = 760 iterations.
2. Compute the pixel value of each subwindow from the integrals at its endpoints. Taking the pixel value of subwindow D as an example, it follows from step 1 that the pixel value of subwindow D can be computed from the integrals at the endpoints (i, j), (i, j-1), (i-1, j) and (i-1, j-1), i.e., the pixel value of subwindow D can be expressed as:
pixel value of subwindow D = Integral(i, j) + Integral(i-1, j-1) - Integral(i-1, j) - Integral(i, j-1);
It can be seen from the above formula that, as long as the integral at each subwindow endpoint is known, the pixel value of each subwindow can be computed.
Further, after the pixel value of each subwindow is obtained, the Haar feature values can be computed from the pixel values of the windows. Different selected feature templates, different placements and different template sizes give different Haar feature values. Taking the feature template corresponding to the edge feature in Fig. 5 as an example, as shown in Fig. 7, the Haar feature value of the region covered by this template is the pixel value of subwindow A minus the pixel value of subwindow B.
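For illustration only, a minimal Python/NumPy sketch of the two steps above, building the integral image with the recurrence Integral(i, j) = Integral(i, j-1) + ColumnSum(j) and evaluating a two-rectangle edge feature; the function names are illustrative assumptions:

```python
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Integral at (i, j) = sum of all pixels above and to the left of (i, j),
    built with the column-sum recurrence described in the text."""
    h, w = gray.shape
    ii = np.zeros((h + 1, w + 1), dtype=np.int64)  # padding: Integral(0, j) = 0
    for j in range(1, w + 1):
        column_sum = 0                              # ColumnSum(j), accumulated down column j
        for i in range(1, h + 1):
            column_sum += gray[i - 1, j - 1]
            ii[i, j] = ii[i, j - 1] + column_sum    # Integral(i,j) = Integral(i,j-1) + ColumnSum(j)
    return ii

def window_sum(ii, top, left, height, width):
    """Pixel sum of a subwindow from the integrals at its four corners."""
    return (ii[top + height, left + width] + ii[top, left]
            - ii[top, left + width] - ii[top + height, left])

def haar_edge_value(ii, top, left, height, width):
    """Two-rectangle edge feature: left half (A) minus right half (B)."""
    half = width // 2
    return (window_sum(ii, top, left, height, half)
            - window_sum(ii, top, left + half, height, half))
```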
S2025: detect multiple first face images according to the strong classifier and the Haar feature values obtained from each second image.
After the Haar feature values of each subwindow in each second image are computed, the terminal can detect multiple first face images according to the strong classifier and the Haar feature values obtained from each second image; that is, one first face image can be detected from the Haar feature values of one second image together with the strong classifier. Specifically, the strong classifier may be composed of several weak classifiers. The Haar feature values of the subwindows of each second image are input into the strong classifier and pass through the weak classifiers stage by stage; each weak classifier in effect judges whether a Haar feature value satisfies the corresponding preset facial feature condition, passing the value if it does and rejecting it if it does not. If any stage rejects the value, the subwindow corresponding to that Haar feature value is discarded and classified as non-face; if every stage passes the value, it is processed further to locate the corresponding subwindow, which is classified as face. The subwindows classified as face in each second image are merged to obtain the first face image corresponding to that second image (for example, the face subwindows detected in a second image with 20*20 subwindows are merged into one corresponding first face image). The method described in this embodiment of detecting multiple first face images from the strong classifier and the Haar feature values of each second image is fairly simple, which reduces the complexity of face detection, and since the strong classifier can be composed of multiple weak classifiers, the accuracy of face detection is improved.
For example, as shown in Fig. 4, the strong classifier is composed of 3 cascaded weak classifiers. The Haar feature values of each subwindow of a second image with 24*24 subwindows are input into the 3 weak classifiers in turn, and each weak classifier judges whether a Haar feature value satisfies the corresponding preset facial feature condition; if it does, the value is passed, and otherwise it is rejected. If any stage rejects the value, the corresponding subwindow is discarded and classified as non-face; if every stage passes the value, the corresponding subwindow is located and classified as face. The subwindows classified as face in the 24*24-subwindow second image are merged to form the first face image corresponding to that second image. The first face image corresponding to a second image with 36*36 subwindows can be computed in the same way following the above steps.
S2026: merge the multiple first face images to obtain the face image to be measured.
The multiple first face images are merged to obtain the target face image; that is, the multiple face images obtained from the second images with different subwindow counts are merged into the face image to be measured. Specifically, the different first face images are compared: if the overlapping area of two first face images is larger than a preset threshold, the two first face images are considered to represent the same face, and the two are merged by taking the average of their positions and sizes as the face position and size after merging; if the overlapping area of two first face images is smaller than the preset threshold, the two first face images are considered to represent two different faces, and the two face images are merged into one image containing two face regions. The face image to be measured is obtained by repeating the pairwise merging operation. The detected face image is shown in Fig. 8.
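For illustration only, a minimal Python sketch of the merging rule just described, assuming face boxes of the form (x, y, w, h); the overlap measure (intersection area relative to the smaller box) is an assumption, since the text does not fix one:

```python
def overlap_ratio(a, b):
    """a, b: (x, y, w, h) face boxes; intersection area over the smaller box."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return (ix * iy) / min(a[2] * a[3], b[2] * b[3])

def merge_detections(boxes, thresh=0.5):
    """Repeatedly average any two boxes whose overlap exceeds the threshold;
    boxes below the threshold are kept as distinct faces."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlap_ratio(boxes[i], boxes[j]) > thresh:
                    # same face: keep the average of positions and sizes
                    boxes[i] = [(p + q) / 2 for p, q in zip(boxes[i], boxes[j])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```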
S203: perform saliency detection on the face image to be measured to obtain face region information and face edge region information.
Further, step S203 specifically includes:
(1) Apply the discrete cosine transform (Discrete Cosine Transformation, DCT) of formula (1) to the face image to be measured shown in Fig. 8:
F(u, v) = c(u) c(v) Σ_{x=0}^{N-1} Σ_{y=0}^{N-1} f(x, y) cos[(2x+1)uπ/2N] cos[(2y+1)vπ/2N]   (1)
where x, y, u, v = 0, 1, ..., N-1.
Further, in formula (1), F(u, v) denotes the signal after the DCT, f(x, y) denotes the original signal, N denotes the number of original signal samples, and c(u), c(v) denote compensation coefficients, which make the matrix after the DCT an orthogonal matrix.
(2) Apply the sign transform of formula (2) to the DCT-transformed image from step (1):
F'(u, v) = sign(F(u, v))   (2)
Further, the value given by formula (2) for each coefficient of formula (1) is 1, 0 or -1.
(3) Apply the inverse discrete cosine transform (Inverse Discrete Cosine Transformation, IDCT) of formula (3) to the sign-transformed image from step (2) to obtain the region image shown in Fig. 9:
f'(x, y) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} c(u) c(v) F'(u, v) cos[(2x+1)uπ/2N] cos[(2y+1)vπ/2N]   (3)
where x, y, u, v = 0, 1, ..., N-1, and the symbols have the same meanings as in formula (1).
(4) Compare the face image to be measured shown in Fig. 8 with the region image shown in Fig. 9 to obtain the face region and the face edge region.
It should be noted that obtaining the face region information and face edge region information by saliency detection is highly time-efficient and, with optimization, can reach the millisecond level.
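Steps (1) to (3) amount to reconstructing the image from only the signs of its DCT coefficients. For illustration only, a minimal sketch using SciPy's dctn/idctn on a 2-D grayscale array; squaring the reconstruction to obtain an energy map is an assumption not stated in the text:

```python
import numpy as np
from scipy.fft import dctn, idctn

def signature_saliency(gray: np.ndarray) -> np.ndarray:
    """Formulas (1)-(3): DCT -> sign -> inverse DCT, then square the
    reconstruction to highlight the salient (face and edge) regions."""
    coeffs = dctn(gray.astype(np.float64), norm="ortho")  # formula (1)
    signs = np.sign(coeffs)                                # formula (2): 1 / 0 / -1
    recon = idctn(signs, norm="ortho")                     # formula (3)
    return recon ** 2                                      # energy map of salient regions
```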
S204: optimize the initial shape according to the face region information and face edge region information.
To better describe step S204, the related techniques involved are first introduced below:
(1) Cascaded linear regression model
The facial feature point detection (localization) problem can be viewed as learning a regression function F that takes an image I as input and outputs the positions θ of the feature points (the face shape): θ = F(I).
Briefly, the cascaded regression model can be unified under the following framework: learn multiple regression functions {f_1, ..., f_{n-1}, f_n} to approximate the function F:
θ = F(I) = f_n(f_{n-1}(...f_1(θ_0, I), I), I)
θ_i = f_i(θ_{i-1}, I), i = 1, ..., n
The term "cascade" means that the input of the current function f_i depends on the output θ_{i-1} of the previous function f_{i-1}, and the learning target of every f_i is to approximate the true positions θ of the feature points, with θ_0 being the initial shape. In the usual case, f_i does not regress the true positions θ directly, but regresses the difference between the current shape θ_{i-1} and the true positions: Δθ_i = θ - θ_{i-1}.
(2) Cascaded shape regression model (Cascaded Pose Regression, CPR)
The cascaded shape regression model arises from the above idea. Its basic idea is: given an initial shape θ_0 (usually the mean shape), extract features (i.e., differences between pairs of pixels) according to θ_0 as the input of the function f_1. Each function f_i is modeled as a random fern regressor that predicts the difference Δθ_i between the current shape θ_{i-1} and the target shape θ, and the current shape is updated with the prediction to give θ_i = θ_{i-1} + Δθ_i, which serves as the input of the next-stage function f_{i+1}. A shortcoming of this method is that it is rather sensitive to the initial shape θ_0. Running multiple trials with different initializations and merging the repeated predictions can mitigate the influence of the initialization on the algorithm to some extent, but cannot fully solve the problem, and the repeated trials bring extra computational overhead. It can thus be seen that the initial shape θ_0 directly affects the accuracy of facial key point detection.
(3) Local binary feature cascade model
The cascade model based on local binary features (Local Binary Features, LBF) also arises from the above idea. The idea of this cascade model is as follows:
S^d = S^{d-1} + R^d(I, S^{d-1})
where S^d denotes the absolute shape, R^d denotes a regressor that predicts a shape increment from the image I and the positional information of the current shape and adds it to the current shape to form a new shape, and d denotes the cascade stage index.
Similarly, in algorithms based on this model, the initial shape (denoted S^0 in this embodiment) still directly affects the accuracy of facial key point detection. For this reason, in this embodiment the initial shape S^0 is assigned from the detected face region and face edge region.
Specifically, as shown in Fig. 10, the feature points of the facial initial shape S^0 in the figure can be summarized as follows: (1) the left-eye feature points are 37-42, with the left eye corner at 37 and the right eye corner at 40; (2) the right-eye feature points are 43-48, with the left eye corner at 43 and the right eye corner at 46; (3) the nose feature points are 32-36, with the middle point at 34; (4) the mouth feature points are 49-68, with the left outer mouth corner at 49, the left inner mouth corner at 61, the right outer mouth corner at 55 and the right inner mouth corner at 65; (5) the face edge region points are 1-17.
For this structure, the detected face region information and face edge region information are assigned to these feature points using the following formula:
p_f = R(rect_f) + T_f
where p_f denotes the feature points of the relevant facial part, R(rect_f) denotes the relation between the key points and the detected RECT (bounding rectangle), and T_f denotes a related threshold.
After the above relational constraint, the initial shape S^0 can be determined more effectively, realizing the optimization of the initial shape S^0, so that the detection is more accurate and faster.
It should be noted that the initial shape refers to the initial face shape used to detect the facial key points in the cascaded shape regression model. The initial shape is usually the mean shape obtained from multiple face samples, and facial key point detection using the mean shape alone has relatively low accuracy. Therefore, in the embodiment of the present invention, the face region information, face edge region information and the like are first detected from the face image to be measured, and the resulting face region information and face edge region information are then assigned to the initial shape, realizing the optimization of the initial shape and thereby improving the accuracy of facial key point detection.
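For illustration only, a minimal sketch of assigning the detected region to the initial shape via p_f = R(rect_f) + T_f, assuming the mean shape is stored normalized to the unit square; the affine form chosen for R and the offset standing in for T_f are assumptions:

```python
import numpy as np

def init_shape_from_rect(mean_shape: np.ndarray, rect, offset=(0.0, 0.0)) -> np.ndarray:
    """mean_shape: (68, 2) points normalized to [0, 1]^2; rect: detected
    face RECT (x, y, w, h). R(rect_f) scales and translates the mean shape
    into the detected box; offset plays the role of the threshold T_f."""
    x, y, w, h = rect
    s0 = mean_shape.copy()
    s0[:, 0] = x + s0[:, 0] * w + offset[0]   # p_f = R(rect_f) + T_f, x coordinates
    s0[:, 1] = y + s0[:, 1] * h + offset[1]   # p_f = R(rect_f) + T_f, y coordinates
    return s0
```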
S205: obtain target local binary features according to the optimized initial shape and random forests.
S206: perform global linear regression training according to the target local binary features to predict a shape increment.
S207: obtain the facial key points according to the shape increment.
It should be noted that, in view of the foregoing description of the LBF cascade model, in this method a random forest is provided to make predictions for each facial key point; the local features output by the random forests corresponding to all the facial key points are concatenated to form local binary features; a global regression is then performed with the local binary features to predict the deformation, i.e., the shape increment; and the facial key points are finally obtained according to the shape increment.
Further, each random forest in turn consists of multiple mutually independent random trees. The feature used to train a random tree is the shape-index feature, defined as follows: two feature points are generated near a key point, and the difference between the pixels at the two feature points is the shape-index feature. The shape-index feature is illustrated in Fig. 11, from which it can be seen that as the cascade deepens (i.e., as t increases), the range of the random points becomes smaller, so that more precise local features are obtained.
Specifically, when training a random tree, the input is X = {I, S} and the prediction target is Y = ΔS. In the actual training of a random tree, the training process of every node in the tree is the same. Specifically, when training a node (i.e., a facial key point), a feature is first chosen from the shape-index feature set F generated randomly in advance; this feature maps all sample points x to a set of real numbers. Understandably, a feature set could also be generated randomly on the fly, or a whole random tree could use one feature set, or the whole random forest could use one feature set; in this embodiment, one random tree uses one feature set. Afterwards, a threshold is generated at random and the sample points are assigned to the left and right subtrees, the aim being that the sample points y within the left and right subtrees each share the same pattern.
The feature selection can use the following formula:
Δ = S(y | y ∈ Root) - [S(y | y ∈ Left) + S(y | y ∈ Right)]
In the above formula, F denotes the feature function set, f denotes the chosen feature function (the randomly drawn feature is used to compute a shape-index feature), δ denotes the randomly generated threshold, and S is used to portray the similarity between sample points, i.e., the entropy of the sample set (represented here by the variance). For each node, the training data (X, Y) is divided into two parts, (X_1, Y_1) and (X_2, Y_2), portrayed with the variance, and the feature function f is chosen so that the reduction in variance Δ is largest.
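For illustration only, a minimal sketch of one node split under this criterion, with the variance of the regression targets playing the role of S; the candidate-threshold sampling and all names are assumptions:

```python
import numpy as np

def variance_energy(Y):
    """S(.): total variance of the regression targets at a node."""
    return 0.0 if len(Y) == 0 else float(np.sum((Y - Y.mean(axis=0)) ** 2))

def best_split(feat_values, Y, n_thresholds=20, rng=np.random):
    """feat_values: (n_samples, n_candidate_features) shape-index values;
    Y: (n_samples, 2) target offsets. Pick the (f, delta) maximizing
    S(Root) - [S(Left) + S(Right)]."""
    root = variance_energy(Y)
    best = (None, None, -np.inf)
    for f in range(feat_values.shape[1]):
        v = feat_values[:, f]
        for delta in rng.choice(v, size=min(n_thresholds, len(v)), replace=False):
            left, right = Y[v < delta], Y[v >= delta]
            gain = root - (variance_energy(left) + variance_energy(right))
            if gain > best[2]:
                best = (f, float(delta), gain)
    return best  # (feature index, threshold, variance reduction)
```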
The output of each of the above random trees is represented by a local binary feature (as shown in Fig. 12). Concatenating the local binary features corresponding to all the random trees in the random forest yields the target local binary feature, i.e., the LBF feature. Further, global linear regression training is performed with the target local binary feature to predict the shape increment. The linear regression can be expressed by formula (4):
W_t = argmin_W Σ_i ||ΔS_i - W · lbf_i||^2 + λ ||W||^2   (4)
where ΔS denotes the shape target, lbf denotes the feature, W_t is the parameter of the linear regression, and λ is used to suppress the model and prevent overfitting. Accordingly, when predicting the shape increment, formula (5) can be used:
ΔS = W_t · lbf   (5)
With reference to the foregoing description, the detailed process of steps S205 to S207 is as follows: shape-index features are first extracted according to the optimized initial shape, and random trees are trained on these shape-index features to obtain the random forests; each key point of the face image to be measured is then predicted with the random forests, and the resulting predictions constitute the target local binary feature; formulas (4) and (5) are then used to perform global linear regression training and predict the shape increment; and once the shape increment is obtained, the facial key points can be detected according to the shape increment.
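For illustration only, a minimal sketch of formulas (4) and (5), solving the ridge-regularized least squares of formula (4) in closed form and applying the stage update S^d = S^{d-1} + ΔS; dense matrices are used for clarity, although LBF vectors are in practice sparse and binary:

```python
import numpy as np

def train_global_regressor(LBF, dS, lam=1.0):
    """Formula (4): W = argmin ||dS - LBF W||^2 + lam ||W||^2 (ridge regression).
    LBF: (n_samples, n_lbf) binary features; dS: (n_samples, 2 * n_points)."""
    n_lbf = LBF.shape[1]
    A = LBF.T @ LBF + lam * np.eye(n_lbf)
    return np.linalg.solve(A, LBF.T @ dS)   # closed-form ridge solution

def predict_stage(shape, lbf, W):
    """Formula (5): dS = W · lbf, then the stage update S^d = S^{d-1} + dS."""
    dS = (lbf @ W).reshape(shape.shape)
    return shape + dS
```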
It should be noted that steps S201 to S207 realize the facial key point detection of the current image. In the above facial key point detection, the initial shape is optimized according to the face region information and face edge region information obtained by processing the current image, which ensures the accuracy of facial key point detection while reducing the computational overhead brought by running repeated trials to improve accuracy, lowering the computational complexity and ensuring the timeliness of the detection.
S208: perform beautification on the current image according to the facial key points to obtain a target image.
The facial key points include the eyes, nose, mouth corners, eyebrows, facial contour and so on. After detecting the facial key points, the terminal can apply beautification such as whitening and skin smoothing to the regions of the current image other than the facial key points, obtaining the image desired by the user, i.e., the target image.
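For illustration only, a minimal OpenCV sketch of this masked beautification: smoothing and whitening are computed for the whole image, and the original pixels are restored inside a mask built around the key points; the choice of filter, its parameters and the protection radius are assumptions:

```python
import cv2
import numpy as np

def beautify(image, keypoints, radius=6):
    """image: BGR uint8 array; keypoints: (n, 2) facial key points.
    Whitening and skin smoothing are applied only outside the key point
    regions, so the eyes, brows, nose, mouth and contour stay sharp."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x, y in keypoints:
        cv2.circle(mask, (int(x), int(y)), radius, 255, -1)       # protect key points
    smoothed = cv2.bilateralFilter(image, 9, 75, 75)              # skin smoothing
    whitened = cv2.convertScaleAbs(smoothed, alpha=1.1, beta=12)  # mild whitening
    out = whitened.copy()
    keep = mask.astype(bool)
    out[keep] = image[keep]    # restore original detail at key point regions
    return out
```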
In the embodiment of the present invention, the current image is first obtained and processed to obtain facial key points, and beautification is then performed on the current image according to the facial key points to obtain the target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.
Referring to Fig. 13, which is a structural diagram of the terminal provided by the first embodiment of the invention, as shown in the figure, the terminal may include:
an acquiring unit 10, configured to obtain a current image;
a first processing unit 11, configured to process the current image to obtain facial key points;
a second processing unit 12, configured to perform beautification on the current image according to the facial key points to obtain a target image.
In the embodiment of the present invention, the current image is first obtained by the acquiring unit 10 and processed by the first processing unit 11 to obtain the facial key points, and beautification is then performed on the current image according to the facial key points by the second processing unit 12 to obtain the target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.
Referring to Fig. 14, which is a structural diagram of the terminal provided by the second embodiment of the invention, as shown in the figure, the terminal may include:
an acquiring unit 20, configured to obtain a current image;
a first processing unit 21, configured to process the current image to obtain facial key points;
a second processing unit 22, configured to perform beautification on the current image according to the facial key points to obtain a target image.
As an optional embodiment, the first processing unit 21 specifically includes:
a first detection unit 211, configured to perform face detection on the current image to obtain a face image to be measured;
a second detection unit 212, configured to perform saliency detection on the face image to be measured to obtain face region information and face edge region information;
an optimization unit 213, configured to optimize an initial shape according to the face region information and face edge region information;
a third detection unit 214, configured to detect the facial key points of the current image according to the optimized initial shape.
As an optional embodiment, the first detection unit 211 is specifically configured to:
reduce the current image according to a preset reduction ratio to obtain a first image;
divide the first image multiple times to obtain multiple second images, each second image including multiple subwindows;
compute the Haar feature values of each subwindow in each second image according to an integral image;
detect multiple first face images according to a strong classifier and the Haar feature values obtained from each second image;
merge the multiple first face images to obtain the face image to be measured.
As an optional embodiment, the second detection unit 212 is specifically configured to:
perform a discrete cosine transform and an inverse discrete cosine transform on the face image to be measured to obtain a region image;
compare the face image to be measured with the region image to obtain the face region information and face edge region information.
As an optional embodiment, the third detection unit 214 is specifically configured to:
obtain target local binary features according to the optimized initial shape and random forests;
perform global linear regression training according to the target local binary features to predict a shape increment;
obtain the facial key points according to the shape increment.
In the embodiment of the present invention, the current image is first obtained by the acquiring unit 20 and processed by the first processing unit 21 to obtain the facial key points, and beautification is then performed on the current image according to the facial key points by the second processing unit 22 to obtain the target image. Because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when the current image is beautified, which improves the sharpness of the beautified image and also improves the beautification effect.
Further, in the embodiment of the present invention, when the facial key point detection is performed, the initial shape is optimized according to the face region information and face edge region information obtained by processing the current image, which ensures the accuracy of facial key point detection, reduces the computational overhead brought by running repeated trials to improve accuracy, lowers the computational complexity, and ensures the timeliness of the detection.
It should be noted that the specific workflows of the terminals shown in Fig. 13 and Fig. 14 have been described in detail in the foregoing method flows and are not repeated here.
Referring to Fig. 15, which is a structural diagram of the terminal provided by the third embodiment of the invention, as shown in the figure, the terminal includes: at least one processor 301, such as a CPU (Central Processing Unit); at least one user interface 303; a memory 304; and at least one communication bus 302. The communication bus 302 is used to realize connection and communication between these components. The user interface 303 may include a display (Display) and a keyboard (Keyboard), and optionally may further include a standard wired interface and a wireless interface. The memory 304 may be a high-speed RAM (Random Access Memory, a volatile random access memory) or a non-volatile memory, for example at least one disk memory. Optionally, the memory 304 may also be at least one storage device located remotely from the aforementioned processor 301. The processor 301 may be combined with the terminals described with reference to Figs. 13 to 14; a batch of program code is stored in the memory 304, and the processor 301 calls the program code stored in the memory 304 to perform the following operations:
obtain a current image;
process the current image to obtain facial key points;
perform beautification on the current image according to the facial key points to obtain a target image.
As an optional embodiment, the processor 301 is further configured to perform the following operations:
perform face detection on the current image to obtain a face image to be measured;
perform saliency detection on the face image to be measured to obtain face region information and face edge region information;
optimize an initial shape according to the face region information and face edge region information;
detect the facial key points of the current image according to the optimized initial shape.
As an optional embodiment, the processor 301 is further configured to perform the following operations:
reduce the current image according to a preset reduction ratio to obtain a first image;
divide the first image multiple times to obtain multiple second images, each second image including multiple subwindows;
compute the Haar feature values of each subwindow in each second image according to an integral image;
detect multiple first face images according to a strong classifier and the Haar feature values obtained from each second image;
merge the multiple first face images to obtain the face image to be measured.
As an optional embodiment, the processor 301 is further configured to perform the following operations:
perform a discrete cosine transform and an inverse discrete cosine transform on the face image to be measured to obtain a region image;
compare the face image to be measured with the region image to obtain the face region information and face edge region information.
As an optional embodiment, the processor 301 is further configured to perform the following operations:
obtain target local binary features according to the optimized initial shape and random forests;
perform global linear regression training according to the target local binary features to predict a shape increment;
obtain the facial key points according to the shape increment.
In the embodiment of the present invention, because the facial key points are detected first, whitening and skin smoothing can be applied only to the regions other than the facial key points when beautification is performed on the current image, which improves the sharpness of the beautified image and also improves the beautification effect.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
It may be clearly understood by a person skilled in the art that, for convenience and brevity of description, for the specific working processes of the terminal and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely schematic; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or other forms of connection.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field could readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. a kind of U.S. face method of image, it is characterised in that including:
Obtain present image;
The present image is handled, face key point is obtained;
U.S. face is carried out to the present image according to the face key point to handle, and obtains target image.
2. the method as described in claim 1, it is characterised in that handle the present image, obtains face key point Specifically include:
Face datection is carried out to the present image, facial image to be measured is obtained;
Conspicuousness detection is carried out to the facial image to be measured, face area information and face fringe region information is obtained;
According to the face area information and face fringe region Advance data quality original shape;
The face key point of the present image is detected according to the original shape after optimization.
3. The method according to claim 2, characterized in that performing face detection on the current image to obtain the face image to be measured specifically comprises:
reducing the current image according to a preset reduction ratio to obtain a first image;
repeatedly dividing the first image to obtain a plurality of second images, each second image comprising a plurality of sub-windows;
calculating a Haar feature value of each sub-window in each second image according to an integral image;
detecting a plurality of first face images according to a strong classifier and the Haar feature values obtained from each second image;
merging the plurality of first face images to obtain the face image to be measured.
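By way of illustration only (not part of the claims), a minimal version of the detection flow in claim 3, assuming OpenCV's pretrained frontal-face Haar cascade. The pyramid of second images, the integral-image evaluation of the Haar feature values, and the merging of overlapping first face images are all performed internally by detectMultiScale; the sketch only makes the preset reduction ratio explicit.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(image, shrink=0.5):
        # "First image": shrink by a preset ratio to cut the search cost.
        small = cv2.resize(image, None, fx=shrink, fy=shrink)
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        # Scan sub-windows over scales; minNeighbors merges overlapping hits.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Map the merged boxes back to the original resolution.
        return [(int(x / shrink), int(y / shrink),
                 int(w / shrink), int(h / shrink)) for (x, y, w, h) in faces]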
4. The method according to claim 2, characterized in that performing saliency detection on the face image to be measured to obtain the face region information and the face edge region information specifically comprises:
performing a discrete cosine transform and an inverse discrete cosine transform on the face image to be measured to obtain a region image;
comparing the face image to be measured with the region image to obtain the face region information and the face edge region information.
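By way of illustration only (not part of the claims), one plausible reading of the DCT/IDCT step in claim 4 is the well-known image-signature saliency recipe: keep only the sign of the DCT coefficients, invert the transform, and smooth the squared reconstruction. The thresholds used here to split the map into a face region and a face edge region are illustrative assumptions.

    import cv2
    import numpy as np

    def dct_saliency(face_gray):
        h, w = face_gray.shape
        # cv2.dct requires even-sized inputs, so crop to even dimensions.
        f = np.float32(face_gray[: h - h % 2, : w - w % 2]) / 255.0
        sig = np.sign(cv2.dct(f))      # keep only the DCT sign pattern
        recon = cv2.idct(sig)          # "region image" via the inverse DCT
        sal = cv2.GaussianBlur(recon * recon, (11, 11), 3.0)
        return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

    def split_regions(face_gray, hi=0.6, lo=0.3):
        sal = dct_saliency(face_gray)
        face_region = sal >= hi                  # strongly salient: facial features
        edge_region = (sal >= lo) & (sal < hi)   # weakly salient: face borders
        return face_region, edge_region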
5. The method according to any one of claims 2-4, characterized in that detecting the facial key points of the current image according to the optimized initial shape specifically comprises:
obtaining a target local binary feature according to the optimized initial shape and a random forest;
performing global linear regression training according to the target local binary feature to predict a shape increment;
obtaining the facial key points according to the shape increment.
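By way of illustration only (not part of the claims), claim 5 has the structure of local-binary-feature (LBF) face alignment: random forests produce sparse binary features from the current shape estimate, and a global linear regression maps those features to a shape increment. The scikit-learn toy below shows one cascade stage of that structure only; a full LBF implementation trains one forest per landmark on pixel-difference features, which is omitted here as an assumption of the sketch.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import OneHotEncoder

    def fit_stage(local_feats, shape_residuals):
        """One cascade stage: forest -> binary features -> global regression."""
        forest = RandomForestRegressor(n_estimators=10, max_depth=5)
        forest.fit(local_feats, shape_residuals)
        leaves = forest.apply(local_feats)            # (n_samples, n_trees) leaf ids
        enc = OneHotEncoder(handle_unknown="ignore")
        binary = enc.fit_transform(leaves)            # sparse local binary feature
        reg = Ridge(alpha=1.0).fit(binary, shape_residuals)  # global linear step
        return forest, enc, reg

    def apply_stage(forest, enc, reg, local_feats, shape):
        increment = reg.predict(enc.transform(forest.apply(local_feats)))
        return shape + increment                      # updated landmark estimate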
6. A terminal, characterized in that the terminal comprises:
an acquiring unit, configured to obtain a current image;
a first processing unit, configured to process the current image to obtain facial key points;
a second processing unit, configured to perform beautification processing on the current image according to the facial key points to obtain a target image.
7. The terminal according to claim 6, characterized in that the first processing unit specifically comprises:
a first detection unit, configured to perform face detection on the current image to obtain a face image to be measured;
a second detection unit, configured to perform saliency detection on the face image to be measured to obtain face region information and face edge region information;
an optimization unit, configured to optimize an initial shape according to the face region information and the face edge region information;
a third detection unit, configured to detect the facial key points of the current image according to the optimized initial shape.
8. The terminal according to claim 7, characterized in that the first detection unit is specifically configured to:
reduce the current image according to a preset reduction ratio to obtain a first image;
repeatedly divide the first image to obtain a plurality of second images, each second image comprising a plurality of sub-windows;
calculate a Haar feature value of each sub-window in each second image according to an integral image;
detect a plurality of first face images according to a strong classifier and the Haar feature values obtained from each second image;
merge the plurality of first face images to obtain the face image to be measured.
9. The terminal according to claim 7, characterized in that the second detection unit is specifically configured to:
perform a discrete cosine transform and an inverse discrete cosine transform on the face image to be measured to obtain a region image;
compare the face image to be measured with the region image to obtain the face region information and the face edge region information.
10. The terminal according to any one of claims 7-9, characterized in that the third detection unit is specifically configured to:
obtain a target local binary feature according to the optimized initial shape and a random forest;
perform global linear regression training according to the target local binary feature to predict a shape increment;
obtain the facial key points according to the shape increment.
CN201710167285.1A 2017-03-20 2017-03-20 An image beautification method and terminal Withdrawn CN107146196A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710167285.1A 2017-03-20 2017-03-20 An image beautification method and terminal


Publications (1)

Publication Number Publication Date
CN107146196A 2017-09-08

Family

ID=59783474

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710167285.1A Withdrawn CN107146196A (en) 2017-03-20 2017-03-20 A kind of U.S. face method of image and terminal

Country Status (1)

Country Link
CN (1) CN107146196A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019237746A1 (en) * 2018-06-14 2019-12-19 北京微播视界科技有限公司 Image merging method and apparatus
WO2019237747A1 (en) * 2018-06-14 2019-12-19 北京微播视界科技有限公司 Image cropping method and apparatus, and electronic device and computer-readable storage medium
CN109086711A (en) * 2018-07-27 2018-12-25 华南理工大学 Facial Feature Analysis method, apparatus, computer equipment and storage medium
CN108960201A (en) * 2018-08-01 2018-12-07 西南石油大学 An expression recognition method based on facial key point extraction and sparse representation classification
WO2020063744A1 (en) * 2018-09-30 2020-04-02 腾讯科技(深圳)有限公司 Face detection method and device, service processing method, terminal device, and storage medium
US11256905B2 (en) 2018-09-30 2022-02-22 Tencent Technology (Shenzhen) Company Limited Face detection method and apparatus, service processing method, terminal device, and storage medium
CN110248242A (en) * 2019-07-10 2019-09-17 广州虎牙科技有限公司 An image processing and live broadcasting method, device, equipment and storage medium
CN110248242B (en) * 2019-07-10 2021-11-09 广州虎牙科技有限公司 Image processing and live broadcasting method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20170908)