CN114821737A - Mobile-end real-time wig try-on method based on three-dimensional face alignment - Google Patents

Mobile-end real-time wig try-on method based on three-dimensional face alignment

Info

Publication number
CN114821737A
Authority
CN
China
Prior art keywords
wig
face
model
network
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210522721.3A
Other languages
Chinese (zh)
Other versions
CN114821737B (en)
Inventor
杨柏林
赵建东
杨文武
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202210522721.3A
Publication of CN114821737A
Application granted
Publication of CN114821737B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a mobile-end real-time wig try-on method based on three-dimensional face alignment. The method first trains a teacher model with a key-point loss function, a shape-consistency loss function and a photometric-consistency loss function; a student model then learns from the teacher model through a parameter loss and is further optimized with a vertex loss function; finally, augmented-reality rendering is used to realize the wig try-on. By compressing the standard 3DMM model, the amount of computation is greatly reduced, so that real-time wig try-on truly runs on mobile terminal devices while retaining a realistic effect and a smooth experience, providing technical support for the large-scale popularization of wig try-on technology.

Description

Mobile-end real-time wig try-on method based on three-dimensional face alignment
Technical Field
The invention belongs to the field of augmented reality, and particularly relates to a mobile-end real-time wig try-on method based on three-dimensional face alignment.
Background
In today's fast-paced life, high-pressure work often leads to hair loss, and for people with severe hair loss a wig is a common choice for improving their appearance. For a long time, however, wig sales have depended heavily on offline channels: users typically have to visit a physical store and repeatedly try on wigs to find a satisfactory hairstyle, and putting on a wig is cumbersome, which adds hidden time costs for customers. At the same time, the need to stock a variety of wigs for customers to try on increases the financial pressure on the dealer. Virtual wig try-on therefore reduces the time customers spend choosing a wig, and also allows dealers to cut inventory and ease capital pressure. Moreover, online real-time virtual try-on can expand wig sales to broader markets such as Africa and South America.
Some real-time wig try-on methods already exist: images are captured by a camera, and the selected wig is displayed to the user in real time. Their main drawback, however, is a large computational load, which requires high-performance or dedicated hardware; real-time try-on on ordinary mobile devices is not possible, making these methods difficult to popularize.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a method for real-time virtual wig try-on on mobile terminal devices based on three-dimensional face alignment. Images are captured in real time from the camera of the mobile device held by the user, and the wig selected by the user is rendered with augmented-reality technology, so that the user sees the real-time try-on effect directly on the screen.
The technical scheme of the invention is as follows:
step 1, training teacher model
(1-1) collecting face pictures and obtaining a data set balanced in face angle, ethnicity and gender through algorithmic and manual screening;
(1-2) using the data set to train a teacher network, with ResNet-50 as the backbone network, in a self-supervised manner through a key-point loss function, a shape-consistency loss function and a photometric-consistency loss function; the model outputs a 106-dimensional parameter vector in which 40 dimensions are shape parameters, 20 dimensions are expression parameters, 40 dimensions are texture parameters and 6 dimensions are pose parameters.
Step 2, training student model
(2-1) the student network uses GhostNet as the backbone network and learns from the teacher network through a parameter loss;
the student network outputs a 66-dimensional vector in which 40 dimensions are shape parameters, 20 dimensions are expression parameters and the remaining 6 dimensions are pose parameters;
(2-2) fine-tuning the student network with a vertex loss function to obtain the optimal network parameters.
Step 3, 3DMM model compression
(3-1) performing mesh compression on the standard neutral face of the 3DMM with a mesh-compression algorithm;
(3-2) for each vertex of the compressed neutral face, finding the closest vertex of the original face by minimum distance;
(3-3) extracting the parameters corresponding to those vertices from the parameters of the 3DMM model and reorganizing the vertex indices to obtain a simplified 3DMM model.
Step 4, try on wig
(4-1) building a three-dimensional wig model, adjusting its coordinates to a uniform range and saving it as an .obj file;
(4-2) feeding the image captured by the device into the student model for inference and computing the three-dimensional face coordinates with the simplified 3DMM model;
(4-3) using OpenGL ES, occluding the three-dimensional wig model with the reconstructed face and finally drawing the wig on the screen to realize augmented-reality rendering.
The invention has the beneficial effects that:
the wig try-on technical scheme is improved by aiming at the characteristic that the wig try-on technical scheme requires higher hardware configuration requirement, and the required calculated amount is greatly reduced, so that the wig try-on technical scheme can be smoothly used on other portable equipment such as a common mobile phone and an embedded terminal. Meanwhile, the similar try-on effect of the prior technical scheme is achieved, and technical support is provided for large-scale popularization of wig try-on technology.
Drawings
FIG. 1 is a schematic diagram of the training of a teacher model of the present invention;
FIG. 2 is a schematic diagram of the training of a student model of the present invention;
FIG. 3 is a schematic diagram of the 3DMM model compression of the present invention.
Detailed Description
The invention provides a mobile-end real-time wig try-on method based on three-dimensional face alignment. Using an ordinary hand-held mobile terminal device, the position of the face is obtained in real time from the device's front camera, and the wig is rendered at that position with augmented-reality technology to achieve the try-on effect.
The invention comprises the following four parts:
a first part: training the teacher model, as shown in FIG. 1:
(1) Collect face pictures with a face recognition algorithm and screen them manually to obtain a data set balanced in face angle, ethnicity and gender.
A face recognition algorithm is used to collect pictures containing a face from an image database, regress the position of the face in each picture and crop it so that the face is centered. Because a deep-learning algorithm learns the distribution of its data, the data distribution of the data set is critical. For the face rotation angle, a face key-point regression algorithm estimates the rotation angle of the face in each picture, and the pictures are grouped by rotation angle. From face pictures with rotation angles between -90° and 90°, pictures are selected so that the range -30° to 30°, the ranges -60° to -30° together with 30° to 60°, and the ranges -90° to -60° together with 60° to 90° appear in a ratio of approximately 3:2:1. Manual screening then yields a face picture database in which Asian, Caucasian and African faces appear in a ratio of 1:1:1 and males and females in a ratio of 1:1. A minimal sketch of this angle-balanced sampling is given below.
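The patent gives no code for this screening step; the following Python sketch only illustrates one way the angle-balanced sampling described above could be implemented. The helper estimate_yaw and the total count are hypothetical placeholders: any face key-point regression library that returns a yaw angle in degrees could supply the former.

import random
from collections import defaultdict

def build_angle_balanced_set(image_paths, estimate_yaw, total=30000, seed=0):
    """Bucket images by |yaw| into 0-30, 30-60 and 60-90 degree bins and
    sample them with the 3:2:1 proportion used for the training set."""
    bins = defaultdict(list)
    for path in image_paths:
        yaw = abs(estimate_yaw(path))          # hypothetical key-point based yaw estimate
        if yaw <= 30:
            bins["frontal"].append(path)
        elif yaw <= 60:
            bins["half_profile"].append(path)
        elif yaw <= 90:
            bins["profile"].append(path)

    quotas = {"frontal": total * 3 // 6,       # 3 : 2 : 1 split of the requested total
              "half_profile": total * 2 // 6,
              "profile": total * 1 // 6}
    rng = random.Random(seed)
    selected = []
    for name, quota in quotas.items():
        candidates = bins[name]
        rng.shuffle(candidates)
        selected.extend(candidates[:quota])
    return selected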
(2) Train the teacher network on this data set with ResNet-50 as the backbone network.
The teacher network uses ResNet-50 as its backbone neural network, and the regression target of this parameter-regression network is a 106-dimensional parameter vector, in which 40 dimensions are shape parameters, 20 dimensions are expression parameters, 40 dimensions are texture parameters and the remaining 6 dimensions are pose parameters. The regressed parameters are rendered into a face picture by the nvdiffrast renderer, which enables self-supervised training. A key-point loss function, a shape-consistency loss function and a photometric-consistency loss function are used during training. The key-point loss constrains network training by comparing the key-point positions regressed from the original picture with those regressed from the synthesized picture, and can be expressed as:
L_lmk = Σ_i w_i ‖q_i - q'_i‖²
wherein w_i is the weight of a given key point, q_i is the key-point position regressed from the original picture, and q'_i is the key-point position regressed from the synthesized image.
The shape-consistency loss function requires the face in the original image and in the generated picture to have a similar shape, so as to better constrain the shape parameters; it can be expressed as:
L_shape = Σ_i ‖S_i - S'_i‖²
wherein S_i is the shape coefficient of the input image and S'_i is the shape coefficient of the synthesized image.
The photometric-consistency loss function compares the pixel values of the face regions in the original image and in the synthesized image to constrain the texture parameters, making the texture of the synthesized image more realistic; it can be expressed as:
L_photo = (1/|M|) Σ_{i∈M} ‖I_i - I'_i‖
wherein I_i is the pixel value at a point of the input image, I'_i is the pixel value at the corresponding point of the synthesized picture, i ranges over M, and M is the face region of the image.
Finally, the loss functions are combined with weights to obtain the final loss function, which can be expressed as:
L_all = α_1·L_lmk + α_2·L_shape + α_3·L_photo
wherein L_lmk is the key-point loss function, L_shape the shape-consistency loss function, L_photo the photometric-consistency loss function, and α_1, α_2, α_3 the corresponding weights. A PyTorch sketch of these three losses and their weighted combination is given below.
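The patent specifies these losses only through the formulas above; the PyTorch sketch below is an illustrative implementation under assumed tensor shapes (landmarks as (N, 2) arrays, images as (B, 3, H, W) tensors, the face region M as a boolean mask). The weight defaults in total_loss are placeholders, not values taken from the patent.

import torch

def landmark_loss(q, q_prime, w):
    """Weighted key-point loss: q and q_prime are (N, 2) landmark positions
    regressed from the original and the rendered image, w is an (N,) weight vector."""
    return (w * ((q - q_prime) ** 2).sum(dim=-1)).mean()

def shape_consistency_loss(s, s_prime):
    """Shape-consistency loss between the shape coefficients of the input
    image and of the synthesized image."""
    return ((s - s_prime) ** 2).sum(dim=-1).mean()

def photometric_loss(img, rendered, face_mask):
    """Photometric-consistency loss over the face region M, a (B, H, W) boolean mask."""
    diff = torch.linalg.norm(img - rendered, dim=1)      # per-pixel RGB distance
    return (diff * face_mask).sum() / face_mask.sum().clamp(min=1)

def total_loss(l_lmk, l_shape, l_photo, a1=1.0, a2=1.0, a3=1.0):
    """L_all = a1*L_lmk + a2*L_shape + a3*L_photo; the weights here are placeholders."""
    return a1 * l_lmk + a2 * l_shape + a3 * l_photo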
A second part: training a student model, as shown in fig. 2:
(1) The student network uses GhostNet as its backbone network and learns from the teacher network through a parameter loss function.
The student network uses GhostNet as its backbone neural network; because the reconstructed face is only used for occlusion and its texture is never drawn, the texture-related parameters are removed from the student's regression target compared with the teacher network. The regression target is 66-dimensional, with 40 shape parameters, 20 expression parameters and the remaining 6 pose parameters. The student network is trained under the constraint of a parameter loss function, i.e. the difference between the teacher network's and the student network's inference results, which can be expressed as:
L_param = ‖S_t - S_s‖ + ‖E_t - E_s‖ + ‖P_t - P_s‖
wherein S denotes the shape parameters, E the expression parameters and P the pose parameters; the subscript t denotes the teacher model's inference result and the subscript s the student model's inference result.
(2) Fine-tune the student network with a vertex loss function to obtain the optimal network parameters.
After training with the parameter loss function, the vertex loss function is used to fine-tune the student network. The vertex loss compares the face vertices computed from the shape parameters inferred by the teacher network with those computed from the student network's result; for efficiency, 100 vertices are randomly selected each time. The formula can be expressed as:
L_vertex = (1/100) Σ_{i=1..100} ‖V_t,i - V_s,i‖
wherein V_t denotes the vertices computed from the teacher model and V_s the vertices computed from the student model. A PyTorch sketch of the parameter loss and the vertex loss follows.
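As an illustration of the distillation stage, the sketch below implements the parameter loss and the vertex loss in PyTorch. The 40/20/40/6 and 40/20/6 splits follow the description above; treating the pose dimensions as the last six entries of the teacher vector, the batch layout and the helper names are assumptions.

import torch

def parameter_loss(teacher_params, student_params):
    """L_param = ||S_t - S_s|| + ||E_t - E_s|| + ||P_t - P_s||.
    teacher_params: (B, 106), assumed order shape 0:40, expression 40:60, texture 60:100, pose 100:106;
    student_params: (B, 66), shape 0:40, expression 40:60, pose 60:66."""
    s_t, e_t, p_t = teacher_params[:, :40], teacher_params[:, 40:60], teacher_params[:, 100:106]
    s_s, e_s, p_s = student_params[:, :40], student_params[:, 40:60], student_params[:, 60:66]
    return (torch.linalg.norm(s_t - s_s, dim=-1)
            + torch.linalg.norm(e_t - e_s, dim=-1)
            + torch.linalg.norm(p_t - p_s, dim=-1)).mean()

def vertex_loss(v_teacher, v_student, n_samples=100):
    """Compare 100 randomly selected face vertices reconstructed from the teacher
    and the student predictions; v_teacher and v_student are (B, V, 3)."""
    idx = torch.randperm(v_teacher.shape[1])[:n_samples]
    return torch.linalg.norm(v_teacher[:, idx] - v_student[:, idx], dim=-1).mean()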
A third part: 3DMM model compression, as shown in FIG. 3:
(1) Perform mesh compression on the standard neutral face of the 3DMM with a mesh-compression algorithm.
The standard neutral face of the 3DMM is fed into a mesh-compression algorithm to obtain a simplified neutral face whose shape is nearly identical to the original, while the number of triangular patches is greatly reduced, which reduces the amount of computation.
(2) Find, for the compressed mesh, the closest points on the original face.
The vertices of the simplified neutral face generally do not coincide with vertices of the original face, so for each simplified vertex the closest vertex of the original face is selected as its replacement. This can be formulated as:
V_result = arg min_{V_s} ‖V_t - V_s‖
wherein V_t denotes the position of the target vertex of the simplified face, V_s the positions of the surrounding vertices of the original face, and arg min selects the vertex at the smallest distance.
(3) Rebuild the vertex indices.
After the above processing, the geometric relationship among the face vertices has changed, so the vertex indices must be rebuilt so that the triangular patches correctly form a face. A sketch of the whole compression pipeline, under stated assumptions, is given below.
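One possible implementation of the compression pipeline is sketched below. The patent does not name a specific mesh-compression algorithm or library; quadric decimation via Open3D, a SciPy KD-tree for the nearest-vertex search and the row layout of the shape/expression bases (x, y, z interleaved per vertex) are all assumptions made for illustration.

import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def compress_3dmm(mean_face, faces, shape_basis, expr_basis, target_tris=5000):
    """mean_face: (V, 3) neutral face, faces: (F, 3) int triangle indices,
    shape_basis / expr_basis: (3V, K) linear bases with rows ordered x0, y0, z0, x1, ..."""
    # 1) mesh compression of the standard neutral face
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(mean_face),
        o3d.utility.Vector3iVector(faces))
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target_tris)
    simp_verts = np.asarray(simplified.vertices)
    simp_faces = np.asarray(simplified.triangles)

    # 2) for every simplified vertex, find the closest vertex of the original face
    tree = cKDTree(mean_face)
    _, nearest = tree.query(simp_verts)            # indices into the original vertex list

    # 3) keep only the corresponding rows of the bases; simp_faces already indexes the
    #    simplified vertex list, so it serves as the rebuilt triangle index buffer
    row_idx = np.stack([3 * nearest, 3 * nearest + 1, 3 * nearest + 2], axis=1).reshape(-1)
    small_mean = mean_face[nearest]
    return small_mean, simp_faces, shape_basis[row_idx], expr_basis[row_idx]

Under the usual linear-3DMM assumption, the reduced face vertices used in the fourth part can then be reconstructed as small_mean.reshape(-1) + small_shape_basis @ alpha + small_expr_basis @ beta, after which the pose transform is applied.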
A fourth part: Wig try-on.
(1) Build a three-dimensional wig model, adjust its coordinates to a uniform range and save it as an .obj file;
(2) Feed the image captured by the device into the student model for inference and compute the reconstructed three-dimensional face coordinates with the simplified 3DMM model;
(3) Using OpenGL ES, occlude the three-dimensional wig model with the computed face and finally draw the wig on the screen to realize augmented-reality rendering.
In the invention, the reconstructed face occludes the hair model in OpenGL space, which produces the try-on effect; this is achieved mainly through the glColorMask function of OpenGL ES. A minimal sketch of this occlusion pass follows.
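The sketch below shows this occlusion pass in Python/PyOpenGL form; on a mobile device the same calls are issued through the OpenGL ES API. The render_frame and draw_mesh helpers and the two mesh objects are hypothetical; glColorMask, glEnable and glClear are the actual GL entry points.

from OpenGL.GL import (glColorMask, glEnable, glClear, GL_TRUE, GL_FALSE,
                       GL_DEPTH_TEST, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)

def render_frame(face_mesh, wig_mesh, draw_mesh):
    glEnable(GL_DEPTH_TEST)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    # 1) depth-only pass: the reconstructed face writes depth but no colour,
    #    so the camera image behind it stays visible
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    draw_mesh(face_mesh)

    # 2) normal pass: the wig is depth-tested against the invisible face,
    #    so strands behind the forehead and the back of the head are occluded
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    draw_mesh(wig_mesh)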
The embodiments of the present invention have been disclosed above so that those skilled in the art can understand and apply the present invention. Additional modifications will readily occur to those skilled in the art, and consequently, all such modifications and changes as may be made by those skilled in the art based on the teachings herein are deemed to be within the purview of this invention.

Claims (4)

1. A mobile-end real-time wig try-on method based on three-dimensional face alignment, characterized by comprising the following steps:
step 1, training teacher model
(1-1) collecting face pictures with a face recognition algorithm and manually screening them to obtain a data set balanced in face angle, ethnicity and gender;
(1-2) using the data set to train a teacher network, with ResNet-50 as the backbone network, in a self-supervised manner through a key-point loss function, a shape-consistency loss function and a photometric-consistency loss function, the teacher network being a parameter-regression network;
the regression target of the teacher network is a 106-dimensional parameter vector, in which 40 dimensions are shape parameters, 20 dimensions are expression parameters, 40 dimensions are texture parameters and the remaining 6 dimensions are pose parameters;
step 2, training student model
(2-1) the student network uses GhostNet as the backbone network, is likewise a parameter-regression network and learns from the teacher network through a parameter loss; the regression target of the student network is 66-dimensional, in which 40 dimensions are shape parameters, 20 dimensions are expression parameters and the remaining 6 dimensions are pose parameters;
(2-2) fine-tuning the student network with a vertex loss function to obtain the optimal network parameters;
step 3, 3DMM model compression
(3-1) performing mesh compression on the standard neutral face of the 3DMM with a mesh-compression algorithm;
(3-2) comparing the mesh-compressed neutral face with the original face and finding the closest vertices in the original face by minimum distance;
(3-3) extracting, from the parameters of the 3DMM model, the parameters corresponding to the vertices obtained in step (3-2), and reorganizing the vertex indices to obtain a simplified 3DMM model;
step 4, try on wig
(4-1) building a three-dimensional wig model, adjusting its coordinates to a uniform range and saving it as an .obj file;
(4-2) feeding the captured image into the student model for inference and outputting the reconstructed three-dimensional face coordinates computed with the simplified 3DMM model;
(4-3) using OpenGL ES, occluding the three-dimensional wig model with the computed face and finally drawing the wig model on the screen to realize augmented-reality rendering.
2. The mobile-end real-time wig try-on method based on three-dimensional face alignment according to claim 1, wherein:
the key-point loss function is as follows:
L_lmk = Σ_i w_i ‖q_i - q'_i‖²
wherein w_i is the weight of a key point, q_i is the key-point position regressed from the original picture, and q'_i is the key-point position regressed from the synthesized image;
the shape-consistency loss function is as follows:
L_shape = Σ_i ‖S_i - S'_i‖²
wherein S_i is the shape coefficient of the input image and S'_i is the shape coefficient of the synthesized image;
the photometric-consistency loss function is as follows:
L_photo = (1/|M|) Σ_{i∈M} ‖I_i - I'_i‖
wherein I_i is the pixel value at a point of the input image, I'_i is the pixel value at the corresponding point of the synthesized picture, i ranges over M, and M is the face region of the image.
3. The mobile-end real-time wig try-on method based on three-dimensional face alignment according to claim 1, wherein:
in step 1, a face key-point regression algorithm is used to estimate the rotation angle of the face in each picture, and the pictures are classified according to the rotation angle.
4. The mobile-end real-time wig try-on method based on three-dimensional face alignment according to claim 1, wherein:
in step 4, the reconstructed face occludes the wig model in OpenGL space through the glColorMask function of OpenGL ES, and the reconstructed face itself is not drawn into the image frame.
CN202210522721.3A 2022-05-13 2022-05-13 Mobile-end real-time wig try-on method based on three-dimensional face alignment Active CN114821737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210522721.3A CN114821737B (en) 2022-05-13 2022-05-13 Mobile-end real-time wig try-on method based on three-dimensional face alignment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210522721.3A CN114821737B (en) 2022-05-13 2022-05-13 Mobile-end real-time wig try-on method based on three-dimensional face alignment

Publications (2)

Publication Number Publication Date
CN114821737A true CN114821737A (en) 2022-07-29
CN114821737B CN114821737B (en) 2024-06-04

Family

ID=82515757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210522721.3A Active CN114821737B (en) 2022-05-13 2022-05-13 Mobile-end real-time wig try-on method based on three-dimensional face alignment

Country Status (1)

Country Link
CN (1) CN114821737B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020199693A1 (en) * 2019-03-29 2020-10-08 中国科学院深圳先进技术研究院 Large-pose face recognition method and apparatus, and device
WO2021051543A1 (en) * 2019-09-18 2021-03-25 平安科技(深圳)有限公司 Method for generating face rotation model, apparatus, computer device and storage medium
WO2021051611A1 (en) * 2019-09-19 2021-03-25 平安科技(深圳)有限公司 Face visibility-based face recognition method, system, device, and storage medium
CN112116699A (en) * 2020-08-14 2020-12-22 浙江工商大学 Real-time real-person virtual hair try-on method based on 3D face tracking
CN112802031A (en) * 2021-01-06 2021-05-14 浙江工商大学 Real-time virtual hair trial method based on three-dimensional human head tracking
CN114067414A (en) * 2021-11-26 2022-02-18 南京烽火天地通信科技有限公司 Face dense key point detection algorithm based on 3D face

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HEBA NOMANI; SHANTA SONDUR (I.T. Department, VESIT, Mumbai, India): "3D Face Generation from Sketch Using ASM and 3DMM", 2018 International Conference on Advances in Communication and Computing Technology (ICACCT), 11 November 2018 *
唐博奕; 杨文武; 赵叶清; 杨柏林; 金剑秋: "基于3D人脸跟踪的实时真人虚拟试发" (Real-time real-person virtual hair try-on based on 3D face tracking), Journal of Computer-Aided Design & Computer Graphics, vol. 33, no. 9, 2 August 2021 *
王灵珍; 赖惠成: "基于多任务级联CNN与中心损失的人脸识别" (Face recognition based on multi-task cascaded CNN and center loss), Computer Simulation, no. 08, 15 August 2020 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091675A (en) * 2023-04-06 2023-05-09 腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN116630350A (en) * 2023-07-26 2023-08-22 瑞茜时尚(深圳)有限公司 Wig wearing monitoring management method and system
CN116630350B (en) * 2023-07-26 2023-10-03 瑞茜时尚(深圳)有限公司 Wig wearing monitoring management method and system

Also Published As

Publication number Publication date
CN114821737B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN114821737B (en) Mobile-end real-time wig try-on method based on three-dimensional face alignment
CN109815893B (en) Color face image illumination domain normalization method based on cyclic generation countermeasure network
Guo et al. Subjective and objective visual quality assessment of textured 3D meshes
CN113269872A (en) Synthetic video generation method based on three-dimensional face reconstruction and video key frame optimization
CN110097609B (en) Sample domain-based refined embroidery texture migration method
CN108229279A (en) Face image processing process, device and electronic equipment
CN116109798B (en) Image data processing method, device, equipment and medium
CN109993698A (en) Single-image super-resolution texture enhancement method based on a generative adversarial network
CN110570377A (en) group normalization-based rapid image style migration method
CN113822982B (en) Human body three-dimensional model construction method and device, electronic equipment and storage medium
Liu et al. Image decolorization combining local features and exposure features
CN116997933A (en) Method and system for constructing facial position map
CN106127818A (en) A kind of material appearance based on single image obtains system and method
CN106709504A (en) Detail-preserving high fidelity tone mapping method
US20240029345A1 (en) Methods and system for generating 3d virtual objects
CN112288645A (en) Skull face restoration model construction method, restoration method and restoration system
CN112116699A (en) Real-time real-person virtual hair try-on method based on 3D face tracking
Jiang et al. Multi-angle projection based blind omnidirectional image quality assessment
CN113096015B (en) Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network
CN112686817B (en) Image completion method based on uncertainty estimation
Zhang et al. A Reduced-Reference Quality Assessment Metric for Textured Mesh Digital Humans
CN116033279B (en) Near infrared image colorization method, system and equipment for night monitoring camera
CN107909565A (en) Stereo-picture Comfort Evaluation method based on convolutional neural networks
AU2021101766A4 (en) Cartoonify Image Detection Using Machine Learning
CN114359180A (en) Virtual reality-oriented image quality evaluation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant