CN104463777A - Human-face-based real-time depth of field method - Google Patents
- Publication number
- CN104463777A (application CN201410631708.7A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a face-based real-time depth-of-field method. A real-time image is obtained from the live-preview camera data, and face detection is performed on it to obtain the face region and face key points. A transformed face contour map is then obtained by affine transformation from a preset face contour map, its corresponding preset face key points, and the detected face region and key points. Finally, the transformed face contour map is used as a mask over the face region of the real-time image: a transparency computation between the real-time image and a blurred copy of it yields the face depth-of-field image, which is previewed on screen in real time as the display image. No extra hardware and no manual intervention are required, so depth-of-field processing during self-portrait shooting is automatic, and the result is better and more natural.
Description
Technical field
The present invention relates to the field of photography, and in particular to a face-based real-time depth-of-field method.
Background technology
With the steady improvement of living standards and technology, taking photos has become a common part of daily life; we can freely capture images to record a memorable moment or scene. To make the subject stand out, a depth-of-field effect is often used: the subject is kept sharp while the background is blurred, detaching the subject from the background and making it more attractive, especially in self-portraits. However, this effect normally requires camera hardware support at capture time. With ordinary capture equipment, depth-of-field processing must instead be applied to the image after shooting, which is cumbersome to operate and a real obstacle for non-experts.
Summary of the invention
To solve the above problem, the present invention provides a face-based real-time depth-of-field method that automatically applies depth-of-field processing to the face region using a face mask, conveniently and quickly.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A face-based real-time depth-of-field method, characterized by comprising the following steps:
10. Obtain the live-preview camera data to get a real-time image;
20. Perform face detection on the real-time image; if a face is detected, obtain the face region and face key points, otherwise use the real-time image as the display image and go to step 70;
30. Blur the real-time image to obtain a blurred image;
40. Preset a face contour map and its corresponding face key points, and obtain a transformed face contour map by affine transformation according to the detected face region and face key points;
50. Use the transformed face contour map as the mask of the face region of the real-time image;
60. Perform the transparency computation between the masked real-time image and the blurred image to obtain the face depth-of-field image, and use it as the display image;
70. Preview the display image on screen in real time, and return to step 10.
Preferably, the blurring of the real-time image in step 30 comprises one or more of: median blur, Gaussian blur, mean blur, and convolution.
Preferably, in step 50 the transformed face contour map is used as the mask of the face region of the real-time image. Exploiting the generality of face contours, a contour map is generated in advance, in which white represents the face contour region, black represents the non-face region, and grey represents the transition region.
Preferably, in step 60 the transparency computation applied to the masked real-time image and the blurred image to obtain the face depth-of-field image is:
Alpha=FaceColor/255.0;
where FaceColor is the color value of the transformed face contour map and Alpha is the transparency of the transformed face contour map used as the mask.
Preferably, in step 60 the face depth-of-field image is computed from the masked real-time image and the blurred image as:
ResultColor=Color*Alpha+BlurColor*(1.0-Alpha);
where ResultColor is the color value of the face depth-of-field image, Color is the color value of the real-time image, Alpha is the transparency of the transformed face contour map used as the mask, and BlurColor is the color value of the blurred image.
The beneficial effects of the invention are:
In the face-based real-time depth-of-field method of the present invention, a real-time image is obtained from the live-preview camera data, and face detection yields the face region and face key points. A transformed face contour map is then obtained by affine transformation from a preset face contour map, its corresponding face key points, and the detected face region and key points. Finally, the transformed face contour map is used as the mask of the face region for the transparency computation between the real-time image and the blurred image, producing the face depth-of-field image that is previewed on screen as the display image. No extra hardware and no manual intervention are required, so depth-of-field processing during self-portrait shooting is automatic, and the result is better and more natural.
Accompanying drawing explanation
The accompanying drawings described here provide a further understanding of the present invention and form a part of it; the schematic embodiments and their description explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is the general flow chart of the face-based real-time depth-of-field method of the present invention;
Fig. 2 is the real-time camera image of a specific embodiment of the present invention;
Fig. 3 is the transformed face contour map obtained from Fig. 2 by affine transformation;
Fig. 4 is the display image obtained by applying the real-time depth-of-field processing of the present invention to Fig. 2.
Embodiment
To make the technical problem to be solved, the technical solution and the beneficial effects of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
As shown in Fig. 1, the face-based real-time depth-of-field method of the present invention comprises the following steps:
10. Obtain the live-preview camera data to get a real-time image, as in Fig. 2;
20. Perform face detection on the real-time image; if a face is detected, obtain the face region and face key points, otherwise use the real-time image as the display image and go to step 70;
30. Blur the real-time image to obtain a blurred image;
40. Preset a face contour map and its corresponding face key points, and obtain a transformed face contour map by affine transformation according to the detected face region and face key points, as in Fig. 3;
50. Use the transformed face contour map as the mask of the face region of the real-time image;
60. Perform the transparency computation between the masked real-time image and the blurred image to obtain the face depth-of-field image, and use it as the display image, as in Fig. 4;
70. Preview the display image on screen in real time, and return to step 10.
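The preview loop above can be sketched as follows. Here `detect_face`, `blur_image` and `make_face_mask` are hypothetical stand-ins for the detector of step 20, the blur of step 30 and the warped contour mask of steps 40-50, and images are modeled as 2D lists of grayscale values purely for illustration:

```python
# Control-flow sketch of steps 10-70 for a single preview frame. The three
# callables are placeholders (assumptions), not the patent's implementations.

def preview_frame(frame, detect_face, blur_image, make_face_mask):
    face = detect_face(frame)           # step 20: face region/key points, or None
    if face is None:
        return frame                    # no face: display the raw frame (step 70)
    blurred = blur_image(frame)         # step 30: blurred copy
    mask = make_face_mask(frame, face)  # steps 40-50: contour map as mask, 0..255
    # Step 60: per-pixel transparency blend, Alpha = FaceColor / 255.0
    return [
        [px * (m / 255.0) + bl * (1.0 - m / 255.0)
         for px, bl, m in zip(row, brow, mrow)]
        for row, brow, mrow in zip(frame, blurred, mask)
    ]
```

A white mask value keeps the real-time pixel sharp, a black value shows the blurred pixel, and grey values produce the transition.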
Face detection in step 20 uses the prior art, for example P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), and is therefore not repeated here. Once a face is detected, the approximate position of the face region is obtained from its location.
The blurring of the real-time image in step 30 comprises one or more of: median blur, Gaussian blur, mean blur, and convolution, as follows:
Median blur, i.e. median filtering, sorts the color values of the N*N template of pixels around the pixel to be processed in ascending or descending order, takes the middle color value after sorting (the median), and sets the color value of the pixel to that median; N is the blur radius.
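A minimal sketch of the median filtering just described, assuming grayscale images stored as 2D lists; clamping at the image borders is an assumed convention the text does not specify:

```python
def median_blur(img, n):
    """Median filter: replace each pixel with the median of its n*n template.

    n is the template side length (odd); borders are clamped (assumption).
    """
    h, w = len(img), len(img[0])
    r = n // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather and sort the n*n neighborhood, clamping at the edges.
            vals = sorted(
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            )
            out[y][x] = vals[len(vals) // 2]  # middle value after sorting
    return out
```

A single outlier pixel (salt noise) is replaced by its neighborhood median, which is why median filtering suppresses impulse noise.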
Gaussian blur computes the contribution of each pixel in the image using the normal distribution. The normal distribution equation in N-dimensional space is:
G(r) = 1 / (sqrt(2*pi)*sigma)^N * e^(-r^2/(2*sigma^2));
and in two-dimensional space:
G(u,v) = 1 / (2*pi*sigma^2) * e^(-(u^2+v^2)/(2*sigma^2));
where r is the blur radius, r^2 = u^2 + v^2, sigma is the standard deviation of the normal distribution, u is the position offset of the original pixel on the x-axis, and v is its position offset on the y-axis.
Mean blur is a classic linear filtering algorithm. A template is assigned to the target pixel, consisting of its neighboring pixels: the 8 pixels surrounding the target pixel form the filtering template (the target pixel itself is excluded). The original pixel value is then replaced by the mean of all pixels in the template.
Convolution is an operation applied to every element of the image matrix. The function a convolution realizes is determined by the form of its kernel: a fixed-size matrix of numerical parameters whose center is the reference point (anchor); the size of the matrix is called the kernel support. To compute the convolved color value of a pixel, the reference point of the kernel is placed on that pixel so that the remaining kernel elements cover the surrounding pixels; for each pixel covered by the kernel, the product of that pixel's value and the corresponding kernel value is formed, and the sum of all these products is the convolution value at the reference point, which replaces the pixel's color value. The operation is repeated for every pixel by moving the kernel over the entire image.
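A sketch of this kernel-based operation (strictly, applying the kernel without flipping it is correlation; for the symmetric kernels used here the two coincide). A uniform box kernel reproduces a mean blur, except that the text's mean-blur template excludes the center pixel. Border clamping is an assumed convention:

```python
def convolve(img, kernel):
    """Anchor the kernel center on each pixel, multiply overlapping values,
    and sum. Grayscale 2D lists; borders clamped (assumption)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ry, rx = kh // 2, kw // 2               # reference point = kernel center
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    sy = min(max(y + dy - ry, 0), h - 1)
                    sx = min(max(x + dx - rx, 0), w - 1)
                    acc += img[sy][sx] * kernel[dy][dx]
            out[y][x] = acc
    return out

# Uniform box kernel: a 3x3 mean blur expressed as a convolution.
BOX3 = [[1.0 / 9.0] * 3 for _ in range(3)]
```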
In step 40, a face contour map and its corresponding face key points are preset, and the transformed face contour map is obtained by affine transformation according to the detected face region and face key points. The face key points mainly cover the eye contours, the mouth, the eyebrows, the face contour line, the forehead, etc.
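A planar affine transformation has six unknowns and is therefore determined by three point correspondences; the following sketch estimates it from three matched key points by solving the resulting linear system with Cramer's rule (which three points are used is an assumption, except for the forehead case the text specifies):

```python
def affine_from_3pts(src, dst):
    """Solve dst_x = a*x + b*y + c, dst_y = d*x + e*y + f from three
    source/destination point pairs (Cramer's rule on the 3x3 system)."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve(v0, v1, v2):
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    a, b, c = solve(*[p[0] for p in dst])   # x-row of the 2x3 affine matrix
    d, e, f = solve(*[p[1] for p in dst])   # y-row of the 2x3 affine matrix
    return (a, b, c, d, e, f)

def apply_affine(m, pt):
    a, b, c, d, e, f = m
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)
```

Warping every pixel of the preset contour map through such a matrix aligns it with the detected face.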
The basic idea of face mesh generation is to first design a standard triangle mesh matching the basic face shape and organ layout; by numbering the triangle vertices, the relative positions of the mesh points and the topology of the triangular patches are obtained. The standard mesh is then calibrated and deformed using control-point coordinates produced by a facial feature point extraction algorithm, yielding a personalized face mesh for each face photo.
The match points between points on a curve are computed with Lagrange interpolation.
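A minimal sketch of Lagrange interpolation: evaluate, at a given x, the unique polynomial passing through a set of (x, y) nodes with distinct x-coordinates:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through the given
    (xi, yi) nodes at x (the xi must be distinct)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                # Basis polynomial L_i(x): 1 at xi, 0 at every other node.
                term *= (x - xj) / (xi - xj)
        total += term
    return total
```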
The mesh-point generation algorithm is as follows:
Eye contour: of the 88 feature points, 16 lie on the eye contours, while 20 points in the standard mesh must be calibrated. The upper parabola of each eye is generated from the left eye corner, the right eye corner and the upper middle point; the lower parabola from the left eye corner, the right eye corner and the lower middle point. Sampling the four parabolas at equal horizontal distances yields all 20 points.
Mouth: of the 88 feature points, 22 lie on the mouth contour, while 34 points in the standard mesh must be calibrated. Parabolas 9-12 are generated and fitted, yielding all 34 points.
Eyebrows: of the 88 feature points, 16 lie on the eyebrows, while 20 points in the standard mesh must be calibrated. Parabolas 1-4 are generated and fitted, yielding all 20 points.
Face contour line: 21 of the 88 feature points represent the face contour line, while the mesh uses 33 points for it. The contour line is divided into 4 segments, fitted with parabolas 13-16 respectively.
Forehead: an affine transformation matrix is computed from the forehead hairline point (trichion) and the two cheek peaks of the actual face and of the standard face. Since the forehead plays only a minor role in facial expression, its mesh is generated approximately by this affine transformation.
Other points: points at positions such as the forehead, the cheeks and around the mouth have their coordinates computed proportionally from the mesh points at reserved positions.
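The parabola-based steps above can be sketched as follows: fit y = a*x^2 + b*x + c through three points (e.g. the two eye corners and a middle point) and sample the curve at equal horizontal distances; the number of samples is a parameter here, not fixed by the text:

```python
def parabola_through(p0, p1, p2):
    """Coefficients (a, b, c) of y = a*x^2 + b*x + c through three points
    with distinct x (e.g. left eye corner, middle point, right eye corner).
    Uses the divided-difference form of the quadratic interpolant."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)
    b = d1 - a * (x0 + x1)
    c = y0 - a * x0 * x0 - b * x0
    return a, b, c

def sample_equal_x(coeffs, x_start, x_end, n):
    """n calibration points on the parabola at equal horizontal spacing."""
    a, b, c = coeffs
    xs = [x_start + (x_end - x_start) * i / (n - 1) for i in range(n)]
    return [(x, a * x * x + b * x + c) for x in xs]
```

For the eye contour, two such fits (upper and lower parabola) sampled this way supply the calibrated mesh points.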
In step 50, the transformed face contour map is used as the mask of the face region of the real-time image. Exploiting the generality of face contours, a contour map is generated in advance, in which white represents the face contour region, black represents the non-face region, and grey represents the transition region.
In step 60, the transparency computation between the masked real-time image and the blurred image yields the face depth-of-field image, as follows:
The transparency is computed as:
Alpha=FaceColor/255.0;
where FaceColor is the color value of the transformed face contour map and Alpha is the transparency of the transformed face contour map used as the mask.
The face depth-of-field image is computed as:
ResultColor=Color*Alpha+BlurColor*(1.0-Alpha);
where ResultColor is the color value of the face depth-of-field image, Color is the color value of the real-time image, Alpha is the transparency of the transformed face contour map used as the mask, and BlurColor is the color value of the blurred image.
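The two formulas combine into a single per-pixel blend. Applying it independently to each RGB channel is an assumption (the text only speaks of color values):

```python
def face_depth_pixel(color, blur_color, face_color):
    """One pixel of step 60:
    Alpha = FaceColor / 255.0
    ResultColor = Color * Alpha + BlurColor * (1.0 - Alpha)"""
    alpha = face_color / 255.0
    return color * alpha + blur_color * (1.0 - alpha)
```

A white mask pixel (255) keeps the sharp real-time value, a black one (0) shows the blurred value, and grey values interpolate smoothly between the two, producing the soft sharp-to-blurred transition around the face.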
The present invention requires no depth-of-field support from the camera, so the hardware cost is low, and no manual contour drawing around the face region or other manual intervention is needed. Depth-of-field processing during self-portrait shooting is therefore automatic and easier to operate, and the self-portrait result is better and more natural.
The above description illustrates the preferred embodiments of the present invention. It should be understood that the present invention is not limited to the forms disclosed herein, which should not be regarded as excluding other embodiments; it can be used in various other combinations, modifications and environments, and can be changed within the scope contemplated herein through the above teachings or the techniques and knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall fall within the protection scope of the appended claims.
Claims (5)
1. A face-based real-time depth-of-field method, characterized by comprising the following steps:
10. Obtain the live-preview camera data to get a real-time image;
20. Perform face detection on the real-time image; if a face is detected, obtain the face region and face key points, otherwise use the real-time image as the display image and go to step 70;
30. Blur the real-time image to obtain a blurred image;
40. Preset a face contour map and its corresponding face key points, and obtain a transformed face contour map by affine transformation according to the detected face region and face key points;
50. Use the transformed face contour map as the mask of the face region of the real-time image;
60. Perform the transparency computation between the masked real-time image and the blurred image to obtain the face depth-of-field image, and use it as the display image;
70. Preview the display image on screen in real time, and return to step 10.
2. the method for a kind of real time field depth based on face according to claim 1, it is characterized in that: carry out Fuzzy Processing to realtime graphic in described step 30, described Fuzzy Processing comprises: one or more combinations of intermediate value Fuzzy Processing, Gaussian Blur process, average Fuzzy Processing, process of convolution.
3. the method for a kind of real time field depth based on face according to claim 1, it is characterized in that: using the masking-out of described conversion facial contour figure as the human face region of described realtime graphic in described step 50, the ubiquity of facial contour is mainly utilized to generate a profile diagram in advance, white in described profile diagram represents face contour area, black represents non-face contour area, and grey represents transitional region.
4. the method for a kind of real time field depth based on face according to claim 1 or 3, it is characterized in that: carry out transparency to the described realtime graphic with human face region masking-out and described blurred picture in described step 60 and calculate face depth image, these transparency computing method are:
Alpha=FaceColor/255.0;
Wherein, FaceColor is the color value of described conversion facial contour figure; Alpha is the transparency of this conversion facial contour figure as masking-out.
5. the method for a kind of real time field depth based on face according to claim 4, it is characterized in that: carry out transparency to the described realtime graphic with human face region masking-out and described blurred picture in described step 60 and calculate face depth image, the computing method of this face depth image are:
ResultColor=Color*Alpha+BlurColor*(1.0-Alpha);
Wherein, ResultColor is the color value of face depth image; Color is the color value of realtime graphic; Alpha is the transparency of this conversion facial contour figure as masking-out; BlurColor is the color value of blurred picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410631708.7A CN104463777B (en) | 2014-11-11 | 2014-11-11 | Human-face-based real-time depth of field method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104463777A true CN104463777A (en) | 2015-03-25 |
CN104463777B CN104463777B (en) | 2018-11-06 |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104778712A (en) * | 2015-04-27 | 2015-07-15 | 厦门美图之家科技有限公司 | Method and system for pasting image to human face based on affine transformation |
CN106919899A (en) * | 2017-01-18 | 2017-07-04 | 北京光年无限科技有限公司 | Method and system for simulating facial expression output based on intelligent robot |
CN107563329A (en) * | 2017-09-01 | 2018-01-09 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and mobile terminal |
CN109285160A (en) * | 2018-08-29 | 2019-01-29 | 成都品果科技有限公司 | Image matting method and system |
CN109325924A (en) * | 2018-09-20 | 2019-02-12 | 广州酷狗计算机科技有限公司 | Image processing method, device, terminal and storage medium |
US20190166302A1 (en) * | 2017-11-30 | 2019-05-30 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for blurring preview picture and storage medium |
CN111126344A (en) * | 2019-12-31 | 2020-05-08 | 杭州趣维科技有限公司 | Method and system for generating key points of forehead of human face |
CN111754415A (en) * | 2019-08-28 | 2020-10-09 | 北京市商汤科技开发有限公司 | Face image processing method and device, image equipment and storage medium |
CN113362357A (en) * | 2021-06-03 | 2021-09-07 | 北京三快在线科技有限公司 | Feature point determination method, device, equipment and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020081003A1 (en) * | 2000-12-27 | 2002-06-27 | Sobol Robert E. | System and method for automatically enhancing graphical images |
CN1889129A (en) * | 2006-07-20 | 2007-01-03 | 北京中星微电子有限公司 | Fast human face model building method and system based on single-sheet photo |
CN102592141A (en) * | 2012-01-04 | 2012-07-18 | 南京理工大学常熟研究院有限公司 | Method for shielding face in dynamic image |
CN103593834A (en) * | 2013-12-03 | 2014-02-19 | 厦门美图网科技有限公司 | Image enhancement method achieved by intelligently increasing field depth |
CN103973977A (en) * | 2014-04-15 | 2014-08-06 | 联想(北京)有限公司 | Blurring processing method and device for preview interface and electronic equipment |
Non-Patent Citations (1)
Title |
---|
LIANG Luhong et al.: "Multi-view single-face localization based on affine template matching", Chinese Journal of Computers (《计算机学报》) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||