CN107578437A - Depth estimation method, system and portable terminal based on a light-field camera - Google Patents
Depth estimation method, system and portable terminal based on a light-field camera
- Publication number: CN107578437A
- Application number: CN201710767874.3A
- Authority
- CN
- China
- Prior art keywords
- light field
- image
- sequence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
Abstract
The present invention relates to a depth estimation method, system and portable terminal based on a light-field camera. The depth estimation method includes: obtaining the four-dimensional light-field data captured by a light-field camera; decoding the four-dimensional light-field data to obtain a light-field image; applying a refocusing algorithm to obtain a refocused light-field sequence; building an index image and an index table for the refocused sequence; looking up the index table to obtain the alpha value corresponding to each pixel; and obtaining the depth of the object from the object-image relation. The present invention also provides a light-field-camera depth estimation system and a portable terminal. With the depth estimation method, system and portable terminal of the present invention, the depth estimation algorithm is simple, the processor load is small, and the accuracy of the acquired depth map is high.
Description
Technical field
The invention belongs to the field of image processing, and more particularly relates to a depth estimation method, system and portable terminal for a light-field camera.
Background technology
Compared with a traditional camera, which records only the intensity of light, a light-field camera places a microlens array inside the camera so that both the intensity and the angle of incoming light are recorded simultaneously. A traditional camera must capture images from multiple camera positions to recover depth, whereas a light-field camera uses the angular information of the four-dimensional light field to reproduce the image under different viewpoints, which makes the computation of a depth map far more convenient.
Obtaining the depth of a scene enables applications such as multi-view synthesis and three-dimensional reconstruction. With the development of virtual reality (VR) and augmented reality (AR), depth-map acquisition has become a hot topic in the image and vision fields. Current depth estimation methods involve complex computation, place a heavy load on the processor, and yield depth maps of limited accuracy. Chinese invention patent application CN107038719A discloses a depth estimation method and system based on angular-domain pixels of a light-field image; the depth estimation method of that technical scheme is complex, its processor load is excessive, and the accuracy of the acquired depth map is not high.
Therefore, the industry urgently needs a method that can obtain a depth map conveniently and quickly, overcoming the shortcomings of the prior art.
The content of the invention
The object of the present invention is to provide a depth estimation method, system and portable terminal based on a light-field camera, aiming to solve the problems that existing light-field-camera depth estimation methods are too complex, burden the processor excessively, and produce depth maps of low accuracy.
In a first aspect, the present invention provides a depth estimation method based on a light-field camera, comprising the following steps:
obtaining the four-dimensional light-field data captured by a light-field camera;
decoding the four-dimensional light-field data to obtain a light-field image;
applying a refocusing algorithm to the light-field image to obtain a refocused light-field sequence;
building an index image and an index table for the refocused sequence;
looking up the index table to obtain the alpha value corresponding to each pixel;
obtaining the depth of the object according to the object-image relation.
Further, the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens.
Further, decoding the four-dimensional light-field data yields a RAW-format light-field image; the method still further comprises demosaicing and colour-correcting the decoded light-field image.
Further, the demosaiced, colour-corrected light-field image is an RGB-format light-field image I_LF.
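As an illustration of the decoding step, the following sketch rearranges a demosaiced plenoptic sensor image into the four-dimensional array LF(u, v, s, t). It assumes an idealized rectangular microlens grid in which each microlens covers exactly a u_res × v_res block of pixels; the function name and grid parameters are illustrative, and a real camera would need calibrated lenslet centres and resampling.

```python
import numpy as np

def decode_lightfield(raw, u_res=9, v_res=9):
    """Rearrange a demosaiced plenoptic sensor image into LF[u, v, s, t].

    Idealized model: pixel (i, j) lies under microlens
    (s, t) = (i // u_res, j // v_res) with angular index
    (u, v) = (i % u_res, j % v_res).
    """
    H, W = raw.shape
    s_res, t_res = H // u_res, W // v_res
    raw = raw[:s_res * u_res, :t_res * v_res]  # crop partial lenslets
    # reshape to [s, u, t, v], then reorder axes to [u, v, s, t]
    LF = raw.reshape(s_res, u_res, t_res, v_res).transpose(1, 3, 0, 2)
    return LF  # shape (u_res, v_res, s_res, t_res)
```

Under this model, LF[u, v, s, t] == raw[s * u_res + u, t * v_res + v], i.e. fixing (u, v) extracts one sub-aperture view of the scene.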
Further, applying the refocusing algorithm to the light-field image to obtain the refocused light-field sequence specifically includes: setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N);
(Formula 1)
(Formula 2).
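Formulas 1 and 2 appear only as images in the source, so the sketch below uses the classic shift-and-add formulation of light-field refocusing as a stand-in: each sub-aperture image is shifted in proportion to its angular offset and the refocus parameter alpha, then averaged. This is an assumption about the algorithm family, not a reproduction of the patent's exact formulas.

```python
import numpy as np

def refocus(LF, alpha):
    """Nearest-neighbour shift-and-add refocus of a 4D light field
    LF[u, v, s, t]; alpha = 1 reproduces the captured focal plane."""
    U, V, S, T = LF.shape
    out = np.zeros((S, T))
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    for u in range(U):
        for v in range(V):
            # shift grows with angular offset from the centre view
            ds = int(round((u - cu) * (1 - 1.0 / alpha)))
            dt = int(round((v - cv) * (1 - 1.0 / alpha)))
            out += np.roll(LF[u, v], shift=(ds, dt), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha over (alphaMin, alphaMax) in N steps then yields the refocused sequence S_i described above.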
Further, building the index image and index table for the refocused sequence specifically includes:
from the refocused image sequence S_i, computing the texture gradient value Grad(i, j) at each pixel P(i, j); traversing the refocused image sequence S_i to find, for each pixel P(i, j), the image in which the texture gradient is largest, and recording that image's index value I; after traversing all pixels, the index image I_index is obtained.
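The per-pixel argmax over the refocused stack can be sketched as follows. The gradient operator is an assumption (a simple |dx| + |dy| magnitude); the patent does not fix which texture-gradient measure is used.

```python
import numpy as np

def build_index_table(stack):
    """Given a refocused stack S_i of shape (N, H, W), return the index
    image I_index: for each pixel, the slice index where the local
    texture gradient is largest (i.e. where that pixel is sharpest)."""
    gy, gx = np.gradient(stack, axis=(1, 2))  # per-slice spatial gradients
    grad = np.abs(gx) + np.abs(gy)
    return np.argmax(grad, axis=0)  # shape (H, W), values in [0, N)
```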
Further, looking up the index image I_index to obtain the alpha value corresponding to each pixel specifically includes:
according to the refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, looking up I_index to obtain the alpha value corresponding to each index value; different alpha values correspond to different image distances L'; given the focal length f of the camera, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3).
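Formula 3 is likewise an image not reproduced in the text. A minimal sketch of the alpha-to-object-distance conversion, assuming the common convention that alpha rescales the image distance (L' = alpha · L0, with L0 the image distance of the originally focused plane) and using the standard thin-lens object-image relation 1/f = 1/L + 1/L':

```python
def object_distance(alpha, f, L0):
    """Object distance L for refocus parameter alpha, focal length f,
    and original image distance L0 (all in the same length unit).
    Assumes the thin-lens relation 1/f = 1/L + 1/L' with L' = alpha*L0;
    the patent's own formula 3 may differ in convention."""
    L_img = alpha * L0           # image distance L' of the chosen slice
    return 1.0 / (1.0 / f - 1.0 / L_img)
```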
Further, obtaining the depth of the object according to the object-image relation specifically includes: obtaining the depth map of the object, I_depth = L/f, according to the object-image relation, formula 4,
(Formula 4).
In a second aspect, the present invention provides a depth estimation system based on a light-field camera, including:
a light-field data acquisition module, for obtaining the four-dimensional light-field data LF(u, v, s, t) captured by a light-field camera;
a light-field data decoding module, for decoding the four-dimensional light-field data LF(u, v, s, t) obtained by the light-field data acquisition module into a RAW-format light-field image, and demosaicing and colour-correcting the decoded image to obtain an RGB-format light-field image I_LF;
a refocusing module, which applies the refocusing algorithm to obtain the refocused light-field sequence;
a refocused-sequence indexing module, which builds the index image and index table from the refocused image sequence S_i;
an index lookup module, which looks up the index image I_index to obtain the alpha value corresponding to each pixel.
Further, the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens.
Further, applying the refocusing algorithm to the light-field image to obtain the refocused light-field sequence specifically includes: setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N);
(Formula 1)
(Formula 2).
Further, building the index image and index table from the refocused image sequence S_i further includes: from the refocused image sequence S_i, computing the texture gradient value Grad(i, j) at each pixel P(i, j); traversing the refocused image sequence S_i to find, for each pixel P(i, j), the image in which the texture gradient is largest, and recording that image's index value I; after traversing all pixels, the index image I_index is obtained.
Further, looking up the index image I_index to obtain the alpha value corresponding to each pixel specifically includes: according to the refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, looking up I_index to obtain the alpha value corresponding to each index value; different alpha values correspond to different image distances L'; given the focal length f of the camera, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3).
Further, the depth map of the object, I_depth = L/f, is obtained according to the object-image relation, formula 4,
(Formula 4).
In a third aspect, the present invention provides a computer-readable storage medium storing a computer program, characterised in that, when the computer program is executed by a processor, the steps of the depth estimation method based on a light-field camera of the above first aspect are realised.
In a fourth aspect, the present invention provides a portable terminal, including:
one or more processors;
a memory; and
one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterised in that, when the processor executes the computer program, the steps of the depth estimation method based on a light-field camera as described in any one of claims 1 to 8 are realised.
With the depth estimation method, system and portable terminal based on a light-field camera provided by the present invention, after the four-dimensional light-field data captured by the light-field camera is obtained, the data is decoded to obtain a light-field image; the refocusing algorithm is applied to obtain the refocused light-field sequence; an index image and index table are built for the refocused sequence; the index table is looked up to obtain the alpha value corresponding to each pixel; and the depth of the object is obtained from the object-image relation. With the depth estimation method, system and portable terminal of the present invention, the depth estimation algorithm is simple, the processor load is small, and the accuracy of the acquired depth map is high.
Brief description of the drawings
Fig. 1 is a flow chart of the depth estimation method for a light-field camera of the present invention.
Fig. 2 is a module diagram of the depth estimation system for a light-field camera of the present invention.
Fig. 3 is a system architecture diagram of the depth estimation system for a light-field camera of the present invention.
Fig. 4 is a schematic diagram of the four-dimensional light-field data in the depth estimation method for a light-field camera of the present invention.
Fig. 5 is a schematic diagram of obtaining the refocused image sequence in the depth estimation method for a light-field camera of the present invention.
Fig. 6 is a schematic diagram of obtaining the depth image in the depth estimation method for a light-field camera of the present invention.
Detailed description of the embodiments
In order to make the purpose, technical scheme and beneficial effects of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely explain the present invention and are not intended to limit it.
Embodiment one:
Referring to Fig. 1, the flow chart of the depth estimation method for a light-field camera provided by embodiment one of the present invention comprises the following steps:
S101: obtain the four-dimensional light-field data captured by the light-field camera; the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens.
S102: decode the four-dimensional light-field data LF(u, v, s, t) obtained in step S101 to obtain a RAW-format light-field image, and further apply operations such as demosaicing and colour correction to the decoded light-field image to obtain an RGB-format light-field image I_LF.
S103: apply the refocusing algorithm to obtain the refocused light-field sequence; this step specifically includes:
setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N).
(Formula 1)
(Formula 2)
S104: build the index image and index table from the refocused image sequence S_i; this step specifically includes:
from the refocused image sequence S_i, compute the texture gradient value Grad(i, j) at each pixel P(i, j). On different focal planes, objects at different depth levels have different sharpness; the image in which an object is imaged most sharply is found, and that image plane is precisely the in-focus imaging plane of the object, the sharpest imaging manifesting as the maximum texture gradient at the object's pixel in the image. Therefore, the refocused image sequence S_i is traversed to find, for each pixel P(i, j), the image in which the texture gradient is largest, and that image's index value I is recorded; after traversing all pixels, the index image I_index is obtained.
S105: look up the index image I_index to obtain the alpha value corresponding to each pixel; this step specifically includes:
according to the set refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, look up I_index to obtain the alpha value corresponding to each index value; different alpha values represent different image distances L'; with the focal length f of the camera known, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3)
S106: obtain the depth map of the object, I_depth = L/f, according to the object-image relation, formula 4.
(Formula 4)
Embodiment two:
Referring to Fig. 2, the module diagram of the depth estimation system for a light-field camera of the present invention; the system includes:
a light-field data acquisition module 201, for obtaining the four-dimensional light-field data captured by the light-field camera; the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens;
a light-field data decoding module 202, for decoding the four-dimensional light-field data LF(u, v, s, t) obtained by the light-field data acquisition module 201 to obtain a RAW-format light-field image, and further demosaicing and colour-correcting the decoded light-field image to obtain an RGB-format light-field image I_LF;
a refocusing module 203, which applies the refocusing algorithm to obtain the refocused light-field sequence, specifically including:
setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N).
(Formula 1)
(Formula 2)
a refocused-sequence indexing module 204, which builds the index image and index table from the refocused image sequence S_i, specifically including: from the refocused image sequence S_i, computing the texture gradient value Grad(i, j) at each pixel P(i, j). On different focal planes, objects at different depth levels have different sharpness; the image in which an object is imaged most sharply is found, and that image plane is precisely the in-focus imaging plane of the object, the sharpest imaging manifesting as the maximum texture gradient at the object's pixel in the image. Therefore, the refocused image sequence S_i is traversed to find, for each pixel P(i, j), the image in which the texture gradient is largest, and that image's index value I is recorded; after traversing all pixels, the index image I_index is obtained;
an index lookup module 205, which looks up the index image I_index to obtain the alpha value corresponding to each pixel, specifically including:
according to the set refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, looking up I_index to obtain the alpha value corresponding to each index value; different alpha values represent different image distances L'; with the focal length f of the camera known, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3)
The depth map of the object, I_depth = L/f, is then further computed according to formula 4:
(Formula 4)
Embodiment three:
Embodiment three of the present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the depth estimation method based on a light-field camera provided by embodiment one of the present invention are realised.
Embodiment four:
Fig. 3 shows a block diagram of the concrete structure of the portable terminal provided by embodiment four of the present invention. A portable terminal 300 includes:
one or more processors 302;
a memory 301; and
one or more computer programs, wherein the one or more computer programs are stored in the memory 301 and configured to be executed by the one or more processors 302; when the processor executes the computer program, the steps of the depth estimation method based on a light-field camera provided by embodiment one of the present invention are realised.
To better understand the technical scheme of the present invention, Fig. 4 shows a schematic diagram of the four-dimensional light-field data of the depth estimation method, system and portable terminal based on a light-field camera of the present invention; in Fig. 4, F is the distance between the main lens and the microlenses of the light-field camera. Fig. 5 shows a schematic diagram of obtaining the refocused image sequence, and Fig. 6 shows a schematic diagram of obtaining the depth image, for the depth estimation method, system and portable terminal based on a light-field camera of the present invention.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent substitution and improvement made within the spirit and principle of the present invention shall be included within the scope of protection of the present invention.
Claims (16)
1. A depth estimation method based on a light-field camera, characterised by comprising the following steps:
obtaining the four-dimensional light-field data captured by a light-field camera;
decoding the four-dimensional light-field data to obtain a light-field image;
applying a refocusing algorithm to the light-field image to obtain a refocused light-field sequence;
building an index image and an index table for the refocused sequence;
looking up the index table to obtain the alpha value corresponding to each pixel;
obtaining the depth of the object according to the object-image relation.
2. The depth estimation method based on a light-field camera of claim 1, characterised in that the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens.
3. The depth estimation method based on a light-field camera of claim 1, characterised in that decoding the four-dimensional light-field data yields a RAW-format light-field image, and the method still further comprises demosaicing and colour-correcting the decoded light-field image.
4. The depth estimation method based on a light-field camera of claim 3, characterised in that the demosaiced, colour-corrected light-field image is an RGB-format light-field image I_LF.
5. The depth estimation method based on a light-field camera of claim 4, characterised in that applying the refocusing algorithm to the light-field image to obtain the refocused light-field sequence specifically includes:
setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N);
(Formula 1)
(Formula 2).
6. The depth estimation method based on a light-field camera of claim 5, characterised in that building the index image and index table for the refocused image sequence specifically includes:
from the refocused image sequence S_i, computing the texture gradient value Grad(i, j) at each pixel P(i, j); traversing the refocused image sequence S_i to find, for each pixel P(i, j), the image in which the texture gradient is largest, and recording that image's index value I; after traversing all pixels, the index image I_index is obtained.
7. The depth estimation method based on a light-field camera of claim 6, characterised in that looking up the index image I_index to obtain the alpha value corresponding to each pixel specifically includes:
according to the refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, looking up I_index to obtain the alpha value corresponding to each index value; different alpha values correspond to different image distances L'; given the focal length f of the camera, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3).
8. The depth estimation method based on a light-field camera of claim 7, characterised in that obtaining the depth of the object according to the object-image relation is specifically: obtaining the depth map of the object, I_depth = L/f, according to formula 4,
(Formula 4).
9. A depth estimation system based on a light-field camera, characterised by including:
a light-field data acquisition module, for obtaining the four-dimensional light-field data LF(u, v, s, t) captured by a light-field camera;
a light-field data decoding module, for decoding the four-dimensional light-field data LF(u, v, s, t) obtained by the light-field data acquisition module into a RAW-format light-field image, and demosaicing and colour-correcting the decoded light-field image to obtain an RGB-format light-field image I_LF;
a refocusing module, which applies the refocusing algorithm to obtain the refocused light-field sequence;
a refocused-sequence indexing module, which builds the index image and index table from the refocused image sequence S_i;
an index lookup module, which looks up the index image I_index to obtain the alpha value corresponding to each pixel.
10. The depth estimation system based on a light-field camera of claim 9, characterised in that the four-dimensional light-field data is LF(u, v, s, t), where u, v are the row and column numbers of the microlenses built into the light-field camera, and s, t are the row and column numbers of the pixels under each microlens.
11. The depth estimation system based on a light-field camera of claim 9, characterised in that applying the refocusing algorithm to the light-field image to obtain the refocused light-field sequence specifically includes: setting the refocusing range alpha ∈ (alphaMin, alphaMax) and the number N of refocused sequence images;
applying refocusing formulas 1 and 2 to the light-field image I_LF to compute N refocused images, yielding the refocused image sequence S_i, i ∈ (1, N);
(Formula 1)
(Formula 2).
12. The depth estimation system based on a light-field camera of claim 9, characterised in that building the index image and index table from the refocused image sequence S_i further includes: from the refocused image sequence S_i, computing the texture gradient value Grad(i, j) at each pixel P(i, j); traversing the refocused image sequence S_i to find, for each pixel P(i, j), the image in which the texture gradient is largest, and recording that image's index value I; after traversing all pixels, the index image I_index is obtained.
13. The depth estimation system based on a light-field camera of claim 9, characterised in that looking up the index image I_index to obtain the alpha value corresponding to each pixel specifically includes: according to the refocusing range alpha ∈ (alphaMin, alphaMax) and the index image I_index, looking up I_index to obtain the alpha value corresponding to each index value; different alpha values correspond to different image distances L'; given the focal length f of the camera, the object distance L of each pixel P(i, j) is obtained by formula 3:
(Formula 3).
14. The depth estimation system based on a light-field camera of claim 13, characterised in that the depth map of the object, I_depth = L/f, is obtained according to the object-image relation, formula 4,
(Formula 4).
15. A computer-readable storage medium storing a computer program, characterised in that, when the computer program is executed by a processor, the steps of the depth estimation method based on a light-field camera of any one of claims 1 to 8 are realised.
16. A portable terminal, including:
one or more processors;
a memory; and
one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, characterised in that, when the processor executes the computer program, the steps of the depth estimation method based on a light-field camera of any one of claims 1 to 8 are realised.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710767874.3A CN107578437A (en) | 2017-08-31 | 2017-08-31 | A kind of depth estimation method based on light-field camera, system and portable terminal |
PCT/CN2018/101467 WO2019042185A1 (en) | 2017-08-31 | 2018-08-21 | Light-field camera-based depth estimating method and system and portable terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710767874.3A CN107578437A (en) | 2017-08-31 | 2017-08-31 | A kind of depth estimation method based on light-field camera, system and portable terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107578437A true CN107578437A (en) | 2018-01-12 |
Family
ID=61030042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710767874.3A Pending CN107578437A (en) | 2017-08-31 | 2017-08-31 | A kind of depth estimation method based on light-field camera, system and portable terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107578437A (en) |
WO (1) | WO2019042185A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537836A (en) * | 2018-04-12 | 2018-09-14 | 维沃移动通信有限公司 | A kind of depth data acquisition methods and mobile terminal |
WO2019042185A1 (en) * | 2017-08-31 | 2019-03-07 | 深圳岚锋创视网络科技有限公司 | Light-field camera-based depth estimating method and system and portable terminal |
CN109993764A (en) * | 2019-04-03 | 2019-07-09 | 清华大学深圳研究生院 | A kind of light field depth estimation method based on frequency domain energy distribution |
CN110349132A (en) * | 2019-06-25 | 2019-10-18 | 武汉纺织大学 | A kind of fabric defects detection method based on light-field camera extraction of depth information |
CN110662014A (en) * | 2019-09-25 | 2020-01-07 | 江南大学 | Light field camera four-dimensional data large depth-of-field three-dimensional display method |
WO2020207185A1 (en) * | 2019-04-08 | 2020-10-15 | 深圳市视觉动力科技有限公司 | Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system |
CN115661223A (en) * | 2022-12-09 | 2023-01-31 | 中国人民解放军国防科技大学 | Light field depth estimation method, light field depth estimation device, computer equipment and storage medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827343B (en) * | 2019-11-06 | 2024-01-26 | 太原科技大学 | Improved light field depth estimation method based on energy enhanced defocus response |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104683684A (en) * | 2013-11-29 | 2015-06-03 | 华为技术有限公司 | Optical field image processing method and optical field image processing device as well as optical field camera |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101562701B (en) * | 2009-03-25 | 2012-05-02 | 北京航空航天大学 | Digital focusing method and digital focusing device used for optical field imaging |
US8849064B2 (en) * | 2013-02-14 | 2014-09-30 | Fotonation Limited | Method and apparatus for viewing images |
CN104519347B (en) * | 2014-12-10 | 2017-03-01 | 北京智谷睿拓技术服务有限公司 | Light field display control method and device, light field display device |
CN107578437A (en) * | 2017-08-31 | 2018-01-12 | 深圳岚锋创视网络科技有限公司 | A kind of depth estimation method based on light-field camera, system and portable terminal |
Legal events:
- 2017-08-31: CN application CN201710767874.3A filed (CN107578437A, status Pending)
- 2018-08-21: PCT application PCT/CN2018/101467 filed (WO2019042185A1, Application Filing)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104683684A (en) * | 2013-11-29 | 2015-06-03 | 华为技术有限公司 | Optical field image processing method and optical field image processing device as well as optical field camera |
Non-Patent Citations (2)
Title |
---|
一度逍遥: "Depth map estimation (Depth Estimation) using light fields, part one — the focus algorithm", https://www.cnblogs.com/riddick/p/6754554.html * |
Ding Jianghua et al.: "Depth estimation of light-field images based on a microlens array", Science Technology and Engineering * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019042185A1 (en) * | 2017-08-31 | 2019-03-07 | Shenzhen Arashi Vision Co., Ltd. | Light-field-camera-based depth estimation method, system and portable terminal |
CN108537836A (en) * | 2018-04-12 | 2018-09-14 | Vivo Mobile Communication Co., Ltd. | Depth data acquisition method and mobile terminal |
CN109993764A (en) * | 2019-04-03 | 2019-07-09 | Graduate School at Shenzhen, Tsinghua University | Light field depth estimation method based on frequency-domain energy distribution |
CN109993764B (en) * | 2019-04-03 | 2021-02-19 | Graduate School at Shenzhen, Tsinghua University | Light field depth estimation method based on frequency-domain energy distribution |
WO2020207185A1 (en) * | 2019-04-08 | 2020-10-15 | Shenzhen Vision Power Technology Co., Ltd. | Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system |
US11978222B2 (en) | 2019-04-08 | 2024-05-07 | Shenzhen Vision Power Technology Co., Ltd. | Three-dimensional light field technology-based optical unmanned aerial vehicle monitoring system |
CN110349132A (en) * | 2019-06-25 | 2019-10-18 | Wuhan Textile University | Fabric defect detection method based on light-field-camera depth information extraction |
CN110349132B (en) * | 2019-06-25 | 2021-06-08 | Wuhan Textile University | Fabric defect detection method based on light-field-camera depth information extraction |
CN110662014A (en) * | 2019-09-25 | 2020-01-07 | Jiangnan University | Large-depth-of-field three-dimensional display method using four-dimensional light-field camera data |
CN115661223A (en) * | 2022-12-09 | 2023-01-31 | National University of Defense Technology | Light field depth estimation method and apparatus, computer device, and storage medium |
CN115661223B (en) * | 2022-12-09 | 2023-03-28 | National University of Defense Technology | Light field depth estimation method and apparatus, computer device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
WO2019042185A1 (en) | 2019-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578437A (en) | Depth estimation method, system and portable terminal based on a light-field camera | |
CN107087107B (en) | Image processing apparatus and method based on dual camera | |
Garg et al. | Learning single camera depth estimation using dual-pixels | |
US10997696B2 (en) | Image processing method, apparatus and device | |
CN104699842B (en) | Picture display method and device | |
CN106897648B (en) | Method and system for identifying position of two-dimensional code | |
CN108055452A (en) | Image processing method, device and equipment | |
TW201118791A (en) | System and method for obtaining camera parameters from a plurality of images, and computer program products thereof | |
CN107493432A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
Sen et al. | Practical high dynamic range imaging of everyday scenes: Photographing the world as we see it with our own eyes | |
Agrafiotis et al. | Underwater photogrammetry in very shallow waters: main challenges and caustics effect removal | |
CN107704798A (en) | Image weakening method, device, computer-readable recording medium and computer equipment | |
CN106446883A (en) | Scene reconstruction method based on light label | |
CN108053438A (en) | Depth of field acquisition methods, device and equipment | |
KR20190080388A (en) | Photo Horizon Correction Method based on convolutional neural network and residual network structure | |
Zhang et al. | Synthetic aperture based on plenoptic camera for seeing through occlusions | |
CN107948618A (en) | Image processing method, device, computer-readable recording medium and computer equipment | |
WO2015069063A1 (en) | Method and system for creating a camera refocus effect | |
CN110120012A (en) | Video stitching method based on synchronized key-frame extraction from a binocular camera | |
CN109360176A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
US11734790B2 (en) | Method and apparatus for recognizing landmark in panoramic image and non-transitory computer-readable medium | |
CN109410308A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN109242793A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN108230273A (en) | Three-dimensional image processing method for an artificial compound-eye camera based on geometric information | |
CN107424135A (en) | Image processing method, device, computer-readable recording medium and computer equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180112 |