CN113225492B - Light field unit image generation method, system and storage medium - Google Patents


Info

Publication number
CN113225492B
Authority
CN
China
Prior art keywords
light field
field camera
point
pixel
included angle
Prior art date
Legal status
Active
Application number
CN202110160006.5A
Other languages
Chinese (zh)
Other versions
CN113225492A (en)
Inventor
黄辉
吴英
徐文宇
李沛
Current Assignee
Shenzhen Zhenxiang Technology Co ltd
Original Assignee
Shenzhen Zhenxiang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Zhenxiang Technology Co ltd filed Critical Shenzhen Zhenxiang Technology Co ltd
Priority to CN202110160006.5A priority Critical patent/CN113225492B/en
Publication of CN113225492A publication Critical patent/CN113225492A/en
Application granted granted Critical
Publication of CN113225492B publication Critical patent/CN113225492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a light field unit image generation method, system and storage medium. The method comprises the following steps: set the acquisition distance L_o between a point p and the light field camera array, and acquire the pixel value sequence {C_1, C_2, C_3, …, C_n} after the light field camera array captures point p at this distance; calculate each included angle θ_n between the nth light field camera and point p; match-search for the pixel point p_nk at which point p and the nth light field camera form the corresponding included angle θ_n; calculate the pixel direction-angle sequence {α_e1, α_e2, α_e3, …, α_eq} of point p on the display device; match and map the calculated direction angle α_eq against the included angle θ_n to generate the encoded pixel values S = {p_e1, p_e2, p_e3, …, p_eq}; normalize the encoded pixel values to obtain the final pixel values S_nor = {s_e1, s_e2, s_e3, …, s_eq}. The invention avoids position errors in the imaging of the light field unit image, improves imaging accuracy, requires no advance focusing, and makes acquisition flexible and convenient.

Description

Light field unit image generation method, system and storage medium
Technical Field
The present invention relates to the field of image generation technologies, and in particular, to a method, a system, and a storage medium for generating a light field unit image.
Background
A unit image is a unit portion of the image used to generate a stereoscopic image in the integrated imaging process, and each unit image contains many pixels. Unit images are the basis of integrated-imaging light field display, and the accuracy of the unit images determines the accuracy of the light field display.
Integrated imaging includes both acquisition and reconstruction of the subject.
In the prior art, the object acquisition process uses an ordinary camera array, whose field of view and spatial resolution are limited by the size of the projection array. The image generation method also requires one or more reference surfaces to be set, and introducing reference surfaces introduces their position errors, so the resulting unit images are inaccurate and the three-dimensional effect reconstructed from them is poor.
Accordingly, there is a need in the art for improvement.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a light field unit image generation method, system and storage medium that improve the accuracy of light field unit image imaging, require no advance focusing, and make acquisition flexible and convenient.
In order to achieve the above purpose, the invention adopts the following technical scheme:
in a first aspect, the invention provides a light field unit image generation method, which is applied to a light field unit image generation system based on a light field camera array, wherein the system comprises a light field camera array, a display lens array and a display device which are sequentially arranged at intervals, an object to be acquired is arranged between the light field camera array and the display lens array, the light field camera array comprises a plurality of light field cameras for carrying out pixel acquisition on the acquisition object, and the display lens array and the display device are used for reproducing the acquisition object;
setting any point on the acquisition object as p, and setting the coordinate of the point p as x;
the method for generating the unit image required by the reproduction of the acquisition object specifically comprises the following steps:
S10, set the acquisition distance L_o between point p and the light field camera array, and acquire the pixel value sequence {C_1, C_2, C_3, …, C_n} after the light field camera array captures point p at this distance;
wherein C_n represents the pixel imaged by point p at the nth light field camera, C_n = (p_n1, p_n2, p_n3, …, p_nk), and p_nk represents the kth pixel point acquired by point p at the nth light field camera;
S20, calculate each included angle θ_n between the nth light field camera and point p, and according to θ_n, match-search in {C_1, C_2, C_3, …, C_n} for the pixel point p_nk at which point p and the nth light field camera form the corresponding included angle θ_n;
wherein, within the range of θ_n, the set of imaging pixel values of point p at angle θ_n with the nth light field camera is C_θn = (p_n1, p_n2, p_n3, …, p_nk)_θ;
S30, set the distance L_i between point p and the lens array, and at this distance calculate the pixel direction-angle sequence {α_e1, α_e2, α_e3, …, α_eq} of point p on the display device, where q is the number of pixels the display device needs to encode;
S40, match the calculated direction angle α_eq with the included angle θ_n, map the pixel point p_nk on the nth light field camera corresponding to the matched angle θ_n to the pixel point p_eq on the display device corresponding to the direction angle α_eq, and generate the encoded pixel values S = {p_e1, p_e2, p_e3, …, p_eq};
S50, normalize the encoded pixel values to obtain the final pixel values S_nor = {s_e1, s_e2, s_e3, …, s_eq}.
In a second aspect, the invention provides a light field unit image generating system based on a light field camera array, wherein the system comprises a light field camera array, a display lens array and a display device which are sequentially arranged at intervals, an object to be acquired is arranged between the light field camera array and the display lens array, the light field camera array comprises a plurality of light field cameras for carrying out pixel acquisition on the object to be acquired, and the display lens array and the display device are used for reproducing the object to be acquired;
the system further includes a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the aforementioned methods when executing the computer program.
In a third aspect, the present invention proposes a computer readable storage medium, in which a computer program is stored, which computer program, when executed, implements the aforementioned method.
According to the light field unit image generation method of the invention, a light field camera array collects the light field information of the acquisition object, the included angle between the acquisition object and the light field camera array is matched with the included angle between the object and the lens array, and the matched pixel points on the light field camera are mapped to the corresponding pixel points on the display device to generate the light field unit image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to the structures shown in these drawings without giving inventive effort to those skilled in the art.
FIG. 1 is a schematic diagram of a first embodiment of a light field unit image generation system based on a light field camera array of the present invention;
FIG. 2 is a schematic diagram of a second embodiment of a light field unit image generation system based on a light field camera array of the present invention;
FIG. 3 is a flowchart of a first embodiment of a light field unit image generating method according to the present invention;
FIG. 4 is a schematic diagram of the principle of the light path in the light field unit image generating method of the present invention;
FIG. 5 is a diagram illustrating the deviation of the point p ray from the light field camera according to the present invention;
FIG. 6 is a schematic diagram of a pixel matching mapping process on a display device according to the present invention;
fig. 7 is a schematic diagram of a small included angle merging and mapping process of a pixel point according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings, in which it is evident that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments of the invention without undue burden, are within the scope of the invention.
As shown in fig. 1, a system environment to which the light field unit image generating method of the present invention is applied includes a light field camera array 100, a display lens array 200 and a display device 300, which are sequentially arranged at intervals, an object 400 to be acquired is disposed between the light field camera array 100 and the display lens array 200, the light field camera array 100 includes a plurality of light field cameras 101 for performing pixel acquisition on the object 400, and the display lens array 200 and the display device 300 are used for reproducing the object 400; the display lens array 200 includes a plurality of lenses 201.
In the system, the distances between the acquisition object 400 and the light field camera array 100 and the display lens array 200 are reasonably set according to the scene size and the depth of the acquisition object 400, so that the depth change range is as far as possible within the focusing range of the light field camera 101, and light field information with different angles can be acquired as much as possible.
The light field camera array 100 of the present invention adopts a parallel light field arrangement as shown in fig. 1 or a circular arc arrangement or a staggered arrangement as shown in fig. 2.
The light field cameras 101 in the light field camera array 100 complete the collection of light field information: they record rays reaching the sensor from different directions, capturing not only the energy of each ray but also its direction. By collecting light field information, a light field camera captures scene information in all directions and extends the traditional two-dimensional image to four dimensions. It thereby overcomes the defect of the traditional camera, which can only record the observed three-dimensional information in a planar two-dimensional form and so loses the depth information of the object.
Meanwhile, the light field camera records ray data in all directions, and the focus is selected as needed during later data processing. A light field camera therefore needs no advance focusing when shooting, is simple to operate, and shoots quickly. From a single exposure, a 3D image with depth information can be obtained.
The invention sets the light field camera array 100, does not need to introduce a reference surface in the process of generating the light field unit image like the prior art, does not cause position errors, and improves the accuracy of generating the light field unit image.
Specifically, as shown in fig. 3, the light field unit image generating method of the present invention specifically includes the following steps:
referring to fig. 4, any point on the acquisition object 400 is designated as p, and the coordinate of the point p is designated as x. P is p o 、p e The pixel size of the light field camera and the pixel size of the display equipment are respectively.
S10, set the acquisition distance L_o between point p and the light field camera array, and acquire the pixel value sequence {C_1, C_2, C_3, …, C_n} after the light field camera array captures point p at this distance.
Wherein C_n represents the pixel imaged by point p at the nth light field camera, C_n = (p_n1, p_n2, p_n3, …, p_nk), and p_nk represents the kth pixel point acquired by point p at the nth light field camera.
The acquisition distance L_o is set reasonably according to the scene size and depth of the acquisition object 400, so that the depth variation range lies as far as possible within the focusing range of the light field camera 101.
{C_1, C_2, C_3, …, C_n} are the known pixels obtained after the light field camera array 100 captures point p. C_1 represents the pixel imaged by point p at the 1st light field camera.
A light field camera can generate multiple pixel points for one object point, so the pixel imaged by point p on the nth light field camera can be expressed as C_n = (p_n1, p_n2, p_n3, …, p_nk), and the pixel imaged by point p at the 1st light field camera as C_1 = (p_11, p_12, p_13, …, p_1k).
k denotes that k pixel points are generated, and p_1k represents the kth pixel point acquired by point p at the 1st light field camera.
Because the positions of the light field cameras are different, the number of imaging pixels of the corresponding points is not necessarily the same.
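As an illustration only (not part of the patent), the ragged capture result described above — each light field camera imaging point p onto a possibly different number of pixels — maps naturally onto a nested list; all pixel values below are invented:

```python
# Hypothetical sketch of the captured data: C[n-1] holds the pixel values
# (p_n1, ..., p_nk) that point p produces on the nth light field camera.
# Sub-lists may have different lengths because the number of imaging
# pixels depends on each camera's position.
C = [
    [0.82, 0.79, 0.81],        # C_1: point p imaged onto 3 pixels of camera 1
    [0.80, 0.78, 0.77, 0.75],  # C_2: 4 pixels on camera 2
    [0.74, 0.73],              # C_3: 2 pixels on camera 3
]

n_cameras = len(C)                    # n = 3 cameras
k_per_camera = [len(c) for c in C]    # k varies per camera
```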
S20, calculate each included angle θ_n between the nth light field camera and point p, and according to θ_n, match-search in {C_1, C_2, C_3, …, C_n} for the pixel point p_nk at which point p and the nth light field camera form the corresponding included angle θ_n.
Wherein, within the range of θ_n, the set of imaging pixel values of point p at angle θ_n with the nth light field camera is C_θn = (p_n1, p_n2, p_n3, …, p_nk)_θ.
As shown in fig. 4, in the method of the present invention, C_g is the interval between two adjacent light field cameras in the light field camera array, D is the aperture of the light field camera, and n indexes the nth light field camera. As shown in fig. 5, ΔD is the deviation of the p-point ray from the nth light field camera, ΔD ∈ [0, D], discretely distributed over k values matching p_nk.
In the present invention, θ_n ∈ [θ_nmin, θ_nmax], where θ_nmin and θ_nmax are obtained when ΔD takes the values 0 and D, respectively.
preferably, the included angle theta is used in the step n In { C 1 ,C 2 ,C 3 ,…,C n The specific method for searching the matching in the process is to search the matching of the light path according to the imaging principle of the light field camera.
Because each included angle theta in the present invention n Corresponding to a pixel p nk Calculating to obtain theta n Then, according to the imaging principle of the light field camera, p can be found nk Since ΔD is a discrete distribution and the number is k, θ n Also of limited number of discrete, and p nk Matching.
In the nth light field camera, since the deviation ΔD between point p and the camera takes k values, the included angle θ_n of point p with the nth light field camera also takes k values; correspondingly, the set of imaging pixel values of point p at angle θ_n with the nth light field camera is C_θn = (p_n1, p_n2, p_n3, …, p_nk)_θ.
For example, in the 1st light field camera, the set of imaging pixel values of point p at angle θ_1 with the 1st camera is C_θ1 = (p_11, p_12, p_13, …, p_1k)_θ.
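The patent's closed-form expression for θ_n is rendered as an image in the source and is not recoverable, but the discrete structure described above can be sketched under an assumed pinhole-style geometry: take the nth camera's aperture to start at n·C_g, sample ΔD at k points over [0, D], and set θ_n = arctan((n·C_g + ΔD − x) / L_o). The geometry, the function name, and all numbers are assumptions for illustration only:

```python
import math

def theta_samples(n, x, L_o, C_g, D, k):
    """Discrete included angles theta_n between point p and the nth light
    field camera, one per deviation value dD sampled over [0, D].
    Assumed geometry (not taken from the patent): camera n's aperture
    spans [n*C_g, n*C_g + D] along the array, and point p sits at
    coordinate x, a distance L_o in front of the array."""
    angles = []
    for j in range(k):
        dD = D * j / (k - 1) if k > 1 else 0.0  # k discrete values in [0, D]
        angles.append(math.atan2(n * C_g + dD - x, L_o))
    return angles

# k discrete deviations give k discrete angles theta_n
angles = theta_samples(n=2, x=5.0, L_o=100.0, C_g=10.0, D=2.0, k=5)
```

The endpoints angles[0] and angles[-1] correspond to ΔD = 0 and ΔD = D, i.e. θ_nmin and θ_nmax in the text.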
S30, set the distance L_i between point p and the lens array, and at this distance calculate the pixel direction-angle sequence {α_e1, α_e2, α_e3, …, α_eq} of point p on the display device, where q is the number of pixels the display device needs to encode.
In particular, as shown in fig. 4, in the method of the present invention, m indexes the mth lens, g is the distance between the lens array and the display device, and A_g is the center distance between two adjacent lenses in the lens array.
In the method of the invention, g and A_g may be regarded as layout parameters of the display device, with the lens array focal distance set to f_1; the direction angles then follow from these parameters.
At the same time, i denotes any position on the display device, i ∈ [1, q], and its maximum value is determined by the resolution of the display device.
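The formula relating α to the layout parameters is likewise an image in the source; the sketch below assumes pixel i sits at i·p_e on the display and is viewed through the centre of the mth lens at m·A_g, a distance g in front of the display, so that α_ei = arctan((i·p_e − m·A_g) / g). Everything here is an illustrative assumption, not the patent's formula:

```python
import math

def direction_angles(q, p_e, m, A_g, g):
    """Pixel direction angles alpha_ei, i in [1, q], of point p on the
    display device, under the assumed layout described above (pixel
    pitch p_e, mth lens centred at m*A_g, lens-to-display distance g)."""
    c_m = m * A_g  # centre of the mth lens
    return [math.atan2(i * p_e - c_m, g) for i in range(1, q + 1)]

alphas = direction_angles(q=8, p_e=0.1, m=3, A_g=0.5, g=2.0)  # 8 pixels to encode
```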
S40, match the calculated direction angle α_eq with the included angle θ_n, map the pixel point p_nk on the nth light field camera corresponding to the matched angle θ_n to the pixel point p_eq on the display device corresponding to the direction angle α_eq, and generate the encoded pixel values S = {p_e1, p_e2, p_e3, …, p_eq}.
Specifically, as shown in fig. 6, matching the calculated direction angle α_eq with the included angle θ_n in this step, and mapping the pixel point p_nk on the nth light field camera corresponding to the matched angle θ_n to the pixel point p_eq on the display device corresponding to α_eq, specifically comprises the following steps:
S41, acquire the direction angle α_eq and the included angle θ_n respectively;
S42, compare the direction angle α_eq with the included angle θ_n; if θ_n = α_eq, then:
S43, according to the mapping formula, fill the pixel point p_nk corresponding to θ_n in the nth light field camera into the ith pixel position on the display device corresponding to α_eq, so as to determine the gray value of the pixel point p_eq.
The mapping formula is as follows:
wherein i ∈ [1, q].
As shown in fig. 4, if one of the included angles θ_4 between the 4th light field camera and point p is equal to the pixel direction angle α_e2 of point p on the display device, i.e. the angle between point p and the 2nd pixel to be encoded of the display device equals θ_4, satisfying θ_4 = α_e2, then the pixel point p_4k corresponding to the included angle θ_4 in the 4th light field camera is filled into the 2nd pixel position on the display device corresponding to α_e2, thereby determining the gray value of the pixel point p_e2.
thus, when coding, after mapping each pixel position on the display device in the above steps, the pixel values s= { p of all positions on the display device can be obtained e1 ,p e2 ,p e3 ,…,p eq }。
S50, normalize the encoded pixel values to obtain the final pixel values S_nor = {s_e1, s_e2, s_e3, …, s_eq}.
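The patent states only that S is normalized, not which normalization is used; min-max scaling to [0, 1] is one common choice and is assumed here:

```python
def normalize(S):
    """Min-max normalize the encoded pixel values S to [0, 1], giving
    S_nor = {s_e1, ..., s_eq}. The choice of min-max scaling is an
    assumption; the patent does not specify the normalization."""
    lo, hi = min(S), max(S)
    if hi == lo:
        return [0.0] * len(S)        # degenerate case: all values equal
    return [(v - lo) / (hi - lo) for v in S]

S_nor = normalize([2, 6, 10])  # illustrative encoded values
```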
According to the light field unit image generation method, the light field camera array is adopted to collect light field information of the collected object, the included angle between the collected object and the light field camera array is matched with the included angle between the object and the lens array, and the matched pixel points on the light field camera are mapped to the corresponding pixel points on the display equipment so as to generate the light field unit image.
Preferably, as shown in fig. 7, the method of the present invention further comprises merge-mapping pixels within a small included angle, specifically comprising:
S61, acquire the included angle θ_n between the nth light field camera and point p;
S62, set a small included angle ε;
S63, acquire the pixels (p_nx, p_n(x+1), p_n(x+2), …, p_ny) within the ε angle of θ_n, whose corresponding pixel point on the display device is p_exy, and merge-map these pixels to p_exy using the merge formula.
The merge formula is as follows:
because the light field camera records image point information in different directions, if the included angle epsilon in the direction is smaller, the corresponding pixel values in the included angle epsilon can be combined and mapped to the corresponding position pixels of the display equipment without affecting the watching effect of human eyes. Because the human eyes have certain visual redundancy, the change of the included angle in a small range cannot be resolved, and the invention accumulates pixels in the range and calculates the pixels as one pixel. Compared with the pixel calculation in the range in the prior art, the small included angle merging and mapping mode increases the transition information of the pixels of the display equipment, and enables the change of the visual angle to be smoother.
The invention also provides a light field unit image generating system based on the light field camera array, which comprises the light field camera array 100, the display lens array 200 and the display device 300 which are sequentially arranged at intervals, wherein an object 400 to be acquired is arranged between the light field camera array 100 and the display lens array 200, the light field camera array 100 comprises a plurality of light field cameras 101 for carrying out pixel acquisition on the acquisition object 400, and the display lens array 200 and the display device 300 are used for reproducing the acquisition object 400.
The system further includes a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method described above when executing the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to accomplish the present invention, for example. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program in the asynchronous message processing terminal device.
The master control module may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the above components are merely examples based on a system and do not constitute a limitation of the master control module, and may include more or fewer components than described above, or may combine certain components, or different components, e.g., the master control module may further include input and output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor; it is the control center of the apparatus and connects the various parts of the entire main control module using various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the device by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The invention also proposes a computer readable storage medium having stored therein a computer program which when executed implements the above-mentioned method.
The modules/units integrated in the light field unit image generation method of the present invention may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as a stand alone product. The specific implementation of the computer readable storage medium of the present invention is substantially the same as the above embodiments of the method for generating a light field unit image, and will not be described herein.
It should be noted that the embodiments described above are merely illustrative, and the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the embodiments provided by the invention, the connection relation between the modules represents that the modules have communication connection therebetween, and can be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing is merely illustrative of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (8)

1. The light field unit image generation method is characterized by being applied to a light field unit image generation system based on a light field camera array, wherein the system comprises the light field camera array, a display lens array and a display device which are sequentially arranged at intervals, an object to be acquired is arranged between the light field camera array and the display lens array, the light field camera array comprises a plurality of light field cameras for carrying out pixel acquisition on the acquisition object, and the display lens array and the display device are used for reproducing the acquisition object;
setting any point on the acquisition object as p, and setting the coordinate of the point p as x;
the method for generating the unit image required by the reproduction of the acquisition object specifically comprises the following steps:
S10, set the acquisition distance L_o between point p and the light field camera array, and acquire the pixel value sequence {C_1, C_2, C_3, …, C_n} after the light field camera array captures point p at this distance;
wherein C_n represents the pixel imaged by point p at the nth light field camera, C_n = (p_n1, p_n2, p_n3, …, p_nk), and p_nk represents the kth pixel point acquired by point p at the nth light field camera;
S20, calculate each included angle θ_n between the nth light field camera and point p, and according to θ_n, match-search in {C_1, C_2, C_3, …, C_n} for the pixel point p_nk at which point p and the nth light field camera form the corresponding included angle θ_n;
wherein, within the range of θ_n, the set of imaging pixel values of point p at angle θ_n with the nth light field camera is C_θn = (p_n1, p_n2, p_n3, …, p_nk)_θ;
S30, set the distance L_i between point p and the lens array, and at this distance calculate the pixel direction-angle sequence {α_e1, α_e2, α_e3, …, α_eq} of point p on the display device, where q is the number of pixels the display device needs to encode;
S40, match the calculated direction angle α_eq with the included angle θ_n, map the pixel point p_nk on the nth light field camera corresponding to the matched angle θ_n to the pixel point p_eq on the display device corresponding to the direction angle α_eq, and generate the encoded pixel values S = {p_e1, p_e2, p_e3, …, p_eq};
S50, normalize the encoded pixel values to obtain the final pixel values S_nor = {s_e1, s_e2, s_e3, …, s_eq}.
2. The method of claim 1, wherein C_g is the interval between two adjacent light field cameras in the light field camera array, D is the aperture of the light field camera, n indexes the nth light field camera, and ΔD is the deviation of the p-point ray from the nth light field camera, ΔD ∈ [0, D].
3. The method according to claim 2, wherein m indexes the mth lens, g is the distance between the lens array and the display device, and A_g is the center distance between two adjacent lenses in the lens array.
4. A method according to claim 3, wherein the direction angle α calculated in step S40 eq Included angle theta n Matching, namely matchingThe included angle theta after matching n Corresponding pixel point p on nth light field camera nk Mapped to the direction angle alpha eq Corresponding pixel point p on display device eq The method specifically comprises the following steps:
S41, acquiring the direction angle α_eq and the included angle θ_n respectively;
S42, comparing the direction angle α_eq with the included angle θ_n; if θ_n = α_eq, proceeding to step S43;
S43, according to the mapping formula, filling the pixel value p_nk corresponding to θ_n in the n-th light field camera into the i-th pixel position on the display device corresponding to α_eq, so as to determine the gray value of pixel point p_eq;
The mapping formula is as follows:
wherein i ∈ [1, q].
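As a hedged illustration of steps S41–S43 (not the patent's exact implementation, whose mapping formula is not reproduced in this text), the exact-match fill can be sketched as:

```python
def fill_matched_pixels(alpha, theta, camera_pixels, tol=1e-9):
    """Sketch of S41-S43: for each display direction angle alpha_eq, find a
    camera whose included angle theta_n equals it (S42) and copy that
    camera's pixel p_nk into the i-th display position (S43)."""
    display = [None] * len(alpha)
    for i, a in enumerate(alpha):          # i ranges over the q display pixels
        for n, t in enumerate(theta):
            if abs(t - a) <= tol:          # S42: theta_n == alpha_eq
                display[i] = camera_pixels[n]
                break
    return display
```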
5. The method according to claim 1, wherein in step S20 the matching of the included angle θ_n is searched in {C_1, C_2, C_3, …, C_n}, specifically by searching for a matching light path according to the imaging principle of the light field camera.
6. The method of claim 1, further comprising merging and mapping pixels within a small included angle, comprising the following steps:
S61, acquiring the included angle θ_n between the n-th light field camera and point p;
S62, setting a small included angle ε;
S63, acquiring the pixels (p_nx, p_n(x+1), p_n(x+2), …, p_ny) within the angle ε of θ_n, the corresponding pixel point on the display device being p_exy, and merging and mapping these pixels to p_exy using the merge formula;
The merge formula is as follows:
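The merge formula itself is not reproduced in this text (it appeared as an image in the original). A plausible sketch of steps S61–S63, assuming the merge is a simple average of the pixels falling within ε of θ_n, is:

```python
def merge_small_angle_pixels(theta_n, pixel_angles, pixel_values, eps):
    """Sketch of S61-S63: gather the pixels whose angles lie within the small
    included angle eps of theta_n and merge them into one value p_exy.
    Averaging is an assumption; the patent's merge formula is not shown."""
    selected = [v for a, v in zip(pixel_angles, pixel_values)
                if abs(a - theta_n) <= eps]
    if not selected:
        return None
    return sum(selected) / len(selected)
```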
7. A light field unit image generation system based on a light field camera array, characterized by comprising a light field camera array, a display lens array and a display device arranged sequentially at intervals, wherein an object to be acquired is placed between the light field camera array and the display lens array; the light field camera array comprises a plurality of light field cameras and is used for pixel acquisition of the object, and the display lens array and the display device are used for reproducing the acquired object;
The system further comprises a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method according to any one of claims 1-6 when executing the computer program.
8. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when executed, implements the method according to any one of claims 1-6.
CN202110160006.5A 2021-02-05 2021-02-05 Light field unit image generation method, system and storage medium Active CN113225492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110160006.5A CN113225492B (en) 2021-02-05 2021-02-05 Light field unit image generation method, system and storage medium

Publications (2)

Publication Number Publication Date
CN113225492A CN113225492A (en) 2021-08-06
CN113225492B true CN113225492B (en) 2023-07-28

Family

ID=77084626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110160006.5A Active CN113225492B (en) 2021-02-05 2021-02-05 Light field unit image generation method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113225492B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110505379A (en) * 2019-08-09 2019-11-26 中国科学院光电技术研究所 A kind of high-resolution optical field imaging system and method
CN110662014A (en) * 2019-09-25 2020-01-07 江南大学 Light field camera four-dimensional data large depth-of-field three-dimensional display method
CN111710001A (en) * 2020-05-26 2020-09-25 东南大学 Object image mapping relation calibration method and device under multi-medium condition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104506762B (en) * 2014-12-25 2018-09-04 北京智谷睿拓技术服务有限公司 Optical field acquisition control method and device, optical field acquisition equipment


Also Published As

Publication number Publication date
CN113225492A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN110264509B (en) Method, apparatus, and storage medium for determining pose of image capturing device
US8355565B1 (en) Producing high quality depth maps
CN111145238A (en) Three-dimensional reconstruction method and device of monocular endoscope image and terminal equipment
Kang et al. Two-view underwater structure and motion for cameras under flat refractive interfaces
WO2019024935A1 (en) Panoramic image generation method and device
US20190012804A1 (en) Methods and apparatuses for panoramic image processing
KR102583723B1 (en) A method and an apparatus for generating data representative of a light field
JP7227969B2 (en) Three-dimensional reconstruction method and three-dimensional reconstruction apparatus
CA3101222C (en) Image processing method and device, and three-dimensional imaging system
CN112927362A (en) Map reconstruction method and device, computer readable medium and electronic device
Zhang et al. Decoding and calibration method on focused plenoptic camera
CN110136048B (en) Image registration method and system, storage medium and terminal
CN116579962A (en) Panoramic sensing method, device, equipment and medium based on fisheye camera
TWI528783B (en) Methods and systems for generating depth images and related computer products
CN115457594A (en) Three-dimensional human body posture estimation method and system, storage medium and electronic equipment
CN113516719B (en) Camera calibration method, system and storage medium based on multiple homography matrixes
CN113225492B (en) Light field unit image generation method, system and storage medium
CN112102404B (en) Object detection tracking method and device and head-mounted display equipment
Popovic et al. Design and implementation of real-time multi-sensor vision systems
JP2018536312A (en) Apparatus and method for encoding an image captured by an optical acquisition system
US11967096B2 (en) Methods and apparatuses of depth estimation from focus information
JP2004514228A (en) Scene restoration and camera calibration with robust use of chirality
CN111292380A (en) Image processing method and device
JP2016218849A (en) Planar conversion parameter estimation device, method and program
GB2560301A (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant