CN112040203B - Computer storage medium, terminal device, image processing method and device - Google Patents


Info

Publication number: CN112040203B
Authority: CN (China)
Prior art keywords: light field, pixels, RGB, pixel, image
Legal status: Active
Application number: CN202010910192.5A
Other languages: Chinese (zh)
Other versions: CN112040203A
Inventor: 权威
Current Assignee: Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee: Oppo Chongqing Intelligent Technology Co Ltd
Application filed by Oppo Chongqing Intelligent Technology Co Ltd
Priority: CN202010910192.5A
Publication of application: CN112040203A
Application granted; publication of grant: CN112040203B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/257: Colour aspects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals

Abstract

The disclosure relates to an image processing method, an image processing apparatus, a terminal device and a computer storage medium, and relates to the technical field of image processing. The image processing method comprises the following steps: acquiring an RGB image captured by an RGB camera of a photographing device focused on a target object at a specified focal length, the RGB image comprising a plurality of RGB pixels; acquiring a light field image of the target object captured by a light field camera of the photographing device, the light field image comprising light field pixels, wherein the optical axis of the light field camera is parallel to the optical axis of the RGB camera; registering the target RGB image and the target light field image to obtain a position correspondence between the RGB pixels and the light field pixels; determining the positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence, wherein each target pixel corresponds to one RGB pixel and one light field pixel; and generating a target image according to the position and pixel value of each target pixel.

Description

Computer storage medium, terminal device, image processing method and device
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a computer storage medium.
Background
When photographing with a mobile terminal such as a mobile phone, image blurring is needed to highlight the imaging subject. For a mobile terminal, however, it is difficult to achieve blurring by adjusting parameters such as the aperture and focal length of the camera lens, so the image information is generally processed to generate an image with a blurring effect; because of the limitations of that image information, the blurring effect still needs to be improved.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide an image processing method, an image processing apparatus, a terminal device, and a computer storage medium.
According to an aspect of the present disclosure, there is provided an image processing method, comprising:
acquiring an RGB image shot by focusing an RGB camera of a shooting device on a target object at a specified focal length, wherein the RGB image comprises a plurality of RGB pixels;
acquiring a light field image shot by a light field camera of the shooting device on the target object, wherein the light field image comprises light field pixels; an optical axis of the light field camera is parallel to an optical axis of the RGB camera;
registering the target RGB image and the target light field image to obtain the position corresponding relation of the RGB pixels and the light field pixels;
determining positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the corresponding relationship of the positions, wherein each target pixel corresponds to one RGB pixel and one light field pixel;
and generating a target image according to the position and the pixel value of each target pixel.
According to an aspect of the present disclosure, there is provided an image processing apparatus including:
a first acquisition unit, configured to acquire an RGB image captured by an RGB camera of the photographing device focused on a target object at a specified focal length, the RGB image comprising a plurality of RGB pixels;
a second acquisition unit that acquires a light field image captured of the target object by a light field camera of the capturing device, the light field image including light field pixels; an optical axis of the light field camera is parallel to an optical axis of the RGB camera;
the registration unit is used for registering the target RGB image and the target light field image to obtain the position corresponding relation between the RGB pixels and the light field pixels;
a processing unit, configured to determine positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence, where each target pixel corresponds to one of the RGB pixels and one of the light field pixels;
and the output unit is used for generating a target image according to the position and the pixel value of each target pixel.
According to an aspect of the present disclosure, there is provided a terminal device including:
a camera device comprising an RGB camera and a light field camera, an optical axis of the light field camera being parallel to an optical axis of the RGB camera; the RGB camera is configured to focus on a target object at a specified focal length and capture an RGB image, the RGB image comprising a plurality of RGB pixels; the light field camera is configured to capture a light field image of the target object, the light field image comprising light field pixels;
an image processing apparatus for implementing the image processing method of any one of the above.
According to an aspect of the present disclosure, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of any one of the above.
According to the image processing method, the image processing apparatus, the terminal device and the computer storage medium, a target image can be generated from an RGB image and a light field image captured of the same target object. The light field camera can obtain information such as the position, path and intensity of light rays, enabling digital focusing and providing data support for a more realistic simulated blurring effect; and since the definition of the RGB image is higher than that of the light field image, the RGB image can be used to represent the regions of the target image that need to be displayed clearly. Therefore, fusing the RGB image and the light field image helps improve the image blurring effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It should be apparent that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived by those of ordinary skill in the art without inventive effort.
Fig. 1 is a flowchart of an embodiment of an image processing method according to the present disclosure.
Fig. 2 is a flowchart of step S120 in an embodiment of the image processing method of the disclosure.
Fig. 3 is a flowchart of step S140 in an embodiment of the image processing method of the disclosure.
Fig. 4 is a flowchart of step S330 in an embodiment of the image processing method of the disclosure.
Fig. 5 is a flowchart of step S420 in an embodiment of the image processing method of the disclosure.
Fig. 6 is a schematic diagram of a light field camera of the present disclosure.
Fig. 7 is a schematic diagram of a positional correspondence relationship of RGB pixels and light field pixels of the present disclosure.
Fig. 8 is a block diagram of an embodiment of an image processing apparatus according to the present disclosure.
Fig. 9 is a block diagram of an embodiment of a terminal device of the present disclosure.
Fig. 10 is a top view of a camera in an embodiment of a terminal device of the present disclosure.
Fig. 11 is a side view of a camera in an embodiment of a terminal device of the present disclosure.
FIG. 12 is a schematic diagram of an embodiment of a computer storage medium of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar structures, and thus their detailed description will be omitted. Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale.
The terms "a," "an," "the," "said" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.
Herein, the object distance, image distance and focal length of the RGB camera and the light field camera are measured with reference to the plane in which the optical centers of their lenses are located, without limiting the specific structure of the lenses. This plane may be defined as the lens plane, which is perpendicular to the optical axis of the lens.
The embodiment of the disclosure provides an image processing method for realizing image blurring. As shown in fig. 1, the image processing method may include steps S110 to S150, in which:
step S110, acquiring an RGB image shot by focusing a target object by an RGB camera of the shooting device at a specified focal length, wherein the RGB image comprises a plurality of RGB pixels;
step S120, acquiring a light field image shot by a light field camera of the shooting device on the target object, wherein the light field image comprises light field pixels; an optical axis of the light field camera is parallel to an optical axis of the RGB camera;
step S130, registering the target RGB image and the target light field image to obtain the position corresponding relation between the RGB pixel and the light field pixel;
step S140, determining positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence, where each target pixel corresponds to one of the RGB pixels and one of the light field pixels;
step S150, generating a target image according to the position and the pixel value of each target pixel.
According to the image processing method, a target image can be generated from an RGB image and a light field image captured of the same target object. The light field camera can obtain information such as the position, path and intensity of light rays, enabling digital focusing and providing data support for a more realistic simulated blurring effect; and since the definition of the RGB image is higher than that of the light field image, the RGB image can be used to represent the regions of the target image that need to be displayed clearly. Therefore, fusing the RGB image and the light field image helps improve the image blurring effect.
The image processing method according to the embodiment of the present disclosure will be described in detail below:
in step S110, an RGB image captured by an RGB camera of a photographing device in focus on a target object at a specified focal length is acquired, the RGB image including a plurality of RGB pixels.
The photographing device can be provided in a terminal device, which may be a mobile terminal device such as a mobile phone, a tablet computer or a digital camera, or another terminal device such as a television. The RGB camera is used to capture RGB images and may be a digital camera comprising a lens and a sensor, the sensor sensing the intensity of light entering through the lens and impinging on it so as to form an RGB image comprising a plurality of RGB pixels distributed in an array. The sensor may be a CCD sensor or a CMOS sensor. The detailed structure and operating principle of the RGB camera are not described here, as long as it can capture RGB images. Further, the RGB camera may be a fixed-focus camera, whose fixed focal length is the specified focal length. Of course, the RGB camera may also be a zoom camera, but shooting must then be performed at the specified focal length, without adjusting the focal length during shooting.
The target object may be any object within the environment in which the photographing device is located, and the environment may be an indoor environment or an outdoor environment. The target object can be selected by the user, and can be a billboard, a human face, a flower, a building and the like, and the environment and the target object are not particularly limited.
In case the target object is determined, its distance from the lens of the RGB camera, i.e. the object distance, can then be determined. Before shooting, the object distance can be determined, and a target object can be selected according to the object distance.
In step S120, acquiring a light field image of the target object captured by a light field camera of the capturing device, the light field image including light field pixels; the optical axis of the light field camera is parallel to the optical axis of the RGB camera.
The light field camera may include a lens, a sensor, and a microlens array positioned between the lens and the sensor, the lens and the sensor being in a conjugate relationship with respect to the microlens array. Light passing through the lens forms an image on the microlens array, which projects it onto the sensor for sub-sampling. A light field image comprising a plurality of light field pixels distributed in an array is then generated from the light field information, which includes not only the light intensity but also information such as the position at which each ray enters the lens and the path of the ray. The detailed structure and operating principle of the light field camera are not described here, as long as it can capture light field images.
As shown in fig. 10 and 11, the light field camera 102 has an optical axis parallel to the optical axis of the RGB camera 101, and both can be fixed to the same housing or a mounting structure such as a bracket and shoot in the same direction, thereby constituting the shooting apparatus 100 of a dual-camera structure. Further, the lens of the light field camera 102 and the lens of the RGB camera 101 may be located on the same plane perpendicular to the optical axis, and the distance between the two is not particularly limited herein.
In some embodiments of the present disclosure, since the light field camera and the RGB camera belong to the same photographing device, the object distance between the target object and the RGB camera is close to the object distance between the target object and the light field camera; this common distance between the photographing device and the target object may therefore be used as the specified object distance. The object distance of the light field camera is the distance between the optical center of its equivalent lens and the target object.
As shown in fig. 2, acquiring a light field image of the target object captured by a light field camera of the capturing device; i.e., step S120, may include steps S210-S230, wherein:
and step S210, acquiring light field information of the light field camera of the shooting device shooting the target object.
The light field information may include the position at which a light ray enters the light field camera, the path of the ray, and its intensity. For example, as shown in fig. 6, if the coordinates of the intersections of a ray with the lens plane XY and the microlens plane ST of the light field camera are (x, y) and (s, t) respectively, where the microlens array plane ST is the actual image plane on which rays are actually imaged, then the distribution of rays can be represented by the light field function L(x, y, s, t); (x, y) and (s, t) together determine the position and direction of each ray in space. The lens plane XY may be the plane perpendicular to the optical axis of the light field camera's lens and passing through the optical center of its equivalent lens.
Step S220, determining the image distance of the light field camera according to the specified object distance and the specified focal length.
Digital focusing can be performed on the basis of light field information, i.e., light field distribution, obtained by a light field camera. Specifically, in the case that the object distance and the focal length of the light field camera are known, the image distance of the light field camera can be determined according to the lens imaging formula, and a virtual image plane can be determined according to the image distance, wherein the image distance is the distance between the virtual image plane and the lens of the light field camera.
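As an illustrative sketch (function and variable names are hypothetical, not from the patent), the image distance in step S220 follows from solving the thin-lens imaging formula 1/f = 1/u + 1/v for v:

```python
def image_distance(object_distance: float, focal_length: float) -> float:
    """Solve the thin-lens equation 1/f = 1/u + 1/v for the image distance v.

    u is the specified object distance and f the specified focal length
    (same units); the returned v locates the virtual image plane.
    """
    u, f = object_distance, focal_length
    if u <= f:
        raise ValueError("object distance must exceed the focal length")
    return 1.0 / (1.0 / f - 1.0 / u)

# Example: a 5 mm lens focused on an object 2 m away.
v = image_distance(2.0, 0.005)  # just over 5 mm
```

For a distant object (u much larger than f), v approaches f, as expected.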
Step S230, generating a light field image according to the image distance and the light field information.
The collected light field information can be re-projected onto the virtual image surface for integration, so that a light field image at the image distance is obtained. Specifically, as shown in fig. 6, the ST plane is an actual image plane, i.e., a microlens array plane, and the S 'T' plane is a virtual image plane. The image on the virtual image surface, namely the light field image, can be calculated according to a light field imaging formula. The light field imaging formula is as follows:
$$E(s', t') = \frac{1}{\alpha^2 F^2} \iint L\!\left(x,\; y,\; x + \frac{s' - x}{\alpha},\; y + \frac{t' - y}{\alpha}\right) \mathrm{d}x\, \mathrm{d}y$$
where E(s′, t′) is the brightness at each point of the light rays on the S′T′ plane, i.e. the brightness of each point of the light field image, and is used to represent the light field image; (s, t) are the coordinates of the intersection of a ray with the ST plane, and (s′, t′) are the coordinates of its intersection with the S′T′ plane; F is the distance between the lens of the light field camera and the actual image plane, and F′ is the distance between the lens and the virtual image plane, i.e. the image distance obtained in step S220; and α = F′/F.
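The light field imaging formula can be evaluated discretely by shift-and-add refocusing. The sketch below is illustrative only: the array layout L[x, y, s, t], unit sample spacing, and nearest-neighbour sampling are assumptions, and the 1/(α²F²) constant is folded into the mean.

```python
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Discrete shift-and-add refocusing over a 4D light field.

    light_field[x, y, s, t] samples L(x, y, s, t): (x, y) index the lens
    plane and (s, t) the microlens (actual image) plane. alpha = F'/F
    selects the virtual image plane S'T'.
    """
    nx, ny, ns, nt = light_field.shape
    out = np.zeros((ns, nt))
    xs, ys = np.arange(nx), np.arange(ny)
    for sp in range(ns):
        for tp in range(nt):
            # A ray through lens sample (x, y) aimed at (s', t') on the
            # virtual plane crosses the actual plane at s = x + (s'-x)/alpha.
            s_idx = np.clip(np.round(xs + (sp - xs) / alpha), 0, ns - 1).astype(int)
            t_idx = np.clip(np.round(ys + (tp - ys) / alpha), 0, nt - 1).astype(int)
            out[sp, tp] = light_field[xs[:, None], ys[None, :],
                                      s_idx[:, None], t_idx[None, :]].mean()
    return out
```

With alpha = 1 the virtual plane coincides with the actual image plane and the result is the conventionally focused image.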
In step S130, the target RGB image and the target light field image are registered, so as to obtain a position corresponding relationship between the RGB pixels and the light field pixels.
Since the resolution of the light field image is generally lower than that of the RGB image, the light field image can be upsampled; after upsampling, registration is performed by methods such as optical flow or block matching to obtain the position correspondence between the light field pixels and the RGB pixels, so that the light field pixels correspond one-to-one to the RGB pixels. Note that if block matching is used for registration, the RGB camera and the light field camera need to be calibrated in advance; the specific calibration process is not limited here.
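As one concrete (hypothetical) illustration of the block matching mentioned above, a minimal sum-of-absolute-differences search over a small window might look like the following; real registration pipelines add sub-pixel refinement and outlier handling:

```python
import numpy as np

def match_block(ref: np.ndarray, tgt: np.ndarray, y: int, x: int,
                size: int = 8, search: int = 4) -> tuple:
    """Find the (dy, dx) offset in `tgt` whose block best matches
    (smallest SAD) the block at (y, x) in `ref`.

    Both grayscale images are assumed already upsampled to the same
    resolution; offsets are searched in a (2*search+1)^2 window.
    """
    block = ref[y:y + size, x:x + size]
    best, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > tgt.shape[0] or xx + size > tgt.shape[1]:
                continue  # candidate block falls outside the image
            sad = np.abs(block - tgt[yy:yy + size, xx:xx + size]).sum()
            if sad < best:
                best, best_off = sad, (dy, dx)
    return best_off
```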
For example, referring to fig. 7, fig. 7 shows the positional correspondence between two pixels on the same image plane in a light field image and an RGB image, and specifically, the positions of the two pixels satisfy the following relational expression:
x_L − x_R = v × d / (u + v);
where the lens planes of the light field camera and the RGB camera are coplanar, and the line connecting O_L and O_R represents the lens plane; O_L is the intersection of the optical axis of the light field camera with the lens plane, and O_R is the intersection of the optical axis of the RGB camera with the lens plane; d is the distance between O_L and O_R, i.e. the distance between the optical axes of the light field camera and the RGB camera; the virtual image planes of the light field camera and the RGB camera are coplanar; the line connecting x_L and x_R represents the virtual image plane, which is located between the target object and the lens plane; x_L is the intersection of the line from the target object to O_L with the virtual image plane, i.e. the virtual image of the target object on the virtual image plane of the light field camera; x_R is the intersection of the line from the target object to O_R with the virtual image plane, i.e. the virtual image of the target object on the virtual image plane of the RGB camera; v is the image distance, i.e. the distance between the lens plane and the virtual image plane; and u is the object distance, i.e. the distance between the target object and the virtual image plane. In addition, the light field camera and the RGB camera have an actual image plane (not shown), which is located on the side of the lens plane facing away from the target object and on which the real image of the target object is formed. Of course, O_L and x_L can also be used to indicate the RGB camera, and O_R and x_R the light field camera.
Fig. 7 and the above relational expressions are merely exemplary of the positional correspondence relationship and do not limit the actual optical paths and imaging modes of the light field camera and the RGB camera.
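Plugging numbers into the relation x_L − x_R = v × d / (u + v) gives the expected offset between corresponding virtual-image points. The geometry below is hypothetical; lengths are in metres, and the conversion to pixel units via the sensor pitch is omitted:

```python
def disparity(baseline_d: float, image_distance_v: float,
              object_distance_u: float) -> float:
    """Offset x_L - x_R between corresponding virtual-image points,
    per the relation x_L - x_R = v * d / (u + v)."""
    return image_distance_v * baseline_d / (object_distance_u + image_distance_v)

# A 15 mm camera baseline, 5 mm image distance, object 1 m away:
offset = disparity(0.015, 0.005, 1.0)  # roughly 7.5e-5 m on the virtual plane
```

Closer objects (smaller u) yield larger offsets, which is what the registration step must compensate for.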
In step S140, positions and pixel values of a plurality of target pixels are determined according to the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence relationship, where each target pixel corresponds to one of the RGB pixels and one of the light field pixels.
The position of a target pixel can be determined from the positions of an RGB pixel and its corresponding light field pixel, and the pixel value of the target pixel from their pixel values. Since there are multiple RGB pixels and light field pixels, the positions and pixel values of an array of target pixels can be determined. The pixel value may be characterized by brightness or gray scale.
As shown in fig. 3, in some embodiments of the present disclosure, the positions and pixel values of a plurality of target pixels are determined according to the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence; i.e., step S140, may include steps S310 to S330, in which:
step S310, determining the position of the light ray corresponding to each light field pixel entering the lens of the light field camera according to the light field information.
The position where the light ray corresponding to each light field pixel enters the lens of the light field camera is the position of the intersection point of the light ray and the lens plane of the light field camera on the lens plane, and the position can be represented by coordinates in a coordinate system in the lens plane.
Step S320, determining a weight of a pixel value of each light field pixel according to a distance between a position where the light ray of each light field pixel enters the lens of the light field camera and the optical axis of the light field camera.
The distance between the position at which the light ray of a light field pixel enters the lens of the light field camera and the optical axis of the light field camera is inversely related to the weight of the light field pixel, i.e. the smaller the distance between the light ray and the optical axis on the lens plane, the greater the weight of the pixel value of the corresponding light field pixel.
For example, as shown in fig. 6, the weights of the pixel values of the light field pixels are normally distributed around the optical axis of the light field camera. In particular, the weights of the pixel values of the light field pixels may be calculated according to a weight calculation formula:
$$W_L = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{d_L^2}{2\sigma^2}}$$
where W_L is the weight of the pixel value of a light field pixel; d_L is the distance between the position at which the ray of the light field pixel enters the light field camera and the optical axis of the light field camera, i.e. the distance between the intersection of the ray with the lens plane and the optical axis of the lens; and σ is the standard deviation of d_L, which may be predetermined.
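A sketch of this weighting under the Gaussian profile described above. The constant in front of the exponential is dropped here, since the map is later normalised to [0, 1] in step S510 anyway; the function name and array interface are illustrative assumptions:

```python
import numpy as np

def weight_map(dist_to_axis: np.ndarray, sigma: float) -> np.ndarray:
    """Per-light-field-pixel weights from the distance d_L between each
    ray's lens-plane entry point and the optical axis.

    Applies the Gaussian profile exp(-d_L^2 / (2 sigma^2)) and normalises
    so the weights lie in [0, 1].
    """
    w = np.exp(-np.asarray(dist_to_axis, dtype=float) ** 2 / (2.0 * sigma ** 2))
    return w / w.max()

# Distances of four rays from the optical axis (arbitrary units):
w = weight_map(np.array([0.0, 0.5, 1.0, 2.0]), sigma=1.0)
# The on-axis ray gets weight 1.0; weights fall off with distance.
```

A small sigma concentrates weight near the axis (small-aperture look); a large sigma flattens the weights (large-aperture look).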
Step S330, determining the positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels, the weights of the light field pixels and the corresponding relationship of the positions.
As shown in fig. 4, in some embodiments of the present disclosure, positions and pixel values of a plurality of target pixels are determined according to pixel values of the RGB pixels, pixel values of the light field pixels, weights thereof, and the positional correspondence; i.e., step S330, may include step S410 and step S420, wherein:
step S410, generating a weight map according to the weight of the pixel value of each light field pixel.
And integrating the weights of the light field pixels to obtain a weight map, wherein the pixels of the weight map correspond to the light field pixels one by one, and the pixel values of the pixels of the weight map are the weights of the corresponding light field pixels. For example, a weight map may be calculated by integrating a weight calculation formula, where the weight map is expressed as follows:
$$E'(s', t') = \frac{1}{\alpha^2 F^2} \iint W_L\!\left(x,\; y,\; x + \frac{s' - x}{\alpha},\; y + \frac{t' - y}{\alpha}\right) \mathrm{d}x\, \mathrm{d}y$$
where E′ is the weight map; the weight calculation formula is described above and is not repeated here.
For the light field image, the smaller σ is, the smaller the weight of rays farther from the optical axis, and the closer the effect of the generated target image is to that of a small aperture; the larger σ is, the closer the effect is to that of a large aperture.
Step S420, determining positions and pixel values of a plurality of target pixels according to the pixel values of the pixels of the weight map, the pixel values of the RGB pixels, the pixel values of the light field pixels, and the position correspondence relationship.
As shown in fig. 5, in some embodiments of the present disclosure, the step S420 may include a step S510 and a step S520, where:
step S510, normalizing the pixel value of the weight map to make the pixel value not less than 0 and not more than 1.
The specific way of normalization is not particularly limited, as long as the pixel value of the weight map is within the interval of not less than 0 and not more than 1.
Step S520, determining pixel values of a plurality of target pixels according to a pixel value formula, where the pixel value formula is:
P_b = (1 − W_L) × P_L + C × W_L;
where P_b is the pixel value of the target pixel; W_L is the pixel value of the corresponding pixel of the weight map; P_L is the pixel value of the light field pixel; C is the pixel value of the RGB pixel; and the RGB pixel and the light field pixel in the pixel value formula correspond to each other according to the position correspondence.
The pixel value of each target pixel can be calculated directly from the pixel value formula. It can be seen that the higher the pixel value of a weight map pixel, the lower the proportion of the corresponding light field pixel value in the target pixel value, and the higher the proportion of the corresponding RGB pixel value. Meanwhile, since the definition of the RGB image is higher than that of the light field image, regions of the target image closer to the optical axis have higher definition, with the highest on the axis, while regions farther from the optical axis have lower definition and rely more on the light field pixel values.
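The per-pixel fusion can be sketched directly from the pixel value formula; registered, equally sized single-channel arrays are assumed for simplicity:

```python
import numpy as np

def fuse(rgb: np.ndarray, light_field: np.ndarray,
         weights: np.ndarray) -> np.ndarray:
    """Per-pixel fusion P_b = (1 - W_L) * P_L + C * W_L.

    `rgb` (C) and `light_field` (P_L) are registered images of identical
    shape; `weights` (W_L) is the normalised weight map in [0, 1]. Where
    W_L is high (near the optical axis) the sharper RGB pixel dominates;
    where it is low, the light field pixel dominates.
    """
    assert rgb.shape == light_field.shape == weights.shape
    return (1.0 - weights) * light_field + rgb * weights
```

With W_L = 1 the output equals the RGB image; with W_L = 0 it equals the light field image.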
Step S150, generating a target image according to the position and the pixel value of each target pixel.
When the position and the pixel value of each target pixel are known, the corresponding target image can be generated; the target image is the image formed by fusing the light field image and the RGB image. In the target image, the closer a region is to the optical axis, that is, the closer it is to the center of the image, the more the RGB image is used, and the higher the definition that can be obtained; the farther a region is from the optical axis, the more the light field image is used, and a blurring effect can be obtained. Therefore, a target image with a blurring effect can be obtained by fusing the RGB image and the light field image: compared with a standalone RGB camera, digital blurring can be realized; compared with a standalone light field camera, the definition of the central region, that is, the key region, can be improved.
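Putting the pieces together, the fusion can be sketched end to end. Everything below (the flat-list layout, the Gaussian weights, and the numeric values) is an illustrative assumption layered on the method, not a literal implementation of it:

```python
import math

def fuse_images(rgb, light_field, ray_distances, sigma=2.0):
    """Fuse registered RGB and light field pixels into target pixels.

    rgb, light_field -- equal-length lists of pixel values that are
                        already registered position-by-position
    ray_distances    -- per-pixel distance d_L from the optical axis
    sigma            -- spread of the Gaussian weight (aperture control)
    """
    weights = [math.exp(-d ** 2 / (2 * sigma ** 2)) for d in ray_distances]
    # Gaussian weights already lie in (0, 1], so they need no further
    # normalization before the blend P_b = (1 - W_L) * P_L + C * W_L.
    return [(1 - w) * p_l + c * w
            for w, p_l, c in zip(weights, light_field, rgb)]

target = fuse_images(rgb=[200.0, 200.0],
                     light_field=[100.0, 100.0],
                     ray_distances=[0.0, 8.0])
# The on-axis pixel takes the sharp RGB value; the off-axis pixel stays
# close to the blurred light field value.
print(target)
```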
It should be noted that although the various steps of the image processing method of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that all of the steps must be performed in that particular order to achieve the desired results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken into multiple step executions, etc.
The present disclosure provides an image processing apparatus, as shown in fig. 8, the image processing apparatus 200 may include a first acquisition unit 201, a second acquisition unit 202, a registration unit 203, a processing unit 204, and an output unit 205, wherein:
the first acquisition unit 201 is configured to acquire an RGB image captured by an RGB camera of a photographing device in focus on a target object at a specified focal length, the RGB image including a plurality of RGB pixels;
the second acquisition unit 202 is configured to acquire a light field image captured by a light field camera of the capturing device on the target object, the light field image including light field pixels; the optical axis of the light field camera is parallel to the optical axis of the RGB camera.
In some embodiments of the present disclosure, the second acquisition unit 202 may include a light field acquisition module, a calculation module, and an imaging module, wherein:
the light field acquisition module is used for acquiring light field information of a light field camera of the shooting device shooting a target object.
The calculation module is used for determining the image distance of the light field camera according to the specified object distance and the specified focal distance.
The imaging module is used for generating a light field image according to the image distance and the light field information.
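The disclosure does not state how the calculation module maps the specified object distance and specified focal length to an image distance; the thin-lens equation 1/f = 1/u + 1/v is the natural candidate and is assumed in this sketch (the millimeter values are hypothetical):

```python
def image_distance(object_distance: float, focal_length: float) -> float:
    """Solve the thin-lens equation 1/f = 1/u + 1/v for the image distance v.

    This model is an assumption: the disclosure only says the image
    distance is determined from the specified object distance and the
    specified focal length.
    """
    if object_distance <= focal_length:
        raise ValueError("object distance must exceed the focal length")
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Hypothetical numbers: a 50 mm lens focused on an object 2000 mm away.
print(image_distance(2000.0, 50.0))  # ~51.28 mm
```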
The registration unit 203 is configured to register the target RGB image and the target light field image to obtain a position corresponding relationship between an RGB pixel and a light field pixel.
The processing unit 204 is configured to determine positions and pixel values of a plurality of target pixels according to pixel values of RGB pixels and pixel values and position correspondence of light field pixels, where each target pixel corresponds to an RGB pixel and a light field pixel.
In some embodiments of the present disclosure, the processing unit 204 may include a first processing module, a second processing module, and a third processing module, wherein:
the first processing module is used for determining the position of the light ray corresponding to each light field pixel entering the lens of the light field camera according to the light field information.
The second processing module is used for determining the weight of the pixel value of each light field pixel according to the distance between the position of the light ray of each light field pixel entering the lens of the light field camera and the optical axis of the light field camera.
The third processing module is used for determining the positions and pixel values of the target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the weights thereof, and the position corresponding relation.
In some embodiments of the present disclosure, the third processing module may include a weight map generation module and a pixel value generation module. Wherein:
the weight map generation module generates a weight map according to the weight of the pixel value of each light field pixel.
The pixel value generating module is used for determining the positions and the pixel values of the target pixels according to the pixel values of the pixels of the weight map, the pixel values of the RGB pixels, the pixel values of the light field pixels and the position corresponding relation.
In some embodiments of the present disclosure, the pixel value generation module may include a normalization module and a calculation module, wherein:
the normalization module is used for normalizing the pixel value of the weight map to ensure that the pixel value is not less than 0 and not more than 1;
the calculation module is used for determining the positions and the pixel values of a plurality of target pixels according to the position corresponding relation and a pixel value formula, wherein the pixel value formula is as follows:
P_b = (1 - W_L) × P_L + C × W_L
wherein P_b is the pixel value of the target pixel; W_L is the pixel value of the corresponding pixel of the weight map; P_L is the pixel value of the light field pixel; C is the pixel value of the RGB pixel; and the RGB pixel in the pixel value formula corresponds to the light field pixel under the positional correspondence.
The output unit 205 is used to generate a target image from the pixel value of each target pixel.
The advantageous effects of the image processing apparatus 200 and the details of each unit and module therein have been described in detail in the corresponding image processing method, and thus will not be described in detail here.
It should be noted that although in the above detailed description several modules or units for action execution are mentioned, this division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
The present disclosure also provides a terminal device, as shown in fig. 9, the terminal device 900 may include the photographing apparatus 100 and the image processing apparatus 200, wherein:
as shown in fig. 10 and 11, the photographing apparatus 100 includes an RGB camera 101 and a light field camera 102, the optical axis of the light field camera 102 being parallel to the optical axis of the RGB camera 101; the RGB camera 101 is configured to focus on a target object at a specified focal length and capture an RGB image, the RGB image including a plurality of RGB pixels; the light field camera 102 is used to capture the target object and obtain a light field image, the light field image including light field pixels. The RGB camera may be a fixed focus camera.
The image processing apparatus 200 is configured to execute the image processing method according to any of the above embodiments. The details of the image processing method are not described in detail herein.
The disclosed embodiments also provide a computer storage medium having stored thereon a program product capable of implementing the above-described image processing method of the present disclosure. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary methods" of the present description, when said program product is run on the terminal device.
As shown in fig. 12, a program product 1201 according to an embodiment of the present invention for implementing the above method is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in the present disclosure, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (11)

1. An image processing method, comprising:
acquiring an RGB image shot by an RGB camera of a shooting device focusing on a target object at a specified focal length, wherein the RGB image comprises a plurality of RGB pixels;
acquiring a light field image shot by a light field camera of the shooting device on the target object, wherein the light field image comprises light field pixels; an optical axis of the light field camera is parallel to an optical axis of the RGB camera;
registering the target RGB image and the target light field image to obtain the position corresponding relation of the RGB pixels and the light field pixels;
determining positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the corresponding relationship of the positions, wherein each target pixel corresponds to one RGB pixel and one light field pixel;
generating a target image according to the position and the pixel value of each target pixel;
the distance between the target object and the shooting device is a specified object distance;
acquiring a light field image shot by a light field camera of the shooting device on the target object; the method comprises the following steps:
acquiring light field information of a light field camera of the shooting device for shooting the target object;
determining an image distance of the light field camera according to the specified object distance and the specified focal distance;
generating a light field image according to the image distance and the light field information;
determining the positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the position corresponding relationship; the method comprises the following steps:
determining the position of the light ray corresponding to each light field pixel entering a lens of the light field camera according to the light field information;
determining the weight of the pixel value of each light field pixel according to the distance between the position of the light ray of each light field pixel entering the lens of the light field camera and the optical axis of the light field camera;
and determining the positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the weights thereof, and the position corresponding relationship.
2. The image processing method according to claim 1, wherein positions and pixel values of a plurality of target pixels are determined according to the pixel values of the RGB pixels, the pixel values of the light field pixels, weights thereof, and the positional correspondence; the method comprises the following steps:
generating a weight map according to the weight of the pixel value of each light field pixel;
and determining the positions and pixel values of a plurality of target pixels according to the pixel values of the pixels of the weight map, the pixel values of the RGB pixels, the pixel values of the light field pixels and the position corresponding relation.
3. The image processing method according to claim 2, wherein positions and pixel values of a plurality of target pixels are determined according to pixel values of pixels of the weight map, pixel values of the RGB pixels, pixel values of the light field pixels, and the positional correspondence; the method comprises the following steps:
normalizing the pixel value of the weight map to ensure that the pixel value is not less than 0 and not more than 1;
determining the positions and pixel values of a plurality of target pixels according to the position corresponding relation and a pixel value formula, wherein the pixel value formula is as follows:
P_b = (1 - W_L) × P_L + C × W_L
wherein P_b is the pixel value of the target pixel; W_L is the pixel value of the corresponding pixel of the weight map; P_L is the pixel value of the light field pixel; C is the pixel value of the RGB pixel; and the RGB pixel in the pixel value formula corresponds to the light field pixel under the positional correspondence.
4. The method of claim 1, wherein a distance between a position at which a ray of the light field pixel enters a lens of the light field camera and an optical axis of the light field camera is inversely related to the weight of the light field pixel.
5. The image processing method according to claim 4, wherein weights of pixel values of the light field pixels are normally distributed centering on an intersection point of a lens of the light field camera and an optical axis thereof.
6. The image processing method according to claim 5, wherein the weights of the pixel values of the light field pixels conform to the following relation:
W_L = e^(-d_L² / (2σ²))
wherein W_L is the weight of the pixel value of the light field pixel; d_L is the distance from the optical axis of the light field camera to the position where the light ray of the light field pixel enters the light field camera; and σ is the standard deviation of d_L.
7. The image processing method according to any of claims 1 to 6, wherein the RGB camera is a fixed focus camera.
8. An image processing apparatus characterized by comprising:
a first acquisition unit, configured to acquire an RGB image shot by an RGB camera of the shooting device focusing on a target object at a specified focal length, wherein the RGB image comprises a plurality of RGB pixels;
a second acquisition unit that acquires a light field image captured of the target object by a light field camera of the capturing device, the light field image including light field pixels; an optical axis of the light field camera is parallel to an optical axis of the RGB camera;
the registration unit is used for registering the target RGB image and the target light field image to obtain the position corresponding relation between the RGB pixels and the light field pixels;
a processing unit, configured to determine positions and pixel values of a plurality of target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the position corresponding relationship, wherein each target pixel corresponds to one of the RGB pixels and one of the light field pixels;
an output unit configured to generate a target image from a position and a pixel value of each of the target pixels;
the distance between the target object and the shooting device is a specified object distance;
acquiring a light field image shot by a light field camera of the shooting device on the target object; the method comprises the following steps:
acquiring light field information of a light field camera of the shooting device for shooting the target object;
determining an image distance of the light field camera according to the specified object distance and the specified focal distance;
generating a light field image according to the image distance and the light field information;
the processing unit includes:
the first processing module is used for determining the position of the light ray corresponding to each light field pixel entering the lens of the light field camera according to the light field information;
the second processing module is used for determining the weight of the pixel value of each light field pixel according to the distance between the position of the light ray of each light field pixel entering the lens of the light field camera and the optical axis of the light field camera;
and the third processing module is used for determining the positions and the pixel values of the target pixels according to the pixel values of the RGB pixels, the pixel values of the light field pixels and the weights thereof, and the position corresponding relation.
9. A terminal device, comprising:
a camera device comprising an RGB camera and a light field camera, an optical axis of the light field camera being parallel to an optical axis of the RGB camera; the RGB camera is configured to focus on a target object at a specified focal length and shoot an RGB image, the RGB image comprising a plurality of RGB pixels; the light field camera is used for shooting the target object and obtaining a light field image, the light field image comprising light field pixels;
image processing apparatus for performing the image processing method of any one of claims 1 to 7.
10. The terminal device of claim 9, wherein the RGB camera is a fixed focus camera.
11. A computer storage medium on which a computer program is stored, which computer program, when being executed by a processor, carries out the image processing method of any one of claims 1 to 7.
CN202010910192.5A 2020-09-02 2020-09-02 Computer storage medium, terminal device, image processing method and device Active CN112040203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010910192.5A CN112040203B (en) 2020-09-02 2020-09-02 Computer storage medium, terminal device, image processing method and device


Publications (2)

Publication Number Publication Date
CN112040203A CN112040203A (en) 2020-12-04
CN112040203B true CN112040203B (en) 2022-07-05

Family

ID=73592260






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant