CN114040089A - Image processing method, device, equipment and computer readable storage medium

Image processing method, device, equipment and computer readable storage medium

Info

Publication number: CN114040089A
Application number: CN202010706558.7A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: acquisition module, image acquisition, image, feature, color image
Legal status: Pending (assumed; not a legal conclusion)
Inventors: 李志林 (Li Zhilin), 袁石林 (Yuan Shilin)
Original and Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd on 2020-07-21
Priority: CN202010706558.7A, filed 2020-07-21
Published as CN114040089A on 2022-02-11

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/81: Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses an image processing method applied to a terminal, where the terminal includes a color image acquisition module and a base image acquisition module. The method includes the following steps: acquiring a color image of a target object with the color image acquisition module; acquiring a base image of the target object with the base image acquisition module; extracting a first feature of the color image and a second feature of the base image, respectively; determining a noise region in the color image based on the first feature and the second feature; and restoring the contour information, the gray information, and the color information of the noise region to obtain a composite image. The method can eliminate noise interference in the captured image and obtain a color image that better reflects the real features of the target object, thereby improving image quality.

Description

Image processing method, device, equipment and computer readable storage medium
Technical Field
The present application relates to the field of digital image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a computer-readable storage medium.
Background
Full-screen display is a relatively new technology. At present, a full screen is mainly realized by making a small area of the display screen, generally at the position of the front camera, low in pixel density (Pixels Per Inch, PPI) and transparent, so that this area becomes a light-transmitting transparent display area, and by arranging the camera below it.
In the related art, the light transmission of the transparent display area corresponding to the camera position is generally increased by reducing the number of pixels or by changing the arrangement of the sub-pixels to enlarge the gaps between them. However, in these schemes the sub-pixel regions themselves cannot transmit light, so partially opaque regions are formed within the transparent display area. Specifically, because the sub-pixels are periodic, the opaque regions corresponding to them are arranged periodically, similar to the structure of a transmission grating. This diffracts the incident light entering the transparent display area, which reduces the resolving power of the camera, degrades its imaging effect, and causes problems such as diffraction, interference, and glare in the captured image.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an image processing device, and a computer-readable storage medium, which can eliminate noise interference in a captured image and obtain a color image that better reflects the real features of a target object, thereby improving image quality.
The technical scheme of the embodiment of the application is realized as follows:
The embodiment of the application provides an image processing method, which is applied to a terminal, wherein the terminal comprises a color image acquisition module and a base image acquisition module, and the method comprises the following steps: acquiring a color image of a target object by using the color image acquisition module; acquiring a base image of the target object by using the base image acquisition module; respectively extracting a first feature of the color image and a second feature of the base image; determining a noise region in the color image based on the first feature and the second feature; and restoring the contour information, the gray information, and the color information of the noise region to obtain a composite image.
An embodiment of the present application provides an image processing apparatus, including: the extraction unit is used for respectively extracting a first feature of the color image and a second feature of the base image; a determining unit configured to determine a noise region in the color image based on the first feature and the second feature; and the processing unit is used for carrying out reduction processing on the contour information, the gray information and the color information of the noise area to obtain a composite image.
An embodiment of the present application provides an image capturing apparatus, including: a display screen and an image acquisition device arranged in sequence along the direction of the optical axis; the display screen comprises a transparent display area; the image acquisition device comprises a color image acquisition module and a base image acquisition module; the color image acquisition module and the base image acquisition module are arranged at positions opposite to the transparent display area; the color image acquisition module is used for imaging based on incident light that passes through the transparent display area along the optical axis direction and is incident on the color image acquisition module, to obtain a color image of a target object; and the base image acquisition module is used for obtaining a base image of the target object through the transparent display area.
An embodiment of the present application provides a terminal, including: a display screen, a memory, a processor, a color image acquisition module, and a base image acquisition module; the display screen comprises a transparent display area; the color image acquisition module and the base image acquisition module are arranged at positions opposite to the transparent display area; the color image acquisition module is used for acquiring a color image of the target object through the transparent display area; the base image acquisition module is used for acquiring a base image of the target object through the transparent display area; the memory is used for storing an executable computer program; and the processor is configured to implement the image processing method when executing the executable computer program stored in the memory.
An embodiment of the present application provides a computer-readable storage medium, which stores a computer program for causing a processor to execute the above-mentioned image processing method.
According to the image processing method provided by the embodiment of the application, the base image reflects the real features of the target object. By comparing the color image with the base image, the regions of the color image occupied by shooting noise such as interference fringes, diffraction fringes, and glare can be located, and by restoring the contour information, the gray information, and the color information of those regions, a color image that better reflects the real features of the target object is obtained, thereby improving image quality.
Drawings
FIG. 1A is a schematic diagram illustrating an arrangement of opaque sub-pixels on the transparent display area of an exemplary display screen according to an embodiment of the present application;
fig. 1B is a schematic diagram illustrating the light transmission effect, under illumination, of an unpatterned transparent display area in an exemplary display screen provided in an embodiment of the present application;
FIG. 2A is a schematic diagram illustrating the effect of exemplary diffraction fringes formed by diffraction of light provided by an embodiment of the present application;
fig. 2B is a schematic diagram illustrating the effect of exemplary interference fringes formed by interference of light provided by an embodiment of the present application;
fig. 3A is a schematic diagram illustrating a simulation effect of the point spread function of a color image collected through the transparent display area of an exemplary display screen provided by an embodiment of the present application;
fig. 3B is a schematic diagram illustrating a simulation effect of the point spread function of an image collected through a non-transparent display area of an exemplary display screen provided by an embodiment of the present application;
FIG. 3C is a diagram illustrating the spectral distribution, in one-dimensional space, of exemplary incident light passing through the transparent display area of a display screen according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 5 is a schematic diagram of an exemplary structured light camera provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an exemplary TOF camera provided by an embodiment of the present application;
FIG. 7A is an exemplary depth image provided by embodiments of the present application;
FIG. 7B is an exemplary depth image provided by embodiments of the present application;
FIG. 8 is a schematic diagram of an exemplary grayscale image provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an exemplary display screen provided in an embodiment of the present application;
FIG. 10 is a schematic diagram in partial cross-section of an exemplary terminal along the A-B direction provided by an embodiment of the present application;
FIG. 11 is a schematic flow chart of another alternative image processing method provided in the embodiments of the present application;
FIG. 12 is a schematic flow chart of still another alternative image processing method provided in the embodiments of the present application;
FIG. 13 is a schematic flow chart of still another alternative image processing method provided in the embodiments of the present application;
fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first/second/third" merely distinguish similar objects and do not denote a particular order; where permitted, "first/second/third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application are explained, and the terms and expressions referred to in the embodiments of the present application are applicable to the following explanations:
1) color image: refers to an image where each pixel is made up of R, G, B components.
2) Base image: in this application, refers to an image without color information, such as a depth image or a grayscale image.
3) Depth image: refers to an image whose pixel values are the distances (depths) from the image acquisition device to points in the scene; it directly reflects the geometry of the visible surfaces of the scene. A depth image can be converted into point cloud data through a coordinate transformation, as sketched below.
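To make the coordinate transformation concrete, here is a minimal Python sketch (illustrative only, not part of the patent text) that back-projects a depth image into a point cloud under a pinhole camera model; the intrinsics fx, fy, cx, cy are assumed to be known from calibration:
```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an N x 3 point cloud.
    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth
```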
4) Grayscale image: is an image with only one sample color per pixel. Such images are typically displayed in gray scale from the darkest black to the brightest white, with many levels of color depth between black and white.
5) Glare: a phenomenon that often appears in photographs, especially those taken against backlight. In general, the whitening of a photograph and the halo formed in it due to strong light are called "flare".
6) Interference of light: the phenomenon in which two or more light waves overlap in space to form a new waveform; where interference occurs, some areas become brighter and some darker, i.e., interference fringes appear.
7) Diffraction of light: when light encounters an obstacle or a small hole during propagation, it deviates from straight-line propagation and bends around the obstacle; the light and dark stripes or halos produced during diffraction are called a diffraction pattern, and diffraction fringes are the fringes projected onto a screen by the diffracted light.
8) Point Spread Function (PSF): in the optical system, when the input object is a point light source, the light field distribution of the output image is obtained.
At present, in full-screen design, a small area of the display screen, generally at the position of the front camera, is made low in pixel density (Pixels Per Inch, PPI) and transparent, so that the area becomes a light-transmitting transparent display area, and a camera (hereinafter, the "under-screen camera") is arranged below it, thereby realizing the full-screen design. In the related art, the light transmission of the transparent display area corresponding to the camera position is generally increased by reducing the number of pixels or changing their arrangement to enlarge the gaps. However, in order to preserve the display function and let the light emitted by the sub-pixels propagate outward from the display screen, the metal anode on the bottom surface of each sub-pixel in the transparent display area is usually designed to be opaque, forming a metal anode reflective layer with a reflectivity close to one hundred percent; that is, the sub-pixel regions within the transparent display area are opaque. For example, fig. 1A is a schematic diagram illustrating an arrangement of opaque sub-pixels on the transparent display area in an exemplary display screen provided in an embodiment of the present application; fig. 1B is a schematic diagram of the light transmission effect, under illumination, of an unpatterned transparent display area in an exemplary display screen provided in an embodiment of the present application. As shown in fig. 1A and 1B, the sub-pixels are uniformly spaced; the regions where the sub-pixels are located are opaque, while the gaps between the sub-pixels (i.e., the regions where no wires are arranged) are transparent and allow light to pass. Because the sub-pixels are present periodically on the transparent display area, the opaque regions corresponding to them form a periodic arrangement similar to a transmission grating, which diffracts incident light entering the transparent display area, reduces the resolving power of the camera, affects its imaging effect, and produces diffraction, interference, glare, and similar problems in the captured image.
Exemplarily, fig. 2A is a schematic diagram illustrating an effect of a diffraction fringe formed by diffraction of exemplary light provided in an embodiment of the present application; fig. 2B is a schematic diagram illustrating an effect of interference fringes formed by interference of exemplary light provided in an embodiment of the present application. As shown in fig. 2A and 2B, both diffraction and interference of light produce fringe interference information.
For example, fig. 3A is a schematic diagram illustrating a simulation effect of the point spread function of a color image collected through the transparent display area of a display screen according to an embodiment of the present application; fig. 3B is a schematic diagram illustrating a simulation effect of the point spread function of an image collected through a non-transparent display area of a display screen according to an embodiment of the present application; and fig. 3C is a schematic diagram illustrating the one-dimensional spectral distribution of exemplary incident light passing through the transparent display area of a display screen according to an embodiment of the present application. In figs. 3A, 3B, and 3C, the X-axis represents spatial position and the Y-axis represents the logarithm of light intensity (luminance); in fig. 3C, λR, λG, and λB represent incident light of different wavebands. As shown in figs. 3A and 3B, the light field distribution of an image collected through the transparent display area differs from that of an image collected through a non-transparent display area; evidently, the transparent display area of the display screen affects the image acquisition of the under-screen camera.
Based on this, embodiments of the present application provide an image processing method, an apparatus, a device, and a computer-readable storage medium, which can eliminate noise interference information in a captured image, thereby improving the quality of the image.
An exemplary application of the terminal provided by the embodiment of the present application is described below, and the terminal provided by the embodiment of the present application can be implemented as various types of terminals such as a smart phone, a tablet computer, and a notebook computer having a color image acquisition module and a base image acquisition module.
The image processing method provided by the embodiment of the present application will be described below in conjunction with exemplary applications and embodiments of the terminal provided by the embodiment of the present application.
Fig. 4 is an alternative flowchart of an image processing method provided in an embodiment of the present application, which will be described with reference to the steps shown in fig. 4.
S101, a color image acquisition module is adopted to acquire a color image of the target object.
In an embodiment of the present application, the color image acquisition module may be an RGB (Red, Green, Blue) camera; the terminal can acquire color images of target objects such as people and scenery by controlling the RGB camera.
And S102, acquiring a base image of the target object by adopting a base image acquisition module.
In the embodiment of the present application, the base image acquisition module may be a camera that captures an image reflecting the real features of the target object without color information, for example a black-and-white Charge-Coupled Device (CCD) camera. By controlling the base image acquisition module, the terminal can acquire a base image that reflects the real features of target objects such as people and scenery and carries no color information.
In the embodiment of the application, the terminal can control the color image acquisition module and the base image acquisition module separately, so that the color image and the base image of the target object are acquired simultaneously.
And S103, respectively extracting the first feature of the color image and the second feature of the base image.
In the embodiment of the application, after the terminal acquires the color image and the base image, the terminal can extract the feature information in the color image and extract the feature information in the base image.
In the embodiment of the application, the terminal can extract one or more of texture features, shape features, color features and spatial relationship features in the color image as first features; and extracting one or more of texture features, shape features, gray scale features and spatial relationship features of the base map image as second features.
In the embodiment of the application, the terminal may choose a feature extraction method according to actual needs. For texture features, it may use a geometric method, a statistics-based gray-level co-occurrence matrix or energy spectrum function method, a model-based method, or a signal processing method. For shape features, it may use the Fourier transform, geometric moments, or finite element methods. For color features, it may use color histograms, color coherence vectors, or color correlograms. For spatial relationship features, the terminal may automatically segment the image into the object or color regions it contains and then extract features and build an index per region, or it may uniformly divide the image into regular sub-blocks and extract features and build an index per sub-block. A concrete sketch follows.
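As a concrete illustration of block-wise feature extraction (a sketch under assumed choices, not the patent's prescribed method; the block size, the gradient-statistics texture descriptor, and the 32-bin histogram are all assumptions), one could write:
```python
import cv2
import numpy as np

def block_features(image, block=32):
    """Compute per-block texture and histogram descriptors.

    Texture: mean/std of Sobel gradient magnitude (a crude stand-in for
    co-occurrence-based descriptors). Histogram: 32-bin intensity histogram.
    Expects an 8-bit single-channel image; convert color images to gray first.
    """
    gx = cv2.Sobel(image, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(image, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    h, w = image.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            m = mag[y:y + block, x:x + block]
            hist = cv2.calcHist([patch], [0], None, [32], [0, 256]).ravel()
            hist /= hist.sum() + 1e-6  # normalize so blocks are comparable
            feats[(y, x)] = np.concatenate([[m.mean(), m.std()], hist])
    return feats
```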
And S104, determining a noise area in the color image based on the first characteristic and the second characteristic.
In the embodiment of the application, after extracting the first feature of the color image and the second feature of the base image, the terminal may compare the first feature with the second feature and determine the noise region in the color image from the comparison result. In the embodiment of the present application, the comparison of the first feature with the second feature may use a correlation method, a histogram comparison method, or a time-domain or frequency-domain comparison method.
In an embodiment of the present application, the terminal may compare texture features extracted from the color image with texture features extracted from the base image, shape features with shape features, and spatial relationship features with spatial relationship features; the embodiments of the present application do not limit the manner of feature comparison here.
In the embodiment of the present application, since the base image reflects the real features of the target object, when some features among the first features differ from the corresponding features among the second features and their correlation falls below a preset threshold, those features are noise in the color image, and the region where they are located is a noise region of the color image, as illustrated by the sketch below.
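Continuing the sketch above, a noise mask can be produced by thresholding the correlation between corresponding block descriptors of the color image (converted to gray) and the base image; the threshold of 0.6 is an illustrative value, not one specified by the patent:
```python
import numpy as np

def noise_mask(color_gray, base_gray, block=32, thresh=0.6):
    """Mark blocks whose color-image descriptors correlate poorly with
    the corresponding base-image descriptors (candidate noise regions)."""
    fc = block_features(color_gray, block)   # from the sketch above
    fb = block_features(base_gray, block)
    mask = np.zeros(color_gray.shape, np.uint8)
    for (y, x), a in fc.items():
        corr = np.corrcoef(a, fb[(y, x)])[0, 1]  # Pearson correlation
        if np.isnan(corr) or corr < thresh:
            mask[y:y + block, x:x + block] = 255
    return mask
```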
In embodiments of the present application, the noise may include interference information caused by ambient light. Illustratively, the interference information caused by the ambient light includes at least one of diffraction fringes, interference fringes, and glare.
And S105, restoring the contour information, the gray information and the color information of the noise area to obtain a composite image.
In the embodiment of the application, after determining the noise region in the color image, the terminal can restore the contour information, the gray information, and the color information of the noise region according to the features extracted from the base image, thereby obtaining a composite image that better reflects the real features of the target object.
In the embodiment of the application, the base image reflects the real features of the target object. By comparing the color image with the base image, the regions of the color image occupied by shooting noise such as interference fringes, diffraction fringes, and glare can be located, and by restoring the contour information, the gray information, and the color information of those regions, a color image that better reflects the real features of the target object is obtained, thereby improving image quality.
In an embodiment of the application, the base image acquisition module comprises at least one of a depth image acquisition module and a gray image acquisition module; s102 may be implemented by S1021, S1022, or S1023, which is as follows:
and S1021, acquiring a depth image of the target object by adopting a depth image acquisition module.
In an embodiment of the present application, the depth image acquisition module may be a structured light camera, an ultrasonic sensor, or a Time of flight (TOF) camera, and the terminal may acquire the depth image of the target object by controlling the structured light camera, the ultrasonic sensor, or the TOF camera.
For example, fig. 5 is a schematic diagram of an exemplary structured light camera provided in an embodiment of the present application. As shown in fig. 5, modulated light is continuously emitted toward the target object through an LED array, the reflected light returning from the target object is received by a sensor module, and the distance of the target object from the camera is calculated by detecting the flight (round-trip) time of the light to generate depth information, thereby obtaining a depth image containing the depth and contour of the target object.
The TOF camera obtains point cloud information of the image through the transmission and reception of laser pulses. For example, fig. 6 is a schematic diagram of an exemplary TOF camera provided by an embodiment of the present application. As shown in fig. 6, laser pulses are continuously transmitted toward the photographed object through a light transmitter, the laser pulses reflected from the object are received by a light receiver, and the distance from the camera to the object is calculated by detecting the flight (round-trip) time of the light pulses to generate depth information, thereby obtaining a depth image containing the depth and contour of the photographed object.
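As a quick worked example of the time-of-flight relation (illustrative, not from the patent text): the measured round-trip time maps to distance via d = c * t / 2, since the pulse travels to the object and back:
```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Object distance from the measured round-trip time of a light pulse."""
    return C * round_trip_seconds / 2.0

print(tof_distance(6.67e-9))  # a ~6.67 ns round trip is about 1 meter
```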
The ultrasonic sensor may calculate a distance of the object from the ultrasonic sensor by calculating a round trip time by transmitting an ultrasonic signal to the object and receiving a sound wave signal reflected from the object to generate depth information, thereby obtaining a depth image including a depth of the object and a contour of the object.
Illustratively, figs. 7A and 7B are two exemplary depth images provided by an embodiment of the present application.
And S1022, acquiring the gray level image of the target object by adopting a gray level image acquisition module.
In the embodiment of the application, the grayscale image acquisition module may be an infrared camera, and the terminal may acquire the grayscale image of the target object by controlling the infrared camera.
By way of example, fig. 8 is a schematic diagram of an exemplary grayscale image provided by an embodiment of the present application; as shown in fig. 8, the grayscale image includes contour information and grayscale information of the object.
And S1023, acquiring the depth image of the target object by adopting a depth image acquisition module, and acquiring the gray image of the target object by adopting a gray image acquisition module.
In the embodiment of the application, the infrared camera captures no color information and is therefore little affected by diffraction; in addition, the TOF camera and the structured light camera adopt a phased-array transmitter design in principle, and the ultrasonic sensor works by transmitting and receiving ultrasonic waves, so the influence of the opaque sub-pixels in the transparent display area can be eliminated in advance. Therefore, images that reflect the true features of the target object can be obtained using any one or more of an infrared camera, a TOF camera, an ultrasonic sensor, and a structured light camera.
In an embodiment of the application, a terminal comprises a display screen, wherein the display screen comprises a transparent display area and a non-transparent display area; the color image acquisition module and the base image acquisition module are arranged at positions opposite to the transparent display area and are respectively used for acquiring a color image and a base image of the target object through the transparent display area.
In the embodiment of the application, the color image acquisition module and the base image acquisition module are built into the terminal and arranged at positions corresponding to the transparent display area. During image acquisition, the color image acquisition module collects the color image of the target object from the incident light that passes through the transparent display area into the terminal, and the base image acquisition module collects the base image of the target object by transmitting and receiving light waves that can pass through the transparent display area.
For example, fig. 9 is a schematic structural diagram of an exemplary display screen provided in an embodiment of the present application; fig. 10 is a schematic partial cross-sectional view of an exemplary terminal along a-B direction provided by an embodiment of the present application. As shown in fig. 9 and 10, the terminal includes a display screen 10, a color image capturing module 20, and a base image capturing module 30, where the display screen 10 includes a transparent display area 11 and a non-transparent display area 12, and the color image capturing module 20 and the base image capturing module 30 are both disposed at positions corresponding to the transparent display area 11.
Fig. 11 is an optional flowchart of the image processing method according to the embodiment of the present application. After S103, S201 is further included, and S105 may be implemented by S1051 to S1052. The case in which S201 follows S104 is described below with reference to the steps shown in fig. 11.
S201, determining a noise reference feature based on the first feature and the second feature; the noise reference feature is a feature used to optimize the noise region among the first features.
In an embodiment of the application, when the terminal compares the first feature with the second feature and determines that the correlation between some of the first features and the corresponding second features is below a preset threshold, indicating that those features are noise in the color image, the region where those features are located is a noise region of the color image, and the corresponding features among the second features serve as the noise reference features for optimizing that noise region.
S1051, denoising the noise region.
In the embodiment of the present application, after determining the noise region in the color image, the terminal may remove the noise in the noise region using methods such as the wavelet transform, frequency-domain processing, or the Laplace transform; a wavelet-based sketch follows.
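For illustration (one possible realization, not the patent's prescribed one), wavelet soft-thresholding with PyWavelets can suppress fringe-like noise; the wavelet name, decomposition level, and threshold rule are assumptions:
```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2, sigma=10.0):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image.astype(np.float32), wavelet, level=level)
    thresh = sigma * np.sqrt(2 * np.log(image.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```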
S1052, restoring the contour information, the gray information, and the color information of the denoised region according to the noise reference features.
In the embodiment of the application, after removing the noise in the noise region, the terminal may restore the contour information, the gray information, and the color information of the denoised region based on the determined noise reference features. For example, when the noise region is a diffraction fringe region, after the terminal removes the diffraction fringes from the color image, it may restore the contour information, the gray information, and the color information of the region from which the fringes were removed according to one or more of the texture features, gray features, shape features, and spatial relationship features of the image region in the base image at the same position as the fringes.
In the embodiment of the application, the base image reflects the real features of the target object. By comparing the color image with the base image, the regions of the color image occupied by shooting noise such as interference fringes, diffraction fringes, and glare can be located; by eliminating the shooting noise and restoring the contour information, the gray information, and the color information of the regions from which it was eliminated, a color image that better reflects the real features of the target object is obtained, thereby improving image quality.
In other embodiments of the present application, S201 may also be performed simultaneously with S104.
Fig. 12 is an optional schematic flow chart of the image processing method according to the embodiment of the present application, which may further include S301-S302 after the above-mentioned S103 and before the above-mentioned S105, and the following description will be made with reference to the steps shown in fig. 12 by taking the example of including S301-S302 after S103 in fig. 4.
S301, determining a blurred region and an optimized reference feature in the color image based on the first feature and the second feature; the optimized reference feature is a feature used to optimize the blurred region.
In the embodiment of the application, when the terminal determines that the correlation between some of the first features and the corresponding second features is below another preset threshold, indicating that those features belong to a blurred part of the color image, the region where those features are located is a blurred region of the color image, and the corresponding features among the second features serve as the optimized reference features for optimizing that blurred region.
S302, correcting the contour information, the gray information, and the color information of the blurred region according to the optimized reference features.
In the embodiment of the application, after determining the blurred region in the color image, the terminal may correct the contour information, the gray information, and the color information of the blurred region based on the determined optimized reference features. For example, the terminal may correct the blurred region according to one or more of the texture features, color features, shape features, and spatial relationship features of the image region in the base image at the same position as the blurred region, so that the blurred region becomes clear; one possible realization is sketched below.
In other embodiments of the present application, S301 to S302 may also be executed synchronously with S104, which is not limited in this embodiment of the present application.
In the embodiment of the application, because the base image acquisition module, which comprises one or more of an infrared camera, a TOF camera, a structured light camera, and an ultrasonic sensor, adapts well to dark environments, correcting the color image collected by the color image acquisition module in a dark environment based on the features of the base image collected by the base image acquisition module compensates the color image, and thereby improves the imaging quality of the terminal in dark environments.
Fig. 13 is an optional schematic flow chart of the image processing method according to the embodiment of the present application, and after S105, S106 may be further included, and the following description will be given with reference to the step shown in fig. 13 by taking the example that S106 is further included after S105 in fig. 4.
S106, performing smoothing compensation processing on the processed color image by using a deconvolution model.
In the embodiment of the application, after obtaining the corrected and restored color image, the terminal may input it into a deconvolution model and perform smoothing compensation on the corrected and restored image regions through the deconvolution model, so as to obtain a smooth image from which the noise interference information has been removed.
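The patent does not specify the deconvolution model's architecture; the following PyTorch sketch is one plausible encoder-decoder shape built around transposed convolutions, with all layer sizes being assumptions for illustration:
```python
import torch
import torch.nn as nn

class DeconvSmoother(nn.Module):
    """Small conv/deconv network mapping a restored RGB image to a
    smoothed RGB image of the same size (H and W divisible by 4)."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, 3, H, W), values in [0, 1]
        return self.decode(self.encode(x))
```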
In the embodiment of the present application, before S106, S401-S403 are further included, which are specifically as follows:
S401, constructing a training set, a verification set, and a test set.
In the embodiment of the application, a sample set can be constructed according to the color image after the correction processing and the restoration processing, and the sample set is divided into a training set, a verification set and a test set.
S402, training a preset initial deconvolution neural network model by adopting a training set and a verification set.
In the embodiment of the application, the training set may be used to train a preset initial deconvolutional neural network model: a loss value is calculated with a preset loss function and the model parameters of the initial deconvolutional neural network model are adjusted; once the loss value meets a preset condition, training is complete and the trained initial deconvolutional neural network model is output. The verification set is used to adjust the hyper-parameters of the initial deconvolutional neural network model during training.
S403, performing a smoothing compensation test on the trained initial deconvolutional neural network model using the test set, and taking the initial deconvolutional neural network that satisfies a preset smoothing compensation condition as the deconvolution model.
In the embodiment of the application, the test set may be used to run a smoothing compensation test on the trained initial deconvolutional neural network model and judge whether its smoothing compensation effect reaches a preset level. If it does not, the training set and the verification set are used to continue training and tuning the model, and the test set is used to test again, until the tested model reaches the preset smoothing compensation effect; the model that reaches this effect is then taken as the deconvolution model. A hedged end-to-end sketch follows.
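A sketch of S401 to S403 under stated assumptions: paired samples (restored input, clean target) are available as tensors, the DeconvSmoother above is reused, and the optimizer, loss, and split ratios are illustrative choices:
```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_smoother(inputs, targets, epochs=20, lr=1e-3):
    """inputs/targets: float tensors of shape (N, 3, H, W) in [0, 1]."""
    data = TensorDataset(inputs, targets)
    n_train = int(0.8 * len(data))
    n_val = int(0.1 * len(data))
    train, val, test = random_split(
        data, [n_train, n_val, len(data) - n_train - n_val])
    model = DeconvSmoother()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in DataLoader(train, batch_size=8, shuffle=True):
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    # per S402/S403, validation would tune hyper-parameters and the test
    # set would gate acceptance of the model as the deconvolution model
    return model
```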
In the embodiment of the application, through training and testing, a deconvolution model with a good smoothing compensation effect can be obtained, yielding a good image processing result.
An image processing apparatus is further provided in the embodiments of the present application, and fig. 14 is a schematic structural diagram of the image processing apparatus provided in the embodiments of the present application; as shown in fig. 14, the image processing apparatus 2 includes: an extraction unit 21 for extracting a first feature of the color image and a second feature of the base image, respectively; a determining unit 22, configured to determine a noise region in the color image based on the first feature and the second feature; and the processing unit 23 is configured to perform reduction processing on the contour information, the gray information, and the color information of the noise region to obtain a composite image.
In some embodiments of the present application, the determining unit 22 is further configured to determine a noise reference feature based on the first feature and the second feature; the noise reference feature is a feature used for optimizing the noise region in the first feature; denoising the noise region; and restoring the contour information, the gray information and the color information of the denoised region according to the noise reference characteristics.
In some embodiments of the present application, the noise includes interference information caused by ambient light.
In some embodiments of the present application, the interference information caused by ambient light comprises at least one of: diffraction fringes, interference fringes, and glare.
In some embodiments of the present application, after the extracting of the first feature of the color image and the second feature of the base image and before obtaining the composite image, the determining unit 22 is further configured to determine a blurred region and an optimized reference feature in the color image based on the first feature and the second feature, the optimized reference feature being a feature used to optimize the blurred region; the processing unit 23 is further configured to correct the contour information, the gray information, and the color information of the blurred region according to the optimized reference feature.
In some embodiments of the present application, after the color image is subjected to the reduction processing and before the composite image is obtained, the processing unit 23 is further configured to perform smoothing compensation processing on the processed color image by using a deconvolution model.
An embodiment of the present application provides an image capturing apparatus, including: a display screen and an image acquisition device arranged in sequence along the direction of the optical axis; the display screen comprises a transparent display area; the image acquisition device comprises a color image acquisition module and a base image acquisition module; the color image acquisition module and the base image acquisition module are arranged at positions opposite to the transparent display area; the color image acquisition module is used for imaging based on incident light that passes through the transparent display area along the optical axis direction and is incident on the color image acquisition module, to obtain a color image of a target object; and the base image acquisition module is used for obtaining a base image of the target object through the transparent display area.
An embodiment of the present application provides a terminal, fig. 15 is a schematic structural diagram of the terminal provided in the embodiment of the present application, and as shown in fig. 15, a terminal 1 includes: the display screen 31, the memory 32, the processor 33, and the color image acquisition module 34 and the base image acquisition module 35; the display screen 31, the memory 32, the processor 33, the color image capturing module 34 and the base image capturing module 35 are connected by a bus 36. A display screen 31 including a transparent display area; the color image acquisition module 34 and the base image acquisition module 35 are both arranged at positions opposite to the transparent display area, and the color image acquisition module 34 is used for acquiring a color image of the target object through the transparent display area; the base image acquisition module 35 is configured to acquire a base image of the target object through the transparent display area; a memory 32 for storing an executable computer program; the processor 33 is configured to implement the method provided by the embodiment of the present application when executing the executable computer program stored in the memory 32. For example, the image processing method provided by the embodiment of the application.
Embodiments of the present application provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, it causes the processor to perform the method provided by the embodiments of the present application, for example the image processing method provided by the embodiments of the present application.
In some embodiments of the present application, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments of the application, the executable instructions may be in the form of a program, software module, script, or code, written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, in the above technical scheme, the infrared camera or the TOF/structured-light camera adapts better to dark environments and is less affected by the opaque sub-pixels of the transparent display area. Therefore, by restoring and correcting the color image collected by the RGB camera, which contains noise such as interference fringes, diffraction fringes, and glare and may contain blurred regions, based on the features of the grayscale image or depth image collected by the infrared camera or the TOF/structured-light camera, the noise information can be removed and the blurred regions improved, thereby obtaining a color image that better reflects the real features of the target object and improving the quality of the obtained color image.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (13)

1. An image processing method is applied to a terminal, wherein the terminal comprises a color image acquisition module and a base image acquisition module, and the method comprises the following steps:
acquiring a color image of a target object by using the color image acquisition module;
acquiring a base image of the target object by using the base image acquisition module;
respectively extracting a first feature of the color image and a second feature of the base image;
determining a noise region in the color image based on the first feature and the second feature;
and restoring the contour information, the gray information and the color information of the noise area to obtain a composite image.
2. The method of claim 1, wherein the base image acquisition module comprises at least one of a depth image acquisition module and a grayscale image acquisition module; and the acquiring a base image of the target object by using the base image acquisition module comprises:
acquiring a depth image of the target object by using the depth image acquisition module; and/or,
acquiring a grayscale image of the target object by using the grayscale image acquisition module.
3. The method according to claim 1 or 2, wherein the terminal comprises a display screen comprising a transparent display area; the color image acquisition module and the base image acquisition module are both arranged at the position opposite to the transparent display area;
the color image acquisition module acquires a color image of the target object through the transparent display area;
and the base image acquisition module acquires the base image of the target object through the transparent display area.
4. The method according to claim 1, wherein after said extracting the first feature of the color image and the second feature of the base image, respectively, the method further comprises:
determining a noise reference feature based on the first feature and the second feature; the noise reference feature is a feature used for optimizing the noise region in the first feature;
wherein the restoring the contour information, the gray information, and the color information of the noise region to obtain a composite image comprises:
denoising the noise region;
and restoring the contour information, the gray information and the color information of the denoised region according to the noise reference characteristics.
5. The method of claim 4, wherein the noise comprises interference information caused by ambient light.
6. The method of claim 5, wherein the interference information caused by the ambient light comprises at least one of:
diffraction fringes, interference fringes, and glare.
7. The method according to claim 1 or 4, wherein after said extracting the first feature of the color image and the second feature of the base image respectively and before obtaining the composite image, the method further comprises:
determining a blurred region and an optimized reference feature in the color image based on the first feature and the second feature; wherein the optimized reference feature is a feature used to optimize the blurred region;
and correcting the contour information, the gray information, and the color information of the blurred region according to the optimized reference feature.
8. The method according to claim 1, further comprising, after the restoration processing on the color image and before obtaining the composite image:
performing smoothing compensation processing on the processed color image by using a deconvolution model.
9. The method of claim 2, wherein the color image acquisition module comprises a three-primary-color (RGB) camera; the grayscale image acquisition module comprises an infrared camera; and the depth image acquisition module comprises at least one of:
a structured light camera, an ultrasonic sensor, and a Time of Flight (TOF) camera.
10. An image processing apparatus characterized by comprising:
the extraction unit is used for respectively extracting a first feature of the color image and a second feature of the base image;
a determining unit configured to determine a noise region in the color image based on the first feature and the second feature;
and the processing unit is used for carrying out reduction processing on the contour information, the gray information and the color information of the noise area to obtain a composite image.
11. An image acquisition apparatus, characterized by comprising:
the display screen and the image acquisition device are sequentially arranged along the direction of the optical axis;
the display screen comprises a transparent display area;
the image acquisition device comprises a color image acquisition module and a base image acquisition module;
the color image acquisition module and the base image acquisition module are arranged at the positions opposite to the transparent display area;
the color image acquisition module is used for imaging based on incident light which penetrates through the transparent display area along the optical axis direction and is incident to the color image acquisition module to obtain a color image of a target object;
and the base image acquisition module is used for obtaining a base image of the target object through the transparent display area.
12. A terminal, comprising:
a display screen, a memory, a processor, and the color image acquisition module and the base image acquisition module of claim 1;
the display screen comprises a transparent display area;
the color image acquisition module and the base image acquisition module are arranged at the positions opposite to the transparent display area;
the color image acquisition module is used for acquiring a color image of the target object through the transparent display area;
the base image acquisition module is used for acquiring a base image of the target object through the transparent display area;
the memory for storing an executable computer program;
the processor, when executing an executable computer program stored in the memory, for implementing the method of any of the preceding claims 1 to 9.
13. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to carry out the method of any one of claims 1 to 9.
CN202010706558.7A (filed 2020-07-21, priority 2020-07-21): Image processing method, device, equipment and computer readable storage medium. Status: Pending. Published as CN114040089A.

Priority Applications (1)

Application Number: CN202010706558.7A; Priority Date: 2020-07-21; Filing Date: 2020-07-21; Title: Image processing method, device, equipment and computer readable storage medium


Publications (1)

Publication Number: CN114040089A; Publication Date: 2022-02-11

Family

Family ID: 80134053

Family Applications (1)

Application Number: CN202010706558.7A; Title: Image processing method, device, equipment and computer readable storage medium; Status: Pending

Country Status (1)

CN: CN114040089A (en)

Citations (6)

* Cited by examiner, † Cited by third party

  • KR20140106870A * (Samsung Electronics Co., Ltd.; priority 2013-02-27, published 2014-09-04): Apparatus and method of color image quality enhancement using intensity image and depth image
  • CN109379454A * (Shenzhen Orbbec Co., Ltd.; priority 2018-09-17, published 2019-02-22): Electronic equipment
  • CN109840475A * (Shenzhen Orbbec Co., Ltd.; priority 2018-12-28, published 2019-06-04): Face identification method and electronic equipment
  • CN109905691A * (Zhejiang Sunny Smart Optics Technology Co., Ltd.; priority 2017-12-08, published 2019-06-18): Depth image acquisition device, depth image acquisition system, and image processing method
  • KR102020464B1 * (Gachon University Industry-Academic Cooperation Foundation; priority 2018-03-12, published 2019-09-10): Color-mono Dual Camera Image Fusion Method, System and Computer-readable Medium
  • CN110322411A * (Guangdong Oppo Mobile Telecommunications Co., Ltd.; priority 2019-06-27, published 2019-10-11): Optimization method, terminal and storage medium of depth image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination