CN117745563B - Dual-camera combined tablet personal computer enhanced display method


Info

Publication number: CN117745563B
Application number: CN202410190014.8A
Authority: CN (China)
Prior art keywords: image, foreground, corner, black, coefficient
Other versions: CN117745563A
Other languages: Chinese (zh)
Inventors: 樊云, 王云波, 黄伟, 付显品, 罗建强, 麦继划
Assignee: Shenzhen Geruibang Technology Co ltd
Filing date: 2024-02-21
Publication date (CN117745563A): 2024-03-22
Grant date (CN117745563B): 2024-05-14
Legal status: Active (granted)
Classification: Image Analysis

Abstract

The invention relates to the technical field of image enhancement, and in particular to a dual-camera combined tablet personal computer enhanced display method, comprising: acquiring a color image and a black-and-white image of the night scene; constructing a corner blur coefficient of the night scene image from the LBP values of the neighborhoods of corresponding corners in the color and black-and-white images and the difference between their edge lines; constructing a noise salient coefficient by combining the differences between each superpixel block in the color image and each foreground region in the black-and-white image; improving the gain coefficient of the gray world algorithm accordingly; and enhancing the composite of the color and black-and-white images with the gray world algorithm under the improved gain coefficient. Enhancement for a dual-camera tablet computer is thereby realized, the poor night-scene enhancement that results from considering only the pixel values of the image while ignoring night noise is avoided, and the definition of the night scene image is improved.

Description

Dual-camera combined tablet personal computer enhanced display method
Technical Field
The application relates to the technical field of image enhancement, in particular to a dual-camera combined tablet personal computer enhanced display method.
Background
With the development of science and technology, dual cameras have become popular in tablet computers. Dual cameras provide richer shooting functions, such as optical zoom, depth-of-field effects and better low-light performance; they also improve picture quality and the shooting experience, meeting users' varied shooting needs; in addition, dual cameras can support some augmented reality (AR) functions and 3D effects.
For better night scene acquisition, a tablet usually combines a color camera with a black-and-white camera. Because there is little light at night, interference is greater than in daytime, so the acquired image is blurred and still needs to be enhanced. Conventional image enhancement, however, considers only the pixel values of the image itself and ignores the influence of night noise, so its enhancement effect on night scene images is poor.
To this end, the present embodiment provides a dual-camera combined tablet computer enhanced display method: a color image and a black-and-white image of the night scene are collected, and a noise salient coefficient of the night scene image is constructed from the differences between various features of the pixels in the two images; the gain coefficient of the gray world algorithm is improved according to the noise salient coefficient, and the composite of the color and black-and-white images is enhanced with the gray world algorithm under the improved gain coefficient, so that the night scene image attains higher definition.
Disclosure of Invention
In order to solve the technical problems, the invention provides a dual-camera combined tablet personal computer enhanced display method for solving the existing problems.
The invention relates to a dual-camera combined tablet personal computer enhanced display method which adopts the following technical scheme:
The embodiment of the invention provides a dual-camera combined tablet personal computer enhanced display method, which comprises the following steps:
Collecting RGB images and binary images of night scene images, and respectively marking the RGB images and the binary images as color images and black-and-white images;
Acquiring a composite image of a color image and a black-and-white image; acquiring each foreground region in a black-and-white image; obtaining foreground salient coefficients of the black-and-white image according to the number of pixel points of all foreground areas; respectively acquiring each corner in the color image and the black-and-white image through Harris corner detection; obtaining matching angular points of each angular point in the black-and-white image in the color image according to the angular point coordinates in the color image and the black-and-white image; taking a set formed by each corner point in the black-and-white image and the corresponding matching corner point as each matching corner point combination; obtaining the Hamming distance of each matched corner combination according to the corner neighborhood in each matched corner combination; obtaining the corner fuzzy coefficient of the night scene image according to the Hamming distance of each matched corner combination and the number of edge lines in the color and black-and-white images; obtaining each foreground super-pixel block of the color image according to the super-pixel segmentation algorithm and the gray value of each pixel point in the gray image of the color image; obtaining a foreground fuzzy coefficient of each foreground super-pixel block according to the shape context descriptor of each foreground super-pixel block and each foreground region; obtaining a noise salient coefficient of the night scene image according to the foreground salient coefficient, the corner fuzzy coefficient and the foreground fuzzy coefficient;
Obtaining an updated gain coefficient of the gray world algorithm according to the noise salient coefficient; and enhancing the composite image by updating the gain coefficient and combining a gray world algorithm.
Preferably, the acquiring each foreground area in the black-and-white image specifically includes: and acquiring connected domains in the black-and-white image, and taking each connected domain with the pixel value larger than 0 as each foreground region.
Preferably, the obtaining the foreground salient coefficients of the black-and-white image according to the number of pixels of all foreground areas specifically includes:
Acquiring the total number of pixel points in all foreground areas; and taking the product of the number of foreground areas and the total number of pixel points as a foreground salient coefficient of the black-and-white image.
Preferably, the obtaining the matching corner points of each corner point in the black-and-white image in the color image according to the coordinates of the corner points in the color and black-and-white images specifically includes:
and calculating Euclidean distance between the jth corner in the black-and-white image and each corner coordinate in the color image, and taking the corner in the color image corresponding to the minimum value in all the Euclidean distances as a matching corner of the jth corner in the black-and-white image.
Preferably, the hamming distance of each matching corner combination is obtained according to the corner neighborhood in each matching corner combination, specifically:
Acquiring a binary image of the gray level image of the color image through the OTSU algorithm, and recording it as the binary image of the color image; acquiring the LBP codes of each corner neighborhood in the black-and-white image and the color image respectively by adopting the local binary pattern; calculating the Hamming distance between the LBP codes of each corner in the black-and-white image and its matching corner in the color image; and taking this Hamming distance as the Hamming distance of each matching corner combination.
Preferably, the obtaining the corner blur coefficient of the night scene image according to the hamming distance of each matching corner combination and the number of edge lines in the color and black-and-white images specifically includes:
Calculating the average value of the Hamming distances of all the matched corner combinations; calculating the absolute value of the difference value of the quantity of edge lines between the gray level image and the black-and-white image of the color image; calculating the absolute value of the angular point quantity difference between the color image and the black-and-white image; calculating the product of the absolute value of the edge line quantity difference and the absolute value of the corner quantity difference; calculating a ratio of the average value to the product; and taking the ratio as the corner blurring coefficient of the night scene image.
Preferably, the obtaining each foreground super pixel block of the color image according to the super pixel segmentation algorithm and the gray value of each pixel point in the gray map of the color image specifically includes:
Acquiring each super pixel block of the color image through a super pixel segmentation algorithm; and in the gray level diagram of the color image, acquiring each super-pixel block with the gray value average value of all pixel points in the super-pixel block larger than a preset threshold value as each foreground super-pixel block of the color image.
Preferably, the obtaining the foreground fuzzy coefficient of each foreground super pixel block according to the shape context descriptor of each foreground super pixel block and the foreground region specifically includes:
Acquiring mass centers of all foreground super-pixel blocks and all foreground areas; acquiring Euclidean distance between each foreground super-pixel block and the centroid coordinates of each foreground region; taking a foreground region corresponding to the minimum Euclidean distance value of each foreground super-pixel block as the nearest foreground region of each foreground super-pixel block; taking the gray value average value of all pixel points in each foreground super-pixel block as the gray value of each foreground super-pixel block; calculating Euclidean distance between each foreground super-pixel block and a shape context descriptor of the corresponding nearest foreground region;
calculating the product of Euclidean distance between each foreground super-pixel block and the centroid coordinates of the corresponding nearest foreground region and Euclidean distance between the shape context descriptors; calculating the difference value between the gray value of each foreground super pixel block and a preset threshold value; calculating a ratio of the product to the difference; and taking the ratio as a foreground fuzzy coefficient of each foreground super pixel block.
Preferably, the obtaining the noise salient coefficient of the night scene image according to the foreground salient coefficient, the corner blurring coefficient and the foreground blurring coefficient specifically includes:
calculating the average value of the foreground fuzzy coefficients of all foreground super pixel blocks; calculating the product of the mean value and the corner fuzzy coefficient; calculating the ratio of the product to the foreground salient coefficient; and taking the ratio as a noise salient coefficient of the night scene image.
Preferably, the obtaining the updated gain coefficient of the gray world algorithm according to the noise salient coefficient specifically includes:
And taking the product of the noise salient coefficient and the original gain coefficient of the gray world algorithm as the updated gain coefficient of the gray world algorithm.
The invention has at least the following beneficial effects:
According to the method, the color camera image and the black-and-white camera image are respectively obtained, the characteristics of the color image and the black-and-white image in the night scene are analyzed by combining the characteristics of the night scene, the influence of noise in the night scene is considered, the night scene noise salient coefficient is obtained, the gain coefficient is updated according to the night scene noise salient coefficient, the problem that the enhancement effect on the night scene image is poor due to the fact that only the pixel value of the image is considered and the influence of noise in the night scene is not considered is avoided, and the definition degree of the night scene image is improved;
The foreground salient coefficient of the black-and-white image is obtained from each foreground region in the black-and-white image after collecting the color and black-and-white images of the night scene; each matching corner combination is obtained from the distances between the corners in the color and black-and-white images; the corner blur coefficient of the night scene image is obtained from the differences between the LBP values of the corner neighborhoods in each matching corner combination and the edge-line differences between the color and black-and-white images; the foreground blur coefficient of each foreground superpixel block is obtained from the superpixel blocks in the color image and the shape context descriptors of the foreground regions in the black-and-white image; the gain coefficient of the gray world algorithm is adjusted by combining the foreground salient coefficient and the corner blur coefficient to obtain an updated gain coefficient; and the night scene image is enhanced with the updated gain coefficient and the gray world algorithm, so that the method has a strong image enhancement effect.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a dual-camera combined tablet computer enhanced display method provided by the invention;
Fig. 2 is a specific step diagram of a dual-camera combined tablet computer enhanced display method.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following is a specific implementation, structure, characteristics and effects of the dual-camera combined tablet personal computer enhanced display method according to the invention, which are described in detail below with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the dual-camera combined tablet personal computer enhanced display method provided by the invention with reference to the accompanying drawings.
The embodiment of the invention provides a dual-camera combined tablet personal computer enhanced display method.
Specifically, the following method for enhancing display of a tablet personal computer combined with two cameras is provided, please refer to fig. 1, and the method comprises the following steps:
In step S001, a color image and a black-and-white image are obtained by the dual cameras of the tablet.
The night scene image is acquired using the tablet's dual cameras; the two cameras selected in this embodiment are a color camera and a black-and-white camera. To ensure that the acquired images capture a consistent scene, the two cameras must shoot at the same moment, i.e., acquire simultaneously. The color image is an image in RGB space, and the image obtained by the black-and-white camera is a binary image.
After the color image and the black-and-white image are obtained, the images are denoised to reduce the interference of external light, noise and other factors. This embodiment selects Gaussian filtering to process the noise in the images; a practitioner can select other denoising modes according to the actual situation. The denoised color image is denoted CF, and the denoised black-and-white image is denoted BF.
At this point, the color image CF and the black-and-white image BF have been acquired.
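A minimal sketch of this acquisition-and-denoising step in Python with OpenCV follows; the file names, kernel size and re-threshold value are assumptions for illustration, not values fixed by the method:

```python
import cv2

# Illustrative file names; in practice the frames come straight from the
# two camera pipelines, captured at the same instant.
cf = cv2.imread("color_frame.png")                     # color frame (BGR)
bf = cv2.imread("bw_frame.png", cv2.IMREAD_GRAYSCALE)  # black-and-white frame

# Gaussian filtering to suppress sensor noise; a 5x5 kernel with an
# auto-derived sigma is a common default, not a value fixed by the patent.
cf_denoised = cv2.GaussianBlur(cf, (5, 5), 0)
bf_denoised = cv2.GaussianBlur(bf, (5, 5), 0)

# The method treats BF as binary, so re-threshold after smoothing.
_, bf_bin = cv2.threshold(bf_denoised, 127, 255, cv2.THRESH_BINARY)
```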
And step S002, analyzing according to the characteristics of the acquired color image and black-and-white image in the night scene to acquire the noise salient coefficients of the night scene.
With the development of technology, tablets increasingly carry multiple cameras, among which the color-plus-black-and-white dual camera fuses the two acquired images to finally obtain a relatively clear image. The color camera mainly records color, while the black-and-white lens records the outlines of objects. When night scene images are acquired, illumination is usually insufficient and little external information is captured, so the signal-to-noise ratio is low; compared with images acquired in daytime, the acquired images contain more noise.
After the color image CF and the black-and-white image BF are acquired, the image obtained by combining the two images is determined as a combined image SF, and the combined image is a color image in RGB space.
The two images are registered and matched by computer algorithms: features are extracted from the images using the Harris algorithm, and the RANSAC algorithm matches and transforms the feature points of the two images to determine the positions of the same objects in both. During registration the two images are also de-distorted, using Zhang Zhengyou's calibration method, and the final image is synthesized after feature point extraction and matching.
The above Harris algorithm, Zhang Zhengyou calibration method and synthesis process are known techniques and are not described here in detail.
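The following sketch illustrates one way such a registration-and-fusion pipeline can look in OpenCV. ORB keypoints stand in for plain Harris corners here only because they come bundled with descriptors for matching; the RANSAC step mirrors the outlier rejection described above, and the 50/50 luminance fusion weight is an assumption:

```python
import cv2
import numpy as np

def register_and_compose(cf_gray, bf, cf_color):
    """Align BF to CF with RANSAC-filtered feature matches, then fuse."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(cf_gray, None)   # CF keypoints
    kp2, des2 = orb.detectAndCompute(bf, None)        # BF keypoints

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects outliers

    h, w = cf_gray.shape
    bf_aligned = cv2.warpPerspective(bf, H, (w, h))       # BF mapped onto CF

    # Fuse: luminance from the aligned black-and-white frame, chrominance
    # from the color frame; a simple YCrCb scheme with assumed weights.
    ycrcb = cv2.cvtColor(cf_color, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.addWeighted(ycrcb[:, :, 0], 0.5, bf_aligned, 0.5, 0)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```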
A native black-and-white sensor avoids the light loss introduced by a color filter, letting more light reach the photosensitive element, and so has better high-sensitivity performance under otherwise identical conditions. That is, the brightness information captured by the black-and-white camera is superimposed on the picture from the color camera to obtain a luminance channel of better quality, and the picture is then colored using the color information from the color camera, yielding better overall picture quality.
When a night scene image is acquired, insufficient illumination at night blurs the acquired image; the colors of objects may deviate and the contour edges of objects become blurred. A conventional image enhancement algorithm neither restores the colors of such an image nor recovers clear edges.
Among the acquired color image and black-and-white image, the black-and-white image BF mainly reflects brightness information of the image, and the color image CF captures color information of the image.
The obtained black-and-white image BF is a binary image whose pixels take only the values 0 and 1. Pixels with value 1 correspond to the brighter parts of the image. Since BF is a night scene image, the brighter regions carry relatively more information, so each connected domain composed of pixels with value 1 is taken as a foreground region of BF. The number of foreground regions in BF is recorded as $n$, and the total number of pixels making up all foreground regions is recorded as $m$.
The foreground salient coefficient $ppc$ of the black-and-white image is obtained as:

$$ppc = n \times m$$

where $ppc$ is the foreground salient coefficient of the black-and-white image, $n$ is the number of foreground regions in image BF, and $m$ is the total number of pixels over all foreground regions in image BF.
Formula logic: the more foreground regions BF contains and the more pixels those regions cover, the more prominent the foreground is in the image, and the larger the value of the foreground salient coefficient $ppc$.
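A minimal sketch of extracting the foreground regions and computing $ppc$ with OpenCV connected components (bf_bin denotes the binary black-and-white frame from the denoising step):

```python
import cv2
import numpy as np

# bf_bin: the (re-thresholded) binary black-and-white frame, stored as 0/255.
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(bf_bin, connectivity=8)

n = num_labels - 1                           # label 0 is the background
m = int(stats[1:, cv2.CC_STAT_AREA].sum())   # pixels over all foreground regions

ppc = n * m                                  # foreground salient coefficient
```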
To analyze the outlines of the foreground regions, the Canny edge detection operator is applied to BF to obtain its edge lines. The $i$-th edge line in BF is denoted $e_i^{BF}$, and the number of edge lines in BF is denoted $N_{BF}^{e}$. Harris corner detection is then used to obtain the corners in BF; the $j$-th corner is denoted $c_j^{BF}$, and the total number of corners in BF is denoted $N_{BF}^{c}$. The Canny operator and Harris corner detection are known techniques, and the specific processes are not repeated.
Further, BF is a black-and-white image reflecting the light-dark information of the scene, while CF is a color image reflecting its color information. CF is converted to grayscale (graying is a known technique, and the specific process is not repeated). Likewise, the Canny edge detection operator is applied to the gray image of CF to obtain its edge lines, and Harris corner detection is applied to CF to obtain its corners. The $p$-th edge line of the gray image of CF is denoted $e_p^{CF}$, and the number of edge lines in the gray image of CF is denoted $N_{CF}^{e}$.
The coordinates of all corners in CF are also acquired; the coordinate position of the $k$-th corner is denoted $(x_k, y_k)$, and the number of corners in CF is denoted $N_{CF}^{c}$. Similarly, the coordinate positions of the corners in BF are obtained, and the coordinate position of the $j$-th corner $c_j^{BF}$ in BF is denoted $(x_j, y_j)$.
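As an illustration, the edge-line and corner counts can be gathered as follows; the Canny thresholds and the Harris response cutoff are assumptions:

```python
import cv2
import numpy as np

def edge_and_corner_counts(gray):
    """Edge-line count (Canny contours) and Harris corner coordinates."""
    edges = cv2.Canny(gray, 50, 150)          # thresholds are assumptions
    # Each contour of the edge map is counted as one edge line here.
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs
    return len(contours), corners

# N_bf_e, bf_corners = edge_and_corner_counts(bf_bin)
# N_cf_e, cf_corners = edge_and_corner_counts(cv2.cvtColor(cf_denoised, cv2.COLOR_BGR2GRAY))
```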
Because CF is an image in RGB space while BF is a black-and-white (i.e., binary) image, CF contains more image information, so in the same scene the numbers of edge lines and corners detected in CF should exceed those detected in BF. At the same time, owing to the night scene, the acquired CF is blurred compared with an image acquired under sufficient illumination, so correspondingly fewer edge lines and corners are detected than under normal light.
In CF, besides the edges with an obvious light-dark relation that also appear in BF, edges and corners with a less obvious light-dark relation are detected, since CF is an RGB image carrying more information than a black-and-white one; when CF is affected by the night light source, the number of these less obvious edges and corners decreases.
Meanwhile, the edge lines and corners detected in BF are the boundary lines and corners of regions with obvious color change; correspondingly, edge lines and corners are detected in CF at the positions of those in BF, and those positions are likewise regions of obvious color change in CF.
Take corner $c_j^{BF}$ in BF, with coordinates $(x_j, y_j)$, as an example. The corners of BF and CF are placed in the same coordinate system, and the Euclidean distance between $c_j^{BF}$ and every corner in CF is computed; the CF corner with the smallest Euclidean distance, denoted $c_j^{CF}$, is paired with $c_j^{BF}$ as a pair of matching corners. The same processing is applied to all corners of BF in this coordinate system, yielding the matching corner in CF of every corner in BF. Each set consisting of a BF corner and its matching corner is taken as a matching corner combination.
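A sketch of the nearest-neighbour corner matching, using a k-d tree for the Euclidean distance queries (the tree is an implementation convenience, not part of the method):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_corners(bf_corners, cf_corners):
    """Nearest CF corner (Euclidean) for every BF corner."""
    tree = cKDTree(cf_corners)        # index the CF corner coordinates
    _, idx = tree.query(bf_corners)   # nearest neighbour per BF corner
    # Pair j: (BF corner j, CF corner idx[j]) is one matching corner combination.
    return idx
```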
The optimal segmentation threshold $T$ of the gray image of CF is obtained through the OTSU algorithm, and the gray image of CF is thresholded with $T$; the resulting binary image is taken as the binary image of CF and denoted BCF.
After BCF is acquired, take the $j$-th matching corner combination as an example. For the BF corner $c_j^{BF}$ in the combination, an $l \times l$ neighborhood centered on the corner is constructed, and the LBP code of this neighborhood is obtained through the local binary pattern, denoted $L_j^{BF}$. For the matching corner $c_j^{CF}$, its position in BCF is first located, and the LBP code of its neighborhood in BCF is obtained in the same way, denoted $L_j^{CF}$. The Hamming distance between the LBP codes $L_j^{BF}$ and $L_j^{CF}$ is computed and taken as the Hamming distance of the $j$-th matching corner combination, denoted $H_j$. The size $l$ of the corner neighborhood can be set by the practitioner and is not specifically limited in this embodiment; the local binary pattern and the Hamming distance are known techniques, and the detailed processes are not repeated.
Similarly, the Hamming distances of all matching corner combinations are obtained. Since the number of corners in BF is $N_{BF}^{c}$, the number of matching corner combinations, and hence of Hamming distances, is also $N_{BF}^{c}$.
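A minimal sketch of the neighborhood LBP code and the Hamming distance between two such codes (the neighborhood size is the practitioner-chosen parameter $l$ mentioned above, fixed to 3 here for illustration):

```python
import numpy as np

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, y, x):
    """8-bit LBP of the 3x3 neighbourhood centred at (y, x); assumes the
    point lies at least one pixel inside the image border."""
    c = img[y, x]
    return [1 if img[y + dy, x + dx] >= c else 0 for dy, dx in NEIGHBOURS]

def hamming(a, b):
    """Hamming distance H_j between two LBP bit lists."""
    return int(sum(u != v for u, v in zip(a, b)))
```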
The corner blur coefficient $cvc$ of the night scene image is obtained as:

$$cvc = \frac{\dfrac{1}{N_{BF}^{c}} \sum_{j=1}^{N_{BF}^{c}} H_j}{\left| N_{CF}^{e} - N_{BF}^{e} \right| \times \left| N_{CF}^{c} - N_{BF}^{c} \right| + 1}$$

where $cvc$ is the corner blur coefficient of the night scene image; $N_{BF}^{e}$ is the number of edge lines obtained from image BF and $N_{BF}^{c}$ the number of corners obtained from image BF; $N_{CF}^{e}$ is the number of edge lines obtained from image CF and $N_{CF}^{c}$ the number of corners obtained from image CF; $H_j$ is the Hamming distance of the $j$-th matching corner combination; and the constant 1 is an adjusting factor that prevents the denominator from being 0.
Formula logic: as analyzed above, CF is a color image in RGB space and BF is a black-and-white image; when both capture the same scene, CF should carry more image information than BF, i.e., more edge lines and corners should be detected in CF than in BF. If CF is affected by night light, the acquired image is blurred: apart from the most prominent edges and corners, little else is detected, so $\left| N_{CF}^{e} - N_{BF}^{e} \right|$ and $\left| N_{CF}^{c} - N_{BF}^{c} \right|$ are small.
Meanwhile, since CF carries more information than BF, every corner found in BF is also found in CF, and the image texture around a corner at the same scene position is the same in both images. Because the two cameras sit at slightly different positions behind the tablet, the two views of the same scene are slightly offset, so corners at the same scene position are paired by the Euclidean distance between CF and BF, and their LBP values are compared. If CF suffers strong night-scene interference, the acquired image is relatively blurred, and after CF is converted to a binary image the Hamming distances of the matching corner combinations become large, i.e., $H_j$ is large; the resulting corner blur coefficient $cvc$ is then large. Conversely, the obtained corner blur coefficient $cvc$ is small.
Since CF is a color image in RGB space, it mainly carries color information; because light is insufficient at night, the acquired image is blurred and its colors may deviate considerably.
In an RGB image, lighter-colored regions typically have high RGB values. The SLIC superpixel segmentation algorithm is applied to CF to obtain its superpixel blocks (superpixel segmentation is a known technique, and the specific process is not repeated). In the gray image of CF, the mean gray value of all pixels inside each superpixel block is computed and taken as the gray value of that block. The superpixel blocks whose gray value exceeds the optimal segmentation threshold $T$ are marked as foreground superpixel blocks; the $k$-th foreground superpixel block is denoted $SP_k$, and its gray value is denoted $g_k$. Meanwhile, the shape context descriptor of each foreground superpixel block $SP_k$ is acquired, specifically: all pixels on the contour edge line of the block are represented in polar coordinates as a series of radii and angles and normalized, and the normalized contour is converted into a shape context descriptor, denoted $D_k$ for block $SP_k$. The shape context descriptors of all foreground superpixel blocks are obtained in the same way. The shape context is a known technique, and the detailed process is not repeated.
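A sketch of the superpixel step using scikit-image's SLIC; the segment count and compactness are assumptions:

```python
import cv2
import numpy as np
from skimage.segmentation import slic

# SLIC expects RGB; segment count and compactness are assumptions for the sketch.
cf_rgb = cv2.cvtColor(cf_denoised, cv2.COLOR_BGR2RGB)
segments = slic(cf_rgb, n_segments=300, compactness=10)  # label map, one id per block

gray = cv2.cvtColor(cf_denoised, cv2.COLOR_BGR2GRAY)
T, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # OTSU threshold

# Foreground super-pixel blocks: mean gray value above the OTSU threshold T.
fg_ids = [sid for sid in np.unique(segments) if gray[segments == sid].mean() > T]
fg_gray = {sid: float(gray[segments == sid].mean()) for sid in fg_ids}
```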
The shape context descriptor of each foreground region in the image BF is acquired in the above-described manner.
The centroid coordinates of all foreground superpixel blocks in CF and of all foreground regions in BF are acquired. Take the foreground superpixel block $SP_k$ as an example, with centroid $O_k$. The centroids of CF and BF are placed in the same coordinate system, and the Euclidean distance between $O_k$ and the centroid of every foreground region in BF is computed. The foreground region whose centroid is closest to $O_k$ is taken as the nearest foreground region of $SP_k$ and denoted $R_k$; the Euclidean distance between the two centroid coordinates is denoted $d_k$, and the shape context descriptor of the contour edge line of $R_k$ is denoted $D_k'$.
The Euclidean distance between the shape context descriptors $D_k$ and $D_k'$ is computed and denoted $s_k$.
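A sketch of the centroid matching; the shape-context descriptor distance $s_k$ is then an ordinary Euclidean norm between descriptor vectors:

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_region(block_centroids, region_centroids):
    """Match each foreground super-pixel block to its nearest BF foreground
    region by centroid distance; returns d_k and the matched region index."""
    tree = cKDTree(region_centroids)
    d, idx = tree.query(block_centroids)   # centroid-to-centroid distances
    return d, idx

# With D_blocks[k] and D_regions[idx[k]] as shape context descriptor vectors:
# s_k = np.linalg.norm(D_blocks[k] - D_regions[idx[k]])
```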
Taking the foreground superpixel block $SP_k$ as an example, its foreground blur coefficient $q_k$ is obtained as:

$$q_k = \frac{d_k \times s_k}{g_k - T}$$

where $q_k$ is the foreground blur coefficient; $s_k$ is the Euclidean distance between the shape context descriptors $D_k$ and $D_k'$; $d_k$ is the Euclidean distance between the centroid $O_k$ and the centroid of $R_k$; $g_k$ is the gray value of $SP_k$; and $T$ is the optimal segmentation threshold of image CF.
Formula logic: as analyzed above, the lighter-colored areas in CF correspond to the foreground regions in BF. Since the acquired image is a night scene, CF carries less light-dark information than BF owing to insufficient light, so the area occupied by a bright scene region in CF differs from the area occupied by the corresponding region in BF; the contours of corresponding regions are therefore less similar, and $s_k$ is large. Meanwhile, because the color and black-and-white cameras sit at different positions and night noise is heavy, the positions of corresponding regions in the two images are inconsistent; the more noise there is in CF, the larger the positional deviation of corresponding regions, and the larger $d_k$. If the areas of CF above the optimal segmentation threshold are darker, the night-scene light captured by CF is more insufficient, the acquired image quality is worse and more blurred, and $g_k - T$ is small; the foreground blur coefficient $q_k$ of $SP_k$ is then large. Conversely, when the acquired image quality is better, $q_k$ is small.
Similarly, the foreground blur coefficients of all foreground superpixel blocks are obtained and normalized, and the mean of all normalized foreground blur coefficients is denoted $\bar{q}$.
The noise salient coefficient $Q$ of the night scene image is obtained as:

$$Q = \frac{cvc \times \bar{q}}{ppc}$$

where $Q$ is the noise salient coefficient of the night scene image; $ppc$ is the foreground salient coefficient of the black-and-white image; $cvc$ is the corner blur coefficient of the night scene image; and $\bar{q}$ is the mean of all normalized foreground blur coefficients.
Formula logic: since the acquired image is a night scene image, if the foreground regions in BF are fewer and occupy a smaller area, the image has more dark areas and fewer bright areas, i.e., the foreground salient coefficient $ppc$ is small; meanwhile, the more blurred the scene edges and scene regions acquired in CF are, the larger $cvc$ and $\bar{q}$ become, and the night scene noise salient coefficient $Q$ obtained then is large. Otherwise, the acquired night scene noise salient coefficient $Q$ is small.
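Putting the pieces together, a sketch of computing $\bar{q}$ and $Q$ from per-block arrays (min-max normalization is an assumption; the text only states that the coefficients are normalized):

```python
import numpy as np

def noise_salient_coefficient(d, s, g, T, cvc, ppc):
    """Q = cvc * mean(normalised q) / ppc, with q_k = d_k * s_k / (g_k - T)."""
    q = (d * s) / (g - T)                                  # per-block blur coefficients
    q_norm = (q - q.min()) / (q.max() - q.min() + 1e-12)   # normalise to [0, 1]
    return cvc * q_norm.mean() / ppc
```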
Thus, the noise salient coefficient of the night scene image is acquired.
And step S003, obtaining an updated gain coefficient according to the night scene noise salient coefficient, and finishing enhancement of the display image of the tablet personal computer.
When the noise salient coefficient of the acquired night scene image is larger, the images acquired by the color and black-and-white cameras suffer more night-scene interference, so the image SF synthesized from them suffers more interference and the definition of the image is poorer.
This embodiment selects the gray world algorithm to enhance the image SF, and improves it. When SF is enhanced by the conventional gray world algorithm, the acquired scene is a dark night scene containing largely uniform colors, so gain coefficients derived from the values of the three RGB channels of the pixels in SF give a poor enhancement effect.
The updated gain coefficient is obtained as:

$$K' = Q \times K$$

where $K'$ is the updated gain coefficient; $K$ is the original gain coefficient of the gray world algorithm; and $Q$ is the night scene noise salient coefficient.
When the acquired night scene noise salient coefficient is larger, the image SF is more affected by night noise, and a larger gain coefficient is then assigned, improving the enhancement of the definition, brightness and other aspects of the image SF.
The image displayed on the tablet personal computer is enhanced with the updated gain coefficient combined with the gray world algorithm; the gray world algorithm is a known technology and is not described in detail here.
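A minimal sketch of the gray world enhancement with the updated gain $K' = Q \times K$; the per-channel gray-world gains follow the classic formulation, which the text cites as known art:

```python
import numpy as np

def gray_world_enhance(img, Q):
    """Gray-world white balance with each channel gain scaled by the noise
    salient coefficient Q, i.e. the updated gain K' = Q * K of the method."""
    f = img.astype(np.float64)
    mean_rgb = f.reshape(-1, 3).mean(axis=0)      # per-channel means
    gains = mean_rgb.mean() / (mean_rgb + 1e-12)  # classic gray-world gains K
    out = f * (Q * gains)                         # apply updated gains K' = Q * K
    return np.clip(out, 0, 255).astype(np.uint8)

# enhanced_sf = gray_world_enhance(sf, Q)  # sf: the composite image SF
```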
Thus, the image display enhancement of the tablet personal computer with the double cameras is completed. A specific step diagram of the above method is shown in fig. 2.
In summary, according to the embodiment of the invention, by respectively acquiring the color camera image and the black-and-white camera image, analyzing the characteristics of the color image and the black-and-white image in the night scene by combining the characteristics of the night scene, taking the influence of noise in the night scene into consideration, acquiring the night scene noise salient coefficient, updating the gain coefficient according to the night scene noise salient coefficient, avoiding the problem of poor enhancement effect on the night scene image caused by considering only the pixel value of the image and not the influence of noise in the night scene, and improving the definition of the night scene image;
In this embodiment, the foreground salient coefficient of the black-and-white image is obtained from each foreground region in the black-and-white image after collecting the color and black-and-white images of the night scene; each matching corner combination is obtained from the distances between the corners in the color and black-and-white images; the corner blur coefficient of the night scene image is obtained from the differences between the LBP values of the corner neighborhoods in each matching corner combination and the edge-line differences between the color and black-and-white images; the foreground blur coefficient of each foreground superpixel block is obtained from the superpixel blocks in the color image and the shape context descriptors of the foreground regions in the black-and-white image; the gain coefficient of the gray world algorithm is adjusted by combining the foreground salient coefficient and the corner blur coefficient to obtain an updated gain coefficient; and the night scene image is enhanced with the updated gain coefficient and the gray world algorithm, so that the method has a strong image enhancement effect.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and the same or similar parts of each embodiment are referred to each other, and each embodiment mainly describes differences from other embodiments.
The above embodiments are only for illustrating the technical solutions of the present application, not for limiting them; modifications to the technical solutions described in the foregoing embodiments, or equivalent replacements of some of their technical features, do not depart in essence from the scope of the technical solutions of the embodiments of the present application, and are all included in the protection scope of the present application.

Claims (5)

1. The dual-camera combined tablet personal computer enhanced display method is characterized by comprising the following steps of:
Collecting RGB images and binary images of night scene images, and respectively marking the RGB images and the binary images as color images and black-and-white images;
Acquiring a composite image of a color image and a black-and-white image; acquiring each foreground region in a black-and-white image; obtaining foreground salient coefficients of the black-and-white image according to the number of pixel points of all foreground areas; respectively acquiring each corner in the color image and the black-and-white image through Harris corner detection; obtaining matching angular points of each angular point in the black-and-white image in the color image according to the angular point coordinates in the color image and the black-and-white image; taking a set formed by each corner point in the black-and-white image and the corresponding matching corner point as each matching corner point combination; obtaining the Hamming distance of each matched corner combination according to the corner neighborhood in each matched corner combination; obtaining the corner fuzzy coefficient of the night scene image according to the Hamming distance of each matched corner combination and the number of edge lines in the color and black-and-white images; obtaining each foreground super-pixel block of the color image according to the super-pixel segmentation algorithm and the gray value of each pixel point in the gray image of the color image; obtaining a foreground fuzzy coefficient of each foreground super-pixel block according to the shape context descriptor of each foreground super-pixel block and each foreground region; obtaining a noise salient coefficient of the night scene image according to the foreground salient coefficient, the corner fuzzy coefficient and the foreground fuzzy coefficient;
Obtaining an updated gain coefficient of the gray world algorithm according to the noise salient coefficient; enhancing the synthetic image by updating the gain coefficient and combining a gray world algorithm;
the foreground salient coefficients of the black-and-white image are obtained according to the pixel point numbers of all foreground areas, and the method specifically comprises the following steps:
acquiring the total number of pixel points in all foreground areas; taking the product of the number of foreground areas and the total number of pixel points as a foreground salient coefficient of a black-and-white image;
the method for obtaining the corner fuzzy coefficient of the night scene image according to the Hamming distance of each matched corner combination and the quantity of edge lines in the color and black-and-white images specifically comprises the following steps:
Calculating the average value of the Hamming distances of all the matched corner combinations; calculating the absolute value of the difference value of the quantity of edge lines between the gray level image and the black-and-white image of the color image; calculating the absolute value of the angular point quantity difference between the color image and the black-and-white image; calculating the product of the absolute value of the edge line quantity difference and the absolute value of the corner quantity difference to be recorded as a first product; calculating the ratio of the average value to the first product and recording the ratio as a first ratio; taking the first ratio as a corner fuzzy coefficient of the night scene image;
The method for obtaining the foreground fuzzy coefficient of each foreground super pixel block according to the shape context descriptors of each foreground super pixel block and the foreground region specifically comprises the following steps:
Acquiring mass centers of all foreground super-pixel blocks and all foreground areas; acquiring Euclidean distance between each foreground super-pixel block and the centroid coordinates of each foreground region; taking a foreground region corresponding to the minimum Euclidean distance value of each foreground super-pixel block as the nearest foreground region of each foreground super-pixel block; taking the gray value average value of all pixel points in each foreground super-pixel block as the gray value of each foreground super-pixel block; calculating Euclidean distance between each foreground super-pixel block and a shape context descriptor of the corresponding nearest foreground region;
Calculating the product of Euclidean distance between each foreground super-pixel block and the centroid coordinates of the corresponding nearest foreground region and Euclidean distance between the shape context descriptors to be recorded as a second product; calculating the difference value between the gray value of each foreground super pixel block and a preset threshold value; calculating the ratio of the second product to the difference value to be a second ratio; taking the second ratio as a foreground fuzzy coefficient of each foreground super pixel block;
The noise salient coefficient of the night scene image is obtained according to the foreground salient coefficient, the corner fuzzy coefficient and the foreground fuzzy coefficient, and the method specifically comprises the following steps:
calculating the average value of the foreground fuzzy coefficients of all foreground super pixel blocks and recording the average value as a first average value; calculating the product of the first mean value and the corner fuzzy coefficient and recording the product as a third product; calculating the ratio of the third product to the foreground salient coefficient and recording the ratio as a third ratio; taking the third ratio as a noise salient coefficient of the night scene image;
The method for obtaining the updated gain coefficient of the gray world algorithm according to the noise salient coefficient specifically comprises the following steps:
And taking the product of the noise salient coefficient and the original gain coefficient of the gray world algorithm as the updated gain coefficient of the gray world algorithm.
2. The method for enhancing display of a tablet personal computer combined with two cameras according to claim 1, wherein the steps of obtaining each foreground area in a black-and-white image are as follows: and acquiring connected domains in the black-and-white image, and taking each connected domain with the pixel value larger than 0 as each foreground region.
3. The method for enhancing display of a tablet personal computer combined with two cameras according to claim 1, wherein the method for obtaining the matching corner points of each corner point in the black-and-white image in the color image according to the coordinates of the corner points in the color and black-and-white images is specifically as follows:
and calculating Euclidean distance between the jth corner in the black-and-white image and each corner coordinate in the color image, and taking the corner in the color image corresponding to the minimum value in all the Euclidean distances as a matching corner of the jth corner in the black-and-white image.
4. The method for enhancing display of a tablet personal computer combined with two cameras according to claim 1, wherein the hamming distance of each matched corner combination is obtained according to the corner neighborhood in each matched corner combination, specifically:
Acquiring a binary image of the gray level image of the color image through the OTSU algorithm, and recording it as the binary image of the color image; acquiring the LBP codes of each corner neighborhood in the black-and-white image and the color image respectively by adopting the local binary pattern; calculating the Hamming distance between the LBP codes of each corner in the black-and-white image and its matching corner in the color image; and taking this Hamming distance as the Hamming distance of each matching corner combination.
5. The method for enhancing display of a tablet personal computer combined with two cameras according to claim 1, wherein the obtaining each foreground super pixel block of the color image according to the super pixel segmentation algorithm and each pixel gray value in the color image gray scale map specifically comprises:
Acquiring each super pixel block of the color image through a super pixel segmentation algorithm; and in the gray level diagram of the color image, acquiring each super-pixel block with the gray value average value of all pixel points in the super-pixel block larger than a preset threshold value as each foreground super-pixel block of the color image.




Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant