CN111223058B - Image enhancement method - Google Patents

Image enhancement method Download PDF

Info

Publication number
CN111223058B
CN111223058B CN201911377762.2A CN201911377762A CN111223058B CN 111223058 B CN111223058 B CN 111223058B CN 201911377762 A CN201911377762 A CN 201911377762A CN 111223058 B CN111223058 B CN 111223058B
Authority
CN
China
Prior art keywords
feature
image
convolution
image block
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911377762.2A
Other languages
Chinese (zh)
Other versions
CN111223058A (en
Inventor
周晓亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Xinmai Microelectronics Co ltd
Original Assignee
Hangzhou Xiongmai Integrated Circuit Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Xiongmai Integrated Circuit Technology Co Ltd filed Critical Hangzhou Xiongmai Integrated Circuit Technology Co Ltd
Priority to CN201911377762.2A priority Critical patent/CN111223058B/en
Publication of CN111223058A publication Critical patent/CN111223058A/en
Application granted granted Critical
Publication of CN111223058B publication Critical patent/CN111223058B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an image enhancement method, which relates to the field of image processing and comprises the following steps: collecting a test scene image, preprocessing the test scene image, and establishing a reference database; collecting an image to be enhanced, and enhancing the image to be enhanced based on data in a reference database; after the enhancement process, the data generated during the process is updated to the reference database. The method provided by the invention improves the reference data quantity, does not need to compare in a plurality of blocks, and reduces the complexity of matching.

Description

Image enhancement method
[ field of technology ]
The invention relates to the field of image processing, in particular to an image enhancement method.
[ background Art ]
In the prior art, an enhancement method for an image intercepted in a video only carries out auxiliary enhancement according to multi-frame image data before and after the current video, and the images are segmented in real time and compared with the similarity of the blocks. The processing method has the following defects: less referenceable data; the similarity matching between blocks is slow, and the blocks need to be compared repeatedly, so that the processing is complex.
[ invention ]
In order to solve the problems, the invention provides an image enhancement method, which improves the reference data quantity, does not need to compare a plurality of blocks and reduces the complexity of matching.
In order to achieve the above purpose, the invention adopts the following technical scheme:
an image enhancement method comprising the steps of:
collecting a test scene image, preprocessing the test scene image, and establishing a reference database;
collecting an image to be enhanced, and enhancing the image to be enhanced based on data in a reference database;
after the enhancement process, the data generated during the process is updated to the reference database.
Optionally, preprocessing the test scene image specifically includes:
calculating a first size characteristic of the test scene image, wherein the first size characteristic is the size of an image block after the test scene image is divided into a plurality of image blocks;
calculating a first mean square feature and a first convolution feature of each image block, the first mean square feature comprising a first luminance mean feature, a first luminance variance feature, a first chrominance mean feature, and a first chrominance variance feature, the first convolution feature comprising: a first horizontal convolution feature, a first vertical convolution feature, a first 45 degree convolution feature, and a first 135 degree convolution feature;
dividing the luminance mean feature, the chrominance mean feature, the luminance variance feature, the chrominance variance feature, the horizontal convolution feature, the vertical convolution feature, the 45-degree convolution feature and the 135-degree convolution feature of the test scene image to obtain feature segments of each feature:
the luminance average feature is segmented intoThe luminance variance feature is segmented into->The chrominance mean value is characterized by being segmented into->The chrominance variance feature is segmented into->The horizontal convolution feature, the vertical convolution feature, the 45 degree convolution feature and the 135 degree convolution feature are all +.>
Wherein, len 1 Step length len of each characteristic segment after dividing brightness average value 2 Step length, len, of each feature segment after dividing the brightness variance 3 Step length of each characteristic segment after dividing chromaticity mean value, len 4 Step length, max, of each feature segment after dividing the chrominance variance 1 Max for the luminance mean maximum value 2 Max for the maximum value of the luminance variance 3 Maximum value of chromaticity mean, max 4 Maximum of chromatic varianceValue, min 1 Minimum value of brightness mean value, min 2 Min, which is the minimum of the luminance variance 3 Minimum value of chromaticity mean, min 4 The method comprises the steps that (1) the minimum value of chromatic variance, max is the maximum value of convolution characteristics, min is the minimum value of convolution characteristics, N is the effective value of test scene image data, and N is the number of segments for segmentation;
after division is completed, calculating a mean value of the feature segments of each feature, wherein the mean value is the feature grade intensity of the feature segments of each feature of the test scene image.
Optionally, the reference database is an HBase database, and establishing the reference database specifically includes:
setting a primary key to form a primary key row, wherein each image block is provided with a primary key for determining that each feature of each image block is positioned in a corresponding feature segment, and the primary key comprises a first size feature, a first mean square feature, a first horizontal convolution feature, a first vertical convolution feature, a first 45-degree convolution feature, a first 135-degree convolution feature and feature data ID, wherein the data feature ID is the number of the image blocks and is used for positioning each feature value in the feature segment;
all the features of each image block are sequentially listed in the corresponding primary key from top to bottom and from left to right and then serve as image content columns;
the feature level intensities of the feature segments are listed as auxiliary columns after the image content columns;
the number of image segments in each feature segment is listed in the auxiliary column and then used as the positioning column.
Optionally, when the test scene image is collected, the videos of different conventional test scenes under different gain environments are collected, and the video of each scene under each gain environment takes 1 to 10 frames of images.
Optionally, collecting the image to be enhanced, and enhancing the image to be enhanced based on the data in the reference database specifically includes:
calculating a second size characteristic of the image to be enhanced, wherein the second size characteristic is the size of the image block after dividing the image to be enhanced into a plurality of image blocks;
calculating a second mean square feature and a second convolution feature for each image block, the second mean square feature comprising a second luminance mean feature, a second luminance variance feature, a second chrominance mean feature, and a second chrominance variance feature, the second convolution feature comprising: the second horizontal convolution feature, the second vertical convolution feature, the second 45-degree convolution feature and the second 135-degree convolution feature take the feature value of the calculated second mean square feature as the image block feature grade intensity;
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in the HBase database, if so, taking the characteristic value contained in the main key in the HBase database corresponding to the image block and two characteristic values adjacent to the characteristic value contained in the main key, and carrying out intra-group enhancement.
Alternatively, intra-group enhancement is performed by spark parallel computation.
Optionally, performing intra-group enhancement specifically includes:
comparing the feature level intensity of the feature segments of each feature of the test scene image in the HBase database with the feature level intensity of each feature of each image block to obtain the differential absolute value of the feature level intensity and the feature level intensity of the image block, setting a first weight according to the differential absolute value, wherein the calculation formula of the first weight is as follows:
wherein thrl is the upper threshold of the first weight, thrh is the lower threshold of the first weight, diff is the absolute difference value of the characteristic grade intensity and the characteristic grade intensity of the image block, and wi is the first weight of the ith characteristic section in the HBase database;
and determining the proportion of the non-reinforced features and the reinforced features in the image block according to the first weight of each feature, and reinforcing the reinforced features, specifically:
blkres2 i =blkres1 i *(1-w i )+blkin*w i
wherein blkres1i is the full enhancement result of the ith feature segment in the database; blkin is an image block of an image to be enhanced; blkres2i is the output enhancement result of the ith feature segment in the database;
calculating a second weight according to the first weight, and fusing the non-reinforced feature and the reinforced feature according to the second weight, wherein the method specifically comprises the following steps:
w' i =1-w i
wherein w' i For a second weight corresponding to the first weight, blkout is the output result of fusing the non-enhanced feature and the enhanced feature,
and obtaining the enhanced image after fusion.
Optionally, judging whether the second mean square feature and the second convolution feature of each image block are in the HBase database, if not, judging whether the feature values adjacent to the second mean square feature and the second convolution feature of each image block are in the corresponding feature segments, if so, adopting the feature values adjacent to the second mean square feature and the second convolution feature of each image block to enhance, and if not, not enhancing.
Optionally, updating the data generated in the processing procedure to the reference database specifically includes:
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in an HBase database or not;
if the second mean square feature and the second convolution feature of the image block are in the HBase database, setting a new primary key according to the second mean square feature and the second convolution feature of the image block, and updating the new primary key to the primary key column;
updating all the features in the image block to an image content column;
the feature level of the feature segment where the features of the image block are located is calculated again, the feature level is strong and updated to the auxiliary column, and the calculation formula is as follows:
wherein flevel is the feature level of the recalculation and is strong, flevel0 is the feature level of the feature segment where the feature of the image block before being not updated is strong, flevelc is the feature level intensity of the image block, and M is the number value contained in the primary key in the HBase database corresponding to the image block before being not updated;
the new number value is added by 1 to the number value contained in the main key in the HBase database corresponding to the image block, and the new number value is updated to the positioning column.
Optionally, if the second mean square feature and the second convolution feature of the image block are not in the HBase database, setting a new primary key according to the second mean square feature and the second convolution feature of the image block, and updating the new primary key to the primary key column;
updating all the features in the image block to an image content column;
updating the image block characteristic grade intensity to an auxiliary column;
the number of image blocks is noted as 1.
The invention has the following beneficial effects:
1. according to the method provided by the application, through establishing the HBase database for storage, designing the corresponding primary key and the corresponding storage sequence, and establishing the corresponding updating mechanism, the reference data quantity of the enhancement algorithm is improved, and the data analysis and enhancement are facilitated;
2. according to the multiple feature classification of the image, the matching complexity of the blocks is reduced, multiple blocks are not needed to be compared, all corresponding matching blocks can be obtained by calculating the feature intensity, the self-adaptive enhancement control is realized according to the feature intensity level, the matching complexity is reduced, and the matching efficiency is improved;
3. the HBase storage is utilized to improve the read-write performance of the database, the block extraction processing and the corresponding updating are convenient, and the design of the main key improves the block index and the updating; and the spark parallel computing model is utilized to improve the processing efficiency of multiple parameters and groups.
These features and advantages of the present invention will be disclosed in more detail in the following detailed description and the accompanying drawings. The best mode or means of the present invention will be described in detail with reference to the accompanying drawings, but is not limited to the technical scheme of the present invention. In addition, these features, elements, and components are shown in plural in each of the following and drawings, and are labeled with different symbols or numerals for convenience of description, but each denote a component of the same or similar construction or function.
[ description of the drawings ]
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is an overall flow chart of an embodiment of the present invention;
FIG. 2 is a flow chart of database creation in an embodiment of the present invention;
fig. 3 is a flowchart of image enhancement in an embodiment of the present invention.
[ detailed description ] of the invention
The technical solutions of the embodiments of the present invention will be explained and illustrated below with reference to the drawings of the embodiments of the present invention, but the following embodiments are only preferred embodiments of the present invention, and not all embodiments. Based on the examples in the implementation manner, other examples obtained by a person skilled in the art without making creative efforts fall within the protection scope of the present invention.
Reference in the specification to "one embodiment" or "an example" means that a particular feature, structure, or characteristic described in connection with the embodiment itself can be included in at least one embodiment of the present patent disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
Examples:
as shown in fig. 1 to 3, the present embodiment provides an image enhancement method, including the steps of:
and acquiring images of the test scene, in particular, acquiring videos of different conventional test scenes under different gain environments. In this embodiment, the video of the test scene such as office, living room, darkroom, parking lot, corridor, outdoor is selected by taking gain factors such as 1 time, 4 times, 16 times, 32 times, 64 times, 128 times, etc., in this embodiment, in consideration of scene effectiveness and data calculation amount, 18 groups of test scene image videos including 6 scenes and 3 typical gain factors are selected, each test scene image video takes 1 to 10 frames of images, and in this embodiment 1 frame or 2 frames of test scene image videos are selected.
Preprocessing the collected test scene images, specifically comprising the following steps:
calculating a first size characteristic of the test scene image, wherein the first size characteristic is the size of an image block after the test scene image is divided into a plurality of image blocks, and in the embodiment, the test scene image is divided according to 5*5 and 7*7;
calculating a first mean square feature and a first convolution feature of each image block, the first mean square feature comprising a first luminance mean feature, a first luminance variance feature, a first chrominance mean feature and a first chrominance variance feature, the first convolution feature comprising: a first horizontal convolution feature, a first vertical convolution feature, a first 45 degree convolution feature, and a first 135 degree convolution feature, namely a horizontal direction convolution feature, a vertical direction convolution feature, and two diagonal direction convolution features;
dividing the luminance mean feature, the chrominance mean feature, the luminance variance feature, the chrominance variance feature, the horizontal convolution feature, the vertical convolution feature, the 45-degree convolution feature and the 135-degree convolution feature of the test scene image to obtain feature segments of each feature:
the luminance average feature is segmented intoThe luminance variance feature is segmented into->The chrominance mean value is characterized by being segmented into->The chrominance variance feature is segmented into->The horizontal convolution feature, the vertical convolution feature, the 45 degree convolution feature and the 135 degree convolution feature are all +.>
Wherein, len 1 Step length len of each characteristic segment after dividing brightness average value 2 Step length, len, of each feature segment after dividing the brightness variance 3 Step length of each characteristic segment after dividing chromaticity mean value, len 4 Step length, max, of each feature segment after dividing the chrominance variance 1 Max for the luminance mean maximum value 2 Max for the maximum value of the luminance variance 3 Maximum value of chromaticity mean, max 4 Maximum value of chromatic variance, min 1 Minimum value of brightness mean value, min 2 Min, which is the minimum of the luminance variance 3 Minimum value of chromaticity mean, min 4 The minimum value of the chromatic variance is max, the maximum value of the convolution characteristic is min, the minimum value of the convolution characteristic is min, N is the effective value of the image data of the test scene, and N is the number of segments for segmentation. In this embodiment, the test scene image is an 8-bit wide image, so that the effective value N of the test scene image data is in the range of 0-225, i.e. 2 8 -1 = 255; the segmentation number n of the brightness average characteristic of the test scene image is 16; taking the segmentation number n of the chrominance mean value characteristic to 16; taking the segmentation number n of the brightness variance characteristic as 16; generally, the requirement of chromaticity detail is lower than that of luminance detail, so that the chromaticity variance feature, the number of segments n of the horizontal convolution feature, the number of segments n of the vertical convolution feature, the number of segments n of the 45-degree convolution feature, the number of segments n of the 16-degree convolution feature, and the number of segments n of the 135-degree convolution feature are 16.
After division is completed, calculating a mean value of the feature segments of each feature, namely the distribution center of the corresponding feature in each feature segment, wherein the mean value is the feature grade strength of the feature segments of each feature of the test scene image.
After pretreatment is completed, a reference database is established; the reference database is an HBase database, and the establishment of the reference database specifically comprises the following steps:
and setting a primary key to form a primary key row by considering the HBase storage characteristic and the index characteristic, wherein each image block is provided with a primary key for determining that each characteristic of each image block is positioned in a corresponding characteristic segment, and the primary key comprises a first size characteristic, a first mean square characteristic, a first horizontal convolution characteristic, a first vertical convolution characteristic, a first 45-degree convolution characteristic, a first 135-degree convolution characteristic and characteristic data ID, wherein the data ID is the number of the image blocks and is used for positioning each characteristic value in the characteristic segment. In this embodiment, the main key is specifically set as: 2bit first size feature+14 bit first mean square feature (4 bit first luminance mean feature+4 bit first luminance variance feature+4 bit first chrominance mean feature+2 bit first chrominance variance feature) +16bit first convolution feature (4 bit first horizontal convolution feature+4 bit first vertical convolution feature+4 bit first 45 degree convolution feature+4 bit first 135 degree convolution feature) +32bit feature data ID, as shown in the following table:
in other embodiments, the size of each feature in the primary key may be set according to actual requirements, for example, if the requirements for the luminance feature are higher, the size of the luminance mean feature and the luminance variance feature in the primary key may be correspondingly increased, and if the requirements for the chrominance feature are higher, the size of the chrominance mean feature and the chrominance variance feature in the primary key may be correspondingly increased.
All the features of each image block are sequentially listed in the corresponding primary key from top to bottom and from left to right and then serve as image content columns;
the feature level intensities of the feature segments are listed as auxiliary columns after the image content columns;
the number of image segments in each feature segment is listed in the auxiliary column and then used as the positioning column.
So far, the preprocessing of the reference data and the establishment of the HBase database have been completed.
When the preprocessing of the test scene image is finished and the image is enhanced after the HBase database is established, firstly, the image to be enhanced is acquired, and the image to be enhanced is enhanced based on the data in the reference database, which specifically comprises the following steps:
calculating a second size characteristic of the image to be enhanced, wherein the second size characteristic is the size of the image block after the image to be enhanced is divided into a plurality of image blocks, and in the embodiment, the image to be enhanced is still divided according to 5*5 and 7*7;
calculating a second mean square feature and a second convolution feature for each image block, the second mean square feature comprising a second luminance mean feature, a second luminance variance feature, a second chrominance mean feature, and a second chrominance variance feature, the second convolution feature comprising: the second horizontal convolution feature, the second vertical convolution feature, the second 45-degree convolution feature and the second 135-degree convolution feature take the feature value of the calculated second mean square feature as the image block feature grade intensity;
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in an HBase database or not so as to determine reference data:
if the feature value contained in the primary key and two feature values adjacent to the feature value contained in the primary key in the HBase database corresponding to the image block are taken from the HBase database, for example, the second luminance average feature of the current image to be enhanced is located to 0010, the luminance average feature corresponding to the adjacent upper and lower luminance average features, namely, the feature data of 0001, 0011 and 0010, is taken, and the rest luminance variance, the chrominance average, the chrominance variance and the convolution feature in four directions are considered, so that the current image block can be selected to be subjected to intra-group enhancement for 3*8 total 24 groups of data, and the intra-group enhancement is performed by spark parallel calculation, specifically:
comparing the feature level intensity of the feature segments of each feature of the test scene image in the HBase database with the feature level intensity of each feature of each image block to obtain the differential absolute value of the feature level intensity and the feature level intensity of the image block, setting a first weight according to the differential absolute value, wherein the calculation formula of the first weight is as follows:
wherein thrl is the upper threshold of the first weight, thrh is the lower threshold of the first weight, diff is the absolute difference value of the characteristic grade intensity and the characteristic grade intensity of the image block, and wi is the first weight of the ith characteristic section in the HBase database;
and determining the proportion of the non-reinforced features and the reinforced features in the image block according to the first weight of each feature, and reinforcing the reinforced features, specifically:
blkres2 i =blkres1 i *(1-w i )+blkin*w i
wherein blkres1i is the full enhancement result of the ith feature segment in the database; blkin is an image block of an image to be enhanced; blkres2i is the output enhancement result of the ith feature segment in the database;
in general, the upper threshold thrl of the first weight is between N/(n×2) and n×3/(n×2), the lower threshold thrh of the first weight is between n×3/(n×2) and n×5/(n×2), N is the effective value of the test scene image data, N is the number of segments to be segmented, in this embodiment, in other embodiments, the effective value N of the test scene image data takes 225, the upper threshold and the lower threshold of the first weight of the segment number N take 16 may be set according to practical requirements, for example, the upper threshold and the lower threshold of the first weight of the luminance average feature and the luminance variance feature may be increased if the image luminance requirement is higher.
Calculating a second weight according to the first weight, and fusing the non-reinforced feature and the reinforced feature according to the second weight, wherein the method specifically comprises the following steps:
w' i =1-w i
wherein w' i For the second weight corresponding to the first weight, blkout is an output result of fusing the feature not subjected to enhancement and the feature subjected to enhancement, n 'is the number of data blocks subjected to intra-group enhancement, and in this embodiment, n' takes 3*8 =24.
The second weight can also be set according to actual requirements, and the second weight of the brightness mean feature and the brightness variance feature is correspondingly added by taking the example of higher requirement on the brightness of the image as the actual requirement, so that the brightness features have higher fusion strength due to the large occupation weight when fusion is carried out, and finally the enhanced image with higher requirement on the brightness is obtained.
If the characteristic values are not in the HBase database, judging whether the characteristic values adjacent to the second mean square characteristic and the second convolution characteristic of each image block are in the corresponding characteristic segments, if so, adopting the characteristic values adjacent to the second mean square characteristic and the second convolution characteristic of each image block to enhance, and if not, not enhancing.
After the enhancement processing, the data generated in the processing process is updated to a reference database, which specifically comprises the following steps:
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in an HBase database:
if the second mean square feature and the second convolution feature of the image block are in the HBase database, setting a new primary key according to the second mean square feature and the second convolution feature of the image block, and updating the new primary key to the primary key column;
updating all the features in the image block to an image content column;
the feature level of the feature segment where the features of the image block are located is calculated again, the feature level is strong and updated to the auxiliary column, and the calculation formula is as follows:
wherein flecel is the feature level of recalculation and flecel 0 is the feature level of the feature segment where the feature of the image block is located before the update is strong, flecel is the feature level intensity of the image block, M is the number value contained in the primary key in the HBase database corresponding to the image block before the update, if M is 232, this indicates that the data amount stored in the current HBase database has reached the maximum data amount of the current design (i.e., the feature data ID maximum representation value of the primary key), and no update is performed.
The new number value is added by 1 to the number value contained in the main key in the HBase database corresponding to the image block, and the new number value is updated to the positioning column.
If the second mean square feature and the second convolution feature of the image block are not in the HBase database, setting a new primary key according to the second mean square feature and the second convolution feature of the image block, and updating the new primary key to the primary key column;
updating all the features in the image block to an image content column;
updating the image block characteristic grade intensity to an auxiliary column;
the number of image blocks is noted as 1.
The above is only a specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and it should be understood by those skilled in the art that the present invention includes but is not limited to the accompanying drawings and the description of the above specific embodiment. Any modifications which do not depart from the functional and structural principles of the present invention are intended to be included within the scope of the appended claims.

Claims (7)

1. An image enhancement method, characterized in that the image enhancement method comprises the steps of:
collecting a test scene image, preprocessing the test scene image, and establishing a reference database, wherein the reference database is an HBase database;
collecting an image to be enhanced, and enhancing the image to be enhanced based on data in a reference database;
after the enhancement processing, the data generated in the processing process is updated to a reference database;
the establishing a reference database specifically comprises the following steps: setting a primary key to form a primary key row, wherein each image block is provided with a primary key for determining that each feature of each image block is positioned in a corresponding feature segment, and the primary key comprises a first size feature, a first mean square feature, a first horizontal convolution feature, a first vertical convolution feature, a first 45-degree convolution feature, a first 135-degree convolution feature and feature data ID, wherein the feature data ID is the number of the image blocks and is used for positioning each feature value in the feature segment;
all the features of each image block are sequentially listed in the corresponding primary key from top to bottom and from left to right and then serve as image content columns;
the feature level intensities of the feature segments are listed as auxiliary columns after the image content columns;
the number value of the image blocks in each characteristic segment is listed in an auxiliary column and then is used as a positioning column;
the collecting the image to be enhanced, and enhancing the image to be enhanced based on the data in the reference database specifically comprises the following steps:
calculating a second size characteristic of the image to be enhanced, wherein the second size characteristic is the size of the image block after dividing the image to be enhanced into a plurality of image blocks;
calculating a second mean square feature and a second convolution feature for each image block, the second mean square feature comprising a second luminance mean feature, a second luminance variance feature, a second chrominance mean feature, and a second chrominance variance feature, the second convolution feature comprising: the second horizontal convolution feature, the second vertical convolution feature, the second 45-degree convolution feature and the second 135-degree convolution feature take the feature value of the calculated second mean square feature as the image block feature grade intensity;
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in an HBase database, if so, taking a characteristic value contained in a main key in the HBase database corresponding to the image block and two characteristic values adjacent to the characteristic value contained in the main key, and carrying out intra-group enhancement;
the performing intra-group enhancement specifically includes:
comparing the feature level intensity of the feature segments of each feature of the test scene image in the HBase database with the feature level intensity of each feature of each image block to obtain the differential absolute value of the feature level intensity and the feature level intensity of the image block, setting a first weight according to the differential absolute value, wherein the calculation formula of the first weight is as follows:
wherein thrl is the upper threshold of the first weight, thrh is the lower threshold of the first weight, diff is the absolute difference value of the characteristic grade intensity and the characteristic grade intensity of the image block, and w i A first weight of the ith feature segment in the HBase database;
and determining the proportion of the non-reinforced features and the reinforced features in the image block according to the first weight of each feature, and reinforcing the reinforced features, specifically:
blkres2 i =blkres1 i *(1-wi)+blkin*w i
wherein blkres1 i The full enhancement result of the ith feature segment in the database; blkin is an image block of an image to be enhanced; blkres2 i The method comprises the steps of outputting an enhancement result for an ith feature segment in a database;
calculating a second weight according to the first weight, and fusing the non-reinforced feature and the reinforced feature according to the second weight, wherein the method specifically comprises the following steps:
w' i =1-w i
wherein w' i For a second weight corresponding to the first weight, blkout is the output of fusing the non-enhanced feature and the enhanced feature, n' is the number of data blocks that are enhanced within the group,
and obtaining the enhanced image after fusion.
2. The image enhancement method according to claim 1, wherein preprocessing the test scene image specifically comprises:
calculating a first size characteristic of the test scene image, wherein the first size characteristic is the size of an image block after the test scene image is divided into a plurality of image blocks;
calculating a first mean square feature and a first convolution feature of each image block, the first mean square feature comprising a first luminance mean feature, a first luminance variance feature, a first chrominance mean feature, and a first chrominance variance feature, the first convolution feature comprising: a first horizontal convolution feature, a first vertical convolution feature, a first 45 degree convolution feature, and a first 135 degree convolution feature;
dividing the luminance mean feature, the chrominance mean feature, the luminance variance feature, the chrominance variance feature, the horizontal convolution feature, the vertical convolution feature, the 45-degree convolution feature and the 135-degree convolution feature of the test scene image to obtain feature segments of each feature:
the luminance average feature is segmented intoThe luminance variance feature is segmented into->The chrominance mean value is characterized by being segmented into->The chrominance variance feature is segmented into->The horizontal convolution feature, the vertical convolution feature, the 45 degree convolution feature and the 135 degree convolution feature are all +.>
Wherein, len 1 Step length len of each characteristic segment after dividing brightness average value 2 Is a luminance squareStep length of each characteristic segment after difference division, len 3 Step length of each characteristic segment after dividing chromaticity mean value, len 4 Step length, max, of each feature segment after dividing the chrominance variance 1 Max for the luminance mean maximum value 2 Max for the maximum value of the luminance variance 3 Maximum value of chromaticity mean, max 4 Maximum value of chromatic variance, min 1 Minimum value of brightness mean value, min 2 Min, which is the minimum of the luminance variance 3 Minimum value of chromaticity mean, min 4 The method comprises the steps that (1) the minimum value of chromatic variance, max is the maximum value of convolution characteristics, min is the minimum value of convolution characteristics, N is the effective value of test scene image data, and N is the number of segments for segmentation;
after division is completed, calculating a mean value of the feature segments of each feature, wherein the mean value is the feature grade intensity of the feature segments of each feature of the test scene image.
3. The image enhancement method according to claim 1, wherein when acquiring the test scene images, videos of different conventional test scenes under different gain environments are acquired, and the video of each scene under each gain environment takes 1 to 10 frames of images.
4. The image enhancement method according to claim 1, wherein the intra-group enhancement is performed by spark parallel computation.
5. The image enhancement method according to claim 1, wherein it is determined whether the second mean square feature and the second convolution feature of each image block are in the HBase database, if not in the HBase database, it is determined whether feature values adjacent to the second mean square feature and the second convolution feature of each image block are in the corresponding feature segment, if yes, enhancement is performed using feature values adjacent to the second mean square feature and the second convolution feature of each image block, and if not, no enhancement is performed.
6. The image enhancement method according to claim 1, wherein updating the data generated during the processing to the reference database comprises:
judging whether the second mean square characteristic and the second convolution characteristic of each image block are in an HBase database or not;
if the second mean square feature and the second convolution feature of the image block are within the HBase database, then based on the second mean square feature of the image block
Setting a new primary key by the mean square feature and the second convolution feature, and updating the new primary key to the primary key row;
updating all the features in the image block to an image content column;
the feature level of the feature segment where the features of the image block are located is calculated again, the feature level is strong and updated to the auxiliary column, and the calculation formula is as follows:
wherein flevel is the feature level of the recalculation and is strong, flevel0 is the feature level of the feature segment where the feature of the image block before being not updated is strong, flevelc is the feature level intensity of the image block, and M is the number value contained in the primary key in the HBase database corresponding to the image block before being not updated;
the new number value is added by 1 to the number value contained in the main key in the HBase database corresponding to the image block, and the new number value is updated to the positioning column.
7. The image enhancement method according to claim 6, wherein if the second mean square feature and the second convolution feature of the image block are not in the HBase database, a new primary key is set according to the second mean square feature and the second convolution feature of the image block, and the new primary key is updated to the primary key column;
updating all the features in the image block to an image content column;
updating the image block characteristic grade intensity to an auxiliary column;
the number of image blocks is noted as 1.
CN201911377762.2A 2019-12-27 2019-12-27 Image enhancement method Active CN111223058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911377762.2A CN111223058B (en) 2019-12-27 2019-12-27 Image enhancement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911377762.2A CN111223058B (en) 2019-12-27 2019-12-27 Image enhancement method

Publications (2)

Publication Number Publication Date
CN111223058A CN111223058A (en) 2020-06-02
CN111223058B true CN111223058B (en) 2023-07-18

Family

ID=70830866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911377762.2A Active CN111223058B (en) 2019-12-27 2019-12-27 Image enhancement method

Country Status (1)

Country Link
CN (1) CN111223058B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393388A (en) * 2021-05-26 2021-09-14 联合汽车电子有限公司 Image enhancement method, device adopting same, storage medium and vehicle
CN115908210A (en) * 2021-08-05 2023-04-04 中兴通讯股份有限公司 Image processing method, electronic device, computer-readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016054904A1 (en) * 2014-10-11 2016-04-14 京东方科技集团股份有限公司 Image processing method, image processing device and display device
CN106548169A (en) * 2016-11-02 2017-03-29 重庆中科云丛科技有限公司 Fuzzy literal Enhancement Method and device based on deep neural network
WO2018006058A1 (en) * 2016-07-01 2018-01-04 Cubisme, Inc. System and method for forming a super-resolution biomarker map image
WO2018076614A1 (en) * 2016-10-31 2018-05-03 武汉斗鱼网络科技有限公司 Live video processing method, apparatus and device, and computer readable medium
CN108109115A (en) * 2017-12-07 2018-06-01 深圳大学 Enhancement Method, device, equipment and the storage medium of character image
CN108573284A (en) * 2018-04-18 2018-09-25 陕西师范大学 Deep learning facial image extending method based on orthogonal experiment analysis
CN109003249A (en) * 2017-06-07 2018-12-14 展讯通信(天津)有限公司 Enhance the method, apparatus and terminal of image detail
JP6542445B1 (en) * 2018-07-31 2019-07-10 株式会社 情報システムエンジニアリング Information providing system and information providing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL234396A0 (en) * 2014-08-31 2014-12-02 Brightway Vision Ltd Self-image augmentation
US10839487B2 (en) * 2015-09-17 2020-11-17 Michael Edwin Stewart Methods and apparatus for enhancing optical images and parametric databases
CN107133933B (en) * 2017-05-10 2020-04-28 广州海兆印丰信息科技有限公司 Mammary X-ray image enhancement method based on convolutional neural network
CN107330854B (en) * 2017-06-15 2019-09-17 武汉大学 A kind of image super-resolution Enhancement Method based on new type formwork
CN110084757B (en) * 2019-04-15 2023-03-07 南京信息工程大学 Infrared depth image enhancement method based on generation countermeasure network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016054904A1 (en) * 2014-10-11 2016-04-14 京东方科技集团股份有限公司 Image processing method, image processing device and display device
WO2018006058A1 (en) * 2016-07-01 2018-01-04 Cubisme, Inc. System and method for forming a super-resolution biomarker map image
WO2018076614A1 (en) * 2016-10-31 2018-05-03 武汉斗鱼网络科技有限公司 Live video processing method, apparatus and device, and computer readable medium
CN106548169A (en) * 2016-11-02 2017-03-29 重庆中科云丛科技有限公司 Fuzzy literal Enhancement Method and device based on deep neural network
CN109003249A (en) * 2017-06-07 2018-12-14 展讯通信(天津)有限公司 Enhance the method, apparatus and terminal of image detail
CN108109115A (en) * 2017-12-07 2018-06-01 深圳大学 Enhancement Method, device, equipment and the storage medium of character image
CN108573284A (en) * 2018-04-18 2018-09-25 陕西师范大学 Deep learning facial image extending method based on orthogonal experiment analysis
JP6542445B1 (en) * 2018-07-31 2019-07-10 株式会社 情報システムエンジニアリング Information providing system and information providing method

Also Published As

Publication number Publication date
CN111223058A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN111291826B (en) Pixel-by-pixel classification method of multisource remote sensing image based on correlation fusion network
CN103778900B (en) A kind of image processing method and system
CN111223058B (en) Image enhancement method
CN110795858B (en) Method and device for generating home decoration design drawing
CN111310718A (en) High-accuracy detection and comparison method for face-shielding image
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN111385640B (en) Video cover determining method, device, equipment and storage medium
CN115297288B (en) Monitoring data storage method for driving simulator
CN110706151B (en) Video-oriented non-uniform style migration method
CN111192213A (en) Image defogging adaptive parameter calculation method, image defogging method and system
CN110047077A (en) A kind of image processing method for ether mill common recognition mechanism
CN116757988B (en) Infrared and visible light image fusion method based on semantic enrichment and segmentation tasks
CN111738964A (en) Image data enhancement method based on modeling
CN109741276B (en) Infrared image base layer processing method and system based on filtering layered framework
CN108830834B (en) Automatic extraction method for video defect information of cable climbing robot
JP2000357226A (en) Method for binarizing light and shade image and recording medium where binarizing program is recorded
CN113298112B (en) Integrated data intelligent labeling method and system
CN114419018A (en) Image sampling method, system, device and medium
CN114419630A (en) Text recognition method based on neural network search in automatic machine learning
CN113784147A (en) Efficient video coding method and system based on convolutional neural network
CN112102186A (en) Real-time enhancement method for underwater video image
CN115439756A (en) Building extraction model training method, extraction method, device and storage medium
CN117252789B (en) Shadow reconstruction method and device for high-resolution remote sensing image and electronic equipment
CN115082319B (en) Super-resolution image construction method, computer equipment and storage medium
CN110008951B (en) Target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 311422 4th floor, building 9, Yinhu innovation center, 9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Xinmai Microelectronics Co.,Ltd.

Address before: 311400 4th floor, building 9, Yinhu innovation center, No.9 Fuxian Road, Yinhu street, Fuyang District, Hangzhou City, Zhejiang Province

Patentee before: Hangzhou xiongmai integrated circuit technology Co.,Ltd.

CP03 Change of name, title or address