CN115830431B - Neural network image preprocessing method based on light intensity analysis - Google Patents

Neural network image preprocessing method based on light intensity analysis

Info

Publication number
CN115830431B
CN115830431B (application number CN202310077815.9A)
Authority
CN
China
Prior art keywords
legend
light intensity
region
learning
selection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310077815.9A
Other languages
Chinese (zh)
Other versions
CN115830431A (en)
Inventor
童亚拉
李诚楷
吴傲楠
彭鑫宇
尚晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Technology filed Critical Hubei University of Technology
Priority to CN202310077815.9A priority Critical patent/CN115830431B/en
Publication of CN115830431A publication Critical patent/CN115830431A/en
Application granted granted Critical
Publication of CN115830431B publication Critical patent/CN115830431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a neural network image preprocessing method based on light intensity analysis. A plurality of learning images are acquired, and each learning image is divided into regions according to a region division standard, yielding a plurality of learning legends whose regions are set as a first region, a second region, …, and an Nth region. The light intensities of the first region, the second region, …, and the Nth region of any learning legend are identified and recorded as a first light intensity, a second light intensity, …, and an Nth light intensity respectively. According to a classification statistics standard, the learning legends are set as a first legend, a second legend, …, and an Nth legend. A target image is divided into regions according to the same region division standard to obtain a target legend whose regions are set as a first region, a second region, …, and an Nth region. A processing target is acquired; the target legend is compared with the first legend, the second legend, …, and the Nth legend respectively according to the processing target, and the target region is preprocessed.

Description

Neural network image preprocessing method based on light intensity analysis
Technical Field
The application relates to the technical field of image preprocessing, in particular to a neural network image preprocessing method based on light intensity analysis.
Background
In the field of image processing, especially in post-processing after image capture, an automatic method for handling light intensity is of great significance. Some current work uses neural networks to enhance the visual effect of an image and thereby obtain an image in a specific style. For example, neural-network-based image enhancement and style transfer methods effectively improve the quality of generated images and can produce images similar to a target data set.
However, current methods are not designed around light intensity. Here light intensity refers to the degree to which the surface of an object is illuminated, i.e., illuminance, which is an important basis for determining exposure: it is proportional to the luminous intensity of the light source, inversely proportional to the square of the distance, and, on an oblique surface, also depends on the cosine of the angle of incidence of the light. In photography, light intensity is expressed by EV (Exposure Value) readings of illuminance. A method for preprocessing images with respect to light intensity is therefore needed.
It should be noted that the above information disclosed in this background section is only for understanding the background of the inventive concept and thus may contain information that does not constitute prior art.
Disclosure of Invention
To address the defects of the prior art, the application discloses a neural network image preprocessing method based on light intensity analysis, which can preprocess an image with respect to light intensity.
In order to achieve the above purpose, the present application is implemented by the following technical solutions:
a neural network image preprocessing method based on light intensity analysis comprises the following steps:
obtaining a region division standard; acquiring a plurality of learning images, and carrying out region division on any one learning image according to the region division standard to obtain a plurality of learning legends whose regions are set as a first region, a second region, …, and an Nth region; identifying the light intensities of the first region, the second region, …, and the Nth region in any learning legend, and recording them as a first light intensity, a second light intensity, …, and an Nth light intensity respectively; obtaining a classification statistical standard; setting the plurality of learning legends as a first legend, a second legend, …, and an Nth legend according to the classification statistical standard; obtaining a target image, and carrying out region division on the target image according to the region division standard to obtain a target legend whose regions are set as a first region, a second region, …, and an Nth region; acquiring a processing target; and comparing the target legend with the first legend, the second legend, …, and the Nth legend respectively according to the processing target, and preprocessing the target region.
In a preferred embodiment, the region division standard establishes a coordinate system for any image according to a specified coordinate system, and sets the size of each region according to a specified pixel size.
In a preferred embodiment, the region division standard identifies the objects in any image, and determines the number of regions, the region positions, and the region sizes of the image according to the object content.
In a preferred technical solution, the classification statistical standard is that the first legend comprises the learning legends whose first light intensity lies within a specified range, the second legend comprises the learning legends whose second light intensity lies within a specified range, …, and the Nth legend comprises the learning legends whose Nth light intensity lies within a specified range.
In a preferred technical solution, the learning legends sharing the most common light-intensity value in the first legend are set as a first selection legend, those sharing the most common value in the second legend as a second selection legend, …, and those sharing the most common value in the Nth legend as an Nth selection legend.
In a preferred technical solution, a legend whose first light intensity is outside the specified range, a legend whose second light intensity is outside the specified range, …, and a legend whose Nth light intensity is outside the specified range are set as exclusion legends.
In a preferred embodiment, preprocessing the target region sets the light-intensity value of the first region of the target legend to that of the first selection legend, the light-intensity value of the second region to that of the second selection legend, …, and the light-intensity value of the Nth region to that of the Nth selection legend.
In a preferred embodiment, the processing target is one or more of the first region, the second region, …, and the Nth region of the target legend.
The application discloses a neural network image preprocessing method based on light intensity analysis, which has the following advantages:
The neural network completes a learning process over a large number of learning images, and the learning images or target images are flexibly divided into regions. Two different region division modes are provided, so that learning images or target images of different forms can be treated with finer granularity; region-level optimization for light intensity is completed, which makes it easier to identify and process the positions and sizes sharing the same light intensity.
The neural network's recognition of the learning images completes the classification of the different light-intensity values they contain. Several classification levels are used, realizing primary, secondary, and even tertiary classification of the legends, to reach different levels of data processing and to further refine the legends and selections offered for the target image.
By dividing, identifying, analyzing, counting, and processing light intensity, the learning process over a large number of learning images can automatically supplement, fill, and repair the target image. When images of the same class are encountered, the light-intensity attributes of the learning images are used to optimize and intelligently process the light-intensity attributes of the target image, reducing the difficulty of post-processing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a region division criteria according to an embodiment of the present application;
FIG. 2 is another region division criterion of an embodiment of the present application;
fig. 3 is a flow chart of an embodiment of the present application.
Description of the embodiments
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments.
All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The application provides a neural network image preprocessing method based on light intensity analysis. The steps included therein are described separately below.
The first step: and obtaining the regional division standard.
Specifically, in one embodiment shown in fig. 1 (here assumed to be a learning image), the region division standard establishes a coordinate system for the image, formed by X and Y axes aligned with the picture, and sets the size of each region according to a specified pixel size — for example, the 9 regions shown in fig. 1. It is readily appreciated that the number of regions may differ; 9 regions are used in this embodiment only for illustration.
In another embodiment shown in fig. 2 (here assumed to be a learning image), the region division standard identifies the objects in the image — for example, the background, a cup, water, 5 oranges, and a table — and determines the number of regions, the region positions, and the region sizes from the object content, for example the 9 regions shown in fig. 2, matching the positions and sizes of the background, cup, water, oranges, and table.
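For illustration, the coordinate-grid embodiment of the first step can be sketched in code. This is a minimal sketch, not part of the patent: the function name, the (x0, y0, x1, y1) box representation, and the 3×3 default are assumptions.

```python
def divide_into_regions(width, height, rows=3, cols=3):
    """Split a width x height image into rows x cols rectangular regions.

    Returns a list of (x0, y0, x1, y1) boxes, numbered as the first
    region, second region, ... in row-major order.
    """
    regions = []
    for r in range(rows):
        for c in range(cols):
            # Integer division keeps the boxes contiguous and covering
            # the full image even when the size is not evenly divisible.
            x0 = c * width // cols
            y0 = r * height // rows
            x1 = (c + 1) * width // cols
            y1 = (r + 1) * height // rows
            regions.append((x0, y0, x1, y1))
    return regions
```

With a 90×90 image this yields the nine 30×30 regions of the fig. 1 example.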
And a second step of: and acquiring a plurality of learning images, and carrying out region division on any one learning image according to a region division standard to obtain a plurality of learning legends of which the region division is set as a first region, a second region, … … and an N region.
In the above two embodiments, the image needs to be divided into regions and numbered, for example as a first region, a second region, …, a ninth region.
It is easy to understand that, since the method is based on neural-network learning, a large number of learning images must be acquired. To improve the learning ability for a specified type of target image, learning images of the corresponding type can be used: for example, if the target image is a still-life image as shown in figs. 1 and 2, the learning images are also chosen as corresponding still-life images.
And a third step of: the light intensities of the first region, the second region, … … and the N region in any one of the learning legends are identified and recorded as the first light intensity, the second light intensity, … … and the N light intensity respectively.
Obviously, in a learning image after region division, each region has its own light intensity, i.e., a local light intensity. Within the correct exposure range, the higher the light intensity, the brighter the photographed subject and the clearer surface details such as color and texture.
However, stronger light is not always better: during shooting, light that is too strong causes overexposure, and insufficient light causes underexposure. A correct exposure range is therefore required, avoiding both overexposure and underexposure.
In the above two embodiments, the light-intensity value of each region of the image must be identified and recorded: the light intensity of the region numbered first is recorded as the first light intensity together with its value, that of the second region as the second light intensity with its value, …, and that of the ninth region as the ninth light intensity with its value.
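The per-region light-intensity identification of the third step can likewise be sketched. Mean grayscale luminance is used here as a stand-in for the EV-based illuminance measurement described in the background; that substitution, and the names, are assumptions.

```python
def region_light_intensity(pixels, box):
    """Mean luminance of the pixels inside one region box.

    `pixels` is a 2D list of grayscale values (0-255); `box` is an
    (x0, y0, x1, y1) tuple as produced by the region division step.
    The mean over the box is recorded as that region's light intensity.
    """
    x0, y0, x1, y1 = box
    values = [pixels[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(values) / len(values)
```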
Fourth step: and obtaining a classification statistical standard.
Specifically, the classification statistical standard is that the first legend comprises the learning legends whose first light intensity lies within the specified range, the second legend comprises the learning legends whose second light intensity lies within the specified range, …, and the Nth legend comprises the learning legends whose Nth light intensity lies within the specified range.
Fifth step: and setting a plurality of learning legends as a first legend, a second legend, a … … legend and an Nth legend according to the classification statistical standard.
It is easy to understand that the learning legends meeting the classification statistical standard are classified first: the first legend contains the learning legends whose first light intensity meets the standard, regardless of the values of the other light intensities. Correspondingly, the second light intensity, …, and the ninth light intensity are classified and counted in the same way, yielding nine classified legends in this embodiment.
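The fourth and fifth steps — collecting, for each region index, the learning legends whose intensity in that region falls within a specified range — can be sketched as follows (the function name and the list-of-intensities data layout are assumptions):

```python
def classify_legends(legends, region_index, lo, hi):
    """Collect the learning legends belonging to one class.

    Each legend is a list of per-region intensity values. Only the
    indexed region's value is tested against [lo, hi]; the other
    regions' values are ignored, as the method specifies. With
    region_index 0 this builds the 'first legend' set, with 1 the
    'second legend' set, and so on.
    """
    return [leg for leg in legends if lo <= leg[region_index] <= hi]
```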
In order to perform a secondary classification on the legends after the primary classification, the learning legends sharing the most common light-intensity value in the first legend are set as the first selection legend, those sharing the most common value in the second legend as the second selection legend, …, and those sharing the most common value in the Nth legend as the Nth selection legend.
An example is given using the first legend. Suppose that, among the first light intensities of the first region, 50 intensity readings have value A, 30 have value B, 10 have value C, 9 have value D, and 5 have value E. The 50 learning legends with value A can then be classified separately as the first selection legend.
If a further classification is required to provide more options for the preprocessing process, the 30 learning legends whose intensity has value B may additionally be classified as a secondary first-selection legend.
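The secondary classification — picking the most common intensity value as the selection legend's value, with the runner-up available as a further option — can be sketched with a frequency count. The value example above (50, 30, 10, 9, and 5 readings) is reused; the names are assumptions.

```python
from collections import Counter

def selection_legends(intensities):
    """Given one class's per-legend intensity values for a region,
    return the most common value (the selection legend's value) and
    the runner-up (a secondary selection, if finer options are wanted).
    """
    counts = Counter(intensities).most_common()
    primary = counts[0][0]
    secondary = counts[1][0] if len(counts) > 1 else None
    return primary, secondary
```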
It should be noted that a legend whose first light intensity is outside the specified range, or whose second light intensity is outside the specified range, …, or whose ninth light intensity is outside the specified range, is set as an exclusion legend. This excludes overexposed or underexposed legends from the learning range and avoids misleading the neural network's learning.
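The exclusion rule can be sketched as a filter that drops any legend with an over- or under-exposed region before learning (a hypothetical helper; the per-region ranges are assumptions):

```python
def split_exclusions(legends, ranges):
    """Separate usable legends from exclusion legends.

    `ranges` gives one (lo, hi) pair per region. A legend is kept only
    if every region's intensity lies inside its range; otherwise it is
    an exclusion legend and is removed from the learning range.
    """
    kept, excluded = [], []
    for leg in legends:
        ok = all(lo <= v <= hi for v, (lo, hi) in zip(leg, ranges))
        (kept if ok else excluded).append(leg)
    return kept, excluded
```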
Sixth step: and acquiring a target image, and carrying out region division on the target image according to a region division standard to obtain a target legend with region division set as a first region, a second region, … … and an N region.
It is easily understood with reference to fig. 1 or fig. 2 (here assumed to be the target image). The division manner is the same as, and corresponds to, that of the learning images: if the learning images were divided by establishing a coordinate system, the target image is divided the same way; if they were divided by identifying objects, the target image is likewise divided by identifying objects.
Seventh step: and acquiring a processing target.
Specifically, the processing target is one or more of the first region, the second region, …, and the ninth region of the target legend. Taking fig. 2 (here assumed to be the target image) as an example, suppose local overexposure occurs in its third and fourth regions and light-intensity-based post-processing is required. The processing targets are then those two regions, the third and the fourth.
Eighth step: and comparing the target legend with the first legend, the second legend, the … … legend and the Nth legend respectively according to the processing targets, and preprocessing the target area.
Specifically, when the processing targets are the third and fourth regions, the first, second, fifth, …, and ninth regions are known to be normal and need no adjustment. The target legend is compared with the first legend, the second legend, the fifth legend, …, and the ninth legend respectively to obtain dynamic comparison results for the third and fourth regions; from these results, the third selection legend and the fourth selection legend are acquired, and the light-intensity values of the processing targets are replaced with the light-intensity values of the third and fourth selection legends.
In the above comparison process, either the selection legends (the first, second, fifth, …, and ninth selection legends) or the corresponding secondary selection legends may be chosen.
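The eighth step's preprocessing — overwriting only the processing-target regions of the target legend with the matching selection legends' intensity values, leaving normally exposed regions untouched — can be sketched as (names are assumptions):

```python
def preprocess_target(target, selections, processing_targets):
    """Replace the intensity of each region named in processing_targets
    with the corresponding selection legend's value.

    `target` and `selections` are parallel lists of per-region
    intensity values; `processing_targets` holds 0-based region
    indices. All other regions keep their original values.
    """
    result = list(target)  # copy; the original target legend is kept
    for i in processing_targets:
        result[i] = selections[i]
    return result
```

For the fig. 2 example, indices 2 and 3 (the third and fourth regions) would be passed as the processing targets.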
It is noted that relational terms are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented in a general purpose computing device, and they may be centralized on a single computing device, or distributed across a network of computing devices.
Alternatively, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device to be executed by the computing device, or they may be fabricated into individual integrated circuit modules, respectively, or a plurality of modules or steps in them may be fabricated into a single integrated circuit module.
The present invention is not limited to any specific combination of hardware and software.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (5)

1. The neural network image preprocessing method based on light intensity analysis is characterized by comprising the following steps of:
obtaining a region dividing standard;
obtaining a plurality of learning images, and carrying out region division on any one of the learning images according to the region division standard to obtain a plurality of learning legends whose regions are set as a first region, a second region, …, and an Nth region;
identifying the light intensities of the first region, the second region, …, and the Nth region in any one of the learning legends, and recording them as a first light intensity, a second light intensity, …, and an Nth light intensity respectively;
obtaining a classification statistical standard;
the classification statistical standard is that the first legend comprises the learning legends whose first light intensity lies within a specified range, the second legend comprises the learning legends whose second light intensity lies within a specified range, …, and the Nth legend comprises the learning legends whose Nth light intensity lies within a specified range;
setting the plurality of learning legends as a first legend, a second legend, …, and an Nth legend according to the classification statistical standard;
the learning legends meeting the classification statistical standard are classified first: the first legend comprises the learning legends whose first light intensity meets the standard, regardless of the values of the other light intensities; correspondingly, the second light intensity, …, and the ninth light intensity are classified and counted, thereby obtaining nine classified legends;
in order to perform a secondary classification on the legends after the primary classification, setting the learning legends sharing the most common light-intensity value in the first legend as a first selection legend, those sharing the most common value in the second legend as a second selection legend, …, and those sharing the most common value in the Nth legend as an Nth selection legend;
a legend whose first light intensity is outside the specified range, or whose second light intensity is outside the specified range, …, or whose ninth light intensity is outside the specified range, is set as an exclusion legend, which excludes overexposed or underexposed legends from the learning range and avoids misleading the learning of the neural network;
obtaining a target image, and carrying out region division on the target image according to the region division standard to obtain a target legend whose regions are set as a first region, a second region, …, and an Nth region;
acquiring a processing target;
comparing the target legend with the first legend, the second legend, …, and the Nth legend respectively according to the processing target, and preprocessing the target region;
when the processing targets are the third region and the fourth region, the first region, the second region, the fifth region, …, and the ninth region are known to be normal and need no adjustment; the target legend is then compared with the first legend, the second legend, the fifth legend, …, and the ninth legend respectively to obtain dynamic comparison results for the third region and the fourth region; in the obtained dynamic comparison results, a third selection legend and a fourth selection legend are acquired for the third region and the fourth region, and the light-intensity values of the processing targets are replaced with the light-intensity values of the third and fourth selection legends;
in the comparison process, either the selection legends (the first, second, fifth, …, and ninth selection legends) or the corresponding secondary selection legends may be chosen;
the flexible region division of the learning images or the target images is completed through the learning process of the neural network on a large number of learning images, and the classification of the light intensities of different values in the learning images is completed through the recognition of the learning images by the neural network.
2. The neural network image preprocessing method based on light intensity analysis according to claim 1, wherein the area division criterion is set to establish a coordinate system for any one image in accordance with a specified coordinate system, and the size of each area is set in accordance with a specified pixel size.
3. The neural network image preprocessing method based on light intensity analysis according to claim 1, wherein the region division criteria is set to identify an object in any one image, and the number of regions, the region positions, and the region sizes of the image are determined according to the object contents.
4. The neural network image preprocessing method based on light intensity analysis according to claim 1, wherein the preprocessing process for the target region sets the light-intensity value of the first region of the target legend to the light-intensity value of the first selection legend, the light-intensity value of the second region to that of the second selection legend, …, and the light-intensity value of the Nth region to that of the Nth selection legend.
5. The neural network image preprocessing method based on light intensity analysis according to claim 1, wherein the processing target is one or more of the first region, the second region, …, and the Nth region of the target legend.
CN202310077815.9A 2023-02-08 2023-02-08 Neural network image preprocessing method based on light intensity analysis Active CN115830431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310077815.9A CN115830431B (en) 2023-02-08 2023-02-08 Neural network image preprocessing method based on light intensity analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310077815.9A CN115830431B (en) 2023-02-08 2023-02-08 Neural network image preprocessing method based on light intensity analysis

Publications (2)

Publication Number Publication Date
CN115830431A CN115830431A (en) 2023-03-21
CN115830431B true CN115830431B (en) 2023-05-02

Family

ID=85520864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310077815.9A Active CN115830431B (en) 2023-02-08 2023-02-08 Neural network image preprocessing method based on light intensity analysis

Country Status (1)

Country Link
CN (1) CN115830431B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4144377B2 (en) * 2003-02-28 2008-09-03 ソニー株式会社 Image processing apparatus and method, recording medium, and program
CN101937563B (en) * 2009-07-03 2012-05-30 深圳泰山在线科技有限公司 Target detection method and equipment and image acquisition device thereof
US11293873B2 (en) * 2015-09-08 2022-04-05 Xerox Corporation Methods and devices for improved accuracy of test results
DE102017101102B3 (en) * 2017-01-20 2018-05-24 Carl Zeiss Industrielle Messtechnik Gmbh Method and coordinate measuring device for measuring optical properties of an optical filter
CN110164854B (en) * 2018-07-25 2021-01-22 友达光电股份有限公司 Lighting device
WO2020112092A1 (en) * 2018-11-27 2020-06-04 Hewlett-Packard Development Company, L.P. Control of light intensities based on use and decay
WO2020223881A1 (en) * 2019-05-06 2020-11-12 深圳市汇顶科技股份有限公司 Fingerprint detection method and apparatus, and electronic device
CN110414445B (en) * 2019-07-31 2022-03-25 联想(北京)有限公司 Light source adjusting method and device for face recognition and electronic equipment
CN111242024A (en) * 2020-01-11 2020-06-05 北京中科辅龙科技股份有限公司 Method and system for recognizing legends and characters in drawings based on machine learning
CN112883970A (en) * 2021-03-02 2021-06-01 湖南金烽信息科技有限公司 Digital identification method based on neural network model
CN114758249B (en) * 2022-06-14 2022-09-02 深圳市优威视讯科技股份有限公司 Target object monitoring method, device, equipment and medium based on field night environment
CN115331013B (en) * 2022-10-17 2023-02-24 杭州恒生聚源信息技术有限公司 Data extraction method and processing equipment for line graph

Also Published As

Publication number Publication date
CN115830431A (en) 2023-03-21

Similar Documents

Publication Publication Date Title
US10666873B2 (en) Exposure-related intensity transformation
CN108197546B (en) Illumination processing method and device in face recognition, computer equipment and storage medium
US8295606B2 (en) Device and method for detecting shadow in image
KR101640998B1 (en) Image processing apparatus and image processing method
US8902328B2 (en) Method of selecting a subset from an image set for generating high dynamic range image
CN109997351B (en) Method and apparatus for generating high dynamic range images
Várkonyi-Kóczy et al. Gradient-based synthesized multiple exposure time color HDR image
CN111695373B (en) Zebra stripes positioning method, system, medium and equipment
US9123141B2 (en) Ghost artifact detection and removal in HDR image processing using multi-level median threshold bitmaps
CN113034474A (en) Test method for wafer map of OLED display
TWI498830B (en) A method and system for license plate recognition under non-uniform illumination
CN115830431B (en) Neural network image preprocessing method based on light intensity analysis
CN109961422B (en) Determination of contrast values for digital images
US11631183B2 (en) Method and system for motion segmentation
US10958899B2 (en) Evaluation of dynamic ranges of imaging devices
CN112070771A (en) Adaptive threshold segmentation method and device based on HS channel and storage medium
CN110706168A (en) Image brightness adjusting method
Nam et al. Flash shadow detection and removal in stereo photography
CN113840134B (en) Camera tuning method and device
CN111866400B (en) Image processing method and device
CN111383237B (en) Image analysis method and device and terminal equipment
CN117611554A (en) Shadow detection method based on fusion of YUV color space and gradient characteristics
CN112598592A (en) Image shadow removing method and device, electronic equipment and storage medium
CN117252834A (en) Method, system, equipment and medium for countermeasure expansion of power distribution network inspection data
CN115829852A (en) Image processing method, electronic device, storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant