CN111225180B - Picture processing method and device


Info

Publication number
CN111225180B
Authority
CN
China
Prior art keywords: value, sub, area, region, evaluation
Prior art date
Legal status: Active
Application number
CN201811415942.0A
Other languages
Chinese (zh)
Other versions
CN111225180A (en)
Inventor
徐琼
张文萍
Current Assignee
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN201811415942.0A
Publication of CN111225180A
Application granted
Publication of CN111225180B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Abstract

The embodiment of the invention provides a picture processing method and a picture processing device, which relate to the field of monitoring processing. The method comprises the following steps: when the target gray difference value is larger than a first threshold value, determining whether the target area is a fog surface area or not according to a judgment sub-area and a reference sub-area, wherein the target gray difference value is the gray difference value between the target area and the reference area, the judgment sub-area belongs to the target area, the reference sub-area belongs to the reference area, and the judgment sub-area is adjacent to the reference sub-area; and when the target area is determined to be a fog surface area, adjusting the picture parameters of the target area according to the respective picture attributes of the judgment sub-area and the reference sub-area, so that the picture attributes of the judgment sub-area and the reference sub-area are matched. The picture processing method and the picture processing device provided by the embodiment of the invention can automatically adjust the picture parameters of the target area, so that the picture effect after splicing a plurality of picture areas is kept consistent.

Description

Picture processing method and device
Technical Field
The invention relates to the field of monitoring processing, in particular to a picture processing method and device.
Background
In the application of the multi-view monitoring camera, because the monitoring picture is formed by splicing the pictures of a plurality of monitoring camera lenses, after the infrared lamp is started at night, local fogging at two ends of the picture can be caused due to the installation environment of the multi-view monitoring camera or the rotation of the camera, so that the permeability at two ends of the picture is reduced, the image quality and the visibility of a monitored object are influenced, and the monitoring quality is reduced.
Disclosure of Invention
The invention aims to provide a picture processing method and a picture processing device, which can automatically adjust picture parameters of a target area and keep the consistency of picture effects after splicing a plurality of picture areas.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes: when the target gray difference value is larger than a first threshold value, determining whether a target area is a fog surface area or not according to a judgment sub-area and a reference sub-area, wherein the target gray difference value is the gray difference value between the target area and the reference area, the judgment sub-area belongs to the target area, the reference sub-area belongs to the reference area, and the judgment sub-area is adjacent to the reference sub-area; and when the target area is determined to be a fog surface area, adjusting the picture parameters of the target area according to the picture attributes of the judgment sub-area and the reference sub-area, so that the picture attributes of the judgment sub-area and the reference sub-area are matched.
In a second aspect, an embodiment of the present invention provides a picture processing apparatus, including: the first judgment module is used for judging whether a target gray difference value is larger than a first threshold value, wherein the target gray difference value is the gray difference value between a target area and a reference area; the second judging module is used for determining whether the target area is a fog face area or not according to a judging sub-area and a reference sub-area when the target gray difference value is larger than the first threshold, wherein the judging sub-area belongs to the target area, the reference sub-area belongs to the reference area, and the judging sub-area is adjacent to the reference sub-area; and the picture adjusting module is used for adjusting picture parameters of the target area according to the picture attributes of the judgment sub-area and the reference sub-area when the target area is determined to be the fog surface area, so that the picture attributes of the judgment sub-area and the reference sub-area are matched.
In a third aspect, an embodiment of the present invention provides a multi-view surveillance camera, which includes a memory configured to store one or more programs, and a processor. The one or more programs, when executed by the processor, implement the picture processing method described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the above-mentioned picture processing method.
Compared with the prior art, the picture processing method and device provided by the embodiment of the invention adjust the picture parameters of the target area according to the respective picture attributes of the judgment sub-area of the target area and the reference sub-area of the reference area when the gray difference value between the target area and the reference area is judged to be larger than the first threshold and the target area is determined to be a fog surface area; the picture parameters of the target area can thus be adjusted automatically according to the degree of fog sense of the target area relative to the reference area, so that the picture effects of the plurality of spliced picture areas remain consistent.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic application scenario diagram of a picture processing method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a multi-view monitoring camera according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart diagram of a picture processing method according to an embodiment of the present invention;
FIG. 4 is a schematic view of monitoring image coordinate region division;
FIG. 5 is a schematic flow chart of the substeps of S200 in FIG. 3;
FIG. 6 is a schematic flow chart of the substeps of S210 in FIG. 5;
FIG. 7 is a schematic flow chart of the substeps of S300 in FIG. 3;
FIG. 8 is a schematic flow chart of the substeps of S310 in FIG. 7;
FIG. 9 is a schematic flow chart of the substeps of S311 in FIG. 8;
FIG. 10 is a schematic flow chart of the substeps of S312 in FIG. 8;
fig. 11 is a schematic configuration diagram showing a picture processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram showing a second determination module of a picture processing apparatus according to an embodiment of the present invention;
fig. 13 is a schematic configuration diagram showing a gray-scale-average-value-variance-value calculating unit of a picture processing apparatus according to an embodiment of the present invention;
FIG. 14 is a schematic block diagram of a picture adjustment module of a picture processing apparatus according to an embodiment of the present invention;
fig. 15 is a schematic configuration diagram showing a fog sense evaluation value calculation unit of a screen processing apparatus provided by an embodiment of the present invention;
fig. 16 is a schematic configuration diagram showing a first evaluation calculation subunit of a screen processing apparatus according to an embodiment of the present invention;
fig. 17 is a schematic configuration diagram showing a second evaluation calculation subunit of a screen processing apparatus according to an embodiment of the present invention.
In the figure: 10-multi-view surveillance camera; 110-a memory; 120-a processor; 130-a memory controller; 140-peripheral interfaces; 150-a camera unit; 160-communication bus/signal line; 200-picture processing means; 210-a first judgment module; 220-a second judgment module; 221-a gray-scale squared error value calculation unit; 2211-average gray value calculating operator unit; 2212-difference calculation subunit; 222-a gray level squared error value judgment unit; 230-picture adjustment module; 231-fog sense evaluation value calculation unit; 2311-a first rating calculation subunit; 23111-grayscale difference calculation subunit; 23112-average gray difference calculation subunit; 23113-contrast rating value operator unit; 2312-a second evaluation calculation subunit; 23121-sharpness value calculation subunit; 23122-noise intensity value calculation subunit; 23123-a sharpness evaluation value calculation operator unit; 2313-a third evaluation calculation subunit; 2314-a fourth evaluation calculation subunit; 232-fog feeling evaluation and judgment unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 shows a schematic application scene diagram of a picture processing method according to an embodiment of the present invention, taking a four-view monitoring camera as an example, a monitoring picture of the four-view monitoring camera is composed of four regions, such as region 1, region 2, region 3, and region 4 in fig. 1, and the monitoring picture is formed by splicing pictures taken by 4 monitoring cameras respectively. After the infrared lamp is turned on at night, due to the installation environment of the four-eye monitoring camera or the rotation of the camera, local fogging at two ends of a picture may be caused, for example, fogging occurs in the area 1 and the area 4 in fig. 1, so that permeability at two ends of the monitoring picture is reduced, and monitoring quality is further reduced.
Aiming at the phenomenon that the two ends of the monitoring picture of the multi-view monitoring camera are locally fogged, the solution provided by the prior art is as follows: the brightness and permeability of the areas 1 and 4 are respectively kept consistent with those of the areas 2 and 3 by manually adjusting ISP (Image Signal Processing) parameters of the fogging areas in the monitoring screen, such as the ISP parameters of the areas 1 and 4 in fig. 1.
However, the above solution needs manual work, and the timeliness is poor. Based on this, an improvement method provided by the embodiment of the present invention is: when the gray difference value between the target area and the reference area is judged to be larger than the first threshold value and the target area is determined to be a fog area, picture parameters of the target area are adjusted according to picture attributes of the judgment sub-area of the target area and the reference sub-area of the reference area, so that the picture attributes of the judgment sub-area are matched with those of the reference sub-area, and the picture effect after splicing the plurality of picture areas is kept consistent.
Referring to fig. 2, fig. 2 is a schematic block diagram of a multi-view monitoring camera 10 according to an embodiment of the present invention. The multi-view monitoring camera 10 includes a memory 110, a memory controller 130, one or more processors (only one shown) 120, a peripheral interface 140, a camera unit 150, and the like. These components communicate with each other via one or more communication buses/signal lines 160.
The memory 110 can be used for storing software programs and modules, such as program instructions/modules corresponding to the image processing apparatus 200 provided in the embodiment of the present invention, and the processor 120 executes various functional applications and image processing, such as the image processing method provided in the embodiment of the present invention, by running the software programs and modules stored in the memory 110.
The Memory 110 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 120 may be an integrated circuit chip having signal processing capabilities. The Processor 120 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), a voice Processor, a video Processor, and the like; but may also be a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor 120 may be any conventional processor or the like.
The peripheral interface 140 couples various input/output devices to the processor 120 as well as to the memory 110. In some embodiments, peripheral interface 140, processor 120, and memory controller 130 may be implemented in a single chip. In other embodiments of the present invention, they may be implemented by separate chips.
The camera unit 150 is used to take pictures so that the processor 120 processes the taken pictures.
It will be appreciated that the configuration shown in fig. 2 is merely illustrative and that the multi-view surveillance camera 10 may include more or fewer components than shown in fig. 2 or may have a different configuration than shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a picture processing method according to an embodiment of the present invention, in which the picture processing method includes the following steps:
s100, judging whether the target gray difference value is larger than a first threshold value or not; when the target gray difference is greater than the first threshold, executing S200; and when the target gray difference value is less than or equal to the first threshold value, returning to continue executing the S100.
As described above, in the multi-view monitoring camera 10, for example, the four-view monitoring camera shown in fig. 1, two end regions of a monitoring screen generally have a fogging phenomenon, and therefore, in the embodiment of the present invention, a specific region may be selected as a target region to determine whether the target region has the fogging phenomenon, that is, whether the target region is a fogging surface region, for example, the region 1 in fig. 1 is selected as the target region, another region is selected as a reference region, for example, the region 2 in fig. 1 is selected as the reference region, then respective gray values of the target region and the reference region are respectively calculated, and then respective gray values of the target region and the reference region are subtracted to obtain a target gray difference value.
The gray value of each region is calculated as the average gray value over its pixels:
Li = (1 / (P * Q)) * ΣΣ Gray(p, q),
in the formula, Gray(p, q) represents the gray value of the pixel point (p, q), P and Q represent the width and height of the i-th picture, respectively, the double sum runs over all pixels of the i-th picture, and Li is the gray value of the i-th region.
And respectively obtaining the respective gray values of the target area and the reference area according to the formula, and then taking the difference between the gray values as a target gray difference value.
Since the monitoring picture of the multi-view monitoring camera 10 is formed by splicing pictures shot by a plurality of monitoring cameras, the target gray difference value represents the gray difference between pictures shot by different monitoring cameras, and the larger the target gray difference value is, the larger the difference between the target area and the reference area is, and the more likely the target area is to be a fog surface area. Therefore, a first threshold is set in the multi-view monitoring camera 10, the first threshold being an upper limit for the target gray difference value; after the multi-view monitoring camera 10 calculates the target gray difference value, it determines whether the target gray difference value is greater than the first threshold. When the target gray difference value is greater than the first threshold, the target area may be a fog surface area, and S200 is executed; otherwise, when the target gray difference value is smaller than or equal to the first threshold, it indicates that the target area is not a fog surface area, and the process returns to execute S100.
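By way of illustration only, the S100 check described above can be sketched in Python roughly as follows; the region rectangles, the image layout (a 2-D grayscale array) and the example threshold are assumptions of this sketch, not values from the patent.

```python
def region_mean_gray(gray_img, x0, y0, width, height):
    """Average gray value of the rectangle with top-left corner (x0, y0).
    gray_img is assumed to be a 2-D numpy array (rows = height, cols = width)."""
    return float(gray_img[y0:y0 + height, x0:x0 + width].mean())

def target_gray_difference(gray_img, target_rect, reference_rect):
    """Gray difference value between the target area and the reference area (S100)."""
    return abs(region_mean_gray(gray_img, *target_rect) -
               region_mean_gray(gray_img, *reference_rect))

# Example check (rectangles and first_threshold are assumed values):
# if target_gray_difference(frame, (0, 0, 480, 1080), (480, 0, 480, 1080)) > first_threshold:
#     ...  # proceed to S200
```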
It should be noted that, in the embodiment of the present invention, the first threshold may be preset in the multi-view monitoring camera 10 by the user in advance, or may be generated by the multi-view monitoring camera 10 according to an environmental factor, as long as the first threshold is stored in the multi-view monitoring camera 10 to determine whether the target area is possibly a fog area, for example, in some other embodiments of the embodiment of the present invention, the first threshold may also be obtained by being sent by another terminal when the multi-view monitoring camera 10 establishes communication with another terminal.
It should be noted that the target area may be a designated area as described above, for example, area 1 or area 4 in fig. 1 is designated as the target area. In the embodiment of the present invention, the target area may also be determined in other forms; for example, as shown in fig. 4, the monitoring picture of the multi-view monitoring camera 10 is placed in a preset coordinate system, and the gray difference value is calculated for each area and its adjacent areas by traversing them in sequence, as long as the target area and the reference area can be determined in a deterministic manner.
S200, determining whether the target area is a fog surface area or not according to the judgment sub-area and the reference sub-area; when the target area is a fog surface area, executing S300; when the target area is not a fog surface area, the process returns to S100.
When the multi-view monitoring camera 10 determines that the target gray scale difference is greater than the first threshold value in S100, it indicates that the target area may be a fog surface area, and then the multi-view monitoring camera 10 determines whether the target area is a fog surface area according to the determination sub-area located in the target area and the reference sub-area located in the reference area; and executing S300 when the target area is determined to be the fog surface area; otherwise, when the target area is determined not to be the fog surface area, the process returns to continue to S100.
The judgment sub-region belongs to the target region, the reference sub-region belongs to the reference region, and the judgment sub-region is adjacent to the reference sub-region. In some embodiments of the present invention, the judgment sub-area and the reference sub-area may be selected in a preset manner. For example, in the schematic diagram shown in fig. 4, assume that area 1 is the target area and area 2 is the reference area, a strip-shaped judgment sub-area is preset in area 1, and a strip-shaped reference sub-area is preset in area 2; when the multi-view monitoring camera 10 judges that the target gray difference value between area 1 (the target area) and area 2 (the reference area) is greater than the first threshold, the multi-view monitoring camera 10 directly determines whether area 1, as the target area, is a fog surface area according to the preset judgment sub-area and reference sub-area.
In some other embodiments of the present invention, the judgment sub-area and the reference sub-area may be obtained in other manners. For example, in the schematic diagram shown in fig. 4, when area 1 is taken as the target area and area 2 is taken as the reference area, the boundary line between area 1 and area 2 is taken as a reference, two lines are obtained by extending from the boundary line in the positive and negative directions of the x-axis at an interval of a preset step length x, and the judgment sub-area and the reference sub-area are determined by matching these two lines with the boundary line.
Optionally, as an implementation manner, the judgment sub-region includes at least one first-type sub-region, the reference sub-region includes at least one second-type sub-region, for example, in the schematic diagram shown in fig. 4, the judgment sub-region and the reference sub-region are respectively divided into a plurality of first-type sub-regions and a plurality of second-type sub-regions according to a preset step length, and the first-type sub-regions and the second-type sub-regions are correspondingly combined to obtain an evaluation sub-region group, where the evaluation sub-region group is used to judge whether the target region is a fog-surface region.
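A minimal sketch of how the judgment sub-area and the reference sub-area could be split into paired first-type/second-type squares, assuming both strips are 2-D grayscale arrays whose width equals the preset step length; the array layout and function name are assumptions for illustration.

```python
def build_evaluation_groups(judge_strip, reference_strip, step):
    """Split the judgment strip and the reference strip (both of width `step`, lying on
    either side of the splicing boundary) into step-by-step squares and pair them
    row by row into evaluation sub-region groups."""
    groups = []
    height = min(judge_strip.shape[0], reference_strip.shape[0])
    for top in range(0, height - step + 1, step):
        first_type = judge_strip[top:top + step, :]       # square from the judgment sub-area
        second_type = reference_strip[top:top + step, :]  # adjacent square from the reference sub-area
        groups.append((first_type, second_type))
    return groups
```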
Referring to fig. 5, fig. 5 is a schematic flow chart of the sub-steps of S200 in fig. 3, in the embodiment of the present invention, S200 includes the following sub-steps:
and S210, obtaining a gray level mean square difference value corresponding to each evaluation subregion group.
Optionally, as an implementation manner, please refer to fig. 6, where fig. 6 is a schematic flowchart of the sub-steps of S210 in fig. 5, and in an embodiment of the present invention, S210 includes the following sub-steps:
s211, respectively calculating the respective average gray values of the first sub-area and the second sub-area in the evaluation sub-area group to obtain a first average gray value and a second average gray value.
S212, calculating the square difference value of the first average gray value and the second average gray value to obtain the gray scale square difference value corresponding to the evaluation subregion group.
In the embodiment of the present invention, when the gray scale mean square difference value corresponding to each evaluation sub-region group is obtained, the respective average gray values of the first sub-region and the second sub-region in the evaluation sub-region group are respectively calculated and recorded as the first average gray value and the second average gray value.
For example, in the schematic diagram shown in fig. 4, it is assumed that in the judgment sub-region and the reference sub-region, the obtained first-type sub-region and second-type sub-region are both squares with the side length being the preset step length x, taking a group of evaluation sub-region groups as an example, the calculation formula of the first average gray value of the first-type sub-region is as follows:
Lt1 = (1 / x^2) * ΣΣ Gray(p, q),
in the formula, the double sum runs over the x * x pixels of the first-type sub-region, Gray(p, q) represents the gray value of a pixel point (p, q) of the first-type sub-area, x is the side length of the first-type sub-area, m is the abscissa of the N1 point, and Lt1 is the first average gray value.
Correspondingly, the calculation formula of the second average gray value of the second type of sub-region is:
Lt2 = (1 / x^2) * ΣΣ Gray(p, q),
in the formula, the double sum runs over the x * x pixels of the second-type sub-region, Gray(p, q) represents the gray value of the pixel point (p, q) of the second-type sub-region, x is the side length of the second-type sub-region, m is the abscissa of the N1 point, and Lt2 is the second average gray value.
Therefore, the calculation formula for calculating the square difference value of the first average gray value and the second average gray value to obtain the gray scale square difference value corresponding to the evaluation sub-region group is as follows:
GV = Lt1^2 - Lt2^2,
wherein GV is the gray-scale mean square difference value corresponding to the evaluation sub-region group, and Lt1 and Lt2 are the first average gray value and the second average gray value, respectively.
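For illustration, the gray-scale mean square difference value GV of one evaluation sub-region group could be computed as follows, assuming each sub-region is a 2-D grayscale array; this is a sketch, not the patented implementation.

```python
def gray_mean_square_difference(first_type, second_type):
    """GV = Lt1^2 - Lt2^2 for one evaluation sub-region group (see the formulas above)."""
    lt1 = float(first_type.mean())   # first average gray value
    lt2 = float(second_type.mean())  # second average gray value
    return lt1 ** 2 - lt2 ** 2
```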
Continuing to refer to fig. 5, S220, determining whether the number of target sub-region groups reaches a preset threshold number; when the number of the target sub-area groups reaches the preset threshold number, determining that the target area is a fog surface area; and when the number of the target sub-area groups does not reach the preset threshold number, determining that the target area is not a fog surface area.
The evaluation sub-area groups obtained by the multi-view monitoring camera 10 include target sub-area groups, and the target sub-area groups are evaluation sub-area groups in which the gray scale mean square difference value is greater than the second threshold value in all the evaluation sub-area groups.
The multi-view monitoring camera 10 judges whether the target area is a fog surface area by judging whether the number of the target sub-area groups reaches the preset threshold number; when the number of the target sub-area groups reaches the preset threshold number, the multi-view monitoring camera 10 determines that the target area is a fog surface area; when the number of the target sub-area groups does not reach the preset threshold number, the multi-view monitoring camera 10 determines that the target area is not a fog surface area.
It should be noted that, in the embodiment of the present invention, the second threshold may be preset in the multi-view monitoring camera 10 in advance by the user, or may be generated by the multi-view monitoring camera 10 according to an environmental factor, as long as the second threshold is stored in the multi-view monitoring camera 10 to determine the target subregion group, for example, in some other embodiments of the embodiment of the present invention, the second threshold may also be obtained by being sent by another terminal when the multi-view monitoring camera 10 establishes communication with another terminal.
Based on the above design, in the picture processing method provided by the embodiment of the present invention, the at least one first-type sub-region obtained in the judgment sub-region and the at least one second-type sub-region obtained in the reference sub-region are correspondingly combined into evaluation sub-region groups, and the evaluation sub-region groups whose gray-scale mean square difference value is greater than the second threshold are taken as target sub-region groups, so that whether the target area is a fog surface area can be determined automatically by judging whether the number of target sub-region groups reaches the preset threshold number.
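A hedged sketch of the S220 decision, reusing the gray_mean_square_difference helper from the previous sketch; the second threshold and the preset threshold number are assumed inputs of this example.

```python
def is_fog_region(groups, second_threshold, threshold_number):
    """S220: the target area is judged to be a fog surface area when enough
    evaluation sub-region groups exceed the second threshold."""
    target_group_count = sum(
        1 for first_type, second_type in groups
        if gray_mean_square_difference(first_type, second_type) > second_threshold
    )
    return target_group_count >= threshold_number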
Continuing to refer to fig. 3, in step S300, the picture parameters of the target region are adjusted according to the respective picture attributes of the judgment sub-region and the reference sub-region.
When the target area is determined to be a fog surface area, it indicates that the monitoring quality of the current monitoring picture has decreased. At this time, taking the reference sub-area as a preset normal picture area, the picture parameters of the target area are adjusted according to the picture attributes, such as brightness, gray value and luminance value, of the judgment sub-area and the reference sub-area, for example by adjusting the contrast of the target area or performing fog penetration processing on the target area, so that the picture attributes of the judgment sub-area and the reference sub-area are matched, and the consistency of the multi-screen effect of the monitoring picture is maintained.
Optionally, as an implementation manner, please refer to fig. 7, fig. 7 is a schematic flowchart of sub-steps of S300 in fig. 3, in an embodiment of the present invention, S300 includes the following sub-steps:
and S310, obtaining a first fog sense evaluation value and a second fog sense evaluation value according to the respective picture attributes of the judgment sub-area and the reference sub-area.
When adjusting the picture parameters of the target area to match the picture attributes of the judgment sub-area and the reference sub-area, the fog feeling degree of the judgment sub-area relative to the reference sub-area needs to be evaluated first, and then the adjustment parameters are determined.
Optionally, as an embodiment, the fog degree of the region is evaluated by integrating the image attributes of three dimensions of contrast, sharpness, and brightness, please refer to fig. 8, fig. 8 is a schematic flowchart of the sub-step of S310 in fig. 7, in an embodiment of the present invention, taking calculating the first fog evaluation value of the target region as an example, S310 includes the following sub-steps:
s311, obtaining a contrast evaluation value of the judgment sub-region according to the gray value of each pixel point in the judgment sub-region, the gray values of other pixel points adjacent to each pixel point, and the average brightness value of the judgment sub-region.
Optionally, as an implementation manner, please refer to fig. 9, where fig. 9 is a schematic flowchart of the sub-step of S311 in fig. 8, in an embodiment of the present invention, S311 includes the following sub-steps:
s3111, obtaining a gray difference value of each pixel according to the gray value of each pixel in the judgment sub-area and the gray values of other adjacent pixels of each pixel.
In the embodiment of the present invention, the calculation formula for obtaining the gray scale difference value of each pixel point is as follows:
σ(i,j)=|i-j|,
wherein σ (i, j) is the gray scale difference of each pixel, i is the gray scale value of the pixel (i, j), and j is the gray scale value of other pixels adjacent to the pixel (i, j).
S3112, traversing the respective gray difference values of all the pixel points in the judgment sub-region, and obtaining the average gray difference value of the judgment sub-region.
In the embodiment of the present invention, the calculation formula of the average gray difference value is:
Ct = (1 / n) * Σ σ(i, j),
where Ct is the average gray difference, σ (i, j) is the gray difference of each pixel, and n is the number of pixels in the judgment sub-region.
S3113, obtaining a contrast evaluation value of the judgment sub-region according to the average gray difference value and the average brightness value of the judgment sub-region.
In the embodiment of the present invention, considering that the lower the contrast, the worse the permeability, and therefore the lower the evaluation value should be, the contrast evaluation value CV of the judgment sub-region is computed from the average gray difference value Ct and the average brightness L of the judgment sub-region (the formula is given as an image in the original publication).
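The following sketch illustrates S311 under stated assumptions: the per-pixel gray difference is taken against the right and bottom neighbours only, and, because the exact CV formula appears as an image in the original, the normalisation of Ct by the average brightness L is an assumed form rather than the patented one.

```python
import numpy as np

def average_gray_difference(region):
    """Ct: mean absolute gray difference between each pixel and its right/bottom neighbour
    (neighbourhood choice is an assumption of this sketch)."""
    region = region.astype(np.float64)
    diffs = []
    if region.shape[1] > 1:
        diffs.append(np.abs(region[:, 1:] - region[:, :-1]).ravel())
    if region.shape[0] > 1:
        diffs.append(np.abs(region[1:, :] - region[:-1, :]).ravel())
    return float(np.concatenate(diffs).mean())

def contrast_evaluation(region):
    """CV sketch: Ct normalised by the average brightness L (assumed form)."""
    ct = average_gray_difference(region)
    l_mean = float(region.mean())
    return ct / max(l_mean, 1e-6)
```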
Referring to fig. 8, in step S312, the sharpness evaluation value of the sub-region is obtained according to the pixel value of each pixel point in the sub-region, the pixel value of each noise point in the sub-region, and the average brightness value of the sub-region.
In the embodiment of the present invention, the noise point is defined as a pixel point whose gray difference is greater than a fourth threshold or whose gray value is less than a fifth threshold, where the gray difference is a difference between the gray value of the pixel point and the gray value of any other adjacent pixel point.
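A minimal sketch of the noise-point definition above, assuming a 2-D grayscale array and a 4-neighbourhood; the threshold values are assumed inputs.

```python
import numpy as np

def noise_point_mask(region, fourth_threshold, fifth_threshold):
    """Mark noise points as defined above: gray difference to some adjacent pixel
    greater than the fourth threshold, or gray value smaller than the fifth threshold.
    A 4-neighbourhood with wrap-around at the borders is used for brevity (assumption)."""
    region = region.astype(np.float64)
    max_neighbour_diff = np.zeros_like(region)
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        shifted = np.roll(np.roll(region, dy, axis=0), dx, axis=1)
        max_neighbour_diff = np.maximum(max_neighbour_diff, np.abs(region - shifted))
    return (max_neighbour_diff > fourth_threshold) | (region < fifth_threshold)
```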
Based on this, optionally, as an implementation manner, please refer to fig. 10, fig. 10 is a schematic flowchart of the sub-step of S312 in fig. 8, in an embodiment of the present invention, S312 includes the following sub-steps:
s3121, obtaining the definition value of the judgment sub-region according to the pixel value of each pixel point.
In the embodiment of the present invention, the definition value f(I) of the judgment sub-area is computed from the pixel values I(x, y) of the pixel points (x, y) of the judgment sub-area (the formula is given as an image in the original publication).
And S3122, obtaining a noise intensity value of the judgment sub-region according to the pixel value of each noise point.
In the embodiment of the invention, all noise points of the judgment sub-region are traversed, and the noise intensity value Nos of the judgment sub-region is computed from the noise points S(p, q), where P and Q represent the width and height of the judgment sub-region picture, respectively (the formula is given as an image in the original publication).
And S3123, obtaining a definition evaluation value of the judgment sub-region according to the definition value and the noise intensity value of the judgment sub-region and the average brightness value of the judgment sub-region.
Considering that the sharpness of a picture is positively correlated with its definition and negatively correlated with its noise intensity, in the embodiment of the present invention the feature definition evaluation function Clarity is defined in terms of the definition value f(I) of the judgment sub-region, the noise intensity value Nos of the judgment sub-region, the average brightness L of the judgment sub-region, and a preset constant k (the formula is given as an image in the original publication).
The definition evaluation value DV is then computed from the feature definition evaluation function Clarity, the average brightness L of the judgment sub-area, and a preset constant δ (the formula is given as an image in the original publication).
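Since the exact f(I), Clarity and DV formulas appear only as images in the original publication, the sketch below uses common stand-ins (gradient-based sharpness and a sharpness-to-noise ratio) purely to illustrate the data flow of S312; the functional forms are assumptions, not the patented formulas.

```python
import numpy as np

def sharpness_value(region):
    """f(I) stand-in: mean absolute gradient of the region (assumed form)."""
    region = region.astype(np.float64)
    gx = np.abs(np.diff(region, axis=1)).mean() if region.shape[1] > 1 else 0.0
    gy = np.abs(np.diff(region, axis=0)).mean() if region.shape[0] > 1 else 0.0
    return gx + gy

def noise_intensity_value(noise_mask):
    """Nos stand-in: fraction of pixels flagged as noise points (assumed form)."""
    return float(noise_mask.mean())

def definition_evaluation(region, noise_mask, k=1.0):
    """DV stand-in: sharpness penalised by noise intensity; k plays the role of the
    preset constant mentioned above (combination form is an assumption)."""
    return sharpness_value(region) / (1.0 + k * noise_intensity_value(noise_mask))
```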
And S313, obtaining the brightness evaluation value of the judgment sub-area according to the average brightness value of the judgment sub-area.
In the embodiment of the present invention, the brightness evaluation value LV is computed from the average brightness L of the judgment sub-region and a preset constant σ (the formula is given as an image in the original publication).
And S314, obtaining a first fog sense evaluation value of the judgment sub-area according to the contrast evaluation value, the definition evaluation value and the brightness evaluation value of the judgment sub-area.
In summary, the calculation formula for calculating the first fog sense evaluation value from the obtained contrast evaluation value, sharpness evaluation value, and luminance evaluation value is:
F1=w1*LV+w2*DV+w3*CV,
wherein F1 is the first fog sense evaluation value, w1 is a first preset coefficient, LV is the brightness evaluation value, w2 is a second preset coefficient, DV is the definition evaluation value, w3 is a third preset coefficient, and CV is the contrast evaluation value.
In the embodiment of the present invention, the higher the first fog sense evaluation value is, the higher the degree of fog sense characterizing the target area is; the lower the first fog sense evaluation value is, the lower the degree of fog sense characterizing the target area is.
Alternatively, as an implementation, in the embodiment of the present invention,
w1+w2+w3=1。
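A short sketch of the weighted combination F1 = w1*LV + w2*DV + w3*CV; the weight values used here are example assumptions that merely satisfy w1 + w2 + w3 = 1.

```python
def fog_sense_evaluation(lv, dv, cv, w1=0.4, w2=0.3, w3=0.3):
    """F = w1*LV + w2*DV + w3*CV; the example weights satisfy w1 + w2 + w3 = 1."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * lv + w2 * dv + w3 * cv
```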
based on the above design, the image processing method provided in the embodiment of the present invention calculates and determines the first fog sense evaluation value of the sub-region by integrating the image attributes of three dimensions, i.e., contrast, sharpness, and brightness, to evaluate the fog sense degree of the target region.
Continuing to refer to fig. 7, S320, determining whether the fog sense evaluation difference is greater than a third threshold; when the fog evaluation difference value is larger than the third threshold value, adjusting the picture parameters of the target area, and returning to continue to execute S310; and when the fog evaluation difference value is smaller than or equal to a third threshold value, determining that the image attributes of the judgment sub-area and the reference sub-area are matched.
As described above, the first fog sense evaluation value, by which the degree of fog sense of the target area can be evaluated, is calculated in this way; it can be understood that the second fog sense evaluation value, which evaluates the degree of fog sense of the reference area, can be obtained in the same way using the same calculation formulas.
When the multi-view monitoring camera 10 has obtained the first fog sense evaluation value and the second fog sense evaluation value, the difference between the first fog sense evaluation value and the second fog sense evaluation value is calculated to obtain the fog sense evaluation difference, which characterizes the degree of fog sense of the target area compared with the reference area; the larger the fog sense evaluation difference is, the more serious the degree of fog sense of the target area compared with the reference area is; on the contrary, the smaller the fog sense evaluation difference is, the lower the degree of fog sense of the target area compared with the reference area is.
Therefore, in the embodiment of the present invention, the obtained fog sense evaluation difference is compared with the third threshold; when the fog sense evaluation difference is greater than the third threshold, it indicates that the picture parameters of the target area need to be adjusted, for example by adjusting the contrast of the target area or performing fog penetration processing on the target area, and the process returns to continue to execute S310; on the contrary, when the fog sense evaluation difference is smaller than or equal to the third threshold, it is determined that the picture attributes of the judgment sub-area and the reference sub-area are matched, that is, the picture attributes of the target area and the reference area are close to each other.
Optionally, as an implementation manner, in the embodiment of the present invention, when adjusting the picture parameter of the target area, if the fog sense evaluation difference is greater than the sixth threshold, the multi-view monitoring camera 10 adjusts the picture parameter of the target area according to the first parameter adjustment step length; and when the fog feeling evaluation difference is smaller than or equal to a sixth threshold, adjusting the picture parameters of the target area according to a second parameter adjustment step length, wherein the second parameter adjustment step length is smaller than the first parameter adjustment step length.
That is, when the difference between the picture attributes of the judgment sub-area and the reference sub-area is large, the picture parameters of the target area are adjusted with a larger adjustment step; when the difference between the picture attributes of the judgment sub-area and the reference sub-area is small, the picture parameters of the target area are adjusted with a smaller adjustment step.
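An illustrative sketch of the S320 adjustment loop with the two-step-size behaviour described above; the callback functions, threshold values, step sizes and iteration cap are assumptions of this example.

```python
def adjust_until_matched(get_fog_difference, adjust_picture_parameters,
                         third_threshold, sixth_threshold,
                         first_step, second_step, max_iterations=50):
    """S320 sketch: after every adjustment the fog sense difference is re-evaluated,
    and the loop stops once it is no longer greater than the third threshold."""
    for _ in range(max_iterations):
        difference = get_fog_difference()      # F1 - F2 for the current frame
        if difference <= third_threshold:
            return True                        # picture attributes are matched
        step = first_step if difference > sixth_threshold else second_step
        adjust_picture_parameters(step)        # e.g. contrast / fog-penetration strength
    return False
```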
Based on the above design, in the picture processing method provided in the embodiment of the present invention, when it is determined that the gray difference between the target region and the reference region is greater than the first threshold and the target region is determined to be the fog-face region, the picture parameters of the target region are adjusted according to the respective picture attributes of the judgment sub-region of the target region and the reference sub-region of the reference region, so that the picture attributes of the judgment sub-region and the reference sub-region are matched.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a picture processing apparatus 200 according to an embodiment of the present invention, in which the picture processing apparatus 200 includes a first determining module 210, a second determining module 220, and a picture adjusting module 230.
The first determining module 210 is configured to determine whether a target gray difference is greater than a first threshold, where the target gray difference is a gray difference between a target area and a reference area.
The second determining module 220 is configured to determine whether the target area is a fog surface area according to a determining sub-area and a reference sub-area when the target gray scale difference is greater than the first threshold, where the determining sub-area belongs to the target area, the reference sub-area belongs to the reference area, and the determining sub-area is adjacent to the reference sub-area.
Optionally, as an embodiment, the judgment sub-region includes at least one first-type sub-region, and the reference sub-region includes at least one second-type sub-region; the first type of subarea and the second type of subarea are correspondingly combined to obtain an evaluation subarea group; referring to fig. 12, fig. 12 is a schematic structural diagram illustrating a second determining module 220 of a frame processing apparatus 200 according to an embodiment of the present invention, wherein the second determining module 220 includes a gray-scale square difference calculating unit 221 and a gray-scale square difference determining unit 222.
The gray scale square difference value calculating unit 221 is configured to obtain a gray scale square difference value corresponding to each evaluation sub-region group.
Alternatively, referring to fig. 13 as an implementation manner, fig. 13 shows a schematic structure diagram of the gray-scale square difference value calculating unit 221 of the image processing apparatus 200 according to an embodiment of the present invention, in which the gray-scale square difference value calculating unit 221 includes an average gray-scale value operator unit 2211 and a difference value calculating subunit 2212.
The average gray value calculating operator unit 2211 is configured to calculate the respective average gray values of the first type sub-region and the second type sub-region in the evaluation sub-region group, respectively, to obtain a first average gray value and a second average gray value.
The difference calculating subunit 2212 is configured to calculate a square difference value between the first average gray value and the second average gray value, so as to obtain the gray scale square difference value corresponding to the evaluation sub-region group.
Referring to fig. 12, the gray-scale square deviation value determining unit 222 is configured to determine whether the number of target sub-region groups reaches a preset threshold number, where the target sub-region group belongs to at least one of the evaluation sub-region groups, and the gray-scale square deviation value of the target sub-region group is greater than a second threshold; when the number of the target sub-area groups reaches a preset threshold number, determining that the target area is a fog surface area; and when the number of the target sub-area groups does not reach the preset threshold number, determining that the target area is not a fog surface area.
Referring to fig. 11, the image adjusting module 230 is configured to, when the target area is determined to be the fog area, adjust the image parameters of the target area according to the image attributes of the judgment sub-area and the reference sub-area, so that the image attributes of the judgment sub-area and the reference sub-area are matched.
Optionally, as an implementation manner, referring to fig. 14, fig. 14 shows a schematic structural diagram of a screen adjusting module 230 of a screen processing apparatus 200 according to an embodiment of the present invention, in which the screen adjusting module 230 includes a fog-feeling evaluation value calculating unit 231 and a fog-feeling evaluation judging unit 232.
The fog sense evaluation value calculation unit 231 is configured to obtain a first fog sense evaluation value and a second fog sense evaluation value according to the respective screen attributes of the judgment sub-area and the reference sub-area.
Referring to fig. 15, fig. 15 is a schematic structural diagram illustrating a fog-feeling evaluation value calculation unit 231 of a screen processing apparatus 200 according to an embodiment of the present invention, where the fog-feeling evaluation value calculation unit 231 includes a first evaluation calculation subunit 2311, a second evaluation calculation subunit 2312, a third evaluation calculation subunit 2313, and a fourth evaluation calculation subunit 2314.
The first evaluation calculation subunit 2311 is configured to obtain a contrast evaluation value of the judgment sub-region according to the gray value of each pixel in the judgment sub-region, the gray values of other pixels adjacent to each pixel, and the average brightness value of the judgment sub-region.
Referring to fig. 16, fig. 16 is a schematic structural diagram illustrating a first evaluation calculating subunit 2311 of the image processing apparatus 200 according to an embodiment of the present invention, wherein the first evaluation calculating subunit 2311 includes a gray scale difference value calculating subunit 23111, an average gray scale difference value calculating subunit 23112 and a contrast evaluation value calculating subunit 23113.
The gray difference value operator unit 23111 is configured to obtain a gray difference value of each pixel point according to the gray value of each pixel point in the judgment sub-area and the gray values of other pixel points adjacent to each pixel point.
The average gray difference value operator unit 23112 is configured to traverse the respective gray difference values of all the pixel points in the judgment sub-region to obtain an average gray difference value of the judgment sub-region.
The contrast evaluation value operator unit 23113 is configured to obtain the contrast evaluation value of the judgment sub-region according to the average gray difference and the average brightness value of the judgment sub-region.
Referring to fig. 15, the second evaluation calculating subunit 2312 is configured to obtain a sharpness evaluation value of the judgment sub-region according to the pixel value of each pixel in the judgment sub-region, the pixel value of each noise point in the judgment sub-region, and the average brightness value of the judgment sub-region, where the noise point is a pixel whose gray scale difference is greater than a fourth threshold or whose gray scale value is smaller than a fifth threshold, the gray scale difference is a difference between the gray scale value of the pixel and any one of the adjacent pixels, and the fourth threshold is greater than the fifth threshold.
Referring to fig. 17, fig. 17 is a schematic structural diagram illustrating a second evaluation calculating subunit 2312 of the image processing apparatus 200 according to an embodiment of the present invention, where the second evaluation calculating subunit 2312 includes a sharpness value calculating subunit 23121, a noise intensity value calculating subunit 23122 and a sharpness evaluation value calculating subunit 23123.
The sharpness value calculation subunit 23121 is configured to obtain a sharpness value of the judgment sub-region according to the pixel value of each pixel point.
The noise intensity value calculating operator unit 23122 is configured to obtain a noise intensity value of the judgment sub-region according to the pixel value of each noise point.
The sharpness evaluation value operator unit 23123 is configured to obtain a sharpness evaluation value of the judgment sub-region according to the sharpness value of the judgment sub-region, the noise intensity value, and the average brightness value of the judgment sub-region.
Referring to fig. 15, the third evaluation sub-unit 2313 is configured to obtain a luminance evaluation value of the judgment sub-region according to the average luminance value of the judgment sub-region.
The fourth evaluation calculation subunit 2314 is configured to obtain the first fog sense evaluation value of the judgment sub-region according to the contrast evaluation value, the sharpness evaluation value, and the brightness evaluation value of the judgment sub-region.
Referring to fig. 14, the fog sense evaluation judgment unit 232 is configured to determine whether the fog sense evaluation difference is greater than a third threshold; when the fog sense evaluation difference is greater than the third threshold, the picture parameters of the target region are adjusted, and the fog sense evaluation value calculation unit 231 continues to obtain the first fog sense evaluation value and the second fog sense evaluation value according to the respective picture attributes of the judgment sub-region and the reference sub-region; when the fog sense evaluation difference is smaller than or equal to the third threshold, it is determined that the picture attributes of the judgment sub-area and the reference sub-area are matched. The fog sense evaluation difference is the difference between the first fog sense evaluation value and the second fog sense evaluation value, and each fog sense evaluation value is used for evaluating the degree of fog sense of the corresponding picture area.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiment of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In summary, according to the picture processing method and apparatus provided in the embodiments of the present invention, when the gray difference value between the target region and the reference region is greater than the first threshold and the target region is determined to be a fog surface region, the picture parameters of the target region are adjusted according to the respective picture attributes of the judgment sub-region of the target region and the reference sub-region of the reference region, so that the picture attributes of the judgment sub-region and the reference sub-region are matched. Compared with the prior art, the picture parameters of the target region can be adjusted automatically according to the fog sense degree of the target region relative to the reference region, so that the picture effect after splicing a plurality of picture regions remains consistent. Furthermore, the first-type sub-regions and the second-type sub-regions obtained by dividing the judgment sub-region and the reference sub-region are correspondingly combined into evaluation sub-region groups, and the evaluation sub-region groups whose gray mean-square difference value is greater than the second threshold are taken as target sub-region groups, so that whether the target region is a fog surface region can be determined automatically by judging whether the number of target sub-region groups reaches the preset number threshold. Finally, the fog sense degree of the target region is calculated from selectable dimensions and expressed as a specific numerical value, so that it can be evaluated objectively.
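As a minimal sketch of the fog-surface-region check summarized above, the following assumes that the first-type and second-type sub-regions are paired by identical block index and that the gray levels are available as NumPy arrays; the block step, the two thresholds, and the function name are illustrative assumptions, and only the counting logic reflects the embodiment.

import numpy as np

def is_fog_surface_region(judge_gray, ref_gray, step, second_threshold, number_threshold):
    # judge_gray / ref_gray: 2-D gray-level arrays of the adjacent judgment
    # and reference sub-regions, assumed here to have the same shape.
    judge_gray = np.asarray(judge_gray, dtype=float)
    ref_gray = np.asarray(ref_gray, dtype=float)
    target_group_count = 0
    for y in range(0, judge_gray.shape[0] - step + 1, step):
        for x in range(0, judge_gray.shape[1] - step + 1, step):
            first = judge_gray[y:y + step, x:x + step]   # first-type sub-region
            second = ref_gray[y:y + step, x:x + step]    # second-type sub-region
            # Gray mean-square difference value of this evaluation sub-region group.
            diff = (first.mean() - second.mean()) ** 2
            if diff > second_threshold:
                target_group_count += 1                  # target sub-region group
    return target_group_count >= number_threshold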
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. A picture processing method, characterized in that the method comprises:
when a target gray difference value is greater than a first threshold, dividing a judgment sub-region and a reference sub-region into a plurality of first-type sub-regions and a plurality of second-type sub-regions according to a preset step length, correspondingly combining the first-type sub-regions and the second-type sub-regions to obtain evaluation sub-region groups, and determining whether a target region is a fog surface region according to the evaluation sub-region groups, wherein the target gray difference value is the gray difference value between the target region and a reference region, the judgment sub-region belongs to the target region, the reference sub-region belongs to the reference region, and the judgment sub-region is adjacent to the reference sub-region;
when the target region is determined to be a fog surface region, obtaining a first fog sense evaluation value and a second fog sense evaluation value according to the respective picture attributes of the judgment sub-region and the reference sub-region, calculating a difference between the first fog sense evaluation value and the second fog sense evaluation value to obtain a fog sense evaluation difference value, and adjusting the picture parameters of the target region according to the fog sense evaluation difference value so that the picture attributes of the judgment sub-region and the reference sub-region are matched; the fog sense evaluation difference value represents the fog sense degree of the target region compared with the reference region.
2. The method of claim 1, wherein
the step of determining whether the target region is a fog surface region according to the evaluation sub-region groups comprises:
obtaining a gray mean-square difference value corresponding to each evaluation sub-region group;
when the number of target sub-region groups reaches a preset number threshold, determining that the target region is a fog surface region, wherein a target sub-region group is an evaluation sub-region group whose gray mean-square difference value is greater than a second threshold;
and when the number of target sub-region groups does not reach the preset number threshold, determining that the target region is not a fog surface region.
3. The method of claim 2, wherein the step of obtaining the gray mean-square difference value corresponding to each evaluation sub-region group comprises:
respectively calculating the average gray values of the first-type sub-region and the second-type sub-region in the evaluation sub-region group to obtain a first average gray value and a second average gray value;
and calculating the squared difference of the first average gray value and the second average gray value to obtain the gray mean-square difference value corresponding to the evaluation sub-region group.
4. The method of claim 1, wherein the step of adjusting the picture parameters of the target region according to the fog sense evaluation difference value comprises:
judging whether the fog sense evaluation difference value is greater than a third threshold;
when the fog sense evaluation difference value is greater than the third threshold, adjusting the picture parameters of the target region, and returning to the step of obtaining a first fog sense evaluation value and a second fog sense evaluation value according to the respective picture attributes of the judgment sub-region and the reference sub-region;
and when the fog sense evaluation difference value is less than or equal to the third threshold, determining that the picture attributes of the judgment sub-region and the reference sub-region are matched.
5. The method according to claim 4, wherein the step of obtaining the first fog sense evaluation value according to the picture attributes of the judgment sub-region comprises:
obtaining a contrast evaluation value of the judgment sub-region according to the gray value of each pixel point in the judgment sub-region, the gray values of other pixel points adjacent to each pixel point and the average brightness value of the judgment sub-region;
obtaining a sharpness evaluation value of the judgment sub-region according to the pixel value of each pixel point in the judgment sub-region, the pixel value of each noise point in the judgment sub-region, and the average brightness value of the judgment sub-region, wherein a noise point is a pixel point whose gray difference value is greater than a fourth threshold or whose gray value is less than a fifth threshold, and the gray difference value is the difference between the gray value of the pixel point and the gray value of any one of its adjacent pixel points;
obtaining a brightness evaluation value of the judgment sub-region according to the average brightness value of the judgment sub-region;
and obtaining the first fog sense evaluation value of the judgment sub-region according to the contrast evaluation value, the sharpness evaluation value, and the brightness evaluation value of the judgment sub-region.
6. The method according to claim 5, wherein the step of obtaining the contrast evaluation value of the judgment sub-region according to the gray value of each pixel point in the judgment sub-region, the gray values of other pixel points adjacent to each pixel point, and the average brightness value of the judgment sub-region comprises:
obtaining a gray difference value of each pixel point according to the gray value of each pixel point in the judgment sub-region and the gray values of other pixel points adjacent to each pixel point;
traversing the gray difference values of all the pixel points in the judgment sub-region to obtain the average gray difference value of the judgment sub-region;
and obtaining the contrast evaluation value of the judgment sub-region according to the average gray difference value and the average brightness value of the judgment sub-region.
7. The method according to claim 5, wherein the step of obtaining the sharpness evaluation value of the judgment sub-region according to the pixel value of each pixel point in the judgment sub-region, the pixel value of each noise point in the judgment sub-region, and the average brightness value of the judgment sub-region comprises:
obtaining a sharpness value of the judgment sub-region according to the pixel value of each pixel point;
obtaining a noise intensity value of the judgment sub-region according to the pixel value of each noise point;
and obtaining the sharpness evaluation value of the judgment sub-region according to the sharpness value, the noise intensity value, and the average brightness value of the judgment sub-region.
8. The method according to claim 5, wherein the calculation formula for obtaining the first fog sense evaluation value of the judgment sub-region based on the contrast evaluation value, the sharpness evaluation value, and the brightness evaluation value of the judgment sub-region is:
F1 = w1*LV + w2*DV + w3*CV,
wherein F1 is the first fog sense evaluation value, w1 is a first preset coefficient, LV is the brightness evaluation value, w2 is a second preset coefficient, DV is the sharpness evaluation value, w3 is a third preset coefficient, and CV is the contrast evaluation value.
9. The method of claim 4, wherein the step of adjusting the picture parameters of the target region comprises:
when the fog sense evaluation difference value is greater than a sixth threshold, adjusting the picture parameters of the target region according to a first parameter adjustment step length;
and when the fog sense evaluation difference value is less than or equal to the sixth threshold, adjusting the picture parameters of the target region according to a second parameter adjustment step length, wherein the second parameter adjustment step length is smaller than the first parameter adjustment step length.
10. A picture processing apparatus, characterized in that the apparatus comprises:
the first judgment module is used for judging whether a target gray difference value is greater than a first threshold, wherein the target gray difference value is the gray difference value between a target region and a reference region;
the second judgment module is used for dividing a judgment sub-region and a reference sub-region into a plurality of first-type sub-regions and a plurality of second-type sub-regions according to a preset step length when the target gray difference value is greater than the first threshold, correspondingly combining the first-type sub-regions and the second-type sub-regions to obtain evaluation sub-region groups, and determining whether the target region is a fog surface region according to the evaluation sub-region groups, wherein the judgment sub-region belongs to the target region, the reference sub-region belongs to the reference region, and the judgment sub-region is adjacent to the reference sub-region;
the picture adjusting module is used for obtaining a first fog sense evaluation value and a second fog sense evaluation value according to the respective picture attributes of the judgment sub-region and the reference sub-region when the target region is determined to be a fog surface region, calculating a difference between the first fog sense evaluation value and the second fog sense evaluation value to obtain a fog sense evaluation difference value, and adjusting the picture parameters of the target region according to the fog sense evaluation difference value so that the picture attributes of the judgment sub-region and the reference sub-region are matched; the fog sense evaluation difference value represents the fog sense degree of the target region compared with the reference region.
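As a purely illustrative reading of claim 6, the following sketch derives a contrast evaluation value from the average gray difference and the average brightness of the judgment sub-region; the ratio form, the restriction to horizontal and vertical neighbours, and the small stabilizing constant are assumptions of this sketch, since the claim only names the inputs, not a formula.

import numpy as np

def contrast_evaluation_value(gray):
    # gray: 2-D gray-level array of the judgment sub-region.
    gray = np.asarray(gray, dtype=float)
    dx = np.abs(np.diff(gray, axis=1)).mean()    # horizontal neighbour gray differences
    dy = np.abs(np.diff(gray, axis=0)).mean()    # vertical neighbour gray differences
    average_gray_difference = (dx + dy) / 2.0    # average gray difference value
    average_brightness = gray.mean()             # average brightness value
    return average_gray_difference / (average_brightness + 1e-6)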
CN201811415942.0A 2018-11-26 2018-11-26 Picture processing method and device Active CN111225180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811415942.0A CN111225180B (en) 2018-11-26 2018-11-26 Picture processing method and device

Publications (2)

Publication Number Publication Date
CN111225180A CN111225180A (en) 2020-06-02
CN111225180B true CN111225180B (en) 2021-07-20

Family

ID=70827742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811415942.0A Active CN111225180B (en) 2018-11-26 2018-11-26 Picture processing method and device

Country Status (1)

Country Link
CN (1) CN111225180B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169576A (en) * 2011-04-02 2011-08-31 北京理工大学 Quantified evaluation method of image mosaic algorithms
CN105025215A (en) * 2014-04-23 2015-11-04 中兴通讯股份有限公司 Method and apparatus for achieving group shooting through terminal on the basis of multiple pick-up heads
CN105516614A (en) * 2015-11-27 2016-04-20 联想(北京)有限公司 Information processing method and electronic device
CN105894449A (en) * 2015-11-11 2016-08-24 乐卡汽车智能科技(北京)有限公司 Method and system for overcoming abrupt color change in image fusion processes
CN105979238A (en) * 2016-07-05 2016-09-28 深圳市德赛微电子技术有限公司 Method for controlling global imaging consistency of multiple cameras
CN106782367A (en) * 2016-12-15 2017-05-31 浙江宇视科技有限公司 Liquid crystal-spliced screen shows method and system
WO2017214523A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Mismatched foreign light detection and mitigation in the image fusion of a two-camera system
KR20180001869A (en) * 2016-06-28 2018-01-05 엘지이노텍 주식회사 Image Improving Apparatus for AVM System and Improving Method thereof
CN108022219A (en) * 2017-11-30 2018-05-11 安徽质在智能科技有限公司 A kind of two dimensional image tone correcting method
US10084959B1 (en) * 2015-06-25 2018-09-25 Amazon Technologies, Inc. Color adjustment of stitched panoramic video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021774B (en) * 2014-05-29 2016-06-15 京东方科技集团股份有限公司 A kind of method of image procossing and device
CN104820964B (en) * 2015-04-17 2018-03-27 深圳华侨城文化旅游科技股份有限公司 Based on the image mosaic fusion method projected more and system
CN106303225A (en) * 2016-07-29 2017-01-04 努比亚技术有限公司 A kind of image processing method and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Real-time Stitching Technology for Haze Images; Xu Chao; China Master's Theses Full-text Database (Information Science and Technology); 2017-12-15; I138-421 *

Also Published As

Publication number Publication date
CN111225180A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US10762649B2 (en) Methods and systems for providing selective disparity refinement
CN110661977B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
KR20160048140A (en) Method and apparatus for generating an all-in-focus image
CN112565589A (en) Photographing preview method and device, storage medium and electronic equipment
US20120076421A1 (en) Methods and systems for estimating illumination source characteristics from a single image
CN105554380B (en) A kind of switching method and device round the clock
US8948453B2 (en) Device, method and non-transitory computer readable storage medium for detecting object
US10942567B2 (en) Gaze point compensation method and apparatus in display device, and display device
CN104637068B (en) Frame of video and video pictures occlusion detection method and device
CN113192468B (en) Display adjustment method, device, equipment and storage medium
CN106506982B (en) method and device for obtaining photometric parameters and terminal equipment
CN106791451B (en) Photographing method of intelligent terminal
TWI552602B (en) Blur detection method of images, monitoring device, and monitoring system
CN109068060B (en) Image processing method and device, terminal device and computer readable storage medium
CN111225180B (en) Picture processing method and device
CN110689565B (en) Depth map determination method and device and electronic equipment
CN115423808B (en) Quality detection method for speckle projector, electronic device, and storage medium
CN108732178A (en) A kind of atmospheric visibility detection method and device
US9813640B2 (en) Image processing apparatus, image processing method, image processing program, and non-transitory recording for calculating a degree-of-invalidity for a selected subject type
CN109901716B (en) Sight point prediction model establishing method and device and sight point prediction method
JP2018201146A (en) Image correction apparatus, image correction method, attention point recognition apparatus, attention point recognition method, and abnormality detection system
CN116337412A (en) Screen detection method, device and storage medium
CN114359776B (en) Flame detection method and device integrating light and thermal imaging
US11631183B2 (en) Method and system for motion segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant