CN115499585B - Hybrid scene law enforcement video focus local correction method and system - Google Patents
- Publication number
- CN115499585B (application CN202211088409.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- focus
- target
- pixel value
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a method and a system for locally correcting the focus of a law enforcement video in a mixed scene, wherein the method comprises the following steps: acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area; tracking a focus target in the moving target area; judging whether the focus target is out of focus; and locally correcting the out-of-focus target using an improved Retinex algorithm. The video focus correction method only needs to correct the differential image, so its computational cost is small; the improved Retinex algorithm also requires far less computation than neural-network methods, making the method suitable for use on handheld devices.
Description
Technical Field
The invention belongs to the technical field of video processing, and particularly relates to a method and system for locally correcting the focus of a mixed-scene law enforcement video.
Background
Currently, to promote civilized law enforcement, law enforcement departments generally use handheld law enforcement recorders to record on-site violations as evidence. In mixed scenes the environment in the video is highly varied: lighting is poor at night, there are many targets, and the recorder itself is moving, so the captured video shakes. As a result the focus is inaccurate, targets that need to be recorded are not tracked in time, and the definition of the video is low. Existing video-enhancement algorithms include neural-network methods, but their computational cost is large and they are not suitable for use on handheld devices.
Disclosure of Invention
In view of the above, the invention provides a method and a system for locally correcting the focus of a mixed-scene law enforcement video.
The first aspect of the invention discloses a mixed-scene law enforcement video focus local correction method, which comprises the following steps:
acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area;
tracking a focus target in the moving target area;
judging whether the focus target is out of focus;
locally correcting the out-of-focus target using an improved Retinex algorithm.
Further, the differential operation comprises subtracting, pixel by pixel, the pixels at the same position in two consecutive frames to obtain a differential image, and comparing each pixel in the differential image with a preset threshold: a point whose pixel value is smaller than the preset threshold is an image background point, and a point whose pixel value is larger than the preset threshold is a moving target point.
Further, judging whether the focus target is out of focus comprises: judging with an evaluation function, namely calculating an evaluation value of the image with an energy gradient function and comparing it with a preset threshold; if the evaluation value is smaller than the preset threshold, the focus target is out of focus.
Further, the modified Retinex algorithm includes:
carrying out Gaussian filtering on the differential image to obtain a processed image L (x, y);
calculating a pixel value correction coefficient for each color channel of the image L(x, y); the following is the pixel value adjustment factor of the R channel:
where α is the depth-of-field similarity, α_thr is the depth-of-field similarity threshold, β is the saturation similarity, β_thr is the saturation similarity threshold, and k is an adjustment factor;
after the pixel value adjustment factor of each channel is obtained, the gray value of each channel of the differential image is multiplied by the corresponding pixel value adjustment factor to obtain a corrected image.
Further, the depth-of-field similarity α is calculated as follows:
where n is the number of targets in the previous frame image, a_i is the depth of field of the i-th target, and b_i is the depth of field of the i-th target in the current frame;
the saturation similarity β is calculated as follows:
where n is the number of targets in the previous frame image, c_i is the saturation of the i-th target, and d_i is the saturation of the i-th target in the current frame.
The second aspect of the invention discloses a mixed-scene law enforcement video focus local correction system, which comprises the following modules:
a target acquisition module: acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area;
a focus tracking module: tracking a focus target in the moving target area;
a defocus judging module: judging whether the focus target is out of focus;
a target correction module: locally correcting the out-of-focus target using an improved Retinex algorithm.
The beneficial effects of the invention are as follows:
the video focus correction method only needs to correct the differential image, has small operand, uses the improved Retinex algorithm, has smaller operand compared with a neural network method, and is suitable for being used on handheld equipment.
Drawings
Fig. 1 is a flow chart of a video focus local correction method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings, without limiting the invention in any way; any alteration or substitution based on the teachings of the invention falls within the scope of the invention.
The invention discloses a mixed-scene law enforcement video focus local correction method, which comprises the following steps:
acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area;
tracking a focus target in the moving target area;
judging whether the focus target is out of focus;
locally correcting the out-of-focus target using an improved Retinex algorithm.
In some embodiments, the differential operation comprises subtracting, pixel by pixel, the pixels at the same position in two consecutive frames to obtain a differential image, and comparing each pixel in the differential image with a preset threshold: a point whose pixel value is smaller than the preset threshold is an image background point, and a point whose pixel value is larger than the preset threshold is a moving target point.
The differential algorithm is as follows:
Δl(x, y) = m(x, y) − n(x, y), x ∈ (1, M), y ∈ (1, N)
where m(x, y) and n(x, y) are two consecutive frames of the video, Δl(x, y) is the differential image, and M × N is the size of the image.
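As a minimal sketch of the inter-frame difference and thresholding steps described above (frames are represented as nested Python lists rather than real video frames; the function name and threshold value are illustrative, not from the patent):

```python
# Illustrative inter-frame differencing: absolute pixel-wise difference of
# two consecutive grayscale frames, then a threshold separating moving-target
# points (above threshold) from background points (below threshold).

def frame_difference(prev_frame, curr_frame, threshold):
    """Return (differential image, motion mask).

    Pixels whose absolute inter-frame difference exceeds `threshold` are
    marked True (moving-target points); all others are background points.
    """
    diff = [
        [abs(c - p) for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]
    mask = [[v > threshold for v in row] for row in diff]
    return diff, mask

# Two tiny 3x3 "frames": one pixel brightens sharply between frames.
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
diff, mask = frame_difference(prev, curr, threshold=30)
print(diff[1][1])  # 190
print(mask[1][1])  # True  (moving-target point)
print(mask[0][0])  # False (background point)
```

In a real implementation the absolute difference avoids losing motion where the current frame is darker than the previous one; the patent's formula writes a plain subtraction, so the sign convention here is an assumption.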
In some embodiments, judging whether the focus target is out of focus comprises: judging with an evaluation function, namely calculating an evaluation value of the image with an energy gradient function and comparing it with a preset threshold; if the evaluation value is smaller than the preset threshold, the focus target is out of focus.
The energy gradient function is calculated as follows:
the size of the image is M × N, p(x, y) is the pixel value at position (x, y), and f(p) is the evaluation value of the image. The evaluation value reflects the definition of the image: the larger the evaluation value, the clearer the image and the better its focus.
In some embodiments, whether the focus target is out of focus may also be judged using a variance evaluation method.
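The patent gives the energy gradient formula only as a figure; the sketch below assumes the commonly used form of the energy gradient focus measure (the sum of squared horizontal and vertical neighbour differences) and also shows a simple variance score as the alternative mentioned above. Both functions and the sample images are illustrative:

```python
def energy_gradient(img):
    """Assumed standard energy-gradient sharpness score: sum of squared
    horizontal and vertical neighbour differences. Higher means sharper."""
    M, N = len(img), len(img[0])
    total = 0
    for x in range(M - 1):
        for y in range(N - 1):
            total += (img[x + 1][y] - img[x][y]) ** 2  # vertical gradient
            total += (img[x][y + 1] - img[x][y]) ** 2  # horizontal gradient
    return total

def variance_score(img):
    """Alternative focus measure: variance of all pixel values."""
    pixels = [v for row in img for v in row]
    mean = sum(pixels) / len(pixels)
    return sum((v - mean) ** 2 for v in pixels) / len(pixels)

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]        # strong edges
blurry = [[100, 110, 100], [110, 100, 110], [100, 110, 100]]  # weak edges
print(energy_gradient(sharp) > energy_gradient(blurry))  # True
print(variance_score(sharp) > variance_score(blurry))    # True
```

An in-focus frame yields a larger score under either measure, so comparing the score against a preset threshold (tuned on real footage) gives the out-of-focus decision.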
In some embodiments, the modified Retinex algorithm includes:
carrying out Gaussian filtering on the differential image to obtain a processed image L (x, y);
calculating a pixel value correction coefficient for each color channel of the image L(x, y); the following is the pixel value adjustment factor of the R channel:
where α is the depth-of-field similarity, α_thr is the depth-of-field similarity threshold set according to actual test results, β is the saturation similarity, β_thr is the saturation similarity threshold set according to actual test results, and k is an adjustment factor.
After the pixel value adjustment factor of each channel is obtained, the gray value of each channel of the differential image is multiplied by the corresponding pixel value adjustment factor to obtain a corrected image.
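The λ formula itself appears only as a figure in the original, so the sketch below uses a hypothetical `channel_gain` stand-in (it boosts a channel only when both similarities fall below their thresholds, scaled by k) purely to illustrate the pipeline: Gaussian smoothing of the differential image, then per-channel multiplication by the adjustment factor. All names and the gain formula are assumptions, not the patented factor:

```python
# 3x3 Gaussian kernel; integer weights sum to 16.
GAUSSIAN_3x3 = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]

def gaussian_filter(img):
    """3x3 Gaussian smoothing; border pixels are left unchanged."""
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            acc = 0
            for i in range(3):
                for j in range(3):
                    acc += GAUSSIAN_3x3[i][j] * img[x + i - 1][y + j - 1]
            out[x][y] = acc // 16
    return out

def channel_gain(alpha, alpha_thr, beta, beta_thr, k):
    """Hypothetical stand-in for the patent's adjustment factor: amplify
    only when both similarities drop below their thresholds."""
    if alpha < alpha_thr and beta < beta_thr:
        return 1.0 + k * (1.0 - min(alpha, beta))
    return 1.0

def correct_channel(img, gain):
    """Multiply every pixel by the gain, clamped to the 8-bit range."""
    return [[min(255, int(v * gain)) for v in row] for row in img]

smoothed = gaussian_filter([[10] * 5 for _ in range(5)])
gain = channel_gain(alpha=0.4, alpha_thr=0.6, beta=0.5, beta_thr=0.7, k=0.5)
corrected = correct_channel(smoothed, gain)
print(round(gain, 2))   # 1.3
print(corrected[2][2])  # 13
```

The same `correct_channel` call would be repeated for the G and B channels with their own gains, matching the per-channel multiplication described above.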
In some embodiments, the depth-of-field similarity α is calculated as follows:
where n is the number of targets in the previous frame image, a_i is the depth of field of the i-th target, and b_i is the depth of field of the i-th target in the current frame;
the saturation similarity β is calculated as follows:
where n is the number of targets in the previous frame image, c_i is the saturation of the i-th target, and d_i is the saturation of the i-th target in the current frame.
The invention also discloses a mixed-scene law enforcement video focus local correction system, which comprises the following modules:
a target acquisition module: acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area;
a focus tracking module: tracking a focus target in the moving target area;
a defocus judging module: judging whether the focus target is out of focus;
a target correction module: locally correcting the out-of-focus target using an improved Retinex algorithm.
The invention has the following beneficial effects:
the video focus correction method only needs to correct the differential image, has small operand, uses the improved Retinex algorithm, has smaller operand compared with a neural network method, and is suitable for being used on handheld equipment.
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from the context, "X uses A or B" is intended to mean any of the natural inclusive permutations: if X uses A, X uses B, or X uses both A and B, then "X uses A or B" is satisfied in any of the foregoing instances.
Moreover, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The present disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. Furthermore, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for a given or particular application. Moreover, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
The functional units in the embodiments of the invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or as software functional modules. The integrated modules may also be stored in a computer-readable storage medium if implemented as software functional modules and sold or used as stand-alone products. The storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. The above devices or systems may perform the methods in the corresponding method embodiments.
In summary, the foregoing embodiments are implementations of the present invention, but the implementation of the present invention is not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be regarded as an equivalent replacement and is included within the protection scope of the present invention.
Claims (3)
1. A method for locally correcting the focus of a mixed-scene law enforcement video, characterized by comprising the following steps:
acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area;
tracking a focus target in the moving target area;
judging whether the focus target is out of focus;
locally correcting the out-of-focus target by using an improved Retinex algorithm;
the differential operation comprises subtracting, pixel by pixel, the pixels at the same position in two consecutive frames to obtain a differential image, and comparing each pixel in the differential image with a preset threshold, wherein a point whose pixel value is smaller than the preset threshold is an image background point, and a point whose pixel value is larger than the preset threshold is a moving target point;
the modified Retinex algorithm includes:
carrying out Gaussian filtering on the differential image to obtain a processed image L (x, y);
calculating a pixel value correction coefficient for each color channel of the image L(x, y); the following is the pixel value adjustment factor λ of the R channel:
where α is the depth-of-field similarity, α_thr is the depth-of-field similarity threshold, β is the saturation similarity, β_thr is the saturation similarity threshold, and k is an adjustment factor;
after the pixel value adjustment factor of each channel is obtained, the gray value of each channel of the differential image is multiplied by the corresponding pixel value adjustment factor to obtain a corrected image;
the depth-of-field similarity α is calculated as follows:
where n is the number of targets in the previous frame image, a_i is the depth of field of the i-th target, and b_i is the depth of field of the i-th target in the current frame;
the saturation similarity β is calculated as follows:
where n is the number of targets in the previous frame image, c_i is the saturation of the i-th target, and d_i is the saturation of the i-th target in the current frame.
2. The mixed-scene law enforcement video focus local correction method of claim 1, wherein judging whether the focus target is out of focus comprises: judging with an evaluation function, namely calculating an evaluation value of the image with an energy gradient function and comparing it with a preset threshold; if the evaluation value is smaller than the preset threshold, the focus target is out of focus.
3. A system for locally correcting the focus of a mixed-scene law enforcement video, characterized by comprising the following modules:
a target acquisition module: acquiring images shot by a law enforcement recorder in real time, and performing a differential operation on inter-frame images to acquire a moving target area, wherein the differential operation comprises subtracting, pixel by pixel, the pixels at the same position in two consecutive frames to obtain a differential image, and comparing each pixel in the differential image with a preset threshold, wherein a point whose pixel value is smaller than the preset threshold is an image background point, and a point whose pixel value is larger than the preset threshold is a moving target point;
a focus tracking module: tracking a focus target in the moving target area;
a defocus judging module: judging whether the focus target is out of focus;
a target correction module: locally correcting the out-of-focus target by using an improved Retinex algorithm, the improved Retinex algorithm comprising:
carrying out Gaussian filtering on the differential image to obtain a processed image L(x, y);
calculating a pixel value correction coefficient for each color channel of the image L(x, y); the following is the pixel value adjustment factor λ of the R channel:
where α is the depth-of-field similarity, α_thr is the depth-of-field similarity threshold, β is the saturation similarity, β_thr is the saturation similarity threshold, and k is an adjustment factor; after the pixel value adjustment factor of each channel is obtained, the gray value of each channel of the differential image is multiplied by the corresponding pixel value adjustment factor to obtain a corrected image;
the depth-of-field similarity α is calculated as follows:
where n is the number of targets in the previous frame image, a_i is the depth of field of the i-th target, and b_i is the depth of field of the i-th target in the current frame;
the saturation similarity β is calculated as follows:
where n is the number of targets in the previous frame image, c_i is the saturation of the i-th target, and d_i is the saturation of the i-th target in the current frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211088409.4A CN115499585B (en) | 2022-09-07 | 2022-09-07 | Hybrid scene law enforcement video focus local correction method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115499585A (en) | 2022-12-20
CN115499585B (en) | 2024-04-16
Family
ID=84468432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211088409.4A Active CN115499585B (en) | 2022-09-07 | 2022-09-07 | Hybrid scene law enforcement video focus local correction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115499585B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6516154B1 (en) * | 2001-07-17 | 2003-02-04 | Eastman Kodak Company | Image revising camera and method |
CN104038688A (en) * | 2013-03-07 | 2014-09-10 | 卡西欧计算机株式会社 | Imaging Apparatus Having Optical Zoom Mechanism, And Viewing Angle Correction Method Therefor |
WO2016131300A1 (en) * | 2015-07-22 | 2016-08-25 | 中兴通讯股份有限公司 | Adaptive cross-camera cross-target tracking method and system |
WO2017000576A1 (en) * | 2015-06-30 | 2017-01-05 | 中兴通讯股份有限公司 | Non-contact automatic focus method and device |
CN107507221A (en) * | 2017-07-28 | 2017-12-22 | 天津大学 | With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model |
CN108496350A (en) * | 2017-09-27 | 2018-09-04 | 深圳市大疆创新科技有限公司 | A kind of focusing process method and apparatus |
CN108921857A (en) * | 2018-06-21 | 2018-11-30 | 中国人民解放军61062部队科技装备处 | A kind of video image focus area dividing method towards monitoring scene |
CN110495177A (en) * | 2017-04-13 | 2019-11-22 | 松下电器(美国)知识产权公司 | Code device, decoding apparatus, coding method and coding/decoding method |
CN110610150A (en) * | 2019-09-05 | 2019-12-24 | 北京佳讯飞鸿电气股份有限公司 | Tracking method, device, computing equipment and medium of target moving object |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6526234B1 (en) * | 2001-07-17 | 2003-02-25 | Eastman Kodak Company | Revision suggestion camera and method |
EP1601189A2 (en) * | 2004-05-26 | 2005-11-30 | Fujinon Corporation | Autofocus system |
- 2022-09-07: CN application CN202211088409.4A filed; granted as patent CN115499585B (status: Active)
Non-Patent Citations (5)
Title |
---|
An improved mean-shift tracking algorithm for fast-moving and occluded video objects; Tang Yong, Sun Lei, Sun Bin; Journal of Yanshan University; 2010-01 (01); full text *
An intelligent focusing method for photoelectric measurement equipment based on image technology; Qin Fuzhen, Chen Yujie, Liu Xiaoling; Laser Journal; 2020-09-25 (09); full text *
Research on target detection technology based on a scene-information attention model; Chen Yunbiao, Lan Tian; Information & Computer (Theory Edition); 2017-11-23 (22); full text *
A target tracking algorithm combining improved inter-frame difference with local Camshift; Zhou Wenjing, Chen Wei; Software Guide; 2018-03-15 (03); full text *
Research on nonlinear imaging methods for micro-focus X-ray image separation; Li Zheng, Wang Chunyan, Gao Wenhuan, Liu Yaqiang, Liu Yongkang, Kang Kejun; Atomic Energy Science and Technology; 2001-03-20 (02); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10672112B2 (en) | Method and system for real-time noise removal and image enhancement of high-dynamic range images | |
Lin et al. | Vehicle speed detection from a single motion blurred image | |
CN107292830B (en) | Low-illumination image enhancement and evaluation method | |
CN110728697A (en) | Infrared dim target detection tracking method based on convolutional neural network | |
CN102170526A (en) | Method for calculation of defocus fuzzy core and sharp processing of defocus fuzzy image of defocus fuzzy core | |
CN115661669B (en) | Method and system for monitoring illegal farmland occupancy based on video monitoring | |
CN111161172A (en) | Infrared image column direction stripe eliminating method, system and computer storage medium | |
CN110555866A (en) | Infrared target tracking method for improving KCF feature descriptor | |
CN111967345B (en) | Method for judging shielding state of camera in real time | |
Nieuwenhuizen et al. | Dynamic turbulence mitigation for long-range imaging in the presence of large moving objects | |
Raikwar et al. | An improved linear depth model for single image fog removal | |
Liu et al. | Texture filtering based physically plausible image dehazing | |
CN111192213A (en) | Image defogging adaptive parameter calculation method, image defogging method and system | |
CN115499585B (en) | Hybrid scene law enforcement video focus local correction method and system | |
CN116596792B (en) | Inland river foggy scene recovery method, system and equipment for intelligent ship | |
CN111652821A (en) | Low-light-level video image noise reduction processing method, device and equipment based on gradient information | |
CN111445435A (en) | No-reference image quality evaluation method based on multi-block wavelet transform | |
CN114845042B (en) | Camera automatic focusing method based on image information entropy | |
CN113781368B (en) | Infrared imaging device based on local information entropy | |
CN115564683A (en) | Ship detection-oriented panchromatic remote sensing image self-adaptive enhancement method | |
Hu et al. | Maritime video defogging based on spatial-temporal information fusion and an improved dark channel prior | |
CN116391203A (en) | Method for improving signal-to-noise ratio of image frame sequence and image processing device | |
Eppley et al. | Investigation of edge tracking via image subtraction for refractive index structure parameter estimation | |
CN117710245B (en) | Astronomical telescope error rapid detection method | |
Várkonyi-Kóczy et al. | High dynamic range image based on multiple exposure time synthetization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |