CN106651815B - Method and device for processing Bayer format video image - Google Patents
- Publication number: CN106651815B (application CN201710040061.4A)
- Authority: CN (China)
- Prior art keywords: data, algorithm, highlight, video image, processing
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T 5/00 — Image enhancement or restoration (G06T 5/70; G06T 5/73)
- H04N 23/10 — Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from different wavelengths
- H04N 25/57 — Control of the dynamic range (control of the SSIS exposure)
- G06T 2207/10024 — Color image (image acquisition modality)
- G06T 2207/30232 — Surveillance (subject of image)
Abstract
The application discloses a method and a device for processing a Bayer-format video image. The method comprises the following steps: preprocessing an input Bayer-format video image to generate preprocessed data; processing the preprocessed data to generate highlight data and low-light data; processing the highlight data and the low-light data with a space-domain variation algorithm to generate algorithm data; and outputting a video image from the algorithm data. The method and device can well restore the original image while meeting the requirements of a real-time wide-dynamic video monitoring system.
Description
Technical Field
The invention relates to the field of computer vision, in particular to a method and a device for processing a Bayer format video image.
Background
Driven by rising security requirements across industries and in daily life, monitoring instruments and equipment are continuously developed and have gradually spread into every corner of our lives. Among monitoring methods, video monitoring technology is widely used for its intuitiveness, reliability and large information content, for example in traffic, electric power, stores, department stores, studios, finance, government agencies and service windows, as well as in key departments such as the military, public security, prisons and aerospace. With advances in digital image processing and microelectronics, digital image acquisition, processing, transmission and analysis have become more convenient and faster, and monitoring systems are gradually transitioning from analog to digital.
There are several ways to extend the dynamic range of a digital imaging system; at present they fall mainly into two classes, software extension and hardware extension. Extending the dynamic range of the system in hardware is technically very difficult, and no mature and reliable scheme exists yet. Moreover, such methods require modifying or even redesigning the camera or image sensor, which demands great effort on the hardware side and greatly increases manufacturing cost. The main idea of the software extension method is to image a scene with multiple exposures: different exposure times change the brightness range captured by the system, yielding several images of different exposure levels, which are then synthesized in software into a high dynamic range image to recover the detail of the scene. Its disadvantage is that several pictures of the scene must be shot and processed, which does not meet the real-time requirement of video monitoring. Most improved high-dynamic processing methods operate on RGB images; since the total amount of processed data is unchanged, the gain in operation efficiency is limited and hardware resource consumption is large, making embedded development difficult.
Therefore, a new method and apparatus for processing a Bayer format video image is needed.
The information disclosed in this background section is only for enhancing understanding of the background of the invention, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, the present invention provides a method and an apparatus for processing a Bayer format video image, which can better restore an original image and meet the requirements of a real-time wide-dynamic video monitoring system.
Additional features and advantages of the invention will be set forth in the detailed description which follows, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided a method for processing a Bayer-format video image, the method comprising: preprocessing an input Bayer-format video image to generate preprocessed data; processing the preprocessed data to generate highlight data and low-light data; processing the highlight data and the low-light data with a space-domain variation algorithm to generate algorithm data; and outputting the video image from the algorithm data.
In an exemplary embodiment of the present disclosure, preprocessing an input video image in a Bayer format to generate preprocessed data includes: performing linear spatial filtering on an input video image in a Bayer format to generate filtering data; and carrying out filtering correction on the filtering data to generate preprocessed data.
In an exemplary embodiment of the present disclosure, processing the preprocessed data to generate highlight data and low-light data includes: dividing the preprocessed data into highlight data and low-light data according to the gray value of the video image.
In an exemplary embodiment of the present disclosure, processing the highlight data and the low-light data with the space-domain variation algorithm to generate algorithm data includes: processing the highlight data with the space-domain variation algorithm to generate highlight algorithm data; processing the low-light data with the space-domain variation algorithm to generate low-light algorithm data; and generating the algorithm data from the highlight algorithm data and the low-light algorithm data.
In an exemplary embodiment of the present disclosure, the processing the low light data by the spatial domain variation algorithm to generate the low light algorithm data includes: processing the low light data through a low light compensation part algorithm formula to generate low light algorithm data;
the low light compensation part algorithm formula comprises:
where Y2 is the low-light algorithm data, k is the low-light compensation parameter, I is the pixel value of the input video image, and Y1 is the correction value of the input video image.
In an exemplary embodiment of the present disclosure, processing the highlight data with the space-domain variation algorithm to generate highlight algorithm data includes: processing the highlight data with a highlight compensation formula to generate the highlight algorithm data;
the highlight compensation part algorithm formula comprises:
where Y3 is the highlight algorithm data, α is the highlight compensation parameter, I is the pixel value of the input video image, Y1 is the correction value of the input video image, and MaxA is the maximum pixel value of the input video image.
In one exemplary embodiment of the present disclosure, the highlight compensation parameter ranges from 0.7 to 1.
In an exemplary embodiment of the present disclosure, the algorithm data is generated from the highlight algorithm data and the low-light algorithm data by the following formula:

Y = Y2 + Y3

where Y is the algorithm data, Y2 is the low-light algorithm data, and Y3 is the highlight algorithm data.
In an exemplary embodiment of the present disclosure, the video image is output from the algorithm data as a wide dynamic image.
According to an aspect of the present invention, there is provided an apparatus for processing a Bayer-format video image, the apparatus comprising: a preprocessing module for preprocessing an input Bayer-format video image to generate preprocessed data; a data module for processing the preprocessed data to generate highlight data and low-light data; an algorithm module for processing the highlight data and the low-light data with a space-domain variation algorithm to generate algorithm data; and an output module for outputting the video image from the algorithm data.
According to the method and the device for processing the Bayer format video image, the original image can be well restored, and meanwhile, the requirement of a real-time wide-dynamic video monitoring system is met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings described below are only some embodiments of the invention and other drawings may be derived from those drawings by a person skilled in the art without inventive effort.
Fig. 1 is a flow diagram illustrating a method for processing a Bayer format video image in accordance with an example embodiment.
Fig. 2 is a schematic diagram illustrating a filtering algorithm in a method for processing a Bayer pattern video image according to an exemplary embodiment.
Fig. 3 is a process before-after comparison diagram illustrating a method for processing a Bayer format video image according to another example embodiment.
Fig. 4 is a process before-after comparison diagram illustrating a method for processing a Bayer format video image according to another example embodiment.
Fig. 5 is a process before-after comparison diagram illustrating a method for processing a Bayer format video image according to another example embodiment.
Fig. 6 is a flow chart illustrating a method for processing a Bayer format video image according to another example embodiment.
Fig. 7 is a block diagram illustrating an apparatus for processing a Bayer format video image according to an example embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be appreciated by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or flow charts in the drawings are not necessarily required to practice the present invention and are, therefore, not intended to limit the scope of the present invention.
Fig. 1 is a flow diagram illustrating a method for processing a Bayer format video image in accordance with an example embodiment.
As shown in fig. 1, in S102, the input Bayer-format video image is preprocessed to generate preprocessed data. For a color image, the basic color components R, G and B must be collected; the simplest approach uses filters, where a red filter transmits red wavelengths, a green filter transmits green wavelengths, and a blue filter transmits blue wavelengths. Collecting all three primaries this way would require three filter layers, which is expensive and hard to manufacture because the layers must be aligned at every pixel. A Bayer-format image instead sets different colors on a single filter mosaic; since analysis of human color perception shows the eye is most sensitive to green, a Bayer image contains as many green pixels as red and blue pixels combined. Preprocessing the input Bayer-format video image may, for example, involve linear spatial filtering of the input image followed by filter correction of the filtered data.
In S104, the preprocessed data is processed to generate highlight data and low-light data. In an embodiment of the invention, to process the image better, the preprocessed data is divided into highlight data and low-light data, which are then handled separately. The preprocessed data may be split into highlight data and low-light data based, for example, on a weighted average of pixels of the input image and a predetermined threshold, or on the gray value of each pixel and another predetermined value; the invention is not limited in this respect.
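The split described above can be sketched as follows; the thresholding rule (a single gray-value threshold, defaulting to the frame mean) is illustrative only, since the patent leaves the exact splitting rule open.

```python
import numpy as np

def split_by_threshold(pre, thresh=None):
    """Split preprocessed Bayer data into highlight and low-light parts.

    `thresh` is an illustrative gray-value threshold; it defaults to the
    frame mean. The two outputs are disjoint and sum back to the input.
    """
    if thresh is None:
        thresh = pre.mean()
    high_mask = pre > thresh              # pixels brighter than the threshold
    highlight = np.where(high_mask, pre, 0.0)
    lowlight = np.where(high_mask, 0.0, pre)
    return highlight, lowlight
```

Because each pixel lands in exactly one of the two arrays, the later per-region compensations can simply be summed.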
In S106, the highlight data and the low-light data are processed with the space-domain variation algorithm to generate algorithm data. The space-domain variation algorithm is one kind of tone mapping algorithm; tone mapping is a computer-graphics technique for approximately displaying high dynamic range images on media of limited dynamic range, such as prints, CRT or LCD displays, and projectors. In essence, tone mapping applies a strong contrast attenuation to bring scene brightness into a displayable range while preserving information that matters for representing the original scene, such as image detail and color. Many tone mapping algorithms exist: some synthesize a set of differently exposed versions of the input image, while others are contrast- or gradient-domain methods that focus on contrast preservation rather than brightness mapping; these usually yield very sharp images because contrast detail is well preserved, at the cost of flattening the overall image contrast. In the embodiment of the invention, the highlight data and the low-light data are each processed with a space-domain variation algorithm to generate the algorithm data.
In S108, the video image is output from the algorithm data, for example using an existing data processing algorithm; the video image is output as a wide dynamic image.
With the above method for processing a Bayer-format video image, dividing the image data into highlight data and low-light data and processing each with the space-domain variation algorithm allows the original image to be well restored while meeting the requirements of a real-time wide-dynamic video monitoring system.
It should be clearly understood that the present disclosure describes how to make and use particular examples, but the principles of the present disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
In an exemplary embodiment of the present disclosure, preprocessing an input video image in a Bayer format to generate preprocessed data includes: performing linear spatial filtering on an input video image in a Bayer format to generate filtering data; and carrying out filtering correction on the filtering data to generate preprocessed data.
The practical purpose of linear spatial filtering is to improve image quality, including removing high-frequency noise and interference, enhancing image edges, linear enhancement, and deblurring. In linear spatial filtering, a filter of a given size is specified and the output response at each pixel is a linear operation over the pixels in its neighborhood. Linear spatial filtering is essentially a convolution or correlation of two matrices: the filter (or mask, itself a two-dimensional matrix) is convolved or correlated with the image matrix.
The algorithm is expressed as:

Y1 = imfilter(I, GH, 'conv') + Mean·β

which can be formulated as:

Y1 = I * GH + Mean·β

where Y1 is the correction value of the input video image, imfilter is a linear spatial filtering function, and I is the pixel value of the input video image. The input is a 12-bit RGGB Bayer image that requires no demosaicing, which greatly improves computation speed without affecting the overall result. GH is the filter matrix, 'conv' denotes convolution (of the input pixel values I with the filter matrix GH), Mean is the average value of the whole frame, and β is an adjustment coefficient set manually according to system requirements.
According to the method for processing the video image in the Bayer format, the noise of the input video image can be reduced by preprocessing the Bayer image data.
Fig. 2 is a schematic diagram illustrating the filtering algorithm in a method for processing a Bayer-format video image according to an exemplary embodiment. The principle of the filtering algorithm is shown in fig. 2. For example, H is a 5 × 5 weighted-average filter mask, rotationally symmetric, with larger weights closer to the center. In this embodiment, a 5 × 5 window is taken around each pixel of the original image as its center; the pixel values in the window are weighted by the corresponding coefficients of the H matrix, summed, and divided by 256. If the center pixel is at or near the border, the border is replicated to complete the 5 × 5 window.
The filter correction is the product of the whole-frame average and the adjustment coefficient (Mean·β).
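A minimal sketch of the preprocessing step Y1 = I * GH + Mean·β, assuming a 5 × 5 binomial mask whose weights sum to 256 (consistent with the divide-by-256 normalization described above; the patent's exact H matrix is not reproduced in the text) and replicated borders; the default β is likewise an assumed value:

```python
import numpy as np

# Assumed 5x5 binomial mask: rotationally symmetric, center-weighted,
# with weights summing to 256 as in the normalization described above.
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
GH = np.outer(b, b)

def prefilter(I, beta=0.1):
    """Linear spatial filtering plus filter correction: conv(I, GH)/256 + Mean*beta."""
    I = I.astype(np.float64)
    padded = np.pad(I, 2, mode='edge')    # replicate the border pixels
    out = np.zeros_like(I)
    for dy in range(5):                   # accumulate the 25 shifted taps
        for dx in range(5):
            out += GH[dy, dx] * padded[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    out /= GH.sum()                       # divide by 256, the sum of the weights
    return out + I.mean() * beta          # filter correction term Mean*beta
```

On a constant frame the filter leaves values unchanged and only the Mean·β correction shifts the output, which makes the two terms easy to verify separately.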
In an exemplary embodiment of the present disclosure, processing the preprocessed data to generate highlight data and low-light data includes: dividing the preprocessed data into highlight data and low-light data according to the gray value of the video image.
In an exemplary embodiment of the present disclosure, processing the highlight data and the low-light data with the space-domain variation algorithm to generate algorithm data includes: processing the highlight data with the space-domain variation algorithm to generate highlight algorithm data; processing the low-light data with the space-domain variation algorithm to generate low-light algorithm data; and generating the algorithm data from the highlight algorithm data and the low-light algorithm data.
According to the method for processing the Bayer format video image, better effect can be obtained by processing the Bayer image data through the spatial domain variation algorithm, the image restoration degree is high, and the real-time requirement of a video monitoring system is met.
Tone mapping methods can be divided into global algorithms (spatially invariant) and local algorithms (spatially variant). In a global algorithm, the processing of each pixel is independent of its spatial position and of the values of surrounding pixels: all pixels are processed by the same mapping function.
Because the mapping curve of a spatially invariant algorithm is uniform and fixed, such algorithms are simple, fast and easy to implement, with low mapping complexity. The simplicity of the mapping, however, affects the final result: detailed features of the image may be lost, and information such as local contrast in the original image suffers accordingly.
The space-domain variation algorithm differs. Compared with the spatially invariant algorithm, it focuses on the relationship between the current pixel and its surrounding pixels; once the neighborhood changes, the corresponding mapping also changes.
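The contrast between the two classes can be shown with a toy sketch; both curves below are invented for illustration and are not the patent's formulas. A global map sends equal pixel values to equal outputs, while a locally adaptive map lets the neighborhood change the result.

```python
import numpy as np

def global_map(I, gamma=0.5):
    """Spatially invariant: one fixed gamma curve for every pixel."""
    m = I.max()
    return m * (I / m) ** gamma

def local_map(I, gamma=0.5):
    """Spatially variant: the same curve scaled by the 3x3 local mean,
    so equal pixel values can map to different outputs."""
    p = np.pad(I, 1, mode='edge')
    local_mean = sum(p[dy:dy + I.shape[0], dx:dx + I.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    m = I.max()
    return m * (I / m) ** gamma * (local_mean / I.mean())
```

Two pixels with the same value but different neighborhoods receive the same output under `global_map` and different outputs under `local_map`, which is exactly the distinction drawn above.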
In an exemplary embodiment of the present disclosure, the processing the low light data by the spatial domain variation algorithm to generate the low light algorithm data includes: and processing the low light data through a low light compensation part algorithm formula to generate low light algorithm data.
In an exemplary embodiment of the present disclosure, the low light compensation partial algorithm formula includes:
where Y2 is the low-light algorithm data, k is the low-light compensation parameter, I is the pixel value of the input video image, and Y1 is the correction value of the input video image.
In an exemplary embodiment of the present disclosure, processing the highlight data with the space-domain variation algorithm to generate highlight algorithm data includes: processing the highlight data with the highlight compensation formula to generate the highlight algorithm data.
In an exemplary embodiment of the present disclosure, the highlight compensation part algorithm formula includes:
where Y3 is the highlight algorithm data, α is the highlight compensation parameter, I is the pixel value of the input video image, Y1 is the correction value of the input video image, and MaxA is the maximum pixel value of the input video image.
In one exemplary embodiment of the present disclosure, the highlight compensation parameter ranges from 0.7 to 1.
In an exemplary embodiment of the present disclosure, the algorithm data is generated from the highlight algorithm data and the low-light algorithm data by the following formula:

Y = Y2 + Y3

where Y is the algorithm data, Y2 is the low-light algorithm data, and Y3 is the highlight algorithm data.
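Since every pixel is assigned to exactly one region, the Y = Y2 + Y3 combination can be sketched as below. The two compensation curves are passed in as callables because the patent's compensation formulas appear only as images and are not reproduced in this text.

```python
import numpy as np

def wide_dynamic(Y1, I, low_fn, high_fn, thresh):
    """Combine per-region compensations: Y = Y2 + Y3.

    `low_fn(I, Y1)` and `high_fn(I, Y1)` stand in for the low-light and
    highlight compensation formulas; each result is zeroed outside its
    own region, so the sum never mixes the two compensations at a pixel.
    """
    low = I <= thresh
    Y2 = np.where(low, low_fn(I, Y1), 0.0)   # low-light compensation only
    Y3 = np.where(low, 0.0, high_fn(I, Y1))  # highlight compensation only
    return Y2 + Y3
```

Passing identity functions for both regions returns Y1 unchanged, confirming that the masking is complementary.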
Fig. 3, 4, and 5 are before-after comparison diagrams illustrating a method for processing a Bayer-format video image according to another exemplary embodiment. As the figures show, the method in the embodiment of the invention effectively improves the dynamic range of the image, preserves its chrominance information well, and visibly enhances its detail. At the same time, the method has simple steps, good robustness and real-time performance.
Fig. 6 is a flow chart illustrating a method for processing a Bayer format video image according to another example embodiment.
As shown in fig. 6, a Bayer format video image is input S602.
S604, linear spatial filtering is performed on the input image.
And S606, filtering and correcting the filtered image.
S608, wide dynamic processing is performed on the filtered video image based on the improved tone mapping method.
S610, the processed wide dynamic image is output.
Here, the linear spatial filtering in S604 is a preprocessing of the video image intended to reduce noise in the input. In the embodiment of the invention, the wide dynamic algorithm is applied directly to the Bayer-format video image; since each pixel carries a single raw value instead of three RGB values, the amount of processed data is fundamentally reduced by two thirds compared with RGB-based algorithms, which greatly reduces the computation of the wide dynamic algorithm without degrading the video image and improves algorithm efficiency.
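The flow S602–S610 can be sketched end to end on a single-channel Bayer frame. The 3 × 3 smoothing and the two constant-gain compensations below are placeholders (the patent's 5 × 5 mask and compensation formulas are not reproduced in the text); the single-channel input is what yields the two-thirds data reduction noted above.

```python
import numpy as np

def process_bayer_frame(raw, beta=0.1, k=0.5, alpha=0.8):
    """End-to-end sketch of fig. 6 on one single-channel Bayer frame."""
    I = raw.astype(np.float64)
    # S604/S606: linear spatial filtering (3x3 box stand-in) plus Mean*beta
    p = np.pad(I, 1, mode='edge')
    Y1 = sum(p[dy:dy + I.shape[0], dx:dx + I.shape[1]]
             for dy in range(3) for dx in range(3)) / 9.0
    Y1 += I.mean() * beta
    # S608: split at the frame mean and apply placeholder compensations
    t = I.mean()
    Y2 = np.where(I <= t, k * Y1, 0.0)      # low-light boost (assumed form)
    Y3 = np.where(I > t, alpha * Y1, 0.0)   # highlight roll-off (assumed form)
    return Y2 + Y3                          # S610: wide dynamic output
```

The function accepts the raw mosaic as a 2-D array, matching the point above that no demosaicing happens before the wide dynamic step.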
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. The computer program, when executed by the CPU, performs the functions defined by the method provided by the present invention. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic or optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Fig. 7 is a block diagram illustrating an apparatus for processing a Bayer format video image according to an example embodiment.
The preprocessing module 702 is configured to preprocess an input video image in a Bayer format to generate preprocessed data.
The data module 704 is used for processing the preprocessed data according to a predetermined rule to generate highlight data and low-light data.
The algorithm module 706 is configured to process the highlight data and the low-light data through a spatial domain variation algorithm to generate algorithm data.
The output module 708 is used for outputting video images through algorithm data.
In an exemplary embodiment of the present disclosure, the preprocessing module includes: a filtering submodule for performing linear spatial filtering on an input Bayer-format video image to generate filtering data; and a correction submodule for performing filter correction on the filtering data to generate the preprocessed data.
In an exemplary embodiment of the disclosure, the algorithm module includes: a highlight submodule for processing the highlight data with the space-domain variation algorithm to generate highlight algorithm data; and a low-light submodule for processing the low-light data with the space-domain variation algorithm to generate the low-light algorithm data.
The device in the embodiment of the invention can be implemented in an embedded FPGA (field programmable gate array) and applied to a camera or video camera requiring real-time high dynamic range.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be modified accordingly in one or more apparatuses unique from the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions enabling a computing device (which can be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present invention.
Those skilled in the art will readily appreciate from the foregoing detailed description that the method and apparatus for processing a Bayer format video image according to embodiments of the present invention have one or more of the following advantages.
According to some embodiments, the method for processing the video image in the Bayer format can better restore the original image and meet the requirements of a real-time wide-dynamic video monitoring system by dividing the image data into highlight data and low-light data and respectively processing the highlight data and the low-light data by using a spatial domain variation algorithm.
According to other embodiments, the method for processing the video image in the Bayer format can reduce the noise of the input video image by preprocessing the Bayer image data.
According to some embodiments, the method for processing the Bayer format video image processes the Bayer image data through the spatial domain variation algorithm, achieving good results with high image restoration fidelity while meeting the real-time requirements of a video monitoring system.
Exemplary embodiments of the present invention are specifically illustrated and described above. It is to be understood that the invention is not limited to the precise construction, arrangements, or instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
In addition, the structures, proportions, and sizes shown in the drawings of this specification serve only to illustrate the disclosed content so that it can be understood and read by those skilled in the art; they do not limit the conditions under which the present disclosure can be implemented and carry no technical significance in themselves. Any modification of structure, change of proportion, or adjustment of size that does not affect the technical effects or purposes achievable by the present disclosure still falls within the scope covered by the disclosed technical content. The terms "above", "first", "second", and "a" used in this specification are for clarity of description only and are not intended to limit the scope of the present disclosure; changes or adjustments of their relative relationships, without substantial technical changes, also fall within that scope.
Claims (8)
1. A method for processing a Bayer format video image, comprising:
preprocessing an input video image in a Bayer format to generate preprocessed data;
processing the preprocessed data to generate highlight data and low-light data;
processing the highlight data and the low-light data by using a spatial domain variation algorithm to generate algorithm data; and
outputting a video image through the algorithm data;
wherein the processing the highlight data and the low-light data by using a spatial domain variation algorithm to generate algorithm data comprises:
processing the highlight data through the spatial domain variation algorithm to generate highlight algorithm data;
processing the low-light data through the spatial domain variation algorithm to generate low-light algorithm data;
generating the algorithm data through the highlight algorithm data and the low-light algorithm data;
processing the low light data through the spatial domain variation algorithm to generate low light algorithm data, including:
processing the low light data through a low light compensation part algorithm formula to generate low light algorithm data;
the low light compensation part algorithm formula comprises:
wherein Y2 is the low-light algorithm data, k is the low-light compensation parameter, I is the pixel value of the input video image, and Y1 is the correction value of the input video image.
2. The method of claim 1, wherein the preprocessing the input Bayer formatted video image to generate preprocessed data comprises:
performing linear spatial filtering on an input video image in a Bayer format to generate filtering data; and
performing filtering correction on the filtering data to generate the preprocessed data.
3. The method of claim 1, wherein said processing said pre-processed data to generate highlight data and low-light data comprises:
dividing the preprocessed data into the highlight data and the low-light data according to the gray value of the video image.
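Claim 3 states only that the preprocessed data is divided into highlight and low-light parts according to the gray value. The sketch below is a minimal illustration of such a split, assuming a simple fixed threshold; the threshold value 128 and the function name are assumptions, not taken from the patent.

```python
import numpy as np

def split_by_gray(pre, threshold=128):
    """Split preprocessed data into highlight and low-light parts by gray value.

    Assumption: a fixed gray threshold; pixels assigned to one branch are
    zeroed in the other, so the two parts partition the frame.
    """
    highlight = np.where(pre >= threshold, pre, 0)
    lowlight = np.where(pre < threshold, pre, 0)
    return highlight, lowlight
```

Because every pixel lands in exactly one branch, the two outputs can later be recombined by a per-pixel sum without double-counting.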
4. The method of claim 1, wherein said processing said highlight data through said spatial domain variation algorithm to generate highlight algorithm data comprises:
processing the highlight data through a highlight compensation part algorithm formula to generate highlight algorithm data;
the highlight compensation part algorithm formula comprises:
wherein Y3 is the highlight algorithm data, α is the highlight compensation parameter, I is the pixel value of the input video image, Y1 is the correction value of the input video image, and MaxA is the maximum pixel value of the input video image.
5. The method of claim 4, wherein the highlight compensation parameter is in the range of 0.7 to 1.
6. The method of claim 5, wherein said generating said algorithm data from said highlight algorithm data and said low light algorithm data comprises the formula:
Y = Y2 + Y3
wherein Y is the algorithm data, Y2 is the low-light algorithm data, and Y3 is the highlight algorithm data.
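Claim 6 discloses the combination step explicitly: the final algorithm data is the per-pixel sum Y = Y2 + Y3. Since the split of claim 3 assigns each pixel to exactly one branch, the sum merges the two compensated images back into a single frame. A minimal sketch of this step (the function name is assumed; the compensation formulas that produce Y2 and Y3 are not reproduced in the text above):

```python
import numpy as np

def combine_algorithm_data(y2_lowlight, y3_highlight):
    """Merge the compensated branches per claim 6: Y = Y2 + Y3."""
    y2 = np.asarray(y2_lowlight, dtype=np.float64)
    y3 = np.asarray(y3_highlight, dtype=np.float64)
    # Each pixel belongs to exactly one branch (the other branch holds 0),
    # so the sum reconstitutes the full frame without double-counting.
    return y2 + y3
```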
7. The method of claim 1, wherein the video image output through the algorithm data is a wide dynamic range image.
8. An apparatus for processing a Bayer format video image, comprising:
the preprocessing module is used for preprocessing an input video image in a Bayer format to generate preprocessing data;
the data module is used for processing the preprocessed data to generate highlight data and low-light data;
the algorithm module is used for processing the highlight data and the low-light data by utilizing a spatial domain variation algorithm to generate algorithm data; and
the output module is used for outputting a video image through the algorithm data;
wherein the algorithm module comprises:
the highlight algorithm data module is used for processing the highlight data through the spatial domain variation algorithm to generate highlight algorithm data;
the low-light algorithm data module is used for processing the low-light data through the spatial domain variation algorithm to generate low-light algorithm data;
the algorithm data module is used for generating the algorithm data through the highlight algorithm data and the low-light algorithm data;
the low light algorithm data module comprises:
the low light compensation algorithm data module is used for processing the low light data through a low light compensation partial algorithm formula to generate the low light algorithm data;
the low light compensation part algorithm formula comprises:
wherein Y2 is the low-light algorithm data, k is the low-light compensation parameter, I is the pixel value of the input video image, and Y1 is the correction value of the input video image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710040061.4A CN106651815B (en) | 2017-01-18 | 2017-01-18 | Method and device for processing Bayer format video image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106651815A CN106651815A (en) | 2017-05-10 |
CN106651815B true CN106651815B (en) | 2020-01-17 |
Family
ID=58841992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710040061.4A Expired - Fee Related CN106651815B (en) | 2017-01-18 | 2017-01-18 | Method and device for processing Bayer format video image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106651815B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110111269B (en) * | 2019-04-22 | 2023-06-06 | 深圳久凌软件技术有限公司 | Low-illumination imaging algorithm and device based on multi-scale context aggregation network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102457669A (en) * | 2010-10-15 | 2012-05-16 | 华晶科技股份有限公司 | Image processing method |
CN102903081A (en) * | 2012-09-07 | 2013-01-30 | 西安电子科技大学 | Low-light image enhancement method based on red green blue (RGB) color model |
CN103729873A (en) * | 2013-12-31 | 2014-04-16 | 天津大学 | Content-aware ambient light sampling method |
CN105518717A (en) * | 2015-10-30 | 2016-04-20 | 厦门中控生物识别信息技术有限公司 | Face recognition method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8374453B2 (en) * | 2010-11-10 | 2013-02-12 | Raytheon Company | Integrating image frames |
JP5668105B2 (en) * | 2013-06-25 | 2015-02-12 | アキュートロジック株式会社 | Image processing apparatus, image processing method, and image processing program |
- 2017-01-18: CN application CN201710040061.4A, patent CN106651815B/en, not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN106651815A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Rao et al. | A Survey of Video Enhancement Techniques. | |
Rajkumar et al. | A comparative analysis on image quality assessment for real time satellite images | |
He et al. | FHDe2Net: Full high definition demoireing network | |
CN111402258A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN106780417A (en) | A kind of Enhancement Method and system of uneven illumination image | |
CN106920221A (en) | Take into account the exposure fusion method that Luminance Distribution and details are presented | |
CN109035155B (en) | Multi-exposure image fusion method for removing halation | |
Hanumantharaju et al. | Color image enhancement using multiscale retinex with modified color restoration technique | |
US8488899B2 (en) | Image processing apparatus, method and recording medium | |
Kinoshita et al. | Automatic exposure compensation using an image segmentation method for single-image-based multi-exposure fusion | |
CN106709890B (en) | Method and device for low-illumination video image processing | |
Wang et al. | Enhancement for dust-sand storm images | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
Fan et al. | Multiscale cross-connected dehazing network with scene depth fusion | |
Lo et al. | High dynamic range (HDR) video image processing for digital glass | |
Lee et al. | Image enhancement approach using the just-noticeable-difference model of the human visual system | |
CN106651815B (en) | Method and device for processing Bayer format video image | |
Wen et al. | A survey of image dehazing algorithm based on retinex theory | |
Li et al. | Content adaptive bilateral filtering | |
Lang et al. | A real-time high dynamic range intensified complementary metal oxide semiconductor camera based on FPGA | |
Wang et al. | Nighttime image dehazing using color cast removal and dual path multi-scale fusion strategy | |
Simon et al. | Contrast enhancement of color images using improved Retinex method | |
US11647298B2 (en) | Image processing apparatus, image capturing apparatus, image processing method, and storage medium | |
WO2020241337A1 (en) | Image processing device | |
Xu et al. | Attention‐based multi‐channel feature fusion enhancement network to process low‐light images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
TA01 | Transfer of patent application right |
Effective date of registration: 20191211 Address after: 100193 room 307, east area, Sansheng building, building 21, yard 10, northwest Wangdong Road, Haidian District, Beijing Applicant after: Julong science and Technology Co Ltd Address before: 100094, No. three, building 23, building 8, northeast Wang Xi Road, Beijing, Haidian District, 301 Applicant before: Julong wisdom Technology Co., Ltd. |
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200117 Termination date: 20210118 |