CN112261390A - Vehicle-mounted camera equipment and image optimization device and method thereof - Google Patents
- Publication number
- CN112261390A (application number CN202010841212.8A)
- Authority
- CN
- China
- Prior art keywords
- image frame
- original image
- frame
- optical flow
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The embodiments of the invention provide a vehicle-mounted camera device and an image optimization device and method for it. The optimization device is connected to the camera unit of the vehicle-mounted camera device and comprises: a frame extraction module for extracting original image frames frame by frame from the original video captured and provided by the camera unit; an initialization module that obtains a reference image frame; a high-brightness region detection module that, starting from the second original image frame, detects whether a high-brightness region exists; an optical flow calculation and judgment module that judges whether the high-brightness region exhibits an optical flow mutation relative to the coinciding region of the reference image frame; a pixel compensation module that performs pixel compensation on the high-brightness region of the original image frame to obtain a pixel-compensated image frame; a white balance module that applies white balance processing to original image frames without a high-brightness region, original image frames without an optical flow mutation, and pixel-compensated image frames; and an updating module that updates the reference image frame. The embodiments can perform white balance on images effectively.
Description
Technical Field
The embodiments of the invention relate to the technical field of motor vehicle image processing, and in particular to a vehicle-mounted camera device and an image optimization device and method for it.
Background
While a motor vehicle is being driven, its vehicle-mounted camera usually works in an open environment, where specular reflections or images of high-brightness light sources form white spots in the real-time images the camera acquires. A conventional image optimization system for a motor vehicle applies white balance processing to the real-time image, and existing automatic white balance methods generally pick effective reference points or reference areas on the original image and balance the image against them. In the open environment described above, however, the existing rules for selecting reference points or areas tend to pick the high-brightness area as the reference target, and a white balance algorithm based on such a target clearly conflicts with the goal of white balancing, so the optimized real-time image ends up differing substantially from the actual scene.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide an image optimization device for a vehicle-mounted camera apparatus, which can effectively perform white balance on an image.
An embodiment of the present invention is directed to providing a vehicle-mounted imaging apparatus capable of effectively performing white balance on an image.
A further technical problem to be solved by embodiments of the present invention is to provide an image optimization method for a vehicle-mounted camera device, which can effectively perform white balance on an image.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions: an image optimization device of an on-vehicle camera device, connected with a camera device of the on-vehicle camera device, comprising:
the frame extraction module is connected with the camera device and used for extracting original image frames from the original video images collected and provided by the camera device frame by frame;
the initialization module is connected with the frame extraction module and used for processing a first frame of original image frame by adopting a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
the high-brightness region detection module is connected with the frame extraction module and is used for analyzing whether a high-brightness region exists in the original image frame from a second original image frame by frame;
the optical flow calculation and judgment module is connected with the high-brightness area detection module and the initialization module and is used for analyzing and judging whether optical flow mutation exists in the high-brightness area compared with an area superposed with the high-brightness area in the reference image frame when the high-brightness area detection module detects that the high-brightness area exists in the original image frame;
the pixel compensation module is connected with the optical flow calculation and judgment module and is used for performing pixel compensation on a high-brightness area in the original image frame with the high-brightness area when the optical flow calculation and judgment module judges that the optical flow mutation exists so as to obtain a pixel compensation image frame;
the white balance module is respectively connected with the high-brightness area detection module, the optical flow calculation and judgment module and the pixel compensation module and is used for carrying out white balance processing on the original image frame without the high-brightness area, the original image frame without the optical flow mutation and the pixel compensation image frame; and
the updating module is used for updating the reference image frame with the latest white-balance-processed image frame.
Further, the highlight area detection module includes:
the device comprises an extraction unit, a threshold segmentation unit and a brightness extraction unit, wherein the extraction unit is used for carrying out threshold segmentation on an R channel image of an original image frame from a second original image frame to extract a brightness area in the R channel image; and
and the highlight judging unit is used for respectively determining the gray values of the corresponding areas of the brightness areas in the G channel image and the B channel image of the original image frame and the R channel image, judging whether the gray values meet a preset gray threshold value, and if so, judging that the brightness area in the R channel image is the high-brightness area of the original image frame.
Further, the optical flow calculating and determining module includes:
the edge point calculating unit is used for calculating each edge point of the high-brightness area in the original image frame by adopting a pre-stored pattern detection model;
the optical flow calculating unit is used for calculating the optical flow of each edge point by adopting a prestored optical flow calculating model;
a difference value calculating unit for calculating the sum of optical flow difference values of each edge point of a high-brightness area of the original image frame and each edge point of an area corresponding to the high-brightness area in the reference image frame respectively; and
a mutation judging unit for comparing the sum of optical flow differences with a preset optical flow mutation threshold and judging that the optical flow of the high-brightness area of the original image frame has a mutation when the sum exceeds the threshold.
Further, the pixel compensation module comprises:
a spatial model calculation unit configured to calculate a spatial motion model of the imaging apparatus;
the replacing target determining unit is used for calculating an original image area of the position corresponding to the high-brightness area in the reference image frame according to the space motion model; and
a compensation unit for replacing the high-brightness area in the original image frame with the original image area and performing edge fusion between the original image frame and the edges of the original image area substituted into the high-brightness area, to obtain the pixel-compensated image frame.
On the other hand, in order to solve the above technical problem, an embodiment of the present invention provides the following technical solutions: a vehicle-mounted camera device comprises a camera device for collecting original video images around a motor vehicle and an image optimization device connected with the camera device and used for optimizing the original video images, wherein the image optimization device is the image optimization device.
In another aspect, to solve the above technical problem, an embodiment of the present invention provides the following technical solutions: an image optimization method of an on-vehicle image pickup apparatus, comprising the steps of:
extracting original image frames frame by frame from original video images acquired and provided by a camera device of the vehicle-mounted camera equipment;
processing a first frame of original image frame by adopting a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
analyzing, frame by frame and starting from the second original image frame, whether a high-brightness area exists in the original image frame;
when detecting that a high-brightness area exists in an original image frame, analyzing and judging whether the high-brightness area has an optical flow mutation compared with an area which is overlapped with the high-brightness area in the reference image frame;
performing pixel compensation on a high-brightness area in the original image frame with the high-brightness area when the optical flow mutation is judged to exist so as to obtain a pixel compensation image frame;
performing white balance processing on the original image frames without a high-brightness area, the original image frames without an optical flow mutation, and the pixel-compensated image frames; and
updating the reference image frame with the latest white-balance-processed image frame.
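As a hedged illustration, the method steps above can be sketched as a single loop. The four callables (`detect_highlight`, `has_flow_mutation`, `compensate`, `white_balance`) are hypothetical stand-ins for the patent's detection, judgment, compensation, and white balance modules, not names taken from the patent:

```python
def optimize_frames(frames, detect_highlight, has_flow_mutation,
                    compensate, white_balance):
    """Frame-by-frame optimization loop sketched from the claimed steps."""
    # First frame: conventional white balance, stored as the reference frame.
    reference = white_balance(frames[0])
    optimized = [reference]
    for frame in frames[1:]:
        region = detect_highlight(frame)  # None if no high-brightness region
        if region is not None and has_flow_mutation(frame, reference, region):
            # Compensate the mutated high-brightness region before balancing.
            frame = compensate(frame, reference, region)
        result = white_balance(frame)
        optimized.append(result)
        reference = result                # update the reference frame
    return optimized
```

Each processed frame becomes the reference for the next, giving the cyclic update described in the final step.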
Further, the analyzing, frame by frame, whether a high brightness region exists in the original image frame starting from the second original image frame specifically includes:
starting from a second frame of original image frame, carrying out threshold segmentation on an R channel image of the original image frame, and extracting a brightness region in the R channel image; and
determining the gray values of the regions of the G channel image and the B channel image of the original image frame that correspond to the luminance region in the R channel image, judging whether those gray values all satisfy a predetermined gray threshold, and, if so, judging the luminance region in the R channel image to be a high-brightness region of the original image frame.
Further, when it is detected that a high-brightness region exists in the original image frame, analyzing and determining whether the high-brightness region has an optical flow discontinuity compared with the reference image frame specifically includes:
calculating each edge point of a high-brightness area in the original image frame by adopting a pre-stored pattern detection model;
calculating the optical flow of each edge point by adopting a prestored optical flow calculation model;
respectively calculating the sum of optical flow difference values of each edge point of a high-brightness area of the original image frame and each edge point of an area corresponding to the high-brightness area in the reference image frame; and
comparing the sum of the optical flow differences with a preset optical flow mutation threshold, and judging that the optical flow of the high-brightness area of the original image frame has a mutation when the sum exceeds the threshold.
Further, the pixel compensation of the high-brightness area in the original image frame with the high-brightness area when it is determined that there is an abrupt change in optical flow to obtain a pixel compensated image frame specifically includes:
calculating a spatial motion model of the camera device;
calculating an original image area of a position corresponding to the high-brightness area in the reference image frame according to the space motion model; and
replacing the high-brightness area in the original image frame with the original image area, and performing edge fusion between the original image frame and the edges of the original image area substituted into the high-brightness area, to obtain the pixel-compensated image frame.
With the above technical scheme, the embodiments of the invention have at least the following beneficial effects. The frame extraction module extracts original image frames from the original video frame by frame. The initialization module processes the first original image frame with a pre-stored conventional white balance processing model and stores the processed frame as the reference image frame. The high-brightness region detection module then analyzes, frame by frame, whether a high-brightness region exists in the original image frame. When it detects one, the optical flow calculation and judgment module analyzes whether the high-brightness region exhibits an optical flow mutation relative to the coinciding region of the reference image frame. If it does, the pixel compensation module performs pixel compensation on the high-brightness region to obtain a pixel-compensated image frame, so that a mutated high-brightness region is compensated in advance and cannot distort the white balance. The white balance module then applies white balance processing to original image frames without a high-brightness region, original image frames without an optical flow mutation, and the pixel-compensated image frames. Finally, the updating module updates the reference image frame with the latest white-balance-processed image frame for the next round of optimization, so the steps cycle and a better overall white balance effect is obtained.
Drawings
Fig. 1 is a schematic block diagram of an alternative embodiment of the onboard camera apparatus of the present invention.
Fig. 2 is a schematic structural block diagram of a highlight area detection module according to an alternative embodiment of the image optimization apparatus of the vehicle-mounted camera device of the present invention.
Fig. 3 is a schematic structural block diagram of an optical flow calculating and determining module according to an alternative embodiment of the image optimizing apparatus of the vehicle-mounted camera device of the present invention.
Fig. 4 is a schematic structural block diagram of a pixel compensation module according to an alternative embodiment of the image optimization device of the vehicle-mounted camera apparatus of the present invention.
Fig. 5 is a flowchart illustrating steps of an alternative embodiment of the image optimization method for the vehicle-mounted camera device according to the present invention.
Fig. 6 is a schematic flow chart diagram of an alternative embodiment of the image optimization method of the vehicle-mounted camera device according to the invention.
Fig. 7 is a specific flowchart of step S3 in an alternative embodiment of the image optimization method for the vehicle-mounted camera device according to the present invention.
Fig. 8 is a specific flowchart of step S4 in an alternative embodiment of the image optimization method for the vehicle-mounted camera device according to the present invention.
Fig. 9 is a specific flowchart of step S5 in an alternative embodiment of the image optimization method for the vehicle-mounted camera device according to the present invention.
Detailed Description
The present application will now be described in further detail with reference to the accompanying drawings and specific examples. It should be understood that the following illustrative embodiments and description are only intended to explain the present invention, and are not intended to limit the present invention, and features of the embodiments and examples in the present application may be combined with each other without conflict.
As shown in fig. 1, an alternative embodiment of the present invention provides an image optimization apparatus 1 of a vehicle-mounted image pickup device, connected to an image pickup device 3 of the vehicle-mounted image pickup device, including:
a frame extraction module 10 connected to the camera 3 for extracting original image frames from the original video images collected and provided by the camera 3;
an initialization module 11 connected to the frame extraction module 10 for processing a first frame of original image frame by using a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
A high brightness region detection module 12 connected to the frame extraction module 10 and configured to analyze, frame by frame, whether a high brightness region exists in the original image frame;
an optical flow calculation and judgment module 14, connected to the high brightness region detection module 12 and the initialization module 11, configured to analyze and judge whether there is an optical flow mutation in the high brightness region compared to the reference image frame when the high brightness region detection module 12 detects that there is a high brightness region in the original image frame;
a pixel compensation module 16, connected to the optical flow calculation and determination module 14, for performing pixel compensation on the high-luminance area in the original image frame with the high-luminance area when the optical flow calculation and determination module 14 determines that there is an abrupt change in optical flow, so as to obtain a pixel compensation image frame;
a white balance module 18, which is respectively connected to the highlight region detection module 12, the optical flow calculation and judgment module 14 and the pixel compensation module 16, and is configured to perform white balance processing on the original image frame without a highlight region, the original image frame without an optical flow mutation and the pixel compensation image frame; and
and an updating module 19 for updating the reference image frame by using the latest image frame after the white balance processing.
In the embodiment of the invention, the frame extraction module 10 extracts original image frames from the original video frame by frame. The initialization module 11 processes the first original image frame with a pre-stored conventional white balance processing model and stores the processed frame as the reference image frame. The high-brightness region detection module 12 then analyzes, frame by frame, whether a high-brightness region exists in the original image frame. When it detects one, the optical flow calculation and judgment module 14 analyzes whether the high-brightness region exhibits an optical flow mutation relative to the coinciding region of the reference image frame. If it does, the pixel compensation module 16 performs pixel compensation on the high-brightness region to obtain a pixel-compensated image frame, so that a mutated high-brightness region is compensated in advance and cannot distort the white balance. The white balance module 18 then applies white balance processing to original image frames without a high-brightness region, original image frames without an optical flow mutation, and the pixel-compensated image frames. Finally, the updating module 19 updates the reference image frame with the latest white-balance-processed image frame for the next round of optimization, so the steps cycle and a better overall white balance effect is obtained.
In a specific implementation, the pre-stored conventional white balance processing model and the white balance processing performed by the white balance module 18 can use any conventional image white balance method, for example: first compute the RGB channel gains of the image, derive the white balance gains from them, and apply those gains to the RGB channels of the image to realize white balance.
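A minimal sketch of one such conventional method is the gray-world white balance, where each channel's gain pulls its mean toward the overall mean. This specific model is an assumption for illustration, not one mandated by the patent:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: compute per-channel gains from the
    channel means and apply them to the RGB channels of the image."""
    data = img.astype(np.float64)
    means = data.reshape(-1, 3).mean(axis=0)   # per-channel mean (R, G, B)
    gains = means.mean() / means               # white-balance gains
    out = np.clip(data * gains, 0, 255)        # apply gains, keep valid range
    return out.astype(np.uint8)
```

Applied to an image with a green cast, the gains suppress the dominant channel and lift the others so the channel means converge.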
In yet another alternative embodiment of the present invention, as shown in fig. 2, the highlight region detection module 12 includes:
an extracting unit 121, configured to perform threshold segmentation on an R-channel image of a second original image frame starting from the original image frame, and extract a luminance region in the R-channel image; and
the highlight determining unit 123 is configured to determine gray values of corresponding regions of the G channel image and the B channel image of the original image frame and the luminance region in the R channel image, respectively, determine whether the gray values both satisfy a predetermined gray threshold, and if so, determine that the luminance region in the R channel image is the high luminance region of the original image frame.
In this embodiment, the extracting unit 121 performs threshold segmentation on the R channel image of the original image frame and extracts a luminance region from it as a reference. The highlight determining unit 123 then determines the gray values of the regions of the G channel image and the B channel image that correspond to the luminance region in the R channel image and judges whether those gray values all satisfy a predetermined gray threshold. If they do, the luminance region in the R channel image is judged to be a high-brightness region of the original image frame. Comparing the gray values at the corresponding positions of the G and B channel images against the predetermined gray threshold in this way allows high-brightness regions to be detected effectively.
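A minimal sketch of this two-stage check follows; the threshold values are chosen arbitrarily for illustration, since the patent does not fix them:

```python
import numpy as np

def detect_highlight_region(img, r_thresh=230, gray_thresh=220):
    """Threshold the R channel to get a candidate luminance region, then
    require the G and B values at the same pixels to also exceed a gray
    threshold before declaring a high-brightness region."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    candidate = r >= r_thresh                  # threshold segmentation on R
    if not candidate.any():
        return None
    # The region counts as high-brightness only if G and B agree there.
    if g[candidate].min() >= gray_thresh and b[candidate].min() >= gray_thresh:
        return candidate                       # boolean mask of the region
    return None
```

Taking the minimum of G and B over the candidate pixels is one interpretation of "the gray values all meet the threshold"; a per-region mean would be an equally plausible reading.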
In another alternative embodiment of the present invention, as shown in fig. 3, the optical flow calculating and determining module 14 includes:
an edge point calculating unit 141, configured to calculate each edge point of the high brightness region in the original image frame by using a pre-stored pattern detection model;
an optical flow calculation unit 143 for calculating optical flows of the respective edge points using a prestored optical flow calculation model;
a difference value calculating unit 145 for calculating a sum of optical flow difference values of each edge point of a high luminance region of the original image frame and each edge point of a region corresponding to the high luminance region in the reference image frame, respectively; and
and the abrupt change judging unit 147 is used for comparing the sum of the optical flow difference values with a preset optical flow abrupt change threshold value, and judging that the optical flow of the high-brightness area of the original image frame has abrupt change when the sum of the optical flow difference values is greater than the optical flow abrupt change threshold value.
In the embodiment of the invention, the edge point calculation unit 141 first computes the edge points of the high-brightness region in the original image frame using a pre-stored pattern detection model; since a high-brightness region is usually a pattern of some shape, the pattern detection model can locate its edge points effectively. The optical flow calculation unit 143 then computes the optical flow of each edge point with a pre-stored optical flow calculation model, and the difference calculation unit 145 computes the sum of the optical flow differences between the edge points of the high-brightness region of the original image frame and the edge points of the corresponding region in the reference image frame, i.e. the sum of the optical flow differences between the high-brightness region and its counterpart in the two adjacent original image frames. Finally, the mutation determination unit 147 compares this sum with a preset optical flow mutation threshold and, when the sum exceeds the threshold, judges that the optical flow of the high-brightness region of the original image frame has a mutation, so an optical flow mutation in a high-brightness region can be detected effectively.
According to image processing principles, optical flow describes motion as a two-dimensional vector, so the flow is projected into polar coordinates, which makes the direction and magnitude of the motion convenient to gather statistics on. Under normal conditions the optical flow between two adjacent frames satisfies the small-motion assumption; when the optical flow vector of the current frame differs greatly from the motion predicted from the preceding frame, it indicates noise (a high-brightness region) in the current area of the image, which breaks the optical flow constraint and produces a motion mutation. In addition, when calculating, the sum of the differences can be divided by the number of edge points to obtain the average optical flow difference per edge point, which makes the overall judgment more accurate and robust.
In a specific implementation, since a high-brightness region is usually a bright spot, i.e. roughly circular, the Canny algorithm is used as the pattern detection model to compute the edge points of the high-brightness region, and the LK (Lucas-Kanade) algorithm is used as the optical flow calculation model to compute a sparse optical flow for the edge points, yielding an optical flow vector for each edge point. The optical flow vector of each edge point is then projected into polar coordinates: $\rho(u_{i,t}-u_{i,t-1},\, v_{i,t}-v_{i,t-1}) = (\rho_i, \theta_i)$, where $(u_{i,t}, v_{i,t})$ is the original coordinate of the $i$-th edge point in frame $t$, $(u_{i,t-1}, v_{i,t-1})$ is its original coordinate in frame $t-1$, $\rho$ and $\theta$ denote the polar radius and polar angle, and $(\rho_i, \theta_i)$ is the polar-coordinate optical flow vector of the $i$-th edge point. It is then checked statistically whether the optical flow of the edge points of the current original image frame satisfies a preset condition. First, the sum of the optical flow differences between the high-brightness region and its corresponding region in the two adjacent original image frames is computed as $V_t = \sum_j \lVert \rho_{t,j}(\theta, r) - \rho_{t-1,j}(\theta, r) \rVert$, where $j$ is the running index of the edge points, $\rho_{t,j}(\theta, r)$ is the polar-coordinate optical flow vector of the $j$-th edge point in frame $t$, $\rho_{t-1,j}(\theta, r)$ is that of the $j$-th edge point of the corresponding region in frame $t-1$, and $r$ and $\theta$ denote the polar radius and polar angle. When the sum of differences $V_t$ exceeds the preset optical flow mutation threshold $T$, the optical flow of the high-brightness region of the original image frame is judged to have a mutation.
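The polar projection and threshold comparison can be sketched as follows. The code assumes the edge points of the two frames are already matched one-to-one (the edge detection and LK flow steps are omitted) and, as suggested above, averages the per-point polar differences before comparing against the threshold:

```python
import numpy as np

def flow_mutation(flow_t, flow_t_minus_1, threshold):
    """Judge an optical flow mutation from matched edge-point flow vectors."""
    def to_polar(vecs):
        v = np.asarray(vecs, dtype=float)
        # polar radius and polar angle of each 2-D flow vector
        return np.hypot(v[:, 0], v[:, 1]), np.arctan2(v[:, 1], v[:, 0])
    rho_t, theta_t = to_polar(flow_t)           # frame t, polar coordinates
    rho_p, theta_p = to_polar(flow_t_minus_1)   # frame t-1, polar coordinates
    # mean per-edge-point polar difference (sum divided by the point count)
    v_t = np.mean(np.abs(rho_t - rho_p) + np.abs(theta_t - theta_p))
    return v_t > threshold
```

In a full pipeline, `flow_t` would come from `cv2.calcOpticalFlowPyrLK` evaluated at the Canny edge points of the high-brightness region.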
In yet another alternative embodiment of the present invention, as shown in fig. 4, the pixel compensation module 16 includes:
a spatial model calculation unit 161 for calculating a spatial motion model of the imaging apparatus 3;
a replacement target determining unit 163 for calculating an original image region of a position corresponding to the high luminance region in the reference image frame based on the spatial motion model; and
a compensating unit 165, configured to replace a high brightness region in the original image frame with the original image region, and perform edge blending on an edge position of the original image region replaced to the high brightness region and the original image frame to obtain the pixel compensated image frame.
The spatial model calculation unit 161 of the embodiment of the present invention first calculates a spatial motion model of the imaging apparatus 3, for example by performing camera ego-motion estimation while the camera device 3 captures video images with a camera. The replacement target determining unit 163 then calculates, according to the spatial motion model, the original image region at the position corresponding to the high-brightness region in the reference image frame; computing this region from the spatial motion model ensures the accuracy of the replacement position. It can be understood that the original image region is solved by performing the inverse operation of the spatial motion model. Finally, the compensating unit 165 replaces the high-brightness region in the original image frame with the original image region and performs edge fusion between the edge of the inserted original image region and the original image frame to obtain the pixel-compensated image frame. In a specific implementation, the fusion coefficient of the two can be calculated based on a Gaussian probability model, edge fusion is performed between the edge of the replaced region in the reference image frame and the original image frame, and the edge is then fused and compensated by nonlinear interpolation to ensure consistency between the compensated image and the original image before compensation.
In a specific implementation, a Harris corner detection algorithm can be used to find key points in the original image frame, and a sparse optical flow of the key points is then calculated with the LK algorithm, which is a conventional method for sparse optical flow calculation in computer vision.
Next, the camera ego-motion is calculated based on a simplified model under the horizontal ground motion assumption. In general, a camera has a spatial three-dimensional motion vector V = {Δx, Δy, Δz, Δα, Δβ, Δγ}, where V denotes the camera ego-motion: Δx, Δy, Δz are the translations along the three axes and Δα, Δβ, Δγ the rotations about them. Since in a city street view or on an expressway the vehicle is driven on a horizontally hardened road surface, the embodiment of the present invention assumes that there is no pitch-angle or roll-angle motion and that the Z-axis translation is zero, i.e. Δz, Δβ and Δγ are all zero. As a result, the camera ego-motion vector model simplifies to V = {Δx, Δy, Δα}.
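Applying this simplified planar motion to a ground-plane point can be illustrated as follows; the function name and the convention (rotate by Δα, then translate by (Δx, Δy)) are assumptions, since the patent does not fix the order of operations.

```python
import math

def apply_planar_motion(x, y, dx, dy, dalpha):
    """Apply the simplified ego-motion V = {dx, dy, dalpha}: an in-plane
    rotation by dalpha about the vertical axis followed by a translation."""
    xr = x * math.cos(dalpha) - y * math.sin(dalpha)
    yr = x * math.sin(dalpha) + y * math.cos(dalpha)
    return xr + dx, yr + dy
```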
Then, according to the camera pinhole imaging principle, the coordinates in the motion (world) space and the camera coordinates satisfy the following correspondence:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T$$

where $(X_c, Y_c, Z_c)$ is the projection of a point of the motion space coordinate system into the camera coordinate system, R is the rotation matrix from the motion space coordinate system to the camera coordinate system, $(X_w, Y_w, Z_w)$ is the spatial point in the motion space coordinate system, and T is the translation vector from the motion space coordinate system to the camera coordinate system.
Then, based on the perspective projection model, the points of the camera coordinate system are projected into the image coordinate system of the original image frame; the perspective projection model is expressed as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where (u, v) are the projection coordinates of the spatial point on the image plane, (u0, v0) is the principal point (center coordinates) of the camera, and fx = f/dx and fy = f/dy are the focal lengths of the camera in the x and y directions expressed in pixels, dx and dy being the physical size of a single pixel in the x (length) and y (width) directions.
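The projection step of the model above reduces to two scalar equations, u = fx·Xc/Zc + u0 and v = fy·Yc/Zc + v0, sketched here (the function name and parameter ordering are illustrative, not from the patent):

```python
def project_to_pixel(pc, fx, fy, u0, v0):
    """Project a camera-frame point (Xc, Yc, Zc) to pixel coordinates (u, v)
    with the pinhole model: u = fx*Xc/Zc + u0, v = fy*Yc/Zc + v0."""
    xc, yc, zc = pc
    return fx * xc / zc + u0, fy * yc / zc + v0
```

A point on the optical axis, (0, 0, Zc), lands exactly on the principal point (u0, v0), which is a quick sanity check for a calibration.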
Combining the above simplified camera ego-motion model, the key points of the (t-1)-th frame (the reference image frame, i.e. the adjacent previous frame of the original image frame) and the key points of the t-th frame (the original image frame) should satisfy a motion conversion relationship of the form

$$\hat{p}_{i,t} = W\!\left(p_{i,t-1};\, V\right)$$

where $\hat{p}_{i,t}$ is the coordinate position in the t-th frame, estimated by the motion-based simplified model, of the key point whose coordinate in the (t-1)-th frame is $p_{i,t-1}$, and $W(\cdot; V)$ applies the simplified motion V = {Δx, Δy, Δα}; the key points are point pairs matched in advance by the LK algorithm.
Firstly, the image coordinates (u, v) are converted into camera coordinates (Xc, Yc, Zc) based on the perspective projection model and the calibrated camera intrinsic parameters; then the camera coordinates (Xc, Yc, Zc) are converted into the motion space coordinates (Xw, Yw, Zw) through the calibrated camera extrinsic parameters. Because the depth of field of the key points in the original image frame cannot be predicted, the conversion between Zc and Zw is inaccurate; the depth of field of a local sub-block is therefore approximately considered consistent, so that a simultaneous depth constraint equation cancels the influence of the depth of field. Optical flow point pairs are then obtained with the LK algorithm. In the motion space coordinate system, the objective function can be constructed from the difference between the point $\hat{p}_{i,t}$ calculated from the motion model and the actually measured point $p_{i,t}$:

$$\min_V \sum_i \operatorname{abs}\!\left(\hat{p}_{i,t} - p_{i,t}\right)$$

where $\hat{p}_{i,t}$ is the coordinate point of the target position (original image area) calculated from the motion model, $p_{i,t}$ is the coordinate point of the target position (high-brightness area) in the t-th frame actually measured by the high-brightness area detection algorithm, and abs denotes the absolute value. The minimum of the overall deviation between the measured and estimated values of all key points used to calculate the motion model is taken as the objective of the motion model.
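A minimal sketch of minimizing this objective follows. The patent does not specify a solver, so a brute-force grid search over candidate planar motions is assumed here purely for illustration; a real implementation would use a proper optimizer or a closed-form/RANSAC fit.

```python
import itertools
import math

def estimate_motion(prev_pts, cur_pts, dx_grid, dy_grid, da_grid):
    """Grid-search the planar motion V = (dx, dy, dalpha) minimizing the
    sum of absolute deviations between motion-predicted and measured points."""
    def predict(p, dx, dy, da):
        x, y = p
        return (x * math.cos(da) - y * math.sin(da) + dx,
                x * math.sin(da) + y * math.cos(da) + dy)

    best, best_cost = None, float('inf')
    for dx, dy, da in itertools.product(dx_grid, dy_grid, da_grid):
        cost = sum(abs(px - qx) + abs(py - qy)
                   for p, (qx, qy) in zip(prev_pts, cur_pts)
                   for px, py in [predict(p, dx, dy, da)])
        if cost < best_cost:
            best, best_cost = (dx, dy, da), cost
    return best, best_cost
```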
Finally, by solving the motion model with the above objective function, the coordinates in the adjacent previous frame of the original image area corresponding to the highlight region of the original image frame can be calculated:

$$X_{t-1} = A^{-1} X_t$$

where A is the representation of the spatial motion model in image space, and $X_t$ and $X_{t-1}$ are the coordinates of the high-brightness region in the t-th frame and of the original image region in the (t-1)-th frame, respectively. In addition, it can be understood that if the high-brightness region is large, it is divided into several sub-regions so that the depth of field within each sub-region is approximately consistent and the motion offsets are consistent, allowing the same motion compensation amount to be applied. Therefore, under the assumption of approximately consistent local depth of field, a set of depth constraint equations can be established to eliminate the influence of the depth of field on the estimation of the spatial motion parameters.
On the other hand, as shown in fig. 4, an embodiment of the present invention provides an on-vehicle camera apparatus, including a camera device 3 for capturing an original video image around a motor vehicle, and an image optimization device 1 connected to the camera device 3 for performing optimization processing on the original video image, where the image optimization device 1 is an image optimization device as described in any one of the above. The vehicle-mounted camera equipment of the embodiment of the invention adopts the image optimization device 1, and can effectively perform white balance processing on the image.
In another aspect, as shown in fig. 5 and 6, an embodiment of the present invention provides an image optimization method for a vehicle-mounted imaging apparatus, including:
s1: extracting original image frames frame by frame from an original video image acquired and provided by a camera device 3 of the vehicle-mounted camera equipment;
s2: processing a first frame of original image frame by adopting a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
s3: analyzing, frame by frame starting from the second original image frame, whether a high-brightness area exists in the original image frame;
s4: when detecting that a high-brightness area exists in an original image frame, analyzing and judging whether the high-brightness area has an optical flow mutation compared with an area which is overlapped with the high-brightness area in the reference image frame;
s5: performing pixel compensation on a high-brightness area in the original image frame with the high-brightness area when the optical flow mutation is judged to exist so as to obtain a pixel compensation image frame;
s6: carrying out white balance processing on the original image frame without the high-brightness area, the original image frame without the light stream mutation and the pixel compensation image frame; and
s7: and updating the reference image frame by using the latest image frame subjected to white balance processing.
According to the method, the original image frames are first extracted frame by frame from the original video image and the reference image frame is determined. Whether a high-brightness area exists in the original image frame is then analyzed frame by frame. When a high-brightness area exists in an original image frame, it is further analyzed and judged whether the high-brightness area exhibits an optical flow mutation compared with the area coinciding with it in the reference image frame. When an optical flow mutation is judged to exist, pixel compensation is performed on the high-brightness area of that original image frame to obtain a pixel compensation image frame, so that the mutated high-brightness area is compensated in advance and does not affect the white balance of the image. White balance processing is then carried out on the original image frames without a high-brightness area, the original image frames without an optical flow mutation, and the pixel compensation image frames. Finally, the reference image frame is updated with the latest white-balanced image frame for the next round of image optimization, realizing a stepwise loop and obtaining a better overall white balance effect.
In a practical implementation, the conventional white balance processing model in step S2 and the white balance processing performed in steps S6 and S7 are conventional image white balance processing methods. For example: first the RGB channel gains of the image are calculated, the white balance gains are obtained from the RGB channel gains, and the white balance gains are applied to the RGB channels of the image to realize white balance.
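The gain-based white balance mentioned here can be sketched with the classic gray-world assumption (one conventional choice; the patent does not commit to a specific gain rule):

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: per-channel gains computed from the RGB
    channel means, then applied to the RGB channels of the image."""
    f = img.astype(np.float64)
    means = f.reshape(-1, 3).mean(axis=0)   # mean of R, G, B channels
    gains = means.mean() / means            # gain per channel
    out = np.clip(f * gains, 0, 255)
    return out.astype(np.uint8), gains
```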
In yet another alternative embodiment of the present invention, as shown in fig. 7, the step S3 specifically includes:
s31: starting from a second frame of original image frame, carrying out threshold segmentation on an R channel image of the original image frame, and extracting a brightness region in the R channel image; and
s32: and respectively determining gray values of corresponding areas of the G channel image and the B channel image of the original image frame and the brightness area in the R channel image, judging whether the gray values all meet a preset gray threshold value, and if so, judging that the brightness area in the R channel image is the high-brightness area of the original image frame.
In this embodiment, with the above method, threshold segmentation is performed on the R channel image of each original image frame starting from the second original image frame, and the brightness region in the R channel image is extracted as a reference. The gray values of the regions of the G channel image and the B channel image corresponding to the brightness region in the R channel image are then determined, and it is judged whether these gray values all satisfy a predetermined gray threshold. If so, the brightness region in the R channel image is determined to be the high-brightness region of the original image frame. By comparing the gray values at the corresponding positions of the G and B channel images with the predetermined gray threshold, the high-brightness region is determined and can be effectively detected.
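The detection steps above can be sketched as a simple mask computation; the specific threshold values are assumptions for illustration, as the patent leaves them as predetermined parameters.

```python
import numpy as np

def detect_highlight(img, r_thresh=220, gb_thresh=200):
    """Highlight mask for an RGB image (H, W, 3): threshold-segment the R
    channel, then require the same pixels to also exceed a gray threshold
    in the G and B channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    candidate = r >= r_thresh                        # threshold segmentation on R
    mask = candidate & (g >= gb_thresh) & (b >= gb_thresh)
    return mask
```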
In another alternative embodiment of the present invention, as shown in fig. 8, the step S4 specifically includes:
s41: calculating each edge point of a high-brightness area in the original image frame by adopting a pre-stored pattern detection model;
s42: calculating the optical flow of each edge point by adopting a prestored optical flow calculation model;
s43: respectively calculating the sum of optical flow difference values of each edge point of a high-brightness area of the original image frame and each edge point of an area corresponding to the high-brightness area in the reference image frame; and
s44: and comparing the sum of the optical flow difference values with a preset optical flow mutation threshold, and judging that the optical flow of the high-brightness area of the original image frame has mutation when the sum of the optical flow difference values is greater than the optical flow mutation threshold.
The method of the embodiment of the present invention first calculates each edge point of the high-brightness region in the original image frame by using a pre-stored pattern detection model; since the high-brightness region may take various shapes, the pattern detection model effectively detects its edge points. The optical flow of each edge point is then calculated with the pre-stored optical flow calculation model. Next, the sum of the optical flow differences between each edge point of the high-brightness region of the original image frame and each edge point of the corresponding region in the reference image frame is calculated, i.e. the sum of the optical flow differences between the high-brightness regions of two adjacent original image frames and their corresponding regions. Finally, the sum of the optical flow differences is compared with a preset optical flow mutation threshold; when the sum is greater than the threshold, it is judged that the optical flow of the high-brightness region of the original image frame has a mutation, so that the optical flow mutation of the high-brightness region is effectively detected.
In an alternative embodiment of the present invention, as shown in fig. 9, the step S5 specifically includes:
s51: calculating a spatial motion model of the camera 3;
s52: calculating an original image area of a position corresponding to the high-brightness area in the reference image frame according to the space motion model; and
s53: and replacing a high-brightness area in the original image frame by using the original image area, and performing edge fusion on the edge position of the original image area replaced to the high-brightness area and the original image frame to obtain the pixel compensation image frame.
In the embodiment of the present invention, with the above method, a spatial motion model of the imaging device 3 is first calculated, for example by performing camera ego-motion estimation while the camera 3 collects video images with a camera. The original image area at the position corresponding to the high-brightness area in the reference image frame is then calculated according to the spatial motion model, which ensures the accuracy of the replacement position; it can be understood that the original image area is solved by the inverse operation of the model. Finally, the high-brightness area in the original image frame is replaced by the original image area, and the edge of the inserted original image area is edge-fused with the original image frame to obtain the pixel compensation image frame. In a specific implementation, a fusion coefficient can be calculated based on a Gaussian probability model, edge fusion is performed between the edge of the replaced area and the image frame to be processed, and the compensation edge is then fused by nonlinear interpolation to ensure consistency between the compensated image and the original image before compensation.
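A Gaussian-weighted edge fusion of this kind can be sketched as below. The weighting scheme (alpha rising with distance from the patch border, Gaussian-shaped falloff with an assumed sigma) is an illustrative choice, not the patent's exact fusion coefficient.

```python
import numpy as np

def blend_patch(frame, patch, top, left, sigma=3.0):
    """Replace a rectangular region of `frame` with `patch`, feathering the
    border with Gaussian-shaped weights so the seam stays consistent with
    the surrounding frame. frame: (H, W, 3); patch: (h, w, 3) floats."""
    h, w = patch.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # distance of each patch pixel to the nearest patch border
    d = np.minimum.reduce([ys, xs, h - 1 - ys, w - 1 - xs])
    # alpha is 0 at the border (keep frame) and approaches 1 inside (use patch)
    alpha = 1.0 - np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    region = frame[top:top + h, left:left + w].astype(np.float64)
    blended = alpha[..., None] * patch + (1.0 - alpha[..., None]) * region
    out = frame.copy()
    out[top:top + h, left:left + w] = blended
    return out
```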
The functions described in the embodiments of the present invention may be stored in a storage medium readable by a computing device if they are implemented in the form of software functional modules or units and sold or used as independent products. Based on such understanding, part of the contribution of the embodiments of the present invention to the prior art or part of the technical solution may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computing device (which may be a personal computer, a server, a mobile computing device, a network device, or the like) to execute all or part of the steps of the method described in the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (9)
1. An image optimization device of a vehicle-mounted camera device, which is connected with a camera device of the vehicle-mounted camera device, is characterized by comprising:
the frame extraction module is connected with the camera device and used for extracting original image frames from the original video images collected and provided by the camera device frame by frame;
the initialization module is connected with the frame extraction module and used for processing a first frame of original image frame by adopting a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
the high-brightness region detection module is connected with the frame extraction module and is used for analyzing whether a high-brightness region exists in the original image frame from a second original image frame by frame;
the optical flow calculation and judgment module is connected with the high-brightness area detection module and the initialization module and is used for analyzing and judging whether optical flow mutation exists in the high-brightness area compared with an area superposed with the high-brightness area in the reference image frame when the high-brightness area detection module detects that the high-brightness area exists in the original image frame;
the pixel compensation module is connected with the optical flow calculation and judgment module and is used for performing pixel compensation on a high-brightness area in the original image frame with the high-brightness area when the optical flow calculation and judgment module judges that the optical flow mutation exists so as to obtain a pixel compensation image frame;
the white balance module is respectively connected with the high-brightness area detection module, the optical flow calculation and judgment module and the pixel compensation module and is used for carrying out white balance processing on the original image frame without the high-brightness area, the original image frame without the optical flow mutation and the pixel compensation image frame; and
and the updating module is used for updating the reference image frame by adopting the latest image frame subjected to white balance processing.
2. The image optimization apparatus of the in-vehicle image pickup device according to claim 1, wherein the highlight area detection module includes:
the device comprises an extraction unit, a threshold segmentation unit and a brightness extraction unit, wherein the extraction unit is used for carrying out threshold segmentation on an R channel image of an original image frame from a second original image frame to extract a brightness area in the R channel image; and
and the highlight judging unit is used for respectively determining the gray values of the corresponding areas of the brightness areas in the G channel image and the B channel image of the original image frame and the R channel image, judging whether the gray values meet a preset gray threshold value, and if so, judging that the brightness area in the R channel image is the high-brightness area of the original image frame.
3. The image optimization apparatus of an in-vehicle image pickup device according to claim 1, wherein the optical flow calculation and determination module includes:
the edge point calculating unit is used for calculating each edge point of the high-brightness area in the original image frame by adopting a pre-stored pattern detection model;
the optical flow calculating unit is used for calculating the optical flow of each edge point by adopting a prestored optical flow calculating model;
a difference value calculating unit for calculating the sum of optical flow difference values of each edge point of a high-brightness area of the original image frame and each edge point of an area corresponding to the high-brightness area in the reference image frame respectively; and
and the abrupt change judging unit is used for comparing the sum of the optical flow difference values with a preset optical flow abrupt change threshold value, and judging that the optical flow of the high-brightness area of the original image frame has abrupt change when the sum of the optical flow difference values is greater than the optical flow abrupt change threshold value.
4. The image optimization apparatus of the in-vehicle image pickup device according to claim 1, wherein the pixel compensation module includes: a spatial model calculation unit configured to calculate a spatial motion model of the imaging apparatus;
the replacing target determining unit is used for calculating an original image area of the position corresponding to the high-brightness area in the reference image frame according to the space motion model; and
and the compensation unit is used for replacing a high-brightness area in the original image frame by using the original image area and performing edge fusion on the edge position of the original image area replaced to the high-brightness area and the original image frame to obtain the pixel compensation image frame.
5. An on-board camera device, comprising a camera device for capturing original video images around a motor vehicle and an image optimization device connected to the camera device for performing optimization processing on the original video images, wherein the image optimization device is the image optimization device according to any one of claims 1 to 4.
6. An image optimization method for an in-vehicle image pickup apparatus, characterized by comprising:
extracting original image frames frame by frame from original video images acquired and provided by a camera device of the vehicle-mounted camera equipment;
processing a first frame of original image frame by adopting a pre-stored conventional white balance processing model and storing the processed image frame as a reference image frame;
analyzing, frame by frame starting from a second original image frame, whether a high-brightness region exists in the original image frame;
when detecting that a high-brightness area exists in an original image frame, analyzing and judging whether the high-brightness area has an optical flow mutation compared with an area which is overlapped with the high-brightness area in the reference image frame;
performing pixel compensation on a high-brightness area in the original image frame with the high-brightness area when the optical flow mutation is judged to exist so as to obtain a pixel compensation image frame;
carrying out white balance processing on the original image frame without the high-brightness area, the original image frame without the optical flow mutation and the pixel compensation image frame; and
and updating the reference image frame by using the latest image frame subjected to white balance processing.
7. The image optimization method of the vehicle-mounted camera device according to claim 6, wherein the analyzing whether a high-brightness region exists in the original image frame by frame from a second original image frame specifically comprises:
starting from a second frame of original image frame, carrying out threshold segmentation on an R channel image of the original image frame, and extracting a brightness region in the R channel image; and
and respectively determining gray values of corresponding areas of the G channel image and the B channel image of the original image frame and the brightness area in the R channel image, judging whether the gray values all meet a preset gray threshold value, and if so, judging that the brightness area in the R channel image is the high-brightness area of the original image frame.
8. The image optimization method of the in-vehicle image capturing apparatus according to claim 6, wherein the analyzing, upon detecting that a high-luminance area exists in an original image frame, whether there is an abrupt change in optical flow in the high-luminance area as compared to an area that coincides with the high-luminance area in the reference image frame specifically includes:
calculating each edge point of a high-brightness area in the original image frame by adopting a pre-stored pattern detection model;
calculating the optical flow of each edge point by adopting a prestored optical flow calculation model;
respectively calculating the sum of optical flow difference values of each edge point of a high-brightness area of the original image frame and each edge point of an area corresponding to the high-brightness area in the reference image frame; and
and comparing the sum of the optical flow difference values with a preset optical flow mutation threshold, and judging that the optical flow of the high-brightness area of the original image frame has mutation when the sum of the optical flow difference values is greater than the optical flow mutation threshold.
9. The image optimization method for an in-vehicle image capturing apparatus according to claim 6, wherein the pixel-compensating a high-luminance region in the original image frame in which a high-luminance region exists when it is determined that there is an abrupt change in optical flow to obtain a pixel-compensated image frame specifically includes:
calculating a spatial motion model of the camera device;
calculating an original image area of a position corresponding to the high-brightness area in the reference image frame according to the space motion model; and
and replacing a high-brightness area in the original image frame by using the original image area, and performing edge fusion on the edge position of the original image area replaced to the high-brightness area and the original image frame to obtain the pixel compensation image frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010841212.8A CN112261390B (en) | 2020-08-20 | 2020-08-20 | Vehicle-mounted camera equipment and image optimization device and method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112261390A true CN112261390A (en) | 2021-01-22 |
CN112261390B CN112261390B (en) | 2022-02-11 |
Family
ID=74223873
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113048912A (en) * | 2021-02-26 | 2021-06-29 | 山东师范大学 | Calibration system and method for projector |
CN113284159A (en) * | 2021-07-20 | 2021-08-20 | 深圳大圆影业有限公司 | Image optimization processing device and processing method based on Internet |
CN115272423A (en) * | 2022-09-19 | 2022-11-01 | 深圳比特微电子科技有限公司 | Method and device for training optical flow estimation model and readable storage medium |
CN116958203A (en) * | 2023-08-01 | 2023-10-27 | 北京知存科技有限公司 | Image processing method and device, electronic equipment and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002118857A (en) * | 2000-10-05 | 2002-04-19 | Ricoh Co Ltd | White balance adjustment device, white balance adjustment method, and computer-readable recording medium storing program to allow computer to perform the method |
WO2002077920A2 (en) * | 2001-03-26 | 2002-10-03 | Dynapel Systems, Inc. | Method and system for the estimation and compensation of brightness changes for optical flow calculations |
US20150319911A1 (en) * | 2014-05-09 | 2015-11-12 | Raven Industries, Inc. | Optical flow sensing application in agricultural vehicles |
US20160100146A1 (en) * | 2014-10-07 | 2016-04-07 | Ricoh Company, Ltd. | Imaging apparatus, image processing method, and medium |
CN105957060A (en) * | 2016-04-22 | 2016-09-21 | 天津师范大学 | Method for dividing TVS events into clusters based on optical flow analysis |
CN109657600A (en) * | 2018-12-14 | 2019-04-19 | 广东工业大学 | A kind of video area removes altering detecting method and device |
CN109724951A (en) * | 2017-10-27 | 2019-05-07 | 黄晓淳 | A kind of dynamic super-resolution fluorescence imaging technique |
JP6509396B1 (en) * | 2018-03-14 | 2019-05-08 | 株式会社トヨタシステムズ | Motion detection device |
US20190180107A1 (en) * | 2017-12-07 | 2019-06-13 | Canon Kabushiki Kaisha | Colour look-up table for background segmentation of sport video |
CN110349163A (en) * | 2019-07-19 | 2019-10-18 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
- 2020-08-20: Application CN202010841212.8A filed in China (CN); granted as patent CN112261390B, status: Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113048912A (en) * | 2021-02-26 | 2021-06-29 | 山东师范大学 | Calibration system and method for projector |
CN113284159A (en) * | 2021-07-20 | 2021-08-20 | 深圳大圆影业有限公司 | Internet-based image optimization processing device and processing method
CN115272423A (en) * | 2022-09-19 | 2022-11-01 | 深圳比特微电子科技有限公司 | Method and device for training optical flow estimation model and readable storage medium |
CN116958203A (en) * | 2023-08-01 | 2023-10-27 | 北京知存科技有限公司 | Image processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112261390B (en) | 2022-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112261390B (en) | Vehicle-mounted camera equipment and image optimization device and method thereof | |
US11170466B2 (en) | Dense structure from motion | |
CN109034047B (en) | Lane line detection method and device | |
JP3868876B2 (en) | Obstacle detection apparatus and method | |
US6658150B2 (en) | Image recognition system | |
US8189051B2 (en) | Moving object detection apparatus and method by using optical flow analysis | |
CN108229475B (en) | Vehicle tracking method, system, computer device and readable storage medium | |
JP4919036B2 (en) | Moving object recognition device | |
CN101029823B (en) | Method for tracking vehicle based on state and classification | |
CN108470356B (en) | Target object rapid ranging method based on binocular vision | |
US20130070095A1 (en) | Fast obstacle detection | |
US20060050788A1 (en) | Method and device for computer-aided motion estimation | |
JP7091686B2 (en) | 3D object recognition device, image pickup device and vehicle | |
JP2015184929A (en) | Three-dimensional object detection apparatus, three-dimensional object detection method and three-dimensional object detection program | |
JP2000048211A (en) | Movile object tracking device | |
CN111914627A (en) | Vehicle identification and tracking method and device | |
JP2011165170A (en) | Object detection device and program | |
KR102658268B1 (en) | Apparatus and method for AVM automatic Tolerance compensation | |
CN111860270A (en) | Obstacle detection method and device based on fisheye camera | |
CN113723432B (en) | Intelligent identification and positioning tracking method and system based on deep learning | |
CN110688876A (en) | Lane line detection method and device based on vision | |
JP5590387B2 (en) | Color target position determination device | |
JP2022099120A5 (en) | ||
JP2020187519A (en) | Device, program and method for estimating objective information from ground point containing image area | |
CN114973190B (en) | Distance detection method and device and vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||