CN112069870A - Image processing method and device suitable for vehicle identification - Google Patents

Image processing method and device suitable for vehicle identification

Info

Publication number: CN112069870A
Application number: CN202010677528.8A
Authority: CN (China)
Prior art keywords: image, ghost, original image, original, point spread
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 林凡, 张秋镇, 陈健民
Current Assignee: GCI Science and Technology Co Ltd
Original Assignee: GCI Science and Technology Co Ltd
Priority date / Filing date: 2020-07-14
Publication date: 2020-12-11
Application filed by GCI Science and Technology Co Ltd
Priority to CN202010677528.8A
Publication of CN112069870A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/584: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads; of vehicle lights or traffic lights
    • G06V 10/30: Image preprocessing; noise filtering
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/757: Matching configurations of points or features
    • G06V 2201/08: Detecting or categorising vehicles

Abstract

The invention discloses an image processing method and device suitable for vehicle identification. The method comprises: acquiring an original image for vehicle identification and constructing an image regression model of the original image; performing edge reconstruction, ghost matrix estimation and potential image estimation on the original image based on a multilateral filtering algorithm to obtain an enhanced ghost image; and estimating a point spread function of the image degradation model using the original image and the enhanced ghost image, then estimating the potential sharp image in the original ghost image using an adaptive deconvolution function and the point spread function. The embodiment of the invention eliminates single-image motion ghosting with a hybrid algorithm combining edge optimization and multilateral filtering: an improved algorithm based on edge optimization recovers strong edges while reducing noise, and a multilateral filter smooths the non-strong-edge parts of the image to eliminate noise and narrow edges. Rich image detail can be recovered, the deghosting effect is enhanced, and accurate vehicle identification is facilitated.

Description

Image processing method and device suitable for vehicle identification
Technical Field
The invention relates to the technical field of image recognition, in particular to an image processing method and device suitable for vehicle recognition.
Background
With the growing number of vehicles on the road and the increasing pressure on road traffic, vehicle-related safety management problems have become increasingly prominent. In order to realize optimized management and scheduling of running vehicles, the number of vehicles can be acquired by detecting and identifying vehicle features, providing a visual information reference for drivers and for the vehicle management and scheduling center. Vehicle identification has important application value in fields such as vehicle safety management and road traffic control, and research on vehicle feature extraction methods has a good application prospect for detecting vehicle-related illegal and criminal activity.
In the prior art, for example, application No. 201910091807.3 relates to a feature extraction system for road traffic vehicles, which performs edge and information enhancement processing on an acquired vehicle image through an edge contour detection module and an enhancement processing module, and processes the vehicle corner distribution information of an invariant region through a feature extraction module to extract vehicle pixel feature points, giving it the characteristic of high feature-extraction accuracy. However, the inventors have found that this prior art does not consider the interference caused by uneven illumination of the vehicle and by the random, changing background environment while the vehicle is driving, so the vehicle cannot be identified stably and accurately.
Disclosure of Invention
The invention provides an image processing method suitable for vehicle identification, aiming to solve the technical problem that interference degrades the accuracy of existing vehicle identification.
In order to solve the above technical problem, an embodiment of the present invention provides an image processing method suitable for vehicle identification, including:
acquiring an original image for vehicle identification, and constructing an image regression model of the original image;
based on a multilateral filtering algorithm, performing edge reconstruction, ghost matrix estimation and potential image estimation on the original image to obtain an enhanced ghost image;
and estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and estimating a potential sharp image in the original ghost image by using an adaptive deconvolution function and the point spread function.
In one embodiment of the present invention, the performing edge reconstruction, ghost matrix estimation and latent image estimation on the original image includes:
performing edge reconstruction on the original image, and extracting a strong edge region in the original image;
based on the strong edge region, estimating a ghost kernel by using an iterative method to perform ghost matrix estimation.
In one embodiment of the present invention, the edge reconstruction is performed on the original image to extract a strong edge region in the original image, and specifically, the edge reconstruction is performed by:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
In one embodiment of the present invention, the ghost kernel is estimated by using an iterative method based on the strong edge region to perform ghost matrix estimation, specifically:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
In one embodiment of the present invention, the constructing the image regression model of the original image includes:
constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
obtaining a point spread function according to the relation:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g(x, y) represents ghost pixels, f(x, y) represents true pixels, h(x, y) represents a point spread function, n(x, y) represents additive noise, and the symbol * represents the convolution operator; L denotes a ghost length, and θ denotes a ghost angle.
An embodiment of the present invention further provides an image processing apparatus suitable for vehicle identification, including:
the image ghost processing module is used for acquiring an original image for vehicle identification and constructing an image regression model of the original image;
the image edge processing module is used for carrying out edge reconstruction, ghost matrix estimation and potential image estimation on the original image based on a multilateral filtering algorithm to obtain an enhanced ghost image;
and the image latent image processing module is used for estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and evaluating a latent clear image in the original ghost image by adopting an adaptive deconvolution function and the point spread function.
In one embodiment of the present invention, the image edge processing module is further configured to:
performing edge reconstruction on the original image, and extracting a strong edge region in the original image;
based on the strong edge region, estimating a ghost kernel by using an iterative method to perform ghost matrix estimation.
In one embodiment of the present invention, the image edge processing module is further configured to:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
In one embodiment of the present invention, the image edge processing module is further configured to:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
In one embodiment of the present invention, the image ghosting processing module is configured to:
constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
obtaining a point spread function according to the relation:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g(x, y) represents ghost pixels, f(x, y) represents true pixels, h(x, y) represents a point spread function, n(x, y) represents additive noise, and the symbol * represents the convolution operator; L denotes a ghost length, and θ denotes a ghost angle.
Compared with the prior art, the method and device eliminate single-image motion ghosting with a hybrid algorithm combining edge optimization and multilateral filtering: the improved algorithm based on edge optimization recovers strong edges while reducing noise, and the multilateral filter smooths the non-strong-edge parts of the image to eliminate noise and narrow edges. The invention can recover rich image detail, achieves a better deghosting effect, and is beneficial to accurate vehicle identification.
Drawings
Fig. 1 is a step diagram of an image processing method suitable for vehicle identification in the embodiment of the present invention;
fig. 2 is a flowchart of an image processing method suitable for vehicle identification in the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an image processing method suitable for vehicle identification. The deghosting process starts from edge recovery, smooths the entire image with a multilateral filter, estimates a point spread function using the enhanced ghost image and the original ghost image, and then estimates the latent sharp image using the point spread function and the original ghost image. The method includes the following steps:
s1, obtaining an original image for vehicle identification, and constructing an image regression model of the original image;
in an embodiment of the present invention, the constructing the image regression model of the original image includes: constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
assuming that the target object moves at a constant speed relative to the camera during the exposure time and forms an angle θ with the horizontal axis, the point spread function of the moving ghost can be described as:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g(x, y) represents ghost pixels, f(x, y) represents true pixels, h(x, y) represents a point spread function, n(x, y) represents additive noise, and the symbol * represents the convolution operator; L denotes a ghost length, and θ denotes a ghost angle.
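To make the degradation model concrete, the following sketch (in Python) builds a linear motion-blur point spread function from a ghost length and angle and applies equation (1); the rasterization of the blur line, the kernel size, the helper name motion_psf and the noise level are illustrative assumptions rather than the patent's exact formulation.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length, angle_deg, size=None):
    """Rough point spread function in the spirit of equation (2): a normalised
    line segment of length `length` at angle `angle_deg` to the horizontal axis."""
    size = size if size is not None else (int(np.ceil(length)) | 1)   # odd kernel width
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * size):
        col = int(round(c + t * np.cos(theta)))
        row = int(round(c - t * np.sin(theta)))   # image rows grow downwards
        if 0 <= row < size and 0 <= col < size:
            psf[row, col] = 1.0
    return psf / psf.sum()

# Degradation model of equation (1): g = f * h + n
f = np.random.rand(128, 128)                      # stand-in for the true image
h = motion_psf(length=9, angle_deg=30)
g = fftconvolve(f, h, mode="same") + 0.01 * np.random.randn(128, 128)
```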
S2, based on a multilateral filtering algorithm, performing edge reconstruction, ghost matrix estimation and potential image estimation on the original image to obtain an enhanced ghost image;
In the present embodiment, step S2 is implemented through the following sub-steps.
Edges and details of an image are typically regions of sharp variation, which correspond to high-frequency components in the frequency domain. The high-frequency components of the motion-ghosted image are extracted, and a region rich in edges is obtained through step S21.
S21, performing edge reconstruction on the original image, and extracting a strong edge region in the original image, specifically:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
For ease of understanding, step S21 can be subdivided into the following sub-steps:
(1) The original image is converted from the RGB color space to the YCbCr color space, and the value of the Y channel is extracted using equation (3).
Y=0.257×R+0.564×G+0.098×B+16 (3)
(2) The Y channel is downsampled by a factor of 2 and then upsampled by a factor of 2 using bilinear interpolation, as shown in equation (4):
Y′=B2(Y(2:2:m,2:2:n)) (4)
where m and n are respectively the numbers of rows and columns of the original image, B2 is bilinear up-sampling by a factor of 2, and Y' is the sampling result.
(3) The brightness of the sampling result is subtracted from the brightness of the original image to obtain the high-frequency layer of the image, as shown in equation (5):
H=Y-Y′ (5)
where H denotes a high frequency layer of an image.
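As a concrete illustration of sub-steps (1) to (3), the sketch below extracts the Y channel with equation (3), down-samples it by a factor of 2, up-samples it back with bilinear interpolation, and subtracts to obtain H as in equation (5). Using scipy.ndimage.zoom as the bilinear up-sampler B2 is an assumption made for brevity.

```python
import numpy as np
from scipy.ndimage import zoom

def high_frequency_layer(rgb):
    """High-frequency layer H of an RGB image, following equations (3)-(5)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y = 0.257 * r + 0.564 * g + 0.098 * b + 16.0      # equation (3): Y channel
    small = y[1::2, 1::2]                              # factor-2 down-sampling, Y(2:2:m, 2:2:n)
    scale = (y.shape[0] / small.shape[0], y.shape[1] / small.shape[1])
    up = zoom(small, scale, order=1)                   # B2: bilinear up-sampling back to (m, n)
    return y - up                                      # equation (5): H = Y - Y'
```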
The multilateral filtering algorithm is an improved algorithm based on Gaussian filtering. In regions where the gray level changes gently, the range-filtering kernel function is close to 1, spatial-domain filtering plays the main role, and the multilateral filter degenerates into a conventional Gaussian filter that smooths the image. At the edge parts of the image, where the differences between pixels are large, range filtering plays the main role and the edge information is protected. Finally, image filtering is implemented with a finite impulse response filter, improving the signal-to-noise ratio of the image. An adaptive filtering model is used:
(Equations (6) and (7): the adaptive filtering model; the formula images from the original publication are not reproduced in this text.)
where g(x) is an edge-stop function and the symbol * denotes convolution; cη and cξ are positive constants that control the weight of the impact filter and of the forward diffusion process, respectively. The model combines an edge-stop function term with a forward-diffusion term. To ensure the continuity and strength of the impact filter at image edges, the embodiment of the present invention uses the tanh(x) function, rather than arctan(x), to optimize the impact filtering model. The tanh(x) function balances the two terms: in this model, the weight of the impact filter adaptively adjusts the reduction of edge diffusion according to the image gradient. The larger the image gradient, the smaller the magnitude of the weight and of the edge-spread reduction, and vice versa. The weight of the impact filter changes continuously with the image gradient, so the edge diffusion also decreases with the local gradient of the image.
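The filtering equations (6) and (7) themselves are given as formula images in the original filing. As a rough stand-in, the sketch below shows one explicit iteration of a shock-filter plus forward-diffusion update in which a tanh of the local gradient magnitude adaptively shifts weight toward edge sharpening, in the spirit of the description above; the discretization, the tanh normalization and the constants c_eta, c_xi and dt are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def adaptive_filter_step(img, dt=0.1, c_eta=0.8, c_xi=0.2, sigma=1.0):
    """One explicit update combining an edge-sharpening shock term with a
    smoothing diffusion term, weighted adaptively by tanh(|gradient|)."""
    smoothed = gaussian_filter(img, sigma)             # pre-smoothing for a stable sign
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)
    w = np.tanh(grad_mag / (grad_mag.mean() + 1e-8))   # ~1 at strong edges, ~0 in flat areas
    shock = -np.sign(laplace(smoothed)) * grad_mag     # impact (shock) filter term
    diffusion = laplace(img)                           # forward diffusion term
    return img + dt * (c_eta * w * shock + c_xi * (1.0 - w) * diffusion)
```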
S22, based on the strong edge region extracted in step S21, estimating the ghost kernel with an iterative method that adopts an L0-regularized intensity and gradient prior.
Estimating a ghost kernel by using an iterative method based on the strong edge region to estimate a ghost matrix, specifically:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
For ease of understanding, step S22 can be subdivided into the following sub-steps:
(1) A prior is defined on the image:
P(x) = σ·Pt(x) + Pt(∇x) (8)
where Pt(·) counts the number of non-zero pixels, ∇x is the image gradient, and σ is a weight.
(2) Using the prior P(x) as the regularization term, the ghost kernel of the ghost image is estimated from equation (9):
min over x and k of ||x ⊗ k - W||² + γ·||k||² + λ·P(x) (9)
where W is the extracted strong edge region, x is the latent image of W during the ghost kernel calculation, ⊗ represents convolution, k is the ghost kernel, and γ and λ are weights.
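A direct reading of equation (8) is a weighted count of non-zero intensities plus a count of non-zero gradients. The snippet below evaluates such a prior; the threshold eps used to decide what counts as non-zero on floating-point images and the default σ are illustrative choices.

```python
import numpy as np

def l0_prior(x, sigma=1.0, eps=1e-3):
    """P(x) = sigma * Pt(x) + Pt(grad x), with Pt counting non-zero entries
    (entries whose magnitude exceeds eps, so that the count is meaningful
    for floating-point images)."""
    gy, gx = np.gradient(x)
    p_intensity = np.count_nonzero(np.abs(x) > eps)        # Pt(x)
    p_gradient = np.count_nonzero(np.hypot(gx, gy) > eps)  # Pt(grad x)
    return sigma * p_intensity + p_gradient
```

Within the objective of equation (9), this prior is the term that the iterative, alternating updates of x and k over the strong-edge region W have to keep small.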
When an image is captured on the camera negative, an ideal point source does not appear as a point but is spread out and becomes a point ghost; the function describing this spread is called the point spread function. A non-point source is typically the sum of many individual point sources, so a pixel in the recorded image can be represented by the point spread function and the latent image:
d_i = Σ_j p_ij · u_j (10)
where p_ij is the point spread function, j indexes the real image and i indexes the image recorded by the camera, u_j is the value of the real image at coordinate j, and d_i is the value of the recorded image at coordinate i.
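Equation (10) simply states that each recorded pixel is a linear combination of latent pixels weighted by the point spread function. The one-dimensional toy example below makes this explicit; the particular 5-pixel spread matrix is only illustrative.

```python
import numpy as np

# Each recorded pixel d_i is a weighted sum of latent pixels u_j with weights p_ij.
u = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # latent signal: a single point source
P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],   # row i holds the spread weights p_ij
              [0.0, 0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.0, 0.5]])
d = P @ u                                   # recorded signal: the point is spread out
print(d)                                    # -> [0., 0.5, 0.5, 0., 0.]
```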
And S3, estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and evaluating a potential sharp image in the original ghost image by using an adaptive deconvolution function and the point spread function.
Referring to fig. 2, in the present embodiment, once the exact point spread function is estimated, a fast adaptive non-blind deconvolution method is used to perform the final deconvolution:
min over L of Σ_i [ (1/2)·((L ⊗ k - B)_i)² + |(f1 ⊗ L)_i|^q + |(f2 ⊗ L)_i|^q ] (11)
where i is the index running over all pixels and ⊗ denotes the convolution of two matrices. Based on empirical values, the present invention uses q = 2/3 throughout. f1 and f2 are first-derivative filters:
f1 = [1, -1] (12)
f2 = [1, -1]^T (13)
The aim is to find the L that minimizes the reconstruction error (L ⊗ k - B)², while the image prior favors an L that is a correct, sharp interpretation of the scene.
However, q < 1 makes the optimization problem non-convex, and several methods exist for solving such non-convex problems. A fast algorithm based on half-quadratic splitting introduces two auxiliary variables w1 and w2 at each pixel, standing in for (f1 ⊗ L)_i and (f2 ⊗ L)_i inside the |·|^q terms, so that equation (11) can be transformed into the following optimization problem:
min over L and w of Σ_i [ (1/2)·((L ⊗ k - B)_i)² + λ·((f1 ⊗ L)_i - w1,i)² + λ·((f2 ⊗ L)_i - w2,i)² + |w1,i|^q + |w2,i|^q ] (14)
In equation (14), the quadratic penalty terms λ·((fk ⊗ L)_i - wk,i)² constrain wk,i ≈ (fk ⊗ L)_i. The auxiliary variables wk and the parameter λ are control parameters with different roles in the iterative process: as λ becomes larger, the solution of equation (14) converges to the solution of equation (11). This scheme, also referred to as alternating minimization, is a common image restoration technique and is adopted by the invention. For a fixed λ, equation (14) is minimized by performing two steps alternately, that is, the sub-problems of ω and L are solved separately.
Solving the ω-subproblem: the input ghost image B is taken as the initial estimate of the sharp image. Given a fixed L, finding the optimal ω can be simplified to:
min over w of [ λ·(w - v)² + |w|^q ] (15)
where v = (fk ⊗ L)_i, and q = 2/3 is taken.
The correct root of the resulting polynomial is found and selected for each pixel. The present invention then obtains the latent image by solving the L-subproblem: given the fixed value of ω from the previous iteration, the optimal L is obtained from the following optimization problem, to which equation (14) reduces:
min over L of Σ_i [ (1/2)·((L ⊗ k - B)_i)² + λ·((f1 ⊗ L)_i - w1,i)² + λ·((f2 ⊗ L)_i - w2,i)² ] (16)
through iterative solution, a deghosting result can be obtained.
An embodiment of the present invention further provides an image processing apparatus suitable for vehicle identification, including:
the image ghost processing module is used for acquiring an original image for vehicle identification and constructing an image regression model of the original image;
the image edge processing module is used for carrying out edge reconstruction, ghost matrix estimation and potential image estimation on the original image based on a multilateral filtering algorithm to obtain an enhanced ghost image;
and the image latent image processing module is used for estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and evaluating a latent clear image in the original ghost image by adopting an adaptive deconvolution function and the point spread function.
In one embodiment of the present invention, the image edge processing module is further configured to:
performing edge reconstruction on the original image, and extracting a strong edge region in the original image;
based on the strong edge region, estimating a ghost kernel by using an iterative method to perform ghost matrix estimation.
In one embodiment of the present invention, the image edge processing module is further configured to:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
In one embodiment of the present invention, the image edge processing module is further configured to:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
In one embodiment of the present invention, the image ghosting processing module is configured to:
constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
obtaining a point spread function according to the relation:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g (x, y) represents ghost pixels, f (x, y) represents true pixels, h (x, y) represents a point spread function, and n (x, y) represents additive noise; l denotes a ghost length, and θ denotes a ghost angle.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention. It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (10)

1. An image processing method suitable for vehicle identification, comprising the steps of:
acquiring an original image for vehicle identification, and constructing an image regression model of the original image;
based on a multilateral filtering algorithm, performing edge reconstruction, ghost matrix estimation and potential image estimation on the original image to obtain an enhanced ghost image;
and estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and estimating a potential sharp image in the original ghost image by using an adaptive deconvolution function and the point spread function.
2. The image processing method of claim 1, wherein the performing edge reconstruction, ghost matrix estimation, and latent image estimation on the original image comprises:
performing edge reconstruction on the original image, and extracting a strong edge region in the original image;
based on the strong edge region, estimating a ghost kernel by using an iterative method to perform ghost matrix estimation.
3. The image processing method according to claim 2, wherein the edge reconstruction is performed on the original image to extract a strong edge region in the original image, specifically:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
4. The image processing method according to claim 2, wherein the ghost kernel is estimated using an iterative method for ghost matrix estimation based on the strong edge region, in particular:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
5. The image processing method of claim 1, wherein said constructing an image regression model of the original image comprises:
constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
obtaining a point spread function according to the relation:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g(x, y) represents ghost pixels, f(x, y) represents true pixels, h(x, y) represents a point spread function, n(x, y) represents additive noise, and the symbol * represents the convolution operator; L denotes a ghost length, and θ denotes a ghost angle.
6. An image processing apparatus adapted for vehicle recognition, comprising:
the image ghost processing module is used for acquiring an original image for vehicle identification and constructing an image regression model of the original image;
the image edge processing module is used for carrying out edge reconstruction, ghost matrix estimation and potential image estimation on the original image based on a multilateral filtering algorithm to obtain an enhanced ghost image;
and the image latent image processing module is used for estimating a point spread function of the image degradation model by using the original image and the enhanced ghost image, and evaluating a latent clear image in the original ghost image by adopting an adaptive deconvolution function and the point spread function.
7. The image processing apparatus of claim 6, wherein the image edge processing module is further configured to:
performing edge reconstruction on the original image, and extracting a strong edge region in the original image;
based on the strong edge region, estimating a ghost kernel by using an iterative method to perform ghost matrix estimation.
8. The image processing apparatus of claim 7, wherein the image edge processing module is further configured to:
converting the original image from an RGB color space to a YCbCr color space, and performing down-sampling and up-sampling on the converted image to obtain a sampling result;
and obtaining a high-frequency layer of the original image based on the brightness difference between the brightness of the original image and the brightness of the sampling result, and extracting a strong edge region in the original image by using the high-frequency layer.
9. The image processing apparatus of claim 7, wherein the image edge processing module is further configured to:
establishing a prior definition function of the original image and setting a regularization term in the prior definition function;
estimating ghost kernels using the a priori definition function and the regularization term based on the strong edge region.
10. The image processing apparatus of claim 6, wherein the image ghosting processing module is to:
constructing a relation among ghost pixels, real pixels, a point spread function and additive noise:
g(x,y)=f(x,y)*h(x,y)+n(x,y) (1)
obtaining a point spread function according to the relation:
h(x, y) = 1/L, if sqrt(x² + y²) ≤ L/2 and y/x = tan θ; h(x, y) = 0, otherwise (2)
wherein g(x, y) represents ghost pixels, f(x, y) represents true pixels, h(x, y) represents a point spread function, n(x, y) represents additive noise, and the symbol * represents the convolution operator; L denotes a ghost length, and θ denotes a ghost angle.
Application CN202010677528.8A, priority date 2020-07-14, filing date 2020-07-14: Image processing method and device suitable for vehicle identification. Status: Pending. Publication: CN112069870A.

Priority Applications (1)

Application Number: CN202010677528.8A (publication CN112069870A); Priority Date: 2020-07-14; Filing Date: 2020-07-14; Title: Image processing method and device suitable for vehicle identification

Applications Claiming Priority (1)

Application Number: CN202010677528.8A (publication CN112069870A); Priority Date: 2020-07-14; Filing Date: 2020-07-14; Title: Image processing method and device suitable for vehicle identification

Publications (1)

Publication Number Publication Date
CN112069870A 2020-12-11

Family

ID=73657301

Family Applications (1)

Application Number: CN202010677528.8A; Publication: CN112069870A (Pending); Priority Date: 2020-07-14; Filing Date: 2020-07-14; Title: Image processing method and device suitable for vehicle identification

Country Status (1)

Country Link
CN (1) CN112069870A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731436A (en) * 2020-12-17 2021-04-30 浙江大学 Multi-mode data fusion travelable area detection method based on point cloud up-sampling

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130242129A1 (en) * 2010-09-28 2013-09-19 Stefan Harmeling Method and device for recovering a digital image from a sequence of observed digital images
CN107103317A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN107451973A (en) * 2017-07-31 2017-12-08 西安理工大学 Motion blur image restoration method based on the extraction of abundant fringe region
CN109410143A (en) * 2018-10-31 2019-03-01 泰康保险集团股份有限公司 Image enchancing method, device, electronic equipment and computer-readable medium
CN109919027A (en) * 2019-01-30 2019-06-21 合肥特尔卡机器人科技股份有限公司 A kind of Feature Extraction System of road vehicles
CN110415193A (en) * 2019-08-02 2019-11-05 平顶山学院 The restored method of coal mine low-light (level) blurred picture
CN111292257A (en) * 2020-01-15 2020-06-16 重庆邮电大学 Retinex-based image enhancement method in dark vision environment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130242129A1 (en) * 2010-09-28 2013-09-19 Stefan Harmeling Method and device for recovering a digital image from a sequence of observed digital images
CN107103317A (en) * 2017-04-12 2017-08-29 湖南源信光电科技股份有限公司 Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN107451973A (en) * 2017-07-31 2017-12-08 西安理工大学 Motion blur image restoration method based on the extraction of abundant fringe region
CN109410143A (en) * 2018-10-31 2019-03-01 泰康保险集团股份有限公司 Image enchancing method, device, electronic equipment and computer-readable medium
CN109919027A (en) * 2019-01-30 2019-06-21 合肥特尔卡机器人科技股份有限公司 A kind of Feature Extraction System of road vehicles
CN110415193A (en) * 2019-08-02 2019-11-05 平顶山学院 The restored method of coal mine low-light (level) blurred picture
CN111292257A (en) * 2020-01-15 2020-06-16 重庆邮电大学 Retinex-based image enhancement method in dark vision environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
史海玲 (Shi Hailing) et al.: "运动模糊车辆图像复原方法研究" [Research on restoration methods for motion-blurred vehicle images], 计算机技术与发展 (Computer Technology and Development), vol. 26, no. 8, pages 60-64 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731436A (en) * 2020-12-17 2021-04-30 浙江大学 Multi-mode data fusion travelable area detection method based on point cloud up-sampling
CN112731436B (en) * 2020-12-17 2024-03-19 浙江大学 Multi-mode data fusion travelable region detection method based on point cloud up-sampling

Similar Documents

Publication Publication Date Title
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
Li et al. Rain streak removal using layer priors
Ding et al. Single image rain and snow removal via guided L0 smoothing filter
US9779491B2 (en) Algorithm and device for image processing
CN102326379B (en) Method for removing blur from image
CN106920220B (en) The turbulent flow method for blindly restoring image optimized based on dark primary and alternating direction multipliers method
CN110796616B (en) Turbulence degradation image recovery method based on norm constraint and self-adaptive weighted gradient
CN105184743B (en) A kind of image enchancing method based on non-linear Steerable filter
EP2916537B1 (en) Image processing device
CN112215773B (en) Local motion deblurring method and device based on visual saliency and storage medium
CN105513025B (en) A kind of improved rapid defogging method
CN111861925A (en) Image rain removing method based on attention mechanism and gate control circulation unit
CN111340732B (en) Low-illumination video image enhancement method and device
CN113962908B (en) Pneumatic optical effect large-visual-field degraded image point-by-point correction restoration method and system
CN116563146A (en) Image enhancement method and system based on leachable curvature map
Chen et al. Visual depth guided image rain streaks removal via sparse coding
CN104766287A (en) Blurred image blind restoration method based on significance detection
CN108171124B (en) Face image sharpening method based on similar sample feature fitting
CN112069870A (en) Image processing method and device suitable for vehicle identification
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
CN115965552B (en) Frequency-space-time domain joint denoising and recovering system for low signal-to-noise ratio image sequence
Anantrasirichai et al. Mitigating the effects of atmospheric distortion using DT-CWT fusion
CN112330566B (en) Image denoising method and device and computer storage medium
Ranipa et al. A practical approach for depth estimation and image restoration using defocus cue
Kim et al. Single image dehazing of road scenes using spatially adaptive atmospheric point spread function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination