CN113421215A - Automatic tracking system of car based on artificial intelligence - Google Patents

Automatic tracking system of car based on artificial intelligence

Info

Publication number
CN113421215A
CN113421215A (application CN202110812782.9A)
Authority
CN
China
Prior art keywords
image
processing
module
lane line
carrying
Prior art date
Legal status
Pending
Application number
CN202110812782.9A
Other languages
Chinese (zh)
Inventor
金文�
张翟容
徐萌
万晴
Current Assignee
Jiangsu Jinhaixing Navigation Technology Co ltd
Original Assignee
Jiangsu Jinhaixing Navigation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangsu Jinhaixing Navigation Technology Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 5/80
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Abstract

The invention provides an artificial-intelligence-based automatic tracking system for an automobile, comprising a shooting module, an image processing module, a lane recognition module, a distance calculation module and a departure early-warning module. The shooting module acquires real-time images of the automobile's surroundings in four directions: front, rear, left and right. The image processing module merges the real-time images into a panoramic image. The lane recognition module performs image recognition on the panoramic image to obtain the left and right lane lines. The distance calculation module calculates the distance L between the central axis of the automobile and the left lane line, and the distance R between the central axis and the right lane line. The departure early-warning module issues a lane-departure warning to the driver when L or R is greater than a preset threshold. The invention mitigates inaccurate lane recognition caused by occlusion from a leading vehicle.

Description

Automatic tracking system of car based on artificial intelligence
Technical Field
The invention relates to the field of automobile driving, in particular to an automatic automobile tracking system based on artificial intelligence.
Background
In advanced driver assistance, lane-departure warning effectively reminds a distracted driver to correct the vehicle's position in time, avoiding accidents and interference with vehicles behind. Current vehicle-mounted driving recorders capture only a forward-looking video, and detecting lane lines directly in this forward view is difficult because of complex road conditions, occlusion and illumination. In particular, in crowded urban areas, in rain or snow, and at night, a leading vehicle often occludes the lane lines, making accurate real-time detection of the lane lines ahead very hard.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide an artificial intelligence-based automatic tracking system for an automobile, which includes a shooting module, an image processing module, a lane recognition module, a distance calculation module, and a departure warning module;
the shooting module is used for acquiring real-time images of the automobile in four directions, namely front, rear, left and right directions;
the image processing module is used for carrying out image merging processing on the real-time image to obtain a panoramic image;
the lane recognition module is used for carrying out image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image;
the distance calculation module is used for calculating the distance L between the central axis of the automobile and the left lane line and calculating the distance R between the central axis of the automobile and the right lane line;
and the deviation early warning module is used for sending vehicle deviation early warning to a driver when L or R is greater than a preset threshold value.
Preferably, the shooting module comprises four 180-degree fisheye lenses, arranged at the front, rear, left and right of the automobile respectively;
the fisheye lens is used for acquiring a real-time image of the direction in which the fisheye lens is located.
Preferably, the image merging processing on the real-time images to obtain a panoramic image includes:
carrying out camera calibration processing on the fisheye lens to obtain internal reference and external reference of the fisheye lens;
carrying out distortion correction processing on the real-time image according to the internal parameters and the external parameters to obtain a distortion corrected image;
carrying out perspective transformation processing on the distortion correction image to obtain a top plan image;
and carrying out image splicing processing on the four overlooking plane images to obtain a panoramic image.
Preferably, the performing image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image includes:
carrying out graying processing on the panoramic image to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
extracting an ROI area image in the noise-reduced image;
performing binarization processing on the ROI area image according to gradient information in the ROI area image to obtain a binarized image;
acquiring a connected domain in a binary image;
and identifying and obtaining a left lane line and a right lane line according to the geometric characteristics of the connected domain.
Preferably, the graying the panoramic image to obtain a grayscale image includes:
carrying out graying processing on the panoramic image by using the following formula to obtain a grayscale image:
gray(c)=w1×R(c)+w2×G(c)+w3×B(c)
wherein, gray (c) represents the pixel value of the pixel point c in the panoramic image in the gray image gray, R (c), G (c), B (c) represent the pixel value of the pixel point c in the panoramic image in the image R, the image G, and the image B, and the image R, the image G, and the image B represent the image of the red component, the image of the green component, and the image of the blue component in the RGB color space of the panoramic image, respectively.
The invention obtains the lane lines from a panoramic image of the automobile's surroundings, and then judges whether the automobile deviates from its lane according to the distance between each lane line and the automobile's central axis.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of an automatic tracking system for a vehicle based on artificial intelligence according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the present invention provides an automatic tracking system for an automobile based on artificial intelligence, which includes a shooting module, an image processing module, a lane recognition module, a distance calculation module, and a departure warning module;
the shooting module is used for acquiring real-time images of the automobile in four directions, namely front, rear, left and right directions;
the image processing module is used for carrying out image merging processing on the real-time image to obtain a panoramic image;
the lane recognition module is used for carrying out image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image;
the distance calculation module is used for calculating the distance L between the central axis of the automobile and the left lane line and calculating the distance R between the central axis of the automobile and the right lane line;
and the deviation early warning module is used for sending vehicle deviation early warning to a driver when L or R is greater than a preset threshold value.
In particular, the central axis of the automobile refers to the line perpendicular to the line connecting the center points of its two rear wheels.
The vehicle-departure warning can be given by a voice prompt, or by a corresponding pop-up window on the vehicle's central control screen.
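The departure decision described above reduces to a threshold comparison. A minimal sketch follows; the function name, the return convention and the meter-valued arguments are illustrative assumptions, not part of the patent:

```python
def check_departure(dist_left, dist_right, threshold):
    """Lane-departure check per the rule above: warn when the distance L
    (central axis to left lane line) or R (central axis to right lane
    line) exceeds the preset threshold."""
    if dist_left > threshold:
        # axis far from the left line: the car is drifting to the right
        return "departure warning: drifting right"
    if dist_right > threshold:
        # axis far from the right line: the car is drifting to the left
        return "departure warning: drifting left"
    return None  # within lane, no warning
```

In practice the returned warning would be routed to the voice prompt or the central-control-screen pop-up described above.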
Preferably, the shooting module comprises four 180-degree fisheye lenses, arranged at the front, rear, left and right of the automobile respectively;
the fisheye lens is used for acquiring a real-time image of the direction in which the fisheye lens is located.
Preferably, the image merging processing on the real-time images to obtain a panoramic image includes:
carrying out camera calibration processing on the fisheye lens to obtain internal reference and external reference of the fisheye lens;
carrying out distortion correction processing on the real-time image according to the internal parameters and the external parameters to obtain a distortion corrected image;
carrying out perspective transformation processing on the distortion correction image to obtain a top plan image;
and carrying out image splicing processing on the four overlooking plane images to obtain a panoramic image.
Specifically, adjacent fisheye lenses have a 90-degree overlapping field of view; the four fisheye lenses work synchronously, acquiring real-time images in the front, rear, left and right directions at the same moment.
Specifically, in image measurement and machine vision applications, determining the relationship between the three-dimensional position of a point on an object's surface and its corresponding point in the image requires a geometric model of camera imaging; the parameters of this model are the camera parameters. Under most conditions these parameters can only be obtained through experiment and computation, and the process of solving for them is called camera calibration. In the present invention, the intrinsic and extrinsic parameters of the fisheye cameras are determined with Zhang Zhengyou's calibration method.
Specifically, the distortion of the fisheye lens is classified into radial distortion and tangential distortion.
Radial distortion is distortion distributed along the lens radius; it arises because rays far from the optical center of the lens are bent more than rays near the center. The distortion can be described by the first few terms of a Taylor series expansion around the principal point, as follows:

x0 = x(1 + k1*r^2 + k2*r^4 + k3*r^6)
y0 = y(1 + k1*r^2 + k2*r^4 + k3*r^6)

where r^2 = x^2 + y^2; (x0, y0) are coordinates in the actual (distorted) imaging plane coordinate system; (x, y) are coordinates in the ideal imaging plane coordinate system; and k1, k2, k3 are radial distortion correction coefficients.
the tangential distortion is due to the fact that the lens itself is not parallel to the camera sensor plane (imaging plane) or image plane, with two additional parameters p1And p2To describe, the model is as follows:
x0=x+[2p1xy+p2(r2+2x2)]
y0=y+[2p2xy+p1(r2+2y2)]
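The radial and tangential terms above can be combined into a single forward distortion mapping on ideal normalized coordinates. The sketch below does so; combining the two terms additively is the conventional form (as in the Brown-Conrady model) and is an assumption here, since the text presents the terms separately:

```python
import numpy as np

def apply_distortion(x, y, k1, k2, k3, p1, p2):
    """Map ideal (undistorted) normalized coordinates (x, y) to distorted
    coordinates (x0, y0) using the radial + tangential model above."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x0 = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y0 = y * radial + 2.0 * p2 * x * y + p1 * (r2 + 2.0 * y * y)
    return x0, y0
```

With all coefficients zero the mapping is the identity, which is a quick sanity check on the model.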
the mapping relation between the distorted image coordinates and the corrected image coordinates can be established according to the distortion model, and the coordinates corresponding to the corrected image can be calculated according to the coordinates of the distorted image obtained by shooting. Meanwhile, the correction coordinates obtained by calculation in the mapping process have decimal numbers, and interpolation calculation needs to be carried out on the coordinates.
Specifically, the image stitching processing is performed on four overlooking plane images to obtain a panoramic image, and the method comprises the following steps:
and respectively acquiring feature points in two adjacent plane images, then acquiring a feature point matching pair between the two images, deleting the wrong feature point matching pair, acquiring a screened feature point matching pair, and splicing the two adjacent plane images according to the feature point matching pair.
Preferably, the performing image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image includes:
carrying out graying processing on the panoramic image to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
extracting an ROI area image in the noise-reduced image;
performing binarization processing on the ROI area image according to gradient information in the ROI area image to obtain a binarized image;
acquiring a connected domain in a binary image;
and identifying and obtaining a left lane line and a right lane line according to the geometric characteristics of the connected domain.
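The gradient-based binarization step in the list above can be sketched as thresholding the gradient magnitude; the mean-plus-standard-deviation threshold is an illustrative assumption, since the text does not specify how the gradient information is thresholded:

```python
import numpy as np

def gradient_binarize(roi, thresh=None):
    """Binarize the ROI using gradient information: pixels whose gradient
    magnitude exceeds a threshold become foreground (1). Lane-line edges
    produce strong gradients against the road surface."""
    gy, gx = np.gradient(roi.astype(float))   # per-axis finite differences
    mag = np.hypot(gx, gy)                    # gradient magnitude
    if thresh is None:
        thresh = mag.mean() + mag.std()       # assumed adaptive threshold
    return (mag > thresh).astype(np.uint8)
```

On a dark road with a bright painted stripe, only the stripe's edges survive the threshold, which is what the subsequent connected-domain analysis operates on.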
Preferably, the graying the panoramic image to obtain a grayscale image includes:
carrying out graying processing on the panoramic image by using the following formula to obtain a grayscale image:
gray(c)=w1×R(c)+w2×G(c)+w3×B(c)
wherein, gray (c) represents the pixel value of the pixel point c in the panoramic image in the gray image gray, R (c), G (c), B (c) represent the pixel value of the pixel point c in the panoramic image in the image R, the image G, and the image B, and the image R, the image G, and the image B represent the image of the red component, the image of the green component, and the image of the blue component in the RGB color space of the panoramic image, respectively.
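The graying formula above can be sketched directly; the ITU-R BT.601 weights are a common choice and an assumption here, since w1, w2, w3 are left unspecified:

```python
import numpy as np

def to_gray(rgb, w1=0.299, w2=0.587, w3=0.114):
    """Weighted graying per gray(c) = w1*R(c) + w2*G(c) + w3*B(c),
    applied to every pixel of an (H, W, 3) RGB array at once."""
    r = rgb[..., 0].astype(float)  # image R: red component
    g = rgb[..., 1].astype(float)  # image G: green component
    b = rgb[..., 2].astype(float)  # image B: blue component
    return w1 * r + w2 * g + w3 * b
```

Because the default weights sum to 1, a pure white pixel maps to the maximum gray value.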
Preferably, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
carrying out blocking processing on the gray level image to obtain a plurality of image blocks;
adaptively selecting a processing function according to the type of an image block to perform noise reduction processing on the image block to obtain a noise-reduced image block;
and forming a noise-reduced image by all the noise-reduced image blocks.
In the prior art, image blocks are generally divided equally by area. This is unfavorable for obtaining blocks whose pixels are of a uniform type, and therefore unfavorable for adaptively selecting a suitable processing function per block. Existing noise-reduction schemes also typically apply the same noise-reduction function to all pixels; a single noise-reduction function cannot adapt to the various types of pixels, which clearly degrades the final noise-reduction result.
Specifically, the blocking processing of the grayscale image to obtain a plurality of image blocks includes:
and carrying out blocking processing on the gray level image in a cyclic blocking mode:
the first blocking processing is carried out, the gray image is divided into N image blocks with equal areas, whether each image block needs to be divided again is judged, if yes, the image blocks are stored into a set dcu1If not, storing the image block into a set finu;
second blocking process for dcu1Image block nk of1,j,j∈[1,numdcu1],numdcu1Denotes dcu1Total number of image blocks contained in (b), n1,jDividing the image into N image blocks with equal areas, respectively judging whether each image block needs to be divided again, if so, storing the image blocks into a set dcu2If not, storing the image block into a set finu;
block processing for nth time, for dcun-1Image block nk ofn-1,i,i∈[1,numdcun-1],numdcun-1Denotes dcun-1Total number of image blocks contained in (b), nn-1,iDividing the image into N image blocks with equal areas, respectively judging whether each image block needs to be divided again, if so, storing the image blocks into a set dcunIf not, storing the image block into a set finu;
the conditions for ending the blocking process are as follows: n is greater than the upper limit of the number of the block processing times (nth or dcu)nThe number of elements in (1) is 0.
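The cyclic blocking procedure can be sketched as follows, with N = 4 equal-area sub-blocks per pass assumed (the text leaves N free) and the blocking-parameter test abstracted into a predicate:

```python
import numpy as np

def split4(block):
    """Split a block into four roughly equal-area sub-blocks (N = 4 assumed)."""
    h, w = block.shape
    return [block[:h // 2, :w // 2], block[:h // 2, w // 2:],
            block[h // 2:, :w // 2], block[h // 2:, w // 2:]]

def cyclic_blocking(gray, needs_split, max_rounds=4):
    """Repeatedly re-divide blocks flagged by `needs_split` (the
    blocking-parameter test), collecting finished blocks in finu.
    Ends when the round limit is hit or no blocks remain pending."""
    pending = [gray]            # plays the role of dcu_n in the text
    finu = []                   # finished blocks
    for _ in range(max_rounds):
        if not pending:
            break
        nxt = []
        for blk in pending:
            if needs_split(blk) and min(blk.shape) >= 2:
                nxt.extend(split4(blk))
            else:
                finu.append(blk)
        pending = nxt
    finu.extend(pending)        # blocks left over when the limit is hit
    return finu
```

The returned blocks tile the input image exactly, so the per-block denoised results can later be reassembled into a full noise-reduced image.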
This blocking process judges, from each block's blocking parameter, whether the block needs further division, so that blocks containing too many pixels, or whose pixels differ greatly from one another, are divided further. An end condition is also set, so that blocks do not become so small that efficiency suffers.
Specifically, whether an image block needs to be divided again is judged as follows:

The blocking parameter of the image block is calculated:

[formula image in original: blocking parameter cdkis(u), a weighted combination of the variance of the gradient magnitudes f(v) over cu (relative to the standard value valb) and the pixel count nofcu (relative to the standard value numb)]

where cdkis(u) denotes the blocking parameter of image block u; g1 and g2 are preset weight parameters; cu denotes the set of all pixels of image block u; nofcu denotes the number of pixels contained in cu; v denotes a pixel contained in cu; f(v) denotes the gradient magnitude of pixel v; valb denotes a preset standard value for the variance of the gradient magnitude; and numb denotes a preset standard value for the pixel count.

If the blocking parameter is larger than a preset blocking parameter threshold, the image block needs to be divided again; otherwise, it does not.
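Since the blocking-parameter formula survives only as an image in the source, the sketch below uses a weighted combination consistent with the variables defined above (gradient-magnitude variance normalized by valb, pixel count normalized by numb); this exact form, and all default values, are assumptions:

```python
import numpy as np

def blocking_parameter(grad_mag, g1=0.5, g2=0.5, valb=100.0, numb=1024):
    """Assumed form of cdkis(u): weighted sum of the block's
    gradient-magnitude variance (vs. the standard valb) and its pixel
    count (vs. the standard numb). grad_mag holds f(v) for v in cu."""
    f = np.asarray(grad_mag, dtype=float)
    return g1 * f.var() / valb + g2 * f.size / numb

def needs_resplit(grad_mag, thresh=1.0):
    """Divide again when the blocking parameter exceeds the threshold."""
    return blocking_parameter(grad_mag) > thresh
```

A perfectly uniform block contributes nothing through the variance term, so only its size can push it over the threshold, which matches the stated intent of splitting blocks that are too large or too heterogeneous.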
In calculating the blocking parameter, the difference between pixels is reflected through the difference between their gradients, and the parameter is obtained by weighted summation. The weights of the variables in the blocking parameter can therefore be adjusted to different requirements, ensuring an accurate blocking result, which in turn benefits the accuracy of the noise reduction.
Preferably, adaptively selecting a processing function according to the type of the image block to perform noise reduction processing on the image block includes:
for image block h, if cdkis (h) ≦ sthre1If the pixel block belongs to the first type, denoising the image block h by using a median filtering algorithm, wherein cdkis (h) represents the partitioning parameter of the pixel block h;
if sthre1<cdkis(h)<sthre2If it is of the second type, the image block h is denoised using the following processing function:
Figure BDA0003168826410000062
if sthre2If the cdkis (h) is less than or equal to cdkis (h), the image block h belongs to the third type, and a wavelet denoising function is adopted to perform denoising treatment on the image block h;
in the formula, uop represents a set of pixel points in a neighborhood of H multiplied by H size of a pixel point op in an image block H, q represents the pixel points contained in the uop, g (q) represents the pixel value of the pixel point q in the image block H, g (op) represents the pixel value of the pixel point op in the image block H, ah represents the image block after noise reduction, ah (op) represents the pixel value of the pixel point op in the image block after noise reduction,
Figure BDA0003168826410000071
dti (q, op) ═ dtbl (q) -dtbl (op) |, dtbl (q) represents the minimum distance of pixel point q from the edge of image block h, dtbl (op) represents the minimum distance of pixel point op from the edge of image block h, numuop represents the total number of elements contained in uop,
Figure BDA0003168826410000072
sthre1and sthre2Respectively representing a preset first judgment threshold and a second judgment threshold.
In the above embodiment, the type of the image block is determined by setting the first determination threshold and the second determination threshold, and then different processing functions are correspondingly set according to different types to perform noise reduction processing.
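The three-way selection can be sketched as below. The median filter for the first type follows the text; the second-type weighted function and the third-type wavelet step are replaced by simple stand-ins, because their exact formulas survive only as images in the source:

```python
import numpy as np

def median_filter(block, ):
    """3x3 median filter (borders handled by clipping the window),
    used for first-type (smooth) blocks."""
    h, w = block.shape
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - 1), min(h, i + 2)
            j0, j1 = max(0, j - 1), min(w, j + 2)
            out[i, j] = np.median(block[i0:i1, j0:j1])
    return out

def denoise_block(block, cdkis, sthre1=0.5, sthre2=2.0):
    """Select a denoising routine by block type, as in the text.
    Second and third branches are placeholders (blend / identity),
    not the patent's actual formulas."""
    if cdkis <= sthre1:                  # first type: median filter
        return median_filter(block)
    elif cdkis < sthre2:                 # second type: weighted-average stand-in
        return (block.astype(float) + median_filter(block)) / 2.0
    else:                                # third type: wavelet denoising (not sketched)
        return block.astype(float)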
The invention obtains the lane lines from a panoramic image of the automobile's surroundings, and then judges whether the automobile deviates from its lane according to the distance between each lane line and the automobile's central axis.
Specifically, the left and right lane lines are identified from the geometric features of the connected domains: because lane lines are elongated, they can be recognized by the aspect ratio of each connected domain.
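The elongated-component test can be sketched with a plain 4-connected labeling and a bounding-box aspect-ratio filter; the ratio threshold of 4 is an illustrative assumption:

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """4-connected component labeling on a binary image (1 = foreground);
    returns each component as a list of (row, col) pixels."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    comps, cur = [], 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not labels[si, sj]:
                cur += 1
                labels[si, sj] = cur
                q, pixels = deque([(si, sj)]), []
                while q:
                    i, j = q.popleft()
                    pixels.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w \
                                and binary[ni, nj] and not labels[ni, nj]:
                            labels[ni, nj] = cur
                            q.append((ni, nj))
                comps.append(pixels)
    return comps

def lane_like(comp, min_ratio=4.0):
    """Keep components whose bounding box is elongated (lane-line-like)."""
    ys = [p[0] for p in comp]
    xs = [p[1] for p in comp]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return max(height, width) / min(height, width) >= min_ratio
```

A thin vertical stripe passes the filter while a compact blob (e.g. vehicle residue in the binarized image) does not, which is the discrimination the text relies on.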
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (5)

1. An automobile automatic tracking system based on artificial intelligence is characterized by comprising a shooting module, an image processing module, a lane recognition module, a distance calculation module and a departure early warning module;
the shooting module is used for acquiring real-time images of the automobile in four directions, namely front, rear, left and right directions;
the image processing module is used for carrying out image merging processing on the real-time image to obtain a panoramic image;
the lane recognition module is used for carrying out image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image;
the distance calculation module is used for calculating the distance L between the central axis of the automobile and the left lane line and calculating the distance R between the central axis of the automobile and the right lane line;
and the deviation early warning module is used for sending vehicle deviation early warning to a driver when L or R is greater than a preset threshold value.
2. The artificial intelligence based automatic automobile tracking system according to claim 1, wherein the shooting module comprises 4 fisheye lenses of 180 degrees, and the fisheye lenses are respectively arranged in the front, back, left and right directions of the automobile;
the fisheye lens is used for acquiring a real-time image of the direction in which the fisheye lens is located.
3. The artificial intelligence based automatic tracking system for automobiles according to claim 2, wherein said image merging process for said real-time images to obtain a panoramic image comprises:
carrying out camera calibration processing on the fisheye lens to obtain internal reference and external reference of the fisheye lens;
carrying out distortion correction processing on the real-time image according to the internal parameters and the external parameters to obtain a distortion corrected image;
carrying out perspective transformation processing on the distortion correction image to obtain a top plan image;
and carrying out image splicing processing on the four overlooking plane images to obtain a panoramic image.
4. The system of claim 1, wherein the performing image recognition processing on the panoramic image to obtain a left lane line and a right lane line in the panoramic image comprises:
carrying out graying processing on the panoramic image to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
extracting an ROI area image in the noise-reduced image;
performing binarization processing on the ROI area image according to gradient information in the ROI area image to obtain a binarized image;
acquiring a connected domain in a binary image;
and identifying and obtaining a left lane line and a right lane line according to the geometric characteristics of the connected domain.
5. The artificial intelligence based automatic tracking system for automobiles according to claim 4, wherein said graying the panoramic image to obtain a grayscale image comprises:
carrying out graying processing on the panoramic image by using the following formula to obtain a grayscale image:
gray(c)=w1×R(c)+w2×G(c)+w3×B(c)
wherein, gray (c) represents the pixel value of the pixel point c in the panoramic image in the gray image gray, R (c), G (c), B (c) represent the pixel value of the pixel point c in the panoramic image in the image R, the image G, and the image B, and the image R, the image G, and the image B represent the image of the red component, the image of the green component, and the image of the blue component in the RGB color space of the panoramic image, respectively.
CN202110812782.9A 2021-07-19 2021-07-19 Automatic tracking system of car based on artificial intelligence Pending CN113421215A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110812782.9A CN113421215A (en) 2021-07-19 2021-07-19 Automatic tracking system of car based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN113421215A true CN113421215A (en) 2021-09-21

Family

ID=77721290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110812782.9A Pending CN113421215A (en) 2021-07-19 2021-07-19 Automatic tracking system of car based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113421215A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114614525A (en) * 2022-02-22 2022-06-10 南京安充智能科技有限公司 Intelligent charging pile management system
CN114821531A (en) * 2022-04-25 2022-07-29 广州优创电子有限公司 Lane line recognition image display system based on electronic outside rear-view mirror ADAS
CN115082573A (en) * 2022-08-19 2022-09-20 小米汽车科技有限公司 Parameter calibration method and device, vehicle and storage medium
CN115082573B (en) * 2022-08-19 2023-04-11 小米汽车科技有限公司 Parameter calibration method and device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination