CN106803073B - Auxiliary driving system and method based on stereoscopic vision target - Google Patents


Info

Publication number
CN106803073B
CN106803073B (application CN201710018411.7A)
Authority
CN
China
Prior art keywords
image
road surface
filtering
road condition
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710018411.7A
Other languages
Chinese (zh)
Other versions
CN106803073A (en)
Inventor
叶春 (Ye Chun)
张蓉 (Zhang Rong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Yingzhen Technology Co ltd
Original Assignee
Jiangsu Vocational College of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Vocational College of Information Technology filed Critical Jiangsu Vocational College of Information Technology
Priority to CN201710018411.7A priority Critical patent/CN106803073B/en
Publication of CN106803073A publication Critical patent/CN106803073A/en
Application granted granted Critical
Publication of CN106803073B publication Critical patent/CN106803073B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

The invention discloses an auxiliary driving system and method based on a stereoscopic vision target, wherein the method comprises the following steps: S1, acquiring road condition scene images in front of the vehicle through 2 lenses mounted at the upper part of the vehicle; S2, converting the effective features of the acquired road condition scene images into spatial coordinate information; and S3, identifying the marking lines on the road surface and the signs ahead, assisting the driver in strengthening visual recognition of the road conditions ahead. The invention can identify the marking lines on the road surface and various signs ahead under different climatic conditions, and assists the driver in strengthening visual recognition of the road conditions ahead.

Description

Auxiliary driving system and method based on stereoscopic vision target
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to an auxiliary driving system and method based on a stereoscopic vision target.
Background
With the arrival of the intelligent road transportation era, the concept of the intelligent vehicle is becoming increasingly widespread, and drivers' demand for active vehicle safety is growing ever more important. However, vehicles currently running on the road still depend entirely on driver operation. Although the transportation authorities continually promote the concept of road traffic safety, the incidence of road traffic accidents remains high, indicating that improvements in road traffic safety have reached a bottleneck.
According to traffic-department statistics, road traffic accidents are mainly caused by fatigue driving, drunk driving, driver distraction, and inattention to surrounding road conditions. In addition, the traffic-safety committee has further analyzed ordinary driving behaviors that may become accident causes. Because a driver's mood and the external environment affect road-condition recognition, it is difficult for drivers to remain attentive to surrounding road conditions at every moment; each driver prioritizes attention differently, key road-condition information is easily missed, and there is therefore substantial room to improve how road traffic information is presented to drivers.
Intelligent transportation systems remain a core direction for future research. In particular, against the background of continuous advances in computer and communication technology, such research plays an important role in promoting the development of intelligent transportation systems, can deliver comprehensive, wide-area benefits in the future, and pushes traffic management systems toward greater accuracy, real-time performance, and efficiency.
In recent years, computer vision applications have gradually increased in intelligent transportation systems, and they can be divided into two categories. One is roadside video monitoring systems; the other is vehicle-mounted automated driving systems. In the former, a camera is mounted above or beside the road, and its main function is to transmit information such as vehicle position, speed, and type to the intelligent transportation system; in the latter, the camera moves with the vehicle and mainly monitors the conditions around the vehicle body, the driver's fatigue state, and so on, transmitting them to the system.
Examining the problem of stereoscopic vision recognition and its research status, how to identify common pixel features across the two lens images and obtain feature depth information from them is the main bottleneck of stereoscopic vision recognition algorithms. The applicable range of stereoscopic vision has been widely extended to many fields, but if every feature in the image is subjected to stereoscopic vision recognition, complex and unnecessary additional computation results, and the processing time becomes too long to meet the system's requirements for real-time visual recognition and control.
Therefore, in order to solve the above technical problems, it is necessary to provide a driving assistance system and method based on a stereoscopic vision target.
Disclosure of Invention
In view of the above, the present invention provides a driving assistance system and method based on a stereoscopic vision target.
In order to achieve the above purpose, the technical solutions provided by the embodiments of the present invention are as follows:
a driving assistance system based on a stereoscopic vision target, the driving assistance system comprising:
the image acquisition unit comprises 2 lenses arranged at the upper part of the automobile and respectively acquires road condition scene images in front of the automobile;
the characteristic conversion unit is used for converting the effective characteristics of the acquired road condition scene images into space coordinate information;
and the characteristic identification unit is used for identifying the marking lines on the road surface and the indication boards in front.
As a further improvement of the invention, the central height h of 2 lenses in the image acquisition unit is 1.5 m, and the distance b between 2 lenses in the image acquisition unit is 0.5 m.
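For a parallel twin-lens arrangement like the one above, the depth of a feature can be recovered from its horizontal offset (disparity) between the two images. The sketch below assumes a pinhole model; the baseline matches the 0.5 m lens spacing given above, while the focal length `f_px` is an illustrative assumption, not a value stated in the patent.

```python
# Depth from disparity for a parallel twin-lens rig (pinhole model).
# baseline_m matches the 0.5 m lens spacing given above; f_px is an
# assumed calibration value, not taken from the patent.
def depth_from_disparity(disparity_px, f_px=700.0, baseline_m=0.5):
    """z = f * b / d for a common feature seen in both parallel-lens images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point ahead of the rig")
    return f_px * baseline_m / disparity_px

# A common feature shifted 35 px between the left and right images lies
# at depth 700 * 0.5 / 35 = 10 m under these assumed parameters.
z_m = depth_from_disparity(35.0)
```

Note how depth resolution degrades with distance: a one-pixel disparity error matters far more at 75 m than at 15 m, which is consistent with the far-field distortion discussed later in the description.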
The technical scheme provided by the embodiment of the invention is as follows:
a method of assisted driving based on a stereoscopic vision target, the method comprising:
s1, acquiring road condition scene images in front of the vehicle through 2 lenses at the upper part of the vehicle;
s2, converting the effective characteristics of the acquired road condition scene images into space coordinate information;
and S3, identifying the mark lines on the road surface and the sign in front, and assisting the driver to strengthen the visual identification of the road condition in front.
As a further improvement of the present invention, the step S1 further includes:
and local edge cutting is carried out on the road condition scene image so as to enable the horizontal direction and the vertical direction to be the same.
As a further improvement of the present invention, the step S2 includes:
converting the characteristics from a three-dimensional space coordinate system to a two-dimensional pixel coordinate system; and/or
obtaining the depth of field of the images' known common features from the two-dimensional pixel coordinate system, and then converting them into the three-dimensional space coordinate system.
As a further improvement of the present invention, the valid features in step S2 include pixel features of a double yellow solid line, a right red line, and a left red line.
As a further improvement of the present invention, the step S3 specifically includes:
S31, applying a median filtering matrix 10 times and subtracting the original image to obtain the detail image;
S32, the Sobel filter first uses the matrix Sx to process and filter shadows that may appear on the road surface, and then uses the matrix Sxy to obtain a second-order edge feature distribution image;
S33, filtering out high-frequency components within homogeneous regions;
S34, selecting a binarization threshold according to image-processing experience;
S35, converting the original image and its negative into binary images, subjecting each to volume labeling (connected-component labeling), setting a high-pass filtering threshold according to the size of the noise labels, and filtering out small-sized noise and holes;
S36, labeling the blocks remaining after noise filtering, selecting upper and lower filtering thresholds according to the label size of the road surface region, and extracting the road surface features;
and S37, based on the position of the straight-ahead direction on the image plane, the road surface region lies in the lower part of the image.
As a further improvement of the present invention, the binarization threshold in the step S34 is the 80th to 95th percentile of the grayscale distribution.
As a further improvement of the present invention, the step S37 further includes:
the upper region is filtered out, and the lower image region is collected as the ROI interest region.
The invention has the beneficial effects that:
the invention reproduces the process of identifying the road condition in front of the vehicle and the reading behavior of the visual information by the driver, simulates and reproduces the visual identification processing mode of the driver by means of the algorithm processing method for identifying the road scene in front of the vehicle by the stereoscopic vision image, can identify the marking lines on the road surface and various indication boards in front under different climatic environments, and assists the driver in strengthening the visual identification of the road condition in front.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a block diagram of a driving assistance system based on stereoscopic vision targets according to the present invention;
FIG. 2 is a conceptual diagram of the straight-ahead position of the moving vehicle according to the present invention;
FIG. 3 is a flow chart of the driving assistance method based on the stereoscopic vision target of the present invention;
FIGS. 4a-4d are schematic diagrams of, respectively, left-side marking-line feature filtering, right-side marking-line feature filtering, the v-coordinates of the left-side green marks, and the v-coordinates of the right-side green marks, in an embodiment of the present invention;
FIG. 5 is a diagram illustrating specific steps of feature recognition in accordance with an embodiment of the present invention;
fig. 6a and 6b are schematic diagrams of left-side feature recognition processing and right-side feature recognition processing, respectively.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention discloses a driving assistance system based on a stereoscopic vision target, comprising:
the image acquisition unit 10 comprises 2 lenses arranged at the upper part of the automobile, which respectively acquire road condition scene images in front of the moving vehicle;
the feature conversion unit 20 is used for converting the effective features of the collected road condition scene images into space coordinate information;
and a feature recognition unit 30 for recognizing the marking lines on the road surface and the sign in front.
The invention uses stereo vision based on parallel lenses. As shown in fig. 2, 2 lenses 11 mounted at the upper part of the automobile take the place of human vision. The invention selects a road scene, sticks green adhesive tape at fixed intervals on the road surface in front of the lenses to mark spatial coordinate positions, and then collects images in front of the vehicle with the parallel twin-lens arrangement. Using the effective-feature identification strategy and depth-information processing introduced by the invention, the effective features of the image are converted back into depth information and compared with the spatial coordinate positions estimated on site, to help evaluate the simulation effect.
Preferably, in the present invention, the lens center height h is 1.5 m, the twin-lens spacing b is 0.5 m, and the focal length is f; the coordinate directions follow the right-hand rule of the vector cross product, and a coordinate system is constructed from the pinhole imaging principle. The straight-ahead position means the orientation parallel to the road surface along the vehicle's straight travel path, as shown in fig. 2. Solving for the pixel coordinates of the straight-ahead position in the two lens images facilitates reserving an ROI in the lower part of the image and provides a reference for coordinate-system conversion.
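The pinhole construction just described can be sketched as a minimal projection from space coordinates to pixel coordinates via similar triangles. The focal length and principal point below are illustrative assumptions; the principal point is taken as the center of the 736 × 592 image size used later in the text.

```python
# Pinhole projection: a space coordinate (x, y, z) in meters maps to a
# pixel coordinate (u, v). f_px, cx, cy are assumed calibration values;
# cx, cy are taken as the center of a 736 x 592 image.
def project(point_xyz, f_px=700.0, cx=368.0, cy=296.0):
    x, y, z = point_xyz
    u = f_px * x / z + cx   # similar triangles: u - cx = f * x / z
    v = f_px * y / z + cy   # similar triangles: v - cy = f * y / z
    return u, v

# A road-surface point 10 m ahead and 1.5 m below the lens center
# (the assumed lens height) projects 105 px below the straight-ahead row.
u, v = project((0.0, 1.5, 10.0))
```

The straight-ahead (optical-axis) direction projects exactly to the principal point (cx, cy), which is why that pixel position serves as the reference for the coordinate-system conversion.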
Referring to fig. 3, the invention also discloses an auxiliary driving method based on a stereoscopic vision target, which comprises the following steps:
s1, acquiring road condition scene images in front of the vehicle through 2 lenses at the upper part of the vehicle;
s2, converting the effective characteristics of the acquired road condition scene images into space coordinate information;
and S3, identifying the mark lines on the road surface and the sign in front, and assisting the driver to strengthen the visual identification of the road condition in front.
Under ideal projection imaging conditions, the method does not consider errors caused by lens distortion: the features can be converted directly from the three-dimensional space coordinate system to the two-dimensional pixel coordinate system through similar-triangle relations, and the depth of field of the images' known common features can likewise be obtained from the two-dimensional pixel coordinate system and converted back into the three-dimensional space coordinate system. However, during the linear transformation between the two coordinate systems, the position corresponding to the pixel coordinates still needs to be corrected using a known spatial-coordinate orientation as the reference for the conversion.
In addition to serving as the reference for coordinate-system conversion, the pixel position straight ahead of the vehicle can also be used to select an ROI (region of interest) below the spatial horizontal plane during the image processing for visually recognizing the road surface and lane markings. In the road scene discussed in the invention, the road surface and the lane markings necessarily lie below the horizontal line of the space coordinates, so processing only the image region below that line both preserves the recognition of road-surface and lane-marking features and avoids unnecessary problems such as confusion between effective feature regions. For example, the sky and the road surface both belong to regions with low second-order high-frequency content; selecting the ROI using the straight-ahead position prevents the sky from being treated as part of the road-surface features.
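The ROI selection described above amounts to a crop below the straight-ahead (horizon) row. In this sketch, `v_foe` is assumed known from the coordinate-system calibration; the frame size matches the 736 × 592 image used in the text.

```python
import numpy as np

# Keep only the image region below the straight-ahead row, per the text:
# the road surface and lane markings lie below the spatial horizontal line.
def roi_below_horizon(image, v_foe):
    """Return the sub-image from row v_foe downward (the road-surface ROI)."""
    return image[v_foe:, :]

frame = np.zeros((592, 736), dtype=np.uint8)   # image size used in the text
roi = roi_below_horizon(frame, 296)            # assumed horizon row
```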
According to the method, the road scene is collected with the simulation framework. To make the two lens images coincide in the horizontal and vertical directions, local edge cropping is applied, so the image size used for the simulation recognition is 736 × 592. Following the feature identification strategy, and the scene belonging to the daytime mode, the system first converts the pixel format from RGB to the HSV color space, then extracts the features of the central double yellow solid line and the red marking lines on both sides using the selected HSV thresholds, as shown in figures 4a and 4b.
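The HSV threshold step can be sketched as a simple band mask. The patent does not disclose its actual threshold values, so the "yellow" bounds below are illustrative only; the input is assumed to be already converted to HSV (hue 0-179, saturation/value 0-255, the OpenCV convention).

```python
import numpy as np

# Hedged sketch of HSV band thresholding for marking-line colors.
# The h/s/v bounds are illustrative assumptions, not the patent's values.
def hsv_mask(hsv, h_lo, h_hi, s_min=80, v_min=80):
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h >= h_lo) & (h <= h_hi) & (s >= s_min) & (v >= v_min)

hsv_image = np.zeros((4, 4, 3), dtype=np.uint8)
hsv_image[1, 1] = (25, 200, 200)        # one saturated yellow-ish pixel
yellow = hsv_mask(hsv_image, 20, 35)    # assumed double-yellow-line band
```

A red band would need two hue intervals (red wraps around hue 0 in the OpenCV convention), combined with a logical OR of two such masks.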
In order to compare the coordinate distribution computed from the feature depth of field against the actually estimated positions, the invention sticks green adhesive tape beside the lane marking on the road surface as markers, with the lens shooting position as the origin and the straight-ahead direction as the extension direction, placing a marker every 5 meters. After the parallel twin-lens images are collected, the v-coordinates of the green markers are read from the images, as shown in figs. 4c and 4d, and the positions of the markers on the road surface are recorded one by one via the depth-of-field z-coordinates corresponding to the pixel v-coordinates. The pixel coordinates of the straight-ahead position in the two lens images are also the origin of the coordinate system.
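Reading a depth z from a marker's pixel v-coordinate, as described above, follows from flat-road geometry. The sketch below uses the ground-plane relation z = f · h / (v − v_foe), where h matches the 1.5 m lens height given earlier; the focal length and horizon row are assumed calibration values.

```python
# Depth of a flat-road point from its pixel v-coordinate (ground-plane
# geometry). f_px and v_foe are assumed calibration values; h_m matches
# the 1.5 m lens height stated earlier in the text.
def depth_from_v(v, f_px=700.0, h_m=1.5, v_foe=296.0):
    if v <= v_foe:
        raise ValueError("a road point must lie below the horizon row")
    return f_px * h_m / (v - v_foe)

# A green marker read at v = 401 px corresponds to z = 1050 / 105 = 10 m
# under these assumed parameters.
z_m = depth_from_v(401.0)
```

Because z varies as 1 / (v − v_foe), marker rows bunch together near the horizon, which is why the 5 m spacing on the road maps to progressively smaller pixel gaps in figs. 4c and 4d.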
In the simulation framework of the present invention, the FOE (focus of expansion) has been set to known coordinates in advance and locked as a static image attribute in subsequent simulation, so the processing discussed in the simulation framework mainly uses static images; a Canon IXUS 960 IS digital camera was selected as the static-image acquisition source. The road-surface feature identification strategy and algorithm flow are shown in fig. 5, with the specific steps as follows:
S31, to handle strong-light shadows, scene illumination non-uniformity is processed by an adaptive brightness-uniformization mechanism: a [5 × 5] median filter matrix is applied 10 times, and the original image is subtracted to obtain the detail image;
S32, the Sobel filter first uses the matrix Sx to process and filter shadows that may appear on the road surface, and then uses the matrix Sxy to obtain a second-order edge feature distribution image;
S33, to retain the effective features of the distant road surface, a median filter matrix of size [3 × 3] is applied 5 times and one of size [5 × 5] is applied 5 times, filtering out the high-frequency components within homogeneous regions so as to aid the subsequent selection of the binarization threshold;
S34, the 80th to 95th grayscale percentile is selected as the binarization threshold according to image-processing experience;
S35, the original image and its negative are converted into binary images and each is subjected to volume labeling (connected-component labeling); a Size Filter high-pass threshold is set according to the size of the noise labels, and small-sized noise and holes are filtered out;
S36, the blocks remaining after noise filtering are labeled, and Size Filter upper and lower thresholds are selected according to the label size of the road surface region, so that the road surface features can be extracted;
and S37, depending on the processing result and the position of the straight-ahead direction on the image plane, the road surface region lies in the lower part of the image; to prevent the low-frequency region in the upper part of the image from being retained along with it, the upper region is filtered out during processing, and the lower image region is kept as the ROI (region of interest).
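The steps above can be sketched end-to-end in a numpy-only form. The kernel sizes, the 10 median-filter passes, and the percentile threshold follow the text; the implementation itself is an illustrative assumption, and a standard Sx/Sy gradient magnitude stands in for the patent's Sx/Sxy Sobel pair.

```python
import numpy as np

def median_filter(img, k):
    """Per-pixel median over a k x k window (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(k) for j in range(k)])
    return np.median(stack, axis=0)

def sobel_mag(img):
    """Gradient magnitude from the standard 3x3 Sobel kernels."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    p = np.pad(img, 1, mode="edge")
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)])
    gx = np.tensordot(sx.ravel(), win, axes=1)
    gy = np.tensordot(sx.T.ravel(), win, axes=1)
    return np.hypot(gx, gy)

def binarize_percentile(img, pct=90.0):
    """S34: keep pixels at or above an 80-95th grayscale percentile."""
    return img >= np.percentile(img, pct)

img = np.random.default_rng(0).random((32, 32))
background = img.copy()
for _ in range(10):                       # S31: 10 median-filter passes
    background = median_filter(background, 5)
detail = img - background                 # S31: subtract to flatten lighting
edges = sobel_mag(detail)                 # S32 stand-in: edge features
mask = binarize_percentile(edges)         # S34: percentile binarization
```

The labeling and size-filtering of S35-S36 would follow on `mask` (e.g. connected-component labeling with area thresholds), which this sketch omits.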
After feature identification, the corrected intrinsic parameters are obtained; that is, the features projected onto the known pixel coordinates of the two images are converted through a matrix equation, according to the pinhole imaging principle and the similar-triangle geometric derivation. The simulation results of transforming the features into the spatial coordinate distribution are presented as tables and graphics files. The pixel-feature-identification coordinates of the double yellow solid line, the right red line, and the left red line are displayed in turn, and the error of each is compared against the feature coordinate positions estimated on site.
Based on the simulation results, the deviation of the image-feature-recognized positions in space coordinates is analyzed against the ideal positions of the features estimated on site. Let d be the distance error between the position identified from the image features and the ideal position estimated on site, expressed as follows:
d = sqrt((x - x')^2 + (y - y')^2 + (z - z')^2)
where (x, y, z) is the position identified from the image features and (x', y', z') is the ideal position estimated on site.
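Assuming the Euclidean reading of the distance error d, it can be computed directly:

```python
import numpy as np

# Distance error d between an identified feature position and the
# field-estimated ideal position, read as a Euclidean distance in
# space coordinates (a plausible interpretation; the original formula
# image is not preserved in this record).
def distance_error(identified, ideal):
    return float(np.linalg.norm(np.asarray(identified, float)
                                - np.asarray(ideal, float)))

d = distance_error((0.1, 1.5, 10.2), (0.0, 1.5, 10.0))
```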
The stereoscopic vision recognition results for the double-yellow-solid-line features are analyzed as follows. Since the horizontal lines at depth-of-field coordinates 80 and 85 meters do not pass through the double yellow solid line, no clear feature coordinate information can be obtained there. The feature at the 75 m depth coordinate, being at the end of the double yellow solid line, is slightly affected by its irregular shape; pixel resolution at that distance is poor, so the precision of the common features collected by the twin-lens images and of the horizontal offset (disparity) is insufficient, and the spatial coordinate position produced by the algorithm is visibly distorted. The feature at the 15 m near-range coordinate is limited by the field of view captured by the right-side lens, which acquires only part of the double yellow solid line, so the true center position of the double yellow solid line cannot be obtained.
In this embodiment, the lens height above the ground is 0.7 m, and the left and right images after effective-feature recognition are shown in figs. 6a and 6b, respectively. Finally, the spatial coordinate information of the effective features found by the 4 horizontal search lines (the double yellow solid line and the right red line) is computed: the depth-information z-coordinates of the two feature points found on each horizontal line are obtained and averaged. On the topmost horizontal line of this embodiment, no effective feature can be found in the right-side image, so its depth information is represented by that of the double yellow solid line.
In the daytime, ideal-climate scene mode, the recognition of the road surface and lane markings is robust and reaches the required standard.
According to the technical scheme, the invention has the following beneficial effects:
the invention reproduces the process of identifying the road condition in front of the vehicle and the reading behavior of the visual information by the driver, simulates and reproduces the visual identification processing mode of the driver by means of the algorithm processing method for identifying the road scene in front of the vehicle by the stereoscopic vision image, can identify the marking lines on the road surface and various indication boards in front under different climatic environments, and assists the driver in strengthening the visual identification of the road condition in front.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (3)

1. A driving assistance method based on a stereoscopic vision target comprises a driving assistance system, and the driving assistance system comprises: the image acquisition unit comprises 2 lenses arranged at the upper part of the automobile and respectively acquires road condition scene images in front of the automobile;
the characteristic conversion unit is used for converting the effective characteristics of the acquired road condition scene images into space coordinate information;
the characteristic identification unit is used for identifying marking lines on the road surface and signs ahead;
the central height h of 2 lenses in the image acquisition unit is 1.5 meters, and the distance b between the 2 lenses is 0.5 meter; it is characterized in that the preparation method is characterized in that,
the operation method comprises the following steps:
s1, acquiring road condition scene images in front of the vehicle through 2 lenses at the upper part of the vehicle;
s2, converting the effective characteristics of the acquired road condition scene images into space coordinate information;
s3, identifying marking lines on the road surface and a sign in front, and assisting a driver in strengthening visual identification of the road condition in front;
the step S1 further includes:
local edge cropping is carried out on the road condition scene images so that the two lens images coincide in the horizontal and vertical directions; the step S2 includes: converting the features from a three-dimensional space coordinate system to a two-dimensional pixel coordinate system; and/or
obtaining the depth of field of the images' known common features from the two-dimensional pixel coordinate system, and then converting them into the three-dimensional space coordinate system;
the valid features in step S2 include pixel features of double yellow solid lines, right red lines, and left red lines;
the step S3 specifically includes:
S31, applying a median filtering matrix 10 times and subtracting the original image to obtain the detail image;
S32, the Sobel filter first uses the matrix Sx to process and filter shadows that may appear on the road surface, and then uses the matrix Sxy to obtain a second-order edge feature distribution image;
S33, filtering out high-frequency components within homogeneous regions;
S34, selecting a binarization threshold according to image-processing experience;
S35, converting the original image and its negative into binary images, subjecting each to volume labeling (connected-component labeling), setting a high-pass filtering threshold according to the size of the noise labels, and filtering out small-sized noise and holes;
S36, labeling the blocks remaining after noise filtering, selecting upper and lower filtering thresholds according to the label size of the road surface region, and extracting the road surface features;
and S37, based on the position of the straight-ahead direction on the image plane, the road surface region lies in the lower part of the image.
2. The stereoscopic vision target-based driving assistance method as claimed in claim 1, wherein the binarization threshold in the step S34 is the 80th to 95th percentile of the grayscale distribution.
3. The stereoscopic vision target-based driving assistance method as claimed in claim 1, wherein the step S37 further comprises:
the upper region is filtered out, and the lower image region is collected as the ROI interest region.
CN201710018411.7A 2017-01-10 2017-01-10 Auxiliary driving system and method based on stereoscopic vision target Active CN106803073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710018411.7A CN106803073B (en) 2017-01-10 2017-01-10 Auxiliary driving system and method based on stereoscopic vision target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710018411.7A CN106803073B (en) 2017-01-10 2017-01-10 Auxiliary driving system and method based on stereoscopic vision target

Publications (2)

Publication Number Publication Date
CN106803073A CN106803073A (en) 2017-06-06
CN106803073B true CN106803073B (en) 2020-05-05

Family

ID=58985475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710018411.7A Active CN106803073B (en) 2017-01-10 2017-01-10 Auxiliary driving system and method based on stereoscopic vision target

Country Status (1)

Country Link
CN (1) CN106803073B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10776939B2 (en) * 2018-04-03 2020-09-15 Altumview Systems Inc. Obstacle avoidance system based on embedded stereo vision for unmanned aerial vehicles
CN109242903B (en) * 2018-09-07 2020-08-07 百度在线网络技术(北京)有限公司 Three-dimensional data generation method, device, equipment and storage medium
CN111288890A (en) * 2020-02-13 2020-06-16 福建农林大学 Road sign dimension and height automatic measurement method based on binocular photogrammetry technology
TWI798022B (en) * 2022-03-10 2023-04-01 台灣智慧駕駛股份有限公司 A reminder method and system for road indicating objects

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2063404A1 (en) * 2007-11-23 2009-05-27 Traficon A detector for detecting traffic participants.
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
CN102685516A (en) * 2011-03-07 2012-09-19 李慧盈 Active safety type assistant driving method based on stereoscopic vision
CN104376297B (en) * 2013-08-12 2017-06-23 株式会社理光 The detection method and device of the line style Warning Mark on road

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on automatic recognition technology for traffic signs; Rong Muhua; Science and Technology for Development (《科技促进发展》); 2011-12-31 (Issue S1); pp. 214-215, 221 *

Also Published As

Publication number Publication date
CN106803073A (en) 2017-06-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201223

Address after: 214000 Tianan smart city 2-405 / 406 / 407, Xinwu District, Wuxi City, Jiangsu Province

Patentee after: WUXI YINGZHEN TECHNOLOGY CO.,LTD.

Address before: 214000 No.1 qianou Road, Wuxi City, Jiangsu Province

Patentee before: JIANGSU VOCATIONAL College OF INFORMATION TECHNOLOGY

CP03 Change of name, title or address

Address after: F4, 200 Linghu Avenue, Xinwu District, Wuxi City, Jiangsu Province, 214000

Patentee after: Wuxi Yingzhen Technology Co.,Ltd.

Address before: 214000 Tianan smart city 2-405 / 406 / 407, Xinwu District, Wuxi City, Jiangsu Province

Patentee before: WUXI YINGZHEN TECHNOLOGY CO.,LTD.
