CN110422168B - Lane recognition system and method and automatic driving automobile - Google Patents

Lane recognition system and method and automatic driving automobile

Info

Publication number
CN110422168B
Authority
CN
China
Prior art keywords
lane
information
radar
road edge
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910731264.7A
Other languages
Chinese (zh)
Other versions
CN110422168A (en)
Inventor
路兆铭
王鲁晗
周婧蓉
傅彬
王刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xingdao Technology (Yangquan) Co.,Ltd.
Original Assignee
Zhiyou Open Source Communication Research Institute Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhiyou Open Source Communication Research Institute Beijing Co ltd filed Critical Zhiyou Open Source Communication Research Institute Beijing Co ltd
Priority to CN201910731264.7A
Publication of CN110422168A
Application granted
Publication of CN110422168B
Legal status: Active
Anticipated expiration


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00: Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08: Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095: Predicting travel path or likelihood of collision
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02: Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/06: Road conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the disclosure provide a lane recognition system, a lane recognition method, and an automatic driving automobile. The lane recognition system comprises at least one camera, at least one radar, a data processing unit, and a lane recognition unit. The lane recognition unit is connected to the data processing unit and is configured to determine the weight ratio between the road edge features obtained by processing the image information and the road edge features obtained by processing the radar information, and to recognize the lane position according to that weight ratio, the two sets of road edge features, and the lane line features. This scheme fuses camera and radar data for lane recognition, so the vehicle can better cope with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.

Description

Lane recognition system and method and automatic driving automobile
Technical Field
The disclosure relates to the technical field of automatic driving, in particular to a lane recognition system and method and an automatic driving automobile.
Background
With the development of intelligent automobile technology, intelligent automobiles can drive autonomously. To follow a navigation path accurately, an intelligent automobile obtains lane position information from an on-board camera or radar.
In the course of making the present disclosure, the inventors found that prior-art lane position recognition relies on lane line detection based on road characteristics: the difference in physical characteristics between the lane lines and the road environment is used to segment and process subsequent images so as to highlight the lane line features and thereby recognize the lane position. However, actual road conditions are complex and varied, and lane recognition based on lane line features alone is easily disturbed by the environment, for example by weather or shadow occlusion, so the recognition performance is poor. When lane position information cannot be detected accurately, the intelligent automobile may be unable to drive autonomously and, in more serious cases, a dangerous accident may result.
Disclosure of Invention
The embodiment of the disclosure provides a lane recognition system, a lane recognition method and an automatic driving automobile.
In a first aspect, an embodiment of the present disclosure provides a lane recognition system, including:
at least one camera configured to acquire image information of the lanes in the driving path of a vehicle;
at least one radar configured to acquire radar information of the area in which the vehicle is traveling;
a processing unit configured to:
process the image information to obtain lane line features and road edge features;
process the radar information to obtain road edge features;
determine a weight ratio between the road edge features obtained by processing the image information and the road edge features obtained by processing the radar information;
recognize the lane position according to the weight ratio, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, and the lane line features.
With reference to the first aspect, in a first implementation manner of the first aspect, the system further includes: at least one illumination meter configured to acquire illumination intensity information of the area in which the vehicle is traveling; wherein the processing unit is configured to determine the weight ratio according to the illumination intensity information.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the system further includes:
a vehicle-to-vehicle communication device configured to acquire road traffic information and auxiliary lane information of the area in which the vehicle is traveling,
wherein:
the processing unit is configured to determine, according to the road traffic information and the illumination intensity information, the weight ratio of the road edge features obtained by processing the image information, the weight ratio of the road edge features obtained by processing the radar information, and the weight ratio of the auxiliary lane information, and to recognize the lane position according to the weight ratios, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, the lane line features, and the auxiliary lane information.
In a second aspect, an embodiment of the present disclosure provides a lane recognition method, including:
acquiring, from image information, lane line features of a lane in the driving path of the vehicle and road edge features on both sides of the lane, wherein the image information is collected by a camera mounted on the vehicle;
determining the weight ratio between the road edge features on both sides of the lane obtained from the image information and those obtained from radar information, wherein the radar information is collected by a radar mounted on the vehicle;
and recognizing the lane position according to the lane line features, the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information.
With reference to the second aspect, in a first implementation manner of the second aspect, the method further includes:
acquiring illumination intensity information of the area in which the vehicle is traveling;
and determining, according to the illumination intensity information, the weight ratio between the road edge features obtained by processing the image information and the road edge features obtained by processing the radar information.
With reference to the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the method further includes:
acquiring road traffic information and auxiliary lane information of the area in which the vehicle is traveling;
and determining, according to the road traffic information and the illuminance information, the weight ratio of the road edge features obtained by processing the image information, the weight ratio of the road edge features obtained by processing the radar information, and the weight ratio of the auxiliary lane information, and recognizing the lane position according to the weight ratios, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, the lane line features, and the auxiliary lane information.
With reference to the second aspect, in a third implementation manner of the second aspect, acquiring the lane line features of the lane and the road edge features on both sides of the lane in the driving path of the vehicle from the image information is implemented as follows:
performing enhanced white-balance processing on the image information, and dividing the image into regions according to the RGB values of its pixels;
graying the image information of the divided regions, and extracting road features;
and inputting the road features into a pre-trained deep learning model, which outputs lane line features and road edge features.
With reference to the second aspect, in a fourth implementation manner of the second aspect, acquiring the road edge features on both sides of the lane from the radar information is implemented as:
filtering the radar information, and extracting the road edge features.
With reference to the second aspect, in a fifth implementation manner of the second aspect, identifying the lane position according to the lane line features, the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information includes:
calculating the lane width from the lane line features;
calculating the road width from the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information;
and calculating the number of lanes from the lane width and the road width, and identifying the lane position based on the lane width and the number of lanes.
In a third aspect, the disclosed embodiment provides an automatic driving automobile, which includes the lane recognition system disclosed in the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the lane recognition system provided by the embodiment of the disclosure includes: the system comprises at least one camera, a controller and a display, wherein the camera is used for acquiring image information of lanes in a driving path of a vehicle; the system comprises at least one radar, a central processing unit and a central processing unit, wherein the radar is used for acquiring radar information of an area where a vehicle runs; the data processing unit is connected with the camera and used for processing the image information to obtain lane line characteristics and road edge characteristics; the radar is connected and used for processing the radar information to obtain road edge characteristics; and the lane recognition unit is connected with the data processing unit and used for determining the weight proportion of the road edge characteristics obtained by processing the image information and the road edge characteristics obtained by processing the radar information and recognizing the lane position according to the weight proportion, the road edge characteristics obtained by processing the image information, the road edge characteristics obtained by processing the radar information and the lane line characteristics. According to the technical scheme, data collected by the camera and the radar are processed, lane line characteristics and road edge characteristics used for recognizing lane positions are extracted, the road edge characteristics extracted from the data collected by the camera and the radar are fused according to a weight ratio, lane positions are further recognized according to the fused road edge characteristics and the lane line characteristics, data fusion of the camera and the radar equipment on lane recognition is achieved, the current lane position of the intelligent automobile can be obtained, lane width and lane quantity information of the road where the intelligent automobile is located can also be obtained, lane selection and path planning can be better performed on the intelligent automobile, complex road conditions and environmental interference can be better handled, dangerous accidents are avoided, and driving safety is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 shows a schematic view of an application scenario of a lane recognition system according to an embodiment of the present disclosure;
FIG. 2 illustrates an architectural schematic diagram of a lane recognition system according to an embodiment of the present disclosure;
fig. 3 shows an architectural schematic diagram of a lane recognition system according to another embodiment of the present disclosure;
fig. 4 shows a flow diagram of a lane identification method according to an embodiment of the present disclosure;
fig. 5 illustrates a block diagram of a lane recognition apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram showing a structure of a lane recognition apparatus according to another embodiment of the present disclosure;
fig. 7 illustrates a block diagram of a lane recognition apparatus according to still another embodiment of the present disclosure;
fig. 8 shows a block diagram of an autonomous vehicle according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility that one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof may be present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
As mentioned above, with the development of intelligent automobile technology, intelligent automobiles can drive autonomously. To follow a navigation path accurately, an intelligent automobile obtains lane position information from an on-board camera or radar.
In the course of making the present disclosure, the inventors found that prior-art lane position recognition relies on lane line detection based on road characteristics: the difference in physical characteristics between the lane lines and the road environment is used to segment and process subsequent images so as to highlight the lane line features and thereby recognize the lane position. However, actual road conditions are complex and varied, and lane recognition based on lane line features alone is easily disturbed by the environment, for example by weather or shadow occlusion, so the recognition performance is poor. When lane position information cannot be detected accurately, the intelligent automobile may be unable to drive autonomously and, in more serious cases, a dangerous accident may result.
In view of the above drawbacks, an embodiment of the present disclosure provides a lane recognition system comprising: at least one camera configured to acquire image information of the lanes in the driving path of a vehicle; at least one radar configured to acquire radar information of the area in which the vehicle is traveling; a data processing unit connected to the camera and configured to process the image information to obtain lane line features and road edge features, and connected to the radar and configured to process the radar information to obtain road edge features; and a lane recognition unit connected to the data processing unit and configured to determine the weight ratio between the road edge features obtained by processing the image information and those obtained by processing the radar information, and to recognize the lane position according to the weight ratio, the two sets of road edge features, and the lane line features. In this scheme, the data collected by the camera and the radar are processed to extract the lane line features and road edge features used for lane position recognition; the road edge features extracted from the camera data and the radar data are fused according to the weight ratio, and the lane position is then recognized from the fused road edge features and the lane line features. Fusing camera and radar data for lane recognition yields not only the current lane position of the intelligent automobile but also the lane width and the number of lanes of the road it is on, supporting better lane selection and path planning, so the vehicle can better cope with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.
Fig. 1 shows a schematic view of an application scenario of a lane recognition system according to an embodiment of the present disclosure.
As shown in fig. 1, lane lines 200 divide the road into three lanes, and the outermost side of the road is the road edge 300. Along the driving path of the vehicle 100, the lane in which the vehicle 100 travels may change as road conditions change. For example, the vehicle 100 shown in fig. 1 may first travel in the left lane, then in the center lane, and then in the right lane. While traveling in its current lane, the vehicle 100 needs to know the position of its own lane and of the other lanes in the road so that it can respond to changing road conditions, for example with a lane change or another path planning operation. For this scenario, the embodiments of the present disclosure provide a lane recognition system that recognizes lane positions.
Fig. 2 shows an architectural schematic diagram of a lane recognition system according to an embodiment of the present disclosure.
As shown in fig. 2, the lane recognition system includes a camera 1, a radar 2, and a processing unit 3, where the processing unit 3 comprises a data processing unit 31 and a lane recognition unit 32. According to an embodiment of the present disclosure, the processing unit 3 may be implemented by a programmable logic device, a dedicated chip, or a general-purpose processor running software. The number and mounting positions of the cameras 1 and radars 2 on the vehicle are not limited; for example, six cameras may be mounted around the vehicle body and four radars at different positions on the bumpers. It should be noted that the radar may be an imaging millimeter-wave radar or a lidar, which is not limited here.
According to the embodiment of the present disclosure, the camera 1 and the radar 2 are respectively connected to the data processing unit 31. The camera 1 is used for acquiring image information of a lane in a vehicle driving path and sending the acquired image information to the data processing unit 31; the radar 2 is configured to acquire radar information of an area where the vehicle is traveling, and send the acquired radar information to the data processing unit 31; the data processing unit 31 is configured to process the image information to obtain lane line features and road edge features, and send the obtained lane line features and road edge features to the lane recognition unit 32, and is configured to process the radar information to obtain road edge features, and send the obtained road edge features to the lane recognition unit 32; the lane recognition unit 32 is configured to determine a weight ratio of a road edge feature obtained by processing the image information and a road edge feature obtained by processing the radar information, and recognize a lane position according to the weight ratio, the road edge feature obtained by processing the image information, the road edge feature obtained by processing the radar information, and the lane line feature.
It should be noted that, referring to fig. 2, the data processing unit 31 includes: a lane line feature module 311 and a road edge feature module 312. The lane line feature module 311 is connected to the camera 1, and the road edge feature module 312 is connected to the camera 1 and the radar 2, respectively. The lane recognition unit 32 includes: an identification module 321 and a weighting module 322. The recognition module 321 is respectively connected with the lane line feature module 311, the road edge feature module 312 and the weighting module 322, and the weighting module 322 is connected with the road edge feature module 312.
The lane line feature module 311 is configured to obtain lane line features from the image information; the road edge feature module 312 is configured to obtain road edge features from the image information and from the radar information, respectively; the weighting module 322 is configured to determine the weight ratio between the road edge features obtained from the image information and those obtained from the radar information; and the recognition module 321 is configured to identify the lane position according to the information determined by the lane line feature module 311, the road edge feature module 312, and the weighting module 322.
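For illustration, the following minimal Python sketch mirrors this unit and module wiring; all class, method, and argument names are hypothetical, since the disclosure describes the units functionally and does not prescribe a software decomposition.

```python
# Structural sketch only: hypothetical names, no prescribed implementation.

class DataProcessingUnit:
    """Unit 31: feature extraction from raw sensor data."""
    def lane_line_features(self, image):        # lane line feature module 311
        raise NotImplementedError
    def edge_features_from_image(self, image):  # road edge feature module 312
        raise NotImplementedError
    def edge_features_from_radar(self, radar):  # road edge feature module 312
        raise NotImplementedError

class LaneRecognitionUnit:
    """Unit 32: weighting and lane position identification."""
    def weight_ratio(self, context):            # weighting module 322
        raise NotImplementedError
    def recognize(self, lane_lines, edges_img, edges_radar, weights):  # module 321
        raise NotImplementedError

def lane_position(image, radar, dpu, lru):
    """End-to-end flow of fig. 2: sensors -> unit 31 -> unit 32."""
    lane_lines = dpu.lane_line_features(image)
    edges_img = dpu.edge_features_from_image(image)
    edges_radar = dpu.edge_features_from_radar(radar)
    weights = lru.weight_ratio((edges_img, edges_radar))
    return lru.recognize(lane_lines, edges_img, edges_radar, weights)
```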
According to an embodiment of the present disclosure, referring to fig. 1, the lane line features refer to the position information of the lane lines 200 in the road. For example, the lane lines 200 may be divided into white lines and yellow lines: a yellow line lies in the center of the road and separates vehicles traveling in opposite directions, while white lines separate vehicles traveling in different lanes in the same direction. The road edge features acquired from the image information refer to the edge position information of the lane lines 200, such as their contour information; the road edge features obtained from the radar information refer to the edge position information of the road edge 300, such as the position of a guardrail or a curb. It should be noted that the edge position of the lane lines 200 shown in fig. 1 lies at some distance from the edge position of the road edge 300; in some cases the lane lines 200 are adjacent to the road edge 300, which is not limited here.
In this embodiment, the road width can be calculated from the road edge features and the lane width from the lane line features, and the number of lanes in the road can then be calculated from the road width and the lane width. The positions of the other lanes in a subsequent lane change or path planning operation can therefore be determined from the lane width and the number of lanes, allowing the vehicle to select lanes and plan paths better, cope better with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.
Fig. 3 shows an architectural schematic diagram of a lane recognition system according to another embodiment of the present disclosure.
As shown in fig. 3, the system further includes at least one illumination meter 4. The illumination meter 4 is configured to acquire illumination intensity information of the area in which the vehicle is traveling; the processing unit 3 is configured to determine, according to the illumination intensity information, the weight ratio of the road edge features obtained from the road edge feature module 312.
According to the embodiment of the disclosure, the camera 1 is easily affected by illumination intensity when collecting image information: when the illumination intensity is too low, the environment is dark and the camera 1 cannot obtain a clear image; when the illumination intensity is too high, the image is overexposed, which may prevent accurate lane recognition. The radar is largely unaffected by illumination intensity, and in good weather its detection accuracy is high. Accordingly, the vehicle may use the illumination intensity information to determine the weight ratio of the road edge features obtained from the road edge feature module 312. For example, the weight of the recognition result of the camera 1 may be set to 0.3 to 0.6 and the weight of the recognition result of the radar 2 to 0.45 to 0.6; when the detected illumination intensity is within the intensity threshold range, the weight of the camera 1 is increased, and when it is outside that range, the weight of the radar 2 is increased. In this embodiment, the intensity threshold range may be set in advance, for example 60 to 120 candela.
According to the embodiment of the present disclosure, since the collection of radar information is easily affected by rain and snow, the influence of weather may be taken into account before the weight ratio of the road edge features acquired from the road edge feature module 312 is adjusted according to the illumination intensity: in such weather the weight of the recognition result of the radar 2 may be set to 0.05 to 0.15 while the weight of the recognition result of the camera 1 remains 0.3 to 0.6. That is, different weight ranges may be preset for the recognition result of the radar 2, the applicable range selected according to the acquired weather information, and the weight ratio between the recognition results of the camera 1 and the radar 2 then adjusted according to the acquired illumination intensity information, yielding a more accurate lane position recognition result.
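As a concrete reading of this two-stage rule, the sketch below first narrows the radar weight range by weather and then picks endpoints within each range by the illumination threshold; the endpoint choices and the final normalization are assumptions, since the disclosure specifies only the ranges.

```python
# Hedged sketch: range endpoints and normalization are assumptions; only the
# weight ranges and the 60-120 candela threshold come from the disclosure.

def select_weights(weather, illuminance_cd, threshold=(60.0, 120.0)):
    """Return normalized (camera, radar) weights for road edge fusion."""
    cam_lo, cam_hi = 0.3, 0.6                  # camera weight range
    if weather in ("rain", "snow"):
        rad_lo, rad_hi = 0.05, 0.15            # radar degraded by rain/snow
    else:
        rad_lo, rad_hi = 0.45, 0.6             # radar range in clear weather
    lo, hi = threshold
    if lo <= illuminance_cd <= hi:             # favorable light: raise camera share
        cam, rad = cam_hi, rad_lo
    else:                                      # too dark or overexposed: raise radar share
        cam, rad = cam_lo, rad_hi
    total = cam + rad                          # normalize so the weights sum to 1
    return cam / total, rad / total

w_cam, w_rad = select_weights("clear", 90.0)   # -> (0.571..., 0.428...)
```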
According to an embodiment of the present disclosure, the system further comprises: a vehicle-to-vehicle communication device 5 for acquiring road traffic information and auxiliary lane information of an area where the vehicle is traveling, wherein: the processing unit 3 is configured to determine a weight ratio of the road edge feature obtained by processing the image information, a weight ratio of the road edge feature obtained by processing the radar information, and a weight ratio of the auxiliary lane information according to the road traffic information and the illuminance information, and identify a lane position according to the weight ratio, the road edge feature obtained by processing the image information, the road edge feature obtained by processing the radar information, the lane line feature, and the auxiliary lane information.
In the present embodiment, the road traffic information refers to the density of vehicles, or the inter-vehicle distances, in the area in which the vehicle is traveling. The auxiliary lane information is the lane position of the vehicle as recognized by other vehicles in that area. When the current vehicle performs lane position recognition, changing traffic conditions mean that nearby vehicles may occlude part of the lane lines or interfere with the radar's acquisition of road edge features, while the recognition results of those other vehicles may be more accurate. The weight ratio between the current vehicle's own recognition result and the auxiliary lane information may therefore be determined from the road traffic information, and within the vehicle's own result the weight ratio of the road edge features obtained from the road edge feature module 312 may be determined as described above, yielding a more accurate lane position recognition result. For example, when there are few vehicles, the weight of the recognition result of the camera 1 may be set to 0.3 to 0.5, the weight of the recognition result of the radar 2 to 0.45 to 0.6, and that of the auxiliary lane information to 0 to 0.2; when there are many vehicles, the weight of the recognition result of the camera 1 may be set to 0.1 to 0.2, that of the radar 2 to 0.15 to 0.3, and that of the auxiliary lane information to 0.35 to 0.6.
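The sketch below illustrates one way to apply such traffic-dependent three-way weights; the density cutoff, the use of range midpoints, and all function names are assumptions beyond the weight ranges stated above.

```python
# Hedged sketch: only the two sets of weight ranges come from the disclosure;
# the density cutoff and midpoint choices are assumptions.

def traffic_weights(vehicle_density, dense_cutoff=0.5):
    """Return normalized (camera, radar, auxiliary) weights."""
    if vehicle_density < dense_cutoff:          # few surrounding vehicles
        cam, rad, aux = 0.4, 0.525, 0.1         # midpoints of 0.3-0.5 / 0.45-0.6 / 0-0.2
    else:                                       # many vehicles: occlusion likely
        cam, rad, aux = 0.15, 0.225, 0.475      # midpoints of 0.1-0.2 / 0.15-0.3 / 0.35-0.6
    total = cam + rad + aux
    return cam / total, rad / total, aux / total

def fuse_edge_position(edge_cam, edge_rad, edge_aux, weights):
    """Weighted average of lateral road edge position estimates (meters)."""
    w_cam, w_rad, w_aux = weights
    return w_cam * edge_cam + w_rad * edge_rad + w_aux * edge_aux
```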
Fig. 4 shows a flow diagram of a lane identification method according to an embodiment of the present disclosure.
As shown in fig. 4, the lane recognition method includes the following steps S101 to S103:
in step S101: according to the image information, acquiring lane line characteristics of a lane in a vehicle driving path and road edge characteristics on two sides of the lane; wherein the image information is collected by a camera mounted on the vehicle.
In step S102: determining the weight proportion of road edge characteristics of two sides of a lane respectively acquired according to the image information and the radar information; wherein the radar information is collected by a radar installed in the vehicle.
In step S103: and recognizing lane positions according to the lane line characteristics, the weight proportion, the road edge characteristics obtained through the image information and the road edge characteristics obtained through the radar information.
As mentioned above, with the development of intelligent automobile technology, intelligent automobiles can drive autonomously. To follow a navigation path accurately, an intelligent automobile obtains lane position information from an on-board camera or radar.
In the course of making the present disclosure, the inventors found that prior-art lane position recognition relies on lane line detection based on road characteristics: the difference in physical characteristics between the lane lines and the road environment is used to segment and process subsequent images so as to highlight the lane line features and thereby recognize the lane position. However, actual road conditions are complex and varied, and lane recognition based on lane line features alone is easily disturbed by the environment, for example by weather or shadow occlusion, so the recognition performance is poor. When lane position information cannot be detected accurately, the intelligent automobile may be unable to drive autonomously and, in more serious cases, a dangerous accident may result.
In view of the above drawbacks, an embodiment of the present disclosure provides a lane recognition system comprising: at least one camera configured to acquire image information of the lanes in the driving path of a vehicle; at least one radar configured to acquire radar information of the area in which the vehicle is traveling; a data processing unit connected to the camera and configured to process the image information to obtain lane line features and road edge features, and connected to the radar and configured to process the radar information to obtain road edge features; and a lane recognition unit connected to the data processing unit and configured to determine the weight ratio between the road edge features obtained by processing the image information and those obtained by processing the radar information, and to recognize the lane position according to the weight ratio, the two sets of road edge features, and the lane line features. In this scheme, the data collected by the camera and the radar are processed to extract the lane line features and road edge features used for lane position recognition; the road edge features extracted from the camera data and the radar data are fused according to the weight ratio, and the lane position is then recognized from the fused road edge features and the lane line features. Fusing camera and radar data for lane recognition yields not only the current lane position of the intelligent automobile but also the lane width and the number of lanes of the road it is on, supporting better lane selection and path planning, so the vehicle can better cope with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.
According to an embodiment of the present disclosure, referring to fig. 1, the lane line features refer to the position information of the lane lines 200. For example, the lane lines 200 are divided into white lines and yellow lines: a yellow line lies in the center of the road and separates vehicles traveling in opposite directions, while white lines separate vehicles traveling in different lanes in the same direction. The road edge features acquired from the image information refer to the edge position information of the lane lines 200, such as their contour information; the road edge features obtained from the radar information refer to the edge position information of the road edge 300, such as the position of a guardrail or a curb. It should be noted that the edge position of the lane lines 200 shown in fig. 1 lies at some distance from the edge position of the road edge 300; in some cases the lane lines 200 are adjacent to the road edge 300, which is not limited here.
In this embodiment, the road width can be calculated from the road edge features and the lane width from the lane line features, and the number of lanes of the road can then be calculated from the road width and the lane width. The lane position in a subsequent lane change or path planning operation can therefore be determined from the lane width and the number of lanes, making it easier for the vehicle to select lanes and plan paths, cope better with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.
Two implementations for determining the weight ratio between the road edge features on both sides of the lane obtained from the image information and those obtained from the radar information are provided below.
According to an embodiment of the present disclosure, one implementation is as follows: first, illumination intensity information of the area in which the vehicle is traveling is acquired; then, according to the illumination intensity information, the weight ratio between the road edge features obtained by processing the image information and the road edge features obtained by processing the radar information is determined.
In this implementation, the camera is easily affected by illumination intensity when collecting image information: when the illumination intensity is too low, the environment is dark and the camera cannot obtain a clear image; when the illumination intensity is too high, the image is overexposed, which may make the lane recognition result inaccurate. The radar is largely unaffected by illumination intensity, and in good weather its detection accuracy is high. Accordingly, the vehicle may use the illumination intensity information to determine the weight ratio of the road edge features obtained from the road edge feature module 312. For example, the weight of the recognition result of the camera may be set to 0.3 to 0.6 and the weight of the recognition result of the radar to 0.45 to 0.6; when the detected illumination intensity is within the intensity threshold range, the weight of the camera is increased, and when it is outside that range, the weight of the radar is increased. In this embodiment, the intensity threshold range may be set in advance, for example 60 to 120 candela.
In the present embodiment, since the collection of radar information is easily affected by rain and snow, the influence of weather may be taken into account before the weight ratio between the road edge features obtained from the image information and those obtained by processing the radar information is adjusted according to the illumination intensity: in such weather the weight of the recognition result of the radar may be set to 0.05 to 0.15 while the weight of the recognition result of the camera remains 0.3 to 0.6. That is, different weight ranges may be preset for the recognition result of the radar, the applicable range selected according to the acquired weather information, and the weight ratio between the recognition results of the camera and the radar then adjusted according to the acquired illumination intensity information, yielding a more accurate lane position recognition result.
According to an embodiment of the present disclosure, another implementation is as follows: first, road traffic information and auxiliary lane information of the area in which the vehicle is traveling are acquired; then, according to the road traffic information and the illuminance information, the weight ratio of the road edge features obtained by processing the image information, the weight ratio of the road edge features obtained by processing the radar information, and the weight ratio of the auxiliary lane information are determined, and the lane position is recognized according to the weight ratios, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, the lane line features, and the auxiliary lane information.
In the present embodiment, the road traffic information refers to the density of vehicles, or the inter-vehicle distances, in the area in which the vehicle is traveling. The auxiliary lane information is the lane position of the vehicle as recognized by other vehicles in that area. When the current vehicle performs lane position recognition, nearby vehicles may occlude part of the lane lines or interfere with the radar's acquisition of road edge information, while the lane information recognized by those other vehicles may be more accurate. The weight ratio between the current vehicle's own recognition result and the auxiliary lane information may therefore be determined from the road traffic information, and within the vehicle's own result the weight ratio between the road edge features obtained from the image information and those obtained by processing the radar information may be determined from the illumination information, yielding a more accurate lane position recognition result. For example, when there are few vehicles, the weight of the recognition result of the camera 1 may be set to 0.3 to 0.5, the weight of the recognition result of the radar 2 to 0.45 to 0.6, and that of the auxiliary lane information to 0 to 0.2; when there are many vehicles, the weight of the recognition result of the camera 1 may be set to 0.1 to 0.2, that of the radar 2 to 0.15 to 0.3, and that of the auxiliary lane information to 0.35 to 0.6.
According to the embodiment of the present disclosure, step S101 of acquiring the lane line features of the lane and the road edge features on both sides of the lane in the driving path of the vehicle from the image information is implemented as follows.
Enhanced white-balance processing is performed on the image information, and the image is divided into regions according to the RGB values of its pixels.
The image information of the divided regions is grayed, and road features are extracted.
The road features are input into a pre-trained deep learning model, which outputs lane line features and road edge features.
In this embodiment, lane lines in actual roads are usually white or yellow and differ clearly from the road color, so lane lines can be recognized by color features. However, because of factors such as weather, shadow occlusion, and lane line wear, the image information collected by the camera first needs enhanced white-balance processing: the RGB value of white is taken as the standard, and the RGB values of the other colors in the image are adjusted proportionally. The RGB value of white is determined by drawing a color temperature curve with an AWB algorithm over multiple pictures of different scenes (indoor, outdoor, sunset, and so on). After the enhanced white balance, the image is divided into regions according to the RGB values of its pixels, pixels with similar RGB values being assigned to the same region, so that the whole image is partitioned into several regions. To reduce the influence of illumination and brightness on lane line recognition in the image, the image information of the divided regions is then grayed and road features are extracted. Specifically, a road feature may be a cell feature, a block feature, an HOG feature, or a Hu moment feature. At least two road features are then input into a pre-trained deep learning model, which outputs the lane line features and road edge features.
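A possible rendering of this preprocessing pipeline with OpenCV/NumPy is sketched below. Since the AWB color temperature curve procedure is not fully specified, a per-channel scaling toward a white reference stands in for the enhanced white balance, and coarse RGB quantization stands in for the division into regions of similar RGB values; the file name is hypothetical.

```python
# Hedged sketch: white reference, quantization step, and file name are assumptions.

import cv2
import numpy as np

def enhanced_white_balance(bgr, white=(255.0, 255.0, 255.0)):
    """Scale each channel so the estimated white maps to the reference white."""
    est = bgr.reshape(-1, 3).mean(axis=0)                 # per-channel white estimate
    gain = np.asarray(white[::-1]) / np.maximum(est, 1e-6)  # gains in BGR order
    return np.clip(bgr.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def divide_regions(bgr, step=64):
    """Label pixels whose quantized RGB values coincide as one region."""
    q = bgr // step                                       # coarse RGB bins (0..3 each)
    return q[..., 0] * 16 + q[..., 1] * 4 + q[..., 2]     # one label per bin

def gray_regions(bgr):
    """Gray the balanced image before feature extraction."""
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

img = cv2.imread("road.png")        # hypothetical input frame
balanced = enhanced_white_balance(img)
labels = divide_regions(balanced)
gray = gray_regions(balanced)
```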
The cell feature: with cells of, for example, 6 × 6 pixels, each 6 × 6 pixel patch forms one cell; the features of the image within that patch (gradient, variance, etc.) are called cell features, and different cells do not overlap.
The block feature: with, for example, one block for every 2 × 2 cells, the features of the image within that range (gradient, variance, etc.) are called block features; different blocks may overlap one another.
The HOG feature, i.e. the Histogram of Oriented Gradients (HOG), is formed by computing and accumulating histograms of gradient orientations over local regions of the image, and is obtained from the cell and block features.
The Hu moment feature: the Hu moments are image characteristics invariant to translation, rotation, and scale; the Hu moment feature is obtained by forming 7 invariant moments from the second- and third-order normalized central moments of the image information. The second-order moments relate to the radius of gyration of the road picture to be identified, and the third-order moments relate to the direction and inclination of the road route in the picture, reflecting its distortion. The 7 invariant moments are 7 quantities derived from the second- and third-order moments that remain unchanged under image translation, rotation, and scale changes.
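The sketch below shows how the HOG and Hu moment features might be computed with scikit-image and OpenCV, using the 6 × 6 pixel cells and 2 × 2 cell blocks defined above; the log-scaling of the Hu moments and the placeholder model call are assumptions.

```python
# Hedged sketch: cell/block sizes follow the text; conditioning and the
# downstream model are assumptions.

import cv2
import numpy as np
from skimage.feature import hog

def road_features(gray):
    # HOG over 6x6-pixel cells grouped into 2x2-cell blocks (see definitions above).
    hog_vec = hog(gray, orientations=9,
                  pixels_per_cell=(6, 6), cells_per_block=(2, 2))
    # Seven Hu invariant moments from the image moments; log-scaling is a
    # common conditioning step and an assumption here.
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
    return np.concatenate([hog_vec, hu])

# features = road_features(gray)             # 'gray' from the previous sketch
# lane_lines, road_edges = model(features)   # hypothetical pre-trained model
```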
According to an embodiment of the present disclosure, step S103 of identifying the lane position according to the lane line features, the weight ratio, the road edge features acquired from the image information, and the road edge features acquired from the radar information is implemented as follows.
The lane width is calculated from the lane line features.
The road width is calculated from the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information.
The number of lanes is calculated from the lane width and the road width, and the lane position is identified based on the lane width and the number of lanes.
According to an embodiment of the present disclosure, acquiring the road edge features on both sides of the lane from the radar information is implemented as filtering the radar information and extracting the road edge features. In this embodiment, a Sobel operator may be used for the filtering to extract image edges; for example, a threshold may be set for contour extraction so as to obtain the longitudinal contour of the road, such as a guardrail.
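A minimal sketch of this Sobel-based extraction is given below, assuming the radar returns have already been rasterized into a bird's-eye intensity grid; that rasterization and the threshold value are assumptions not taken from the disclosure.

```python
# Hedged sketch: the intensity grid and threshold are assumed inputs.

import cv2
import numpy as np

def radar_road_edges(intensity, thresh=50.0):
    """Return a binary mask of candidate road edge cells in the grid."""
    grad_x = cv2.Sobel(intensity, cv2.CV_64F, 1, 0, ksize=3)  # lateral gradient
    return (np.abs(grad_x) > thresh).astype(np.uint8)         # contour threshold
```

Longitudinal structures such as guardrails produce strong lateral gradients in such a grid, so thresholding the x-derivative keeps the road's longitudinal contour.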
According to the embodiment of the disclosure, the actual width L of a lane line is generally known; that is, L is essentially the same across different roads. After the lane line features are recognized in the data collected by the camera, the number of pixels N across the lane line width can be obtained, together with the number of pixels M between adjacent lane lines, so the actual lane width is D = M × L / N. The distance between the two sides of the road edge, i.e. the road width S, is calculated from the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information. From the road width S and the lane width D, the total number of lanes K = S / D can be calculated.
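The following worked sketch instantiates these relations; the lane line width, pixel counts, weights, and edge-based width estimates are illustrative values, not taken from the disclosure.

```python
# Worked numbers for D = M * L / N, fused S, and K = S / D (all values assumed).

L_m = 0.15                         # known real lane line width, meters (assumed)
N_px = 6                           # pixels across the lane line in the image
M_px = 140                         # pixels between adjacent lane lines

D = M_px * L_m / N_px              # lane width: 140 * 0.15 / 6 = 3.5 m

w_cam, w_rad = 0.57, 0.43          # weight ratio, e.g. from the illumination step
S_cam, S_rad = 10.6, 10.4          # road width estimates from image / radar, meters
S = w_cam * S_cam + w_rad * S_rad  # fused road width, about 10.51 m

K = round(S / D)                   # total number of lanes: round(3.0) = 3
```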
Fig. 5 shows a block diagram of a lane recognition apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the lane recognition apparatus includes:
an acquiring module 501, configured to acquire, from the image information, lane line features of a lane in the driving path of the vehicle and road edge features on both sides of the lane, wherein the image information is collected by a camera mounted on the vehicle;
a determining module 502, configured to determine the weight ratio between the road edge features on both sides of the lane obtained from the image information and those obtained from the radar information, wherein the radar information is collected by a radar mounted on the vehicle;
an identifying module 503, configured to identify the lane position according to the lane line features, the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information.
According to an embodiment of the present disclosure, the acquiring module 501 is configured to:
perform enhanced white-balance processing on the image information and divide it into regions according to the RGB values of its pixels;
gray the image information of the divided regions and extract road features;
and input the road features into a pre-trained deep learning model, which outputs lane line features and road edge features.
According to an embodiment of the present disclosure, the identifying module 503 is configured to:
calculate the lane width from the lane line features;
calculate the road width from the weight ratio, the road edge features obtained from the image information, and the road edge features obtained from the radar information;
and calculate the number of lanes from the lane width and the road width, and identify the lane position based on the lane width and the number of lanes.
In this embodiment, acquiring the road edge features on both sides of the lane from the radar information is implemented as filtering the radar information and extracting the road edge features.
Fig. 6 shows a block diagram of a lane recognition apparatus according to another embodiment of the present disclosure.
As shown in fig. 6, the lane recognition apparatus further includes:
the first obtaining submodule 601 is configured to obtain illumination intensity information of an area where the vehicle runs.
A first determining sub-module 602, configured to determine, according to the illumination intensity information, a weight ratio of a road edge feature obtained by processing the image information and a road edge feature obtained by processing the radar information.
Fig. 7 shows a block diagram of a lane recognition apparatus according to still another embodiment of the present disclosure.
As shown in fig. 7, the lane recognition apparatus further includes:
the second obtaining submodule 701 is configured to obtain road traffic information and auxiliary lane information of an area where the vehicle travels.
A second determining sub-module 702, configured to determine, according to the road traffic information and the illuminance information, a weight ratio of the road edge feature obtained by processing the image information, a weight ratio of the road edge feature obtained by processing the radar information, and a weight ratio of the auxiliary lane information, and identify a lane position according to the weight ratio, the road edge feature obtained by processing the image information, the road edge feature obtained by processing the radar information, the lane line feature, and the auxiliary lane information.
According to the embodiments of the disclosure, the data collected by the camera and the radar are processed to extract the lane line features and road edge features used for lane position recognition; the road edge features extracted from the camera data and the radar data are fused according to the weight ratio, and the lane position is recognized from the fused road edge features and the lane line features. Fusing camera and radar data for lane recognition yields not only the current lane position of the intelligent automobile but also the lane width and the number of lanes of the road it is on, supporting better lane selection and path planning, so the vehicle can better cope with complex road conditions and environmental interference, avoid dangerous accidents, and improve driving safety.
Fig. 8 shows a block diagram of an autonomous vehicle 800 according to an embodiment of the disclosure.
As shown in fig. 8, the autonomous vehicle 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 802 is used to store at least one instruction for execution by the processor 801 to implement the lane position information acquisition method provided by the method embodiments herein.
In some embodiments, the autonomous vehicle 800 may also optionally include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a touch screen display 805, a camera 806, a radar 807, an illuminance meter 808, and a vehicle-to-vehicle communication device 809.
The peripheral interface 803 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 801 and the memory 802. In some embodiments, the processor 801, the memory 802, and the peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 805 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 801 as a control signal for processing. In that case, the display 805 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 805, disposed on the front panel of the autonomous vehicle 800; in other embodiments, there may be at least two displays 805, each disposed on a different surface of the autonomous vehicle 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved or folding surface of the autonomous vehicle 800. The display 805 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 805 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera 806 is used to capture images or video. Optionally, the cameras 806 include a front camera and a rear camera; generally, the front camera is disposed on the front panel and the rear camera on the rear surface of the vehicle. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize background blurring, or with the wide-angle camera to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions.
The radar 807 may include an imaging millimeter wave radar and a laser radar (lidar). Millimeter wave radar operates in the millimeter wave band, which usually refers to frequencies of 30 to 300 GHz (wavelengths of 1 to 10 mm). Millimeter wave radar can distinguish and identify very small targets, can identify multiple targets simultaneously, and features imaging capability, small size, and high maneuverability. The laser radar is a scanning sensor based on non-contact laser ranging; it can accurately acquire high-precision physical space environment information, with ranging accuracy down to the centimeter level.
The illuminance meter 808 is used to measure the light intensity level. Photocells generate different currents under different light intensities; these currents undergo direct current amplification, and an analog-to-digital conversion circuit converts the amplified direct current signal into a digital signal that directly reflects the light intensity for display.
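As a rough illustration of that signal chain (assuming a linear photocell response and a 10-bit ADC, both of which are assumptions made for the example):

```python
# Hypothetical sketch: convert an ADC count from the amplified photocell
# current into a lux value. A linear response is assumed, calibrated so
# that full scale (1023 counts on a 10-bit ADC) equals 100,000 lux.
FULL_SCALE_COUNTS = 1023
FULL_SCALE_LUX = 100_000.0

def adc_to_lux(counts: int) -> float:
    counts = max(0, min(counts, FULL_SCALE_COUNTS))  # clamp to ADC range
    return counts / FULL_SCALE_COUNTS * FULL_SCALE_LUX

print(adc_to_lux(512))  # roughly 50,000 lux, i.e., a bright day
```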
The vehicle-to-vehicle communication device 809 is used to share data such as driving speed and the relative positions of vehicles. Through this data sharing and its analysis, the device helps the driver judge situations in advance and warns the driver to make a safety response decision before another high-speed vehicle enters the driving blind spot.
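A minimal sketch of the kind of message such a device might share, and of a blind-spot warning derived from it, is shown below; the message fields and the blind-spot zone dimensions are hypothetical, as the disclosure does not define a message format:

```python
# Hypothetical V2V message and blind-spot check; the field names and the
# 3 m x 5 m blind-spot zone are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class V2VMessage:
    speed_mps: float  # sender's driving speed, meters per second
    rel_x_m: float    # lateral offset relative to our vehicle
    rel_y_m: float    # longitudinal offset (negative = behind us)

def entering_blind_spot(msg: V2VMessage) -> bool:
    # Warn if the other vehicle is alongside or just behind and moving.
    in_zone = abs(msg.rel_x_m) < 3.0 and -5.0 < msg.rel_y_m < 0.0
    return in_zone and msg.speed_mps > 0.0

print(entering_blind_spot(V2VMessage(speed_mps=33.0, rel_x_m=2.0, rel_y_m=-3.0)))  # True
```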
In some embodiments, the autonomous vehicle 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: an acceleration sensor 811.
The acceleration sensor 811 may detect the magnitude of acceleration on the three axes of a coordinate system established with respect to the autonomous vehicle 800. For example, the acceleration sensor 811 may be used to detect the components of gravitational acceleration on the three axes. The processor 801 may control the touch display 805 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used to collect game or user motion data.
In other words, the present disclosure provides an autonomous vehicle comprising a processor and a memory for storing processor-executable instructions, wherein the processor is configured to perform the method in the embodiment shown in Fig. 4. In addition, the present disclosure provides an autonomous vehicle including the lane recognition system of Fig. 2 or Fig. 3, as well as a computer-readable storage medium storing a computer program which, when executed by a processor, implements the lane recognition method in the embodiment shown in Fig. 4.
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combinations of the features described above, but also encompasses other embodiments formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features disclosed herein that have similar functions.

Claims (5)

1. A lane recognition system, comprising:
at least one camera, configured to acquire image information of lanes in a driving path of a vehicle;
at least one radar, configured to acquire radar information of an area in which the vehicle is driving;
at least one illuminance meter, configured to acquire illumination intensity information of the area in which the vehicle is driving;
a vehicle-to-vehicle communication device, configured to acquire road traffic information and auxiliary lane information of the area in which the vehicle is driving;
a processing unit to:
processing the image information to obtain lane line characteristics and road edge characteristics;
processing the radar information to obtain road edge characteristics;
determining a weight ratio of road edge features obtained by processing the image information, a weight ratio of road edge features obtained by processing the radar information, and a weight ratio of the auxiliary lane information according to the road traffic information and the illumination intensity information, and identifying lane positions according to the weight ratios, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, the lane line features, and the auxiliary lane information.
2. A lane recognition method, characterized by comprising:
according to the image information, acquiring lane line characteristics of a lane in a vehicle driving path and road edge characteristics on two sides of the lane; wherein the image information is collected by a camera mounted on the vehicle;
acquiring road edge characteristics on two sides of a lane according to the radar information; wherein the radar information is collected by a radar installed in the vehicle;
acquiring illumination intensity information of an area where the vehicle runs;
acquiring road traffic information and auxiliary lane information of an area where the vehicle runs;
determining a weight ratio of road edge features obtained by processing the image information, a weight ratio of road edge features obtained by processing the radar information, and a weight ratio of the auxiliary lane information according to the road traffic information and the illumination intensity information, and identifying lane positions according to the weight ratios, the road edge features obtained by processing the image information, the road edge features obtained by processing the radar information, the lane line features, and the auxiliary lane information.
3. The method according to claim 2, wherein the obtaining of lane line features of a lane and road edge features on both sides of the lane in the vehicle travel path from the image information is implemented as:
carrying out enhanced white balance processing on the image information, and dividing the image information into regions according to the RGB values of the pixels;
graying the image information of the divided regions, and extracting road characteristics;
and inputting the road characteristics into a pre-trained deep learning model, and outputting lane line characteristics and road edge characteristics.
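Purely as an illustrative sketch of the pipeline recited in claim 3 (the gray-world white balance, the RGB-based region split, and the model interface are assumptions made for the example, not the claimed implementation):

```python
# Illustrative sketch of the claim-3 pipeline; the gray-world white
# balance, the brightness threshold, and the model call are assumptions.
import numpy as np

def gray_world_white_balance(img):
    """img: HxWx3 uint8 RGB array; returns a white-balanced copy."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    gain = means.mean() / np.maximum(means, 1e-6)    # gray-world gains
    return np.clip(img * gain, 0, 255).astype(np.uint8)

def divide_regions(img, thresh=180):
    # Split pixels into bright (candidate markings) vs. dark (pavement)
    # regions based on their RGB values.
    return img.mean(axis=2) > thresh

def extract_road_features(img):
    balanced = gray_world_white_balance(img)
    region_mask = divide_regions(balanced)
    # ITU-R BT.601 luma, assuming RGB channel order.
    gray = (0.299 * balanced[..., 0] + 0.587 * balanced[..., 1]
            + 0.114 * balanced[..., 2])
    return gray * region_mask  # grayed regions fed to the trained model

# lane_lines, road_edges = model.predict(extract_road_features(frame))
```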
4. The method according to claim 2, wherein the obtaining of the road edge characteristics on both sides of the lane from the radar information is implemented as:
and filtering the radar information, and extracting road edge characteristics.
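A minimal sketch of what filtering the radar information and extracting a road edge might look like on a 2-D point cloud is given below; the intensity threshold and the straight-line edge model are assumptions, since claim 4 does not specify the filter:

```python
# Hypothetical sketch: discard weak radar returns (clutter) by intensity,
# then fit a straight line through the left-side returns as the road edge.
import numpy as np

def extract_left_road_edge(points, intensities, min_intensity=0.5):
    """points: Nx2 array of (lateral x, longitudinal y) radar returns."""
    strong = points[intensities >= min_intensity]  # simple clutter filter
    left = strong[strong[:, 0] < 0]                # returns left of the car
    if len(left) < 2:
        return None
    # Model the edge as x = a*y + b and fit it by least squares.
    a, b = np.polyfit(left[:, 1], left[:, 0], deg=1)
    return a, b

pts = np.array([[-3.6, 5.0], [-3.5, 10.0], [-3.4, 15.0], [0.5, 8.0]])
inten = np.array([0.9, 0.8, 0.85, 0.2])
print(extract_left_road_edge(pts, inten))  # slope and offset of the edge
```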
5. An autonomous vehicle comprising the lane recognition system of claim 1.
CN201910731264.7A 2019-08-08 2019-08-08 Lane recognition system and method and automatic driving automobile Active CN110422168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910731264.7A CN110422168B (en) 2019-08-08 2019-08-08 Lane recognition system and method and automatic driving automobile

Publications (2)

Publication Number Publication Date
CN110422168A CN110422168A (en) 2019-11-08
CN110422168B (en) 2020-06-16

Family

Family ID: 68413308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910731264.7A Active CN110422168B (en) 2019-08-08 2019-08-08 Lane recognition system and method and automatic driving automobile

Country Status (1)

Country Link
CN (1) CN110422168B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110979318B (en) * 2019-11-20 2021-06-04 苏州智加科技有限公司 Lane information acquisition method and device, automatic driving vehicle and storage medium
WO2023141940A1 (en) * 2022-01-28 2023-08-03 华为技术有限公司 Intelligent driving method and device, and vehicle
CN114821531B (en) * 2022-04-25 2023-03-28 广州优创电子有限公司 Lane line recognition image display system based on electronic exterior rearview mirror ADAS

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103786729A (en) * 2012-10-26 2014-05-14 现代自动车株式会社 Lane recognition method and system
KR101558756B1 (en) * 2014-04-10 2015-10-07 현대자동차주식회사 Lane Detecting Method and Apparatus for Estimating Lane Based on Camera Image and Sensor Information
CN106096525A (en) * 2016-06-06 2016-11-09 重庆邮电大学 A kind of compound lane recognition system and method
CN107389084A (en) * 2017-06-09 2017-11-24 深圳市速腾聚创科技有限公司 Planning driving path planing method and storage medium
CN107826092A (en) * 2017-10-27 2018-03-23 智车优行科技(北京)有限公司 Advanced drive assist system and method, equipment, program and medium
CN109492566A (en) * 2018-10-31 2019-03-19 奇瑞汽车股份有限公司 Lane position information acquisition method, device and storage medium
CN105825173B (en) * 2016-03-11 2019-07-19 福州华鹰重工机械有限公司 General road and lane detection system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lane-level positioning method based on vision and millimeter-wave radar; Zhao Xiang et al.; Journal of Shanghai Jiao Tong University; 31 January 2018; Vol. 52, No. 1; p. 34 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210928

Address after: 045000 room 610, government business building, North Street, mining area, Yangquan City, Shanxi Province

Patentee after: Xingdao Technology (Yangquan) Co.,Ltd.

Address before: Room 106-771, No. 2 Building, 8 Yuan, Xingsheng South Road, Miyun District, Beijing, 101500

Patentee before: Zhiyou Open Source Communication Research Institute (Beijing) Co.,Ltd.