WO2022082571A1 - Lane line detection method and apparatus - Google Patents

Lane line detection method and apparatus

Info

Publication number
WO2022082571A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane line
image
lane
pixels
lines
Application number
PCT/CN2020/122716
Other languages
English (en)
Chinese (zh)
Inventor
罗达新
高鲁涛
马莎
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2020/122716 (WO2022082571A1)
Priority to CN202080004827.3A (CN112654998B)
Publication of WO2022082571A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the present application relates to the field of sensor technology, and in particular, to a lane line detection method and device.
  • lane line detection can be performed based on sensors. For example, when the vehicle is driving, the camera is used to obtain road pictures, and the vehicle driving system detects and recognizes the lane lines in the road pictures to assist in deciding whether to take measures such as adjusting the direction and changing lanes.
  • the first is a detection method based on deep learning.
  • machine learning methods such as convolutional neural networks are used to learn the features of the lane lines, segment the lane lines, and then fit the lane lines.
  • the traditional computer vision detection method uses the Hough transform to estimate the positions of multiple lane lines, extracts the area where the lane lines are located, and then fits each area separately.
  • the embodiments of the present application provide a lane line detection method and device, which can obtain at least one first area according to a first image, obtain a first lane line in each first area, and then constrain the first lane lines according to the rules followed by lane lines. This can avoid problems such as excessive curvature of lane lines, non-parallel lane lines, or intersecting lane lines among the identified lane lines, thereby improving the accuracy of lane line detection.
  • an embodiment of the present application provides a lane line detection method, which determines at least one first area according to a first image; obtains at least one first lane line according to the at least one first area; and determines, according to the at least one first lane line, a second lane line that satisfies a constraint condition, where the constraint condition includes the rules followed by lane lines.
  • This embodiment of the present application constrains the relationship between the first lane lines according to the rules followed by lane lines and obtains a lane line detection result that satisfies the constraint conditions, so as to avoid problems in the identified lane lines such as excessive curvature, non-parallel lane lines, or intersecting lane lines, thereby improving the accuracy of lane line detection.
  • the rules followed by the lane lines include at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines falls within a first range, the curvature of a first lane line falls within a second range, the distance between two adjacent first lane lines falls within a third range, and the curvature difference between two adjacent first lane lines falls within a fourth range.
  • determining at least one first area according to the first image in the embodiment of the present application includes: acquiring a third lane line according to the first image; and determining at least one first area according to the third lane line and a first distance, wherein the first distance is related to the width of the lane.
  • the first area is determined according to the first distance and the third lane line with better recognition effect, so that the first lane line determined in the first area is also relatively more accurate.
  • determining at least one first region according to the first image in the embodiment of the present application includes: acquiring a third lane line according to the first image; and determining a plurality of first regions in the first image according to the third lane line and an integral map constructed from the first image, wherein the abscissa of the integral map is the pixel column index of the image and the ordinate is the number of pixels in that column along the vertical axis.
  • the first region is determined according to the third lane line and the maxima of the integral map, where a maximum of the integral map may correspond to a position where lane line pixels are concentrated, so the first region determined at the maximum is also more accurate.
  • determining at least one first area according to the third lane line and the integral map constructed from the first image in the embodiment of the present application includes: determining, according to the third lane line, the region where the third lane line is located; obtaining a plurality of maxima of the integral map; and, at the positions corresponding to the plurality of maxima, determining at least one first region parallel to the region where the third lane line is located.
  • obtaining multiple maxima of the integral map in the embodiment of the present application includes: straightening the first image according to the third lane line to obtain a second image, wherein the third lane line in the straightened second image is parallel to the vertical axis; generating an integral map according to the second image; and obtaining multiple maxima of the integral map.
  • the embodiment of the present application uses any pixel of the third lane line as a reference point and straightens the third lane line into a fourth lane line parallel to the longitudinal axis; then, according to the positions and directions in which the other pixels of the third lane line moved during straightening, the pixels in the first image with the same ordinates as those other pixels are moved accordingly to obtain the second image.
  • the third lane line is the lane line with the largest number of pixels in the first image; or, the number of pixels of the third lane line is greater than the first threshold.
  • obtaining the first lane line in the at least one first area in the embodiment of the present application includes: using a random sample consensus (RANSAC) algorithm to respectively fit the pixels in the at least one first area to obtain a first lane line in the at least one first area.
  • using the RANSAC algorithm to respectively fit the pixels in the at least one first region in the embodiment of the present application includes: using the RANSAC algorithm to fit the pixels in the at least one first region in parallel.
  • the RANSAC algorithm is used to fit multiple first areas simultaneously, which can improve the efficiency of lane line detection.
  • lane lines that satisfy the constraint conditions are determined in the first area N times to obtain multiple lane lines, wherein N is a non-zero natural number; the lane line with the largest number of pixels among the multiple lane lines is selected to obtain the second lane line.
  • This embodiment of the present application constrains the relationship between the first lane lines according to the rules followed by lane lines and selects the lane line with the largest number of pixels among the first lane lines that satisfy the constraint conditions as the second lane line, so the obtained lane line detection result is also more accurate.
  • the first image is an overhead image of the lane line.
  • an embodiment of the present application provides a lane line detection device.
  • the lane line detection device can be a vehicle with a lane line detection function, or other components with a lane line detection function.
  • the lane line detection device includes but is not limited to: on-board terminals, on-board controllers, on-board modules, on-board components, on-board chips, on-board units, and sensors such as on-board radars or on-board cameras.
  • the lane line detection device can be an intelligent terminal, or can be set in an intelligent terminal with a lane line detection function other than a vehicle, or in a component of such an intelligent terminal.
  • the intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, and robots.
  • the lane line detection device includes, but is not limited to, a smart terminal or a controller, a chip, a radar or a camera and other sensors in the smart terminal, and other components.
  • the lane line detection device may be a general-purpose device or a special-purpose device.
  • the apparatus can also be a desktop computer, a portable computer, a network server, a personal digital assistant (PDA), a mobile phone, a tablet computer, a wireless terminal device, an embedded device, or another device with processing functions.
  • the embodiment of the present application does not limit the type of the lane line detection device.
  • the lane line detection device may also be a chip or processor with a processing function, and the lane line detection device may include at least one processor.
  • the processor can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • the chip or processor with processing function may be arranged in the sensor, or may not be arranged in the sensor, but arranged at the receiving end of the output signal of the sensor.
  • the processor includes but is not limited to at least one of a central processing unit (CPU), a graphics processing unit (GPU), a micro control unit (MCU), a microprocessor unit (MPU), or a coprocessor.
  • the lane line detection device may also be a terminal device, or a chip or a chip system in the terminal device.
  • the lane line detection device may include a processing unit.
  • the processing unit may be a processor.
  • the lane line detection device may further include a storage unit, which may be a memory. The storage unit is used for storing instructions, and the processing unit executes the instructions stored in the storage unit, so that the terminal device implements the lane line detection method described in the first aspect or any possible implementation manner of the first aspect.
  • the storage unit may be a storage unit (eg, a register, a cache, etc.) in the chip, or a storage unit (eg, a read-only memory, a random access memory, etc.) located outside the chip in the terminal device.
  • the processing unit is specifically used to determine at least one first area according to the first image; the processing unit is specifically used to obtain at least one first lane line according to the at least one first area; the processing unit is specifically used to determine, according to the at least one first lane line, a second lane line that satisfies a constraint condition, where the constraint condition includes the rules followed by lane lines.
  • the rules followed by the lane lines include at least one of the following: the width between pixels with the same ordinate in two adjacent first lane lines falls within a first range, the curvature of a first lane line falls within a second range, the distance between two adjacent first lane lines falls within a third range, and the curvature difference between two adjacent first lane lines falls within a fourth range.
  • the processing unit is specifically configured to acquire the third lane line according to the first image; the processing unit is further configured to determine at least one first area according to the third lane line and the first distance, wherein the first distance is related to the width of the lane.
  • the processing unit is specifically configured to acquire the third lane line according to the first image; the processing unit is further configured to determine a plurality of first regions in the first image according to the third lane line and the integral map constructed from the first image, wherein the abscissa of the integral map is the pixel column index of the image and the ordinate is the number of pixels in that column along the vertical axis.
  • the processing unit is specifically used to determine, according to the third lane line, the area where the third lane line is located; the processing unit is specifically used to acquire multiple maxima of the integral map; the processing unit is further configured to determine, at the positions corresponding to the multiple maxima, at least one first area parallel to the area where the third lane line is located.
  • the processing unit is specifically configured to straighten the first image according to the third lane line to obtain the second image, wherein the third lane line in the straightened second image is parallel to the vertical axis; the processing unit is specifically used to generate an integral map according to the second image; the processing unit is also specifically used to acquire multiple maxima of the integral map.
  • the processing unit is specifically configured to, using any pixel of the third lane line as a reference point, straighten the third lane line into a fourth lane line parallel to the longitudinal axis; the processing unit is further configured to move, according to the positions and directions in which the other pixels of the third lane line moved during straightening, the pixels in the first image with the same ordinates as those other pixels, to obtain the second image.
  • the third lane line is the lane line with the largest number of pixels in the first image; or, the number of pixels of the third lane line is greater than the first threshold.
  • the processing unit is specifically configured to use a random sample consensus (RANSAC) algorithm to respectively fit the pixels in the at least one first area to obtain the first lane line in the at least one first area.
  • the processing unit is specifically configured to use the RANSAC algorithm to fit the pixels in the at least one first region in parallel.
  • the processing unit is specifically configured to determine lane lines satisfying the constraint conditions in the first area N times to obtain multiple lane lines, wherein N is a non-zero natural number; the processing unit is further configured to determine the lane line with the largest number of pixels among the multiple lane lines to obtain the second lane line.
  • the first image is an overhead image of the lane line.
  • an embodiment of the present application further provides a sensor system for providing a vehicle with a lane line detection function. It includes at least one lane line detection device mentioned in the above embodiments of the present application, and other sensors such as cameras and radars. At least one sensor device in the system can be integrated into a whole machine or equipment, or can be provided independently as an element or device.
  • the embodiments of the present application further provide a system applied in unmanned driving or intelligent driving, which includes at least one of the lane line detection devices, cameras, radars, and other sensors mentioned in the above embodiments of the present application; at least one device in the system can be integrated into a whole machine or equipment, or can be provided independently as a component or device.
  • any of the above systems may interact with the vehicle's central controller to provide detection and/or fusion information for decision-making or control of the vehicle's driving.
  • an embodiment of the present application further provides a terminal, where the terminal includes at least one lane line detection device mentioned in the above embodiments of the present application or any of the above systems.
  • the terminal may be smart home equipment, smart manufacturing equipment, smart industrial equipment, smart transportation equipment (including drones, vehicles, etc.) and the like.
  • the present application provides a chip or a chip system, where the chip or chip system includes at least one processor and a communication interface, the communication interface and the at least one processor are interconnected by a line, and the at least one processor is used to run a computer program or instruction to perform the lane line detection method described in any one of the implementation manners of the first aspect.
  • the communication interface in the chip may be an input/output interface, a pin, a circuit, or the like.
  • the chip or chip system described above in this application further includes at least one memory, where instructions are stored in the at least one memory.
  • the memory may be a storage unit inside the chip, such as a register or a cache, or a storage unit located outside the chip (eg, a read-only memory, a random access memory, etc.).
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program or instruction is stored in the computer-readable storage medium, and when the computer program or instruction is run on a computer, the computer is made to execute the lane line detection method described in any one of the implementation manners of the first aspect.
  • an embodiment of the present application provides a target tracking device, including: at least one processor and an interface circuit, where the interface circuit is configured to provide information input and/or information output for the at least one processor; at least one processor is configured to run code instructions to implement any method of the first aspect or any possible implementations of the first aspect.
  • FIG. 1 is a schematic diagram of an automatic driving scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of problems existing in existing detection methods.
  • FIG. 3 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the present application.
  • FIG. 4 is an integral map constructed according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a lane line detection method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a first area determined in an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of determining a first area according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of determining a lane line position according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first area determined in an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of determining a first area according to an embodiment of the present application.
  • FIG. 11 is a flowchart of determining a maximum value provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of straightening a first image according to an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of determining a second lane line according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a lane line detection device provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of a chip according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • the first log and the second log are only for distinguishing network logs in different time windows, and the sequence of the logs is not limited.
  • the words “first”, “second” and the like do not limit the quantity, and the words “first”, “second” and the like do not limit certain differences.
  • “at least one” means one or more, and “plurality” means two or more.
  • “And/or”, which describes the association relationship of the associated objects, indicates that there can be three kinds of relationships, for example, A and/or B, which can indicate: the existence of A alone, the existence of A and B at the same time, and the existence of B alone, where A, B can be singular or plural.
  • possible lane line detection methods include: detection methods based on deep learning and detection methods based on traditional computer vision.
  • the vehicle driving system uses a machine learning method such as a convolutional neural network to learn lane line features, segment the lane lines, and then fit the lane lines to obtain the lane line detection result.
  • detection methods based on deep learning require specially labeled data, which may lead to insufficient data or low data quality.
  • the labeled data needs to be trained by high-performance computers to obtain models, which has certain limitations.
  • a possible implementation of the computer vision-based detection method is: using Hough transform to fit a road image to determine multiple lane lines, and obtain a lane line detection result.
  • the embodiment of the present application provides a lane line detection method, which can obtain at least one first area according to the first image and obtain the first lane line in the first area. Then, according to the rules followed by lane lines, the relationship between the first lane lines is constrained, and a lane line detection result that meets the constraints is obtained, which can avoid problems in the identified lane lines such as excessive curvature, non-parallel lane lines, or intersecting lane lines, thereby improving the accuracy of lane line detection.
  • FIG. 3 is a functional block diagram of a vehicle 300 according to an embodiment of the present invention.
  • the vehicle 300 is configured in a fully or partially autonomous driving mode.
  • the vehicle 300 can control itself while in an autonomous driving mode, can determine the current state of the vehicle and its surroundings through human manipulation, determine the likely behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the likelihood that the other vehicle will perform the possible behavior, and control the vehicle 300 based on the determined information.
  • the vehicle 300 may be placed to operate without human interaction.
  • Vehicle 300 may include various subsystems, such as travel system 302 , sensor system 304 , control system 306 , one or more peripherals 308 and power supply 310 , computer system 312 and user interface 316 .
  • vehicle 300 may include more or fewer subsystems, and each subsystem may include multiple elements. Additionally, each of the subsystems and elements of the vehicle 300 may be interconnected by wire or wirelessly. The following is a detailed description of the computer system 312 related to the present invention.
  • Computer system 312 may include at least one processor 313 that executes instructions 315 stored in a non-transitory computer-readable medium such as data storage device 314 .
  • Computer system 312 may also be a plurality of computing devices that control individual components or subsystems of vehicle 300 in a distributed fashion.
  • Processor 313 may be any conventional processor, such as a commercially available CPU.
  • the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • Although FIG. 3 functionally illustrates the processor, memory, and other elements of the computer 310 in the same block, one of ordinary skill in the art will understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored within the same physical enclosure.
  • the memory may be a hard drive or other storage medium located within an enclosure other than computer 310 .
  • reference to a processor or computer will be understood to include reference to a collection of processors or computers or memories that may or may not operate in parallel.
  • some components such as the steering and deceleration components, may each have their own processors that only perform computations related to component-specific functions.
  • a processor may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are performed on a processor disposed within the vehicle while others are performed by a remote processor, including taking steps necessary to perform a single maneuver.
  • data storage device 314 may include instructions 315 (eg, program logic) executable by processor 313 to perform various functions of vehicle 300 , including those described above.
  • the data storage device 314 may contain lane line detection instructions 315 that may be executed by the processor 313 to perform the function of lane line detection of the vehicle 300 .
  • the data storage device 314 may store data such as road maps, route information, the vehicle's position, direction, speed, and other such vehicle data, among other information. Such information may be used by the vehicle 300 and the computer system 312 during operation of the vehicle 300 in autonomous, semi-autonomous and/or manual modes.
  • the data storage device 314 may store environmental information obtained from the sensor system 304 or other components of the vehicle 300 .
  • the environmental information may be, for example, whether there are green belts, traffic lights, pedestrians, etc. near the current environment of the vehicle. Algorithms such as machine learning can be used to calculate whether there are green belts, traffic lights, pedestrians, etc. near the current environment.
  • the data storage device 314 may also store state information of the vehicle itself, as well as state information of other vehicles with which the vehicle interacts.
  • the state information includes, but is not limited to, the speed, acceleration, heading angle, etc. of the vehicle.
  • the vehicle obtains the distance between other vehicles and itself, the speed of other vehicles, etc. based on the speed measurement and distance measurement functions of the radar 326 .
  • the processor 313 can obtain the above-mentioned environmental information or state information from the data storage device 314, and execute the instructions 315 including the lane line detection program to obtain the lane line detection result for the road. Based on the environmental information of the environment where the vehicle is located, the state information of the vehicle itself, the state information of other vehicles, and the traditional rule-based driving strategy, combined with the lane line detection result, the final driving strategy is obtained, and the steering system 332 is used to control the vehicle for automatic driving (such as steering, U-turns, etc.).
  • one or more of these components described above may be installed or associated with the vehicle 300 separately.
  • data storage device 314 may exist partially or completely separate from vehicle 300 .
  • the above-described components may be communicatively coupled together in a wired and/or wireless manner.
  • the above components are just examples.
  • components in each of the above modules may be added or deleted according to actual needs, and FIG. 3 should not be construed as a limitation on the embodiment of the present invention.
  • the above-mentioned vehicle 300 can be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, a playground vehicle, construction equipment, a tram, a golf cart, a train, a cart, etc.
  • the embodiments of the invention are not particularly limited.
  • the integral map described in the embodiments of the present application may be constructed based on a grayscale image.
  • the grayscale image may be a grayscale image obtained by performing grayscale processing on the first image.
  • the abscissa of the integral map is the pixel column index of the image, and the ordinate is the number of pixels in that column along the vertical axis.
  • FIG. 4 is an integral graph constructed by an embodiment of the present application for the grayscale image of the first image.
  • the range of the abscissa of the integral map is 0 to 250, and the range of the ordinate is 0 to 700, where points A, B, and C are the maxima of the integral map.
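  • As a non-limiting illustration, such an integral map can be computed by counting lane pixels in each column of a binarized grayscale image and then picking its maxima; the sketch below assumes lane pixels are nonzero, and the min_count and min_gap values are illustrative choices, not values from the application.

```python
import numpy as np

def column_integral_map(binary_img: np.ndarray) -> np.ndarray:
    # Count the nonzero (lane) pixels in every pixel column; the result's
    # index is the abscissa of the integral map, its value the ordinate.
    return np.count_nonzero(binary_img, axis=0)

def find_maxima(hist: np.ndarray, min_count: int = 50, min_gap: int = 40):
    # Greedy peak picking: take the highest column, suppress its
    # neighborhood (roughly one lane width), and repeat.
    hist = hist.astype(float).copy()
    peaks = []
    while hist.max() >= min_count:
        x = int(hist.argmax())
        peaks.append(x)
        hist[max(0, x - min_gap):x + min_gap] = 0
    return sorted(peaks)  # e.g. the columns of points A, B, and C
```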
  • FIG. 5 provides a lane line detection method according to an embodiment of the present application, comprising the following steps:
  • S501 Determine at least one first area according to the first image.
  • the first image described in this embodiment of the present application may be a road picture acquired by a camera.
  • the first image may be a color image.
  • the camera in the embodiment of the present application may be a camera of a driver monitoring system, a cockpit-type camera, an infrared camera, a driving recorder (ie, a video recording terminal), etc., which is not limited in the specific embodiment of the present application.
  • the first area described in this embodiment of the present application may be an area estimated in the road where lane lines may exist.
  • the first area described in the embodiments of the present application is not the first area in a specific image, but an area that may include lane lines in each image, and the first area may correspond to different content in different images.
  • the part of the first image corresponding to the first area may be an image located in the first area in the first image; the part of the second image corresponding to the first area may be an image located in the first area in the second image .
  • a first possible implementation of determining at least one first region according to the first image is: the vehicle driving system acquires a grayscale image of the first image, then constructs an integral map according to the grayscale image of the first image, and determines at least one first region at the location of at least one maximum of the integral map.
  • the grayscale image of the first image is obtained by performing grayscale processing on the first image.
  • Grayscale processing is a relatively common technology, and details are not described here.
  • the number of maximum values corresponds to the number of lane lines in the road, and a first area is determined at the position of each maximum value.
  • the width of the first area may be set by the machine, and the height of the first area may be the same as the height of the first image.
  • the left image of FIG. 6 is a grayscale image of the first image
  • the middle image of FIG. 6 is an integral graph constructed according to the grayscale image of the first image.
  • a rectangular area can be used to frame the possible areas of the lane lines, and the three first areas shown in the right figure of Figure 6 can be obtained.
  • the second possible implementation of determining the at least one first region according to the first image is: straightening the first image. At least one maximum value is obtained according to the integral map constructed from the grayscale image corresponding to the straightened first image, and then at least one first region is determined at the position of at least one maximum value of the integral map.
  • the vehicle driving system may rotate the first image, so that the pixels of the lane line in the rotated first image are more concentrated in the vertical direction.
  • the angle at which the first image is rotated may be set by the machine.
  • the grayscale image of the straightened first image is obtained by performing grayscale processing on the straightened first image.
  • Grayscale processing is a relatively common technology, and details are not described here.
  • the position of the maximum value is the position where the lane line pixels are more concentrated, and the lane line pixels are more concentrated in the vertical direction after straightening the first image.
  • the first region determined at the position of the maximum value of the integral map constructed by the straightened first image is also relatively more accurate.
  • S502 Obtain at least one first lane line according to at least one first area.
  • the first lane line described in this embodiment of the present application may be a lane line obtained by detecting pixel points in the first area.
  • the first lane line may be a lane line obtained by detecting pixels in the first area by using methods such as Hough transform, sliding window, or random sample consensus (RANSAC).
  • the vehicle driving system uses the Hough transform algorithm to fit the pixel points in the at least one first area to obtain the at least one first lane line.
  • a possible implementation is: the coordinate values of the pixels are transformed into curves in the parameter space, and the intersections of the curves are obtained in the parameter space, thereby determining at least one first lane line.
  • the Hough transform is suitable for detecting straight lines.
  • when the lane lines are curved, the sliding window or RANSAC algorithm can be considered for detection.
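  • A minimal sketch of the Hough step with OpenCV is shown below; the image path and the vote/length thresholds are assumptions for illustration, not values from the application.

```python
import cv2
import numpy as np

# Edge pixels of one first area (e.g. from Canny) vote for (rho, theta)
# curves in the parameter space; intersections of many curves are lines.
gray = cv2.imread("first_area.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print(f"lane segment ({x1},{y1}) -> ({x2},{y2})")
```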
  • a possible implementation in which the vehicle driving system uses the sliding window algorithm to fit the pixels in the at least one first area and obtain the at least one first lane line is: according to the positions of the lane line pixels at the bottom of the image, select N (N can be a natural number greater than or equal to 1) pixels as search starting points, and then generate an initial sliding window centered on each selected starting point to complete a search from the bottom to the top.
  • the number of search starting points may correspond to the number of first regions.
  • the search from the bottom to the top of each initial sliding window can be understood as the process of finding a pixel of a lane line in a first area.
  • the number of sliding windows in the vertical direction and the width of the sliding windows can be set manually or by machine, and the height of the sliding windows can be obtained by dividing the number of pixels in the vertical direction in the first area by the set number of sliding windows.
  • After determining the width and height of the initial sliding window, the vehicle driving system determines the center of the next sliding window according to the mean of the lane line pixel coordinates within the current window, and then repeats this operation, that is, the position of each sliding window in the search determines the center of the next window, until the sliding windows cover the lane line pixels in the image. Finally, a second-order polynomial is fitted to these center points to obtain at least one first lane line.
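  • The following is a rough sketch of that bottom-to-top search, assuming lane pixels are nonzero in a binary first area; the function name, window count, margin, and minimum pixel count are illustrative assumptions.

```python
import numpy as np

def sliding_window_fit(binary, x_start, n_windows=9, margin=40, min_pix=30):
    # binary: 2-D array of one first area, lane pixels nonzero.
    # x_start: search starting point, e.g. an integral-map maximum.
    h = binary.shape[0]
    win_h = h // n_windows            # height = vertical pixels / window count
    ys, xs = np.nonzero(binary)
    x_center, keep = x_start, np.zeros(len(ys), dtype=bool)
    for w in range(n_windows):
        y_hi, y_lo = h - w * win_h, h - (w + 1) * win_h
        inside = ((ys < y_hi) & (ys >= y_lo) &
                  (xs > x_center - margin) & (xs < x_center + margin))
        keep |= inside
        if inside.sum() > min_pix:    # recenter the next window on the mean
            x_center = int(xs[inside].mean())
    # second-order polynomial x = a*y^2 + b*y + c through the kept pixels
    return np.polyfit(ys[keep], xs[keep], 2)
```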
  • the vehicle driving system uses the RANSAC algorithm to fit the pixel points in the at least one first area to obtain the at least one first lane line.
  • a possible implementation is: randomly sample the lane line pixels in the first area to obtain a subset of the lane line pixels; fit the sampled pixels to obtain a corresponding lane line, and record the number of pixels on that lane line; repeat the above steps to obtain multiple lane lines, and select the one with the largest number of pixels among the multiple lane lines to obtain at least one first lane line.
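  • A compact sketch of that RANSAC loop is given below; the iteration count, sample size, and inlier tolerance are illustrative assumptions.

```python
import numpy as np

def ransac_lane_fit(ys, xs, n_iter=100, sample=10, tol=3.0):
    # ys, xs: coordinates of the lane line pixels in one first area.
    rng = np.random.default_rng(0)
    best_coef, best_count = None, -1
    for _ in range(n_iter):
        pick = rng.choice(len(ys), size=sample, replace=False)
        coef = np.polyfit(ys[pick], xs[pick], 2)   # candidate lane line
        resid = np.abs(np.polyval(coef, ys) - xs)  # distance of every pixel
        count = int((resid < tol).sum())           # pixels on this candidate
        if count > best_count:                     # keep the most-supported fit
            best_coef, best_count = coef, count
    return best_coef, best_count
```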
  • S503 Determine a second lane line that satisfies the constraint condition according to the at least one first lane line.
  • the constraint condition satisfied by the first lane line may include a rule followed by the lane line.
  • the constraint condition satisfied by the first lane lines may be that two adjacent first lane lines are parallel lane lines, or two adjacent first lane lines do not intersect, and so on.
  • the corresponding curvatures of the two adjacent first lane lines are calculated respectively.
  • when the curvatures corresponding to two adjacent first lane lines are not equal, the two adjacent first lane lines may be too close to each other or may intersect, resulting in non-parallel adjacent lane lines; on an actual road, lane lines usually conform to the driving rules of vehicles, and situations such as being too close or crossing do not occur, so it can be judged that the obtained lane line detection result is inaccurate.
  • when the curvatures corresponding to the two adjacent first lane lines are equal, it can be judged that the detection result of the lane lines is accurate, and a second lane line that conforms to the rules followed by lane lines is obtained.
  • in another possible implementation, the coordinate values of the pixels of each first lane line in the image are counted separately. If the same coordinates are found among the coordinate values corresponding to different lane lines, two adjacent first lane lines may intersect; on an actual road, lane lines usually conform to the driving rules of vehicles and do not intersect, so it can be judged that the obtained lane line detection result is inaccurate. When no identical coordinates are found among the coordinate values corresponding to the first lane lines, it can be judged that the lane line detection result is accurate, and a second lane line that conforms to the rules followed by lane lines is obtained.
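  • A sketch of such a constraint check for polynomial lane lines x = f(y) follows; the width and curvature-difference bounds stand in for the first through fourth ranges and are purely illustrative.

```python
import numpy as np

def satisfies_constraints(coefs, height, width_range=(60, 80), curv_tol=1e-3):
    # coefs: 2nd-order coefficients of adjacent first lane lines, left to right.
    ys = np.arange(height)
    curves = [np.polyval(c, ys) for c in coefs]
    for left, right in zip(curves, curves[1:]):
        gap = right - left                 # width at pixels with equal ordinate
        if gap.min() <= 0:                 # lines touch or intersect
            return False
        if not width_range[0] <= gap.mean() <= width_range[1]:
            return False
    for a, b in zip(coefs, coefs[1:]):
        # curvature of x = a*y^2 + ... is ~2*a for small slopes; adjacent
        # lane lines should have nearly equal curvature (parallelism).
        if abs(a[0] - b[0]) * 2 > curv_tol:
            return False
    return True
```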
  • An embodiment of the present application provides a lane line detection method, which can obtain at least one first area according to a first image, obtain the first lane line in the first area, and then constrain the first lane lines according to the rules followed by lane lines. In this way, problems such as excessive curvature of lane lines, non-parallel lane lines, or intersecting lane lines among the identified lane lines can be reduced, thereby improving the accuracy of lane line detection.
  • the first image is an overhead image of the lane line.
  • the road picture acquired by the camera undergoes perspective transformation, for example, the lane line in the distant view is closer to the middle, and the thickness of the lane line in the distant view and the close view is different.
  • the vehicle driving system can perform inverse perspective transformation on the road picture undergoing perspective transformation, such as converting the road picture to a top-view perspective to obtain the first image.
  • the lane lines in the first image obtained after inverse perspective transformation are parallel to each other, and the widths of the lane lines are equal.
  • a possible implementation of performing inverse perspective transformation on the road picture to obtain the first image is: calculating the transformation matrix of the camera, wherein the transformation matrix of the camera can be obtained by multiplying the internal parameter matrix and the external parameter matrix of the camera.
  • the transformation matrix of the camera represents the imaging of the camera. If the transformation matrix of the camera is inversely transformed, the inverse perspective transformation can be realized to eliminate the perspective deformation.
  • the transformation process can be expressed by the following formula:

$$s\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & t \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

  • where $[R\ t]$ is the extrinsic parameter matrix calibrated for the camera, $(x', y')$ are the coordinates after inverse perspective transformation, $(x, y)$ are the coordinates before inverse perspective transformation, $f_x$ and $f_y$ in the internal parameter matrix are related to the lens focal length of the camera, and $c_x$ and $c_y$ are the positions of the optical center of the camera in the pixel coordinate system, corresponding to the center coordinates of the image matrix.
  • the parameters in the internal parameter matrix and the external parameter matrix of the camera can be obtained through camera calibration.
  • the method of performing inverse perspective transformation on the first image is not limited to the above calculation method, and those skilled in the art can also obtain the overhead image of the road picture by calculating in other ways, which are not specifically limited in the embodiments of the present application.
  • the road picture obtained by the camera of the vehicle driving system may not undergo perspective transformation.
  • performing inverse perspective transformation on the road picture obtained by the camera is an optional step, and the road picture obtained by the camera can also be used directly as the first image.
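  • For illustration, a top-view first image can be obtained with OpenCV as sketched below; the four source/destination points are placeholders that would normally be derived from the calibrated camera matrices, and the file name is assumed.

```python
import cv2
import numpy as np

road = cv2.imread("road.png")                 # perspective road picture
h, w = road.shape[:2]

# Trapezoid around the lane in the road picture and the rectangle it
# should map to in the overhead (top-view) image; values are illustrative.
src = np.float32([[w * 0.15, h], [w * 0.45, h * 0.6],
                  [w * 0.55, h * 0.6], [w * 0.88, h]])
dst = np.float32([[w * 0.2, h], [w * 0.2, 0], [w * 0.8, 0], [w * 0.8, h]])

M = cv2.getPerspectiveTransform(src, dst)     # homography for the top view
first_image = cv2.warpPerspective(road, M, (w, h))
```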
  • FIG. 7 shows a possible implementation manner of S501.
  • S501 includes:
  • S701 Acquire a third lane line according to the first image.
  • the third lane line described in the embodiment of the present application may be any lane line obtained by identifying the first image, and the third lane line may be used as a reference for obtaining the first area.
  • the third lane line can be a lane line with distinctive features, or can be understood as a lane line with better recognition effect.
  • the third lane line may be the lane line with the largest number of pixels in the first image.
  • the number of pixels of the third lane line is greater than the first threshold.
  • the first threshold can be set manually or by a machine. When the number of pixels of the third lane line is greater than the first threshold, the third lane line can be considered relatively complete, and when the first region is subsequently determined based on the area where the third lane line is located, a more accurate first region can be obtained.
  • the vehicle driving system detects the first image to obtain multiple lane lines. Select the one with the largest number of pixels among the multiple lane lines to get the third lane line.
  • a possible implementation of the third lane line being a lane line whose number of pixels is greater than the first threshold is: the vehicle driving system sets a first threshold for the number of pixels of the third lane line, and selects, from the plurality of lane lines obtained by detecting the first image, one whose number of pixels is greater than the first threshold to obtain the third lane line.
  • when the number of pixels in the multiple lane lines does not reach the first threshold, the vehicle driving system performs image enhancement on the first image, and then re-detects the first image obtained after the image enhancement.
  • the vehicle driving system selects one of the plurality of lane lines detected again whose number of pixels is greater than the first threshold to obtain a third lane line.
  • a possible implementation of obtaining the third lane line according to the first image in the embodiment of the present application is: the vehicle driving system detects the first image to obtain a plurality of lane lines. Select any one of the obtained multiple lane lines as the third lane line.
  • the method for detecting the first image in the embodiment of the present application may include: a method based on deep learning, a method based on computer vision, and the like.
  • the vehicle driving system uses a method based on deep learning to detect the first image. For example, an image sample containing lane lines can be used to train a neural network model capable of outputting multiple lane lines, and multiple lane lines can be obtained by inputting the first image into the neural network model. Then select one lane line from the obtained multiple lane lines as the third lane line.
  • the image samples of the lane lines in this embodiment of the present application may include road image samples, and the road image samples may be obtained through a database.
  • the database used in this embodiment of the present application may be an existing public database or a created database.
  • the embodiment of the present application uses a method based on computer vision to detect the first image, and obtains multiple lane lines after processing such as lane line pixel extraction and lane line fitting. Then select one lane line from the obtained multiple lane lines as the third lane line.
  • the vehicle driving system obtains the lane line pixel information by performing edge detection on the first image.
  • the vehicle driving system may perform grayscale processing on the first image, and change the first image containing brightness and color into a grayscale image, so as to facilitate subsequent edge detection on the image.
  • Gaussian blurring may be performed on the grayscale image of the first image.
  • some relatively unclear noises in the grayscale image of the first image can be removed, so that edge information of the lane line can be obtained more accurately.
  • the vehicle driving system uses an algorithm such as Canny to perform edge detection on the processed first image to obtain edge information of the processed first image.
  • the edge information obtained by performing edge detection on the processed first image may include, in addition to edge information of lane lines, other edge information, such as edge information of trees and houses beside the road.
  • the vehicle driving system can infer the position of the lane line in the first image according to the angle of the camera, the shooting direction, and the like, filter out the other edge information, retain the edge information of the lane line, and finally obtain an image containing the lane line pixel information of the road.
  • a possible implementation in which the vehicle driving system infers the position of the lane line in the first image according to the camera angle, the shooting direction, and the like is as follows: when the vehicle is moving forward, the shooting direction of the camera is the road area in front of the vehicle, and it can be inferred that the lane line is located in the lower part of the first image; when the vehicle is reversing, the shooting direction of the camera is the road area behind the rear of the vehicle, and it can also be inferred that the lane line is located in the lower part of the first image; when the camera is a 360-degree multi-angle camera, the shooting direction can be the 360-degree road area around the vehicle, and it can likewise be inferred that the lane line is located in the lower part of the first image.
  • the left image of Figure 8 is obtained after the vehicle driving system performs edge detection on the processed first image.
  • the shooting direction of the camera is the area in front of the front of the vehicle, and it can be inferred that the lane line is located in the lower area of the image.
  • the vehicle driving system sets the area below the image as the area of interest, and obtains the pixel information of the lane line.
  • the lane line is located in the entire acquired image.
  • in this case, the edge information obtained by the edge detection of the processed first image is the pixel information of the lane line; that is, inferring the position of the lane line in the first image according to the camera angle, the shooting direction, and the like, and filtering out edge information other than the lane line, is an optional step.
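  • The grayscale, Gaussian blur, Canny, and region-of-interest steps above might look like the following sketch; the kernel size, Canny thresholds, and the half-image region of interest are illustrative assumptions.

```python
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("first_image.png"), cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)   # remove relatively unclear noise
edges = cv2.Canny(blur, 50, 150)           # edge information of the image

# Keep only the lower area where the lane line was inferred to be
# (forward-facing camera), masking out trees, houses, and other edges.
h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (0, h // 2), (w, h // 2), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)
lane_pixels = cv2.bitwise_and(edges, mask)
```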
  • the vehicle driving system may perform color segmentation on the first image according to the color features of the lane lines in the first image to obtain an image containing lane line pixel information.
  • the vehicle driving system may choose to set corresponding color intervals in a color space (such as the RGB color space) to extract the pixel information of the lane lines of the corresponding color in the first image. When lane lines of two colors exist in the acquired first image, the vehicle driving system combines the lane line pixel information extracted in the different color intervals to obtain an image containing the lane line pixel information.
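  • A sketch of such color segmentation is shown below; the white and yellow thresholds are illustrative and depend on the camera and lighting, and the use of HSV for yellow is an implementation choice, not from the application.

```python
import cv2
import numpy as np

img = cv2.imread("first_image.png")            # BGR color image

# Color interval for white lane lines, applied directly in RGB/BGR space.
white = cv2.inRange(img, (190, 190, 190), (255, 255, 255))

# Color interval for yellow lane lines, applied in HSV space.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))

# Two lane line colors present: combine the pixels from both intervals.
lane_pixels = cv2.bitwise_or(white, yellow)
```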
  • the vehicle driving system uses sliding window, Hough transform and other algorithms to fit the image containing the lane line pixel information to obtain multiple lane lines. According to the multiple lane lines obtained by fitting, the third lane line is determined.
  • the vehicle driving system uses algorithms such as sliding window and Hough transform to fit an image containing lane line pixel information, and a possible implementation of obtaining multiple lane lines is: according to the positions of the bottom lane line pixels in the image, select N (N can be a natural number greater than or equal to 1) pixels as search starting points, and then generate an initial sliding window centered on each selected starting point to complete a search from bottom to top.
  • the number of search starting points and the number of lane lines in the road can be the same, and the search from the bottom to the top of each initial sliding window can be understood as the process of finding a pixel of a lane line.
  • the number of sliding windows in the vertical direction and the width of the sliding windows can be set manually or by machines, and the height of the sliding windows can be obtained by dividing the number of pixels in the vertical direction in the picture by the set number of sliding windows.
  • After determining the width and height of the initial sliding window, the vehicle driving system determines the center of the next sliding window according to the mean of the lane line pixel coordinates within the current window, and then repeats this operation, that is, the position of each sliding window in the search determines the center of the next window, until the sliding windows cover the lane line pixels in the image. Finally, a second-order polynomial is fitted to these center points to obtain multiple lane lines.
  • a possible implementation of determining the third lane line according to the multiple obtained lane lines is: the vehicle driving system selects one lane line from the obtained multiple lane lines to obtain the third lane line.
  • another possible implementation of determining the third lane line according to the multiple lane lines obtained by fitting is: according to the positions of the multiple lane lines obtained by fitting, generate the area where each lane line is located, and then use the random sample consensus (RANSAC) algorithm to respectively fit the pixels in the area where each lane line is located to obtain multiple lane lines.
  • the multiple lane lines obtained by fitting the area through the RANSAC algorithm have better recognition effect than the multiple lane lines obtained directly by image fitting.
  • the vehicle driving system selects one lane line from the multiple lane lines fitted by the RANSAC algorithm to obtain the third lane line.
  • S702 Determine at least one first area according to the third lane line and the first distance.
  • the first distance is related to the width of the lane.
  • the relationship between the first distance and the lane width may be determined by acquiring internal parameters and external parameters of the camera. For example, a linear relationship between the width of the lane and the pixels in the first image is obtained from the intrinsic and extrinsic parameter matrices of the camera. Then, according to the linear relationship between the width of the lane and the pixels in the first image, a first distance corresponding to the width of the lane in the first image is determined.
  • the first distance corresponding to the lane width in the first image may be the number of pixels corresponding to the lane width in the first image.
  • the relationship between the first distance and the lane width may also be determined through prior knowledge.
  • the prior knowledge may be a table established from pictures historically acquired by the camera, recording the relationship between pixels in the picture and the real-world distances those pixels correspond to.
  • the first distances corresponding to different road widths are also different. After obtaining the specific road width, the first distance can be obtained by querying the table.
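  • As a rough illustration of the linear relationship between lane width and pixels, a pinhole-style conversion is sketched below; the function, the focal length fx, and the depth are assumptions, not values from the application.

```python
def lane_width_to_pixels(lane_width_m: float, fx: float, depth_m: float) -> int:
    # Pinhole approximation: width in pixels ~= fx * width_m / depth_m,
    # where fx comes from the camera's internal parameter matrix.
    return round(fx * lane_width_m / depth_m)

# A 3 m lane seen with fx = 700 at 30 m depth gives a first distance of
# 70 pixels, matching the example in the text.
print(lane_width_to_pixels(3.0, fx=700.0, depth_m=30.0))
```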
  • the embodiment of the present application may determine the area where the third lane line is located according to the position of the third lane line and the first distance in the first image, then translate the area where the third lane line is located, and determine at least one first area according to the area where the third lane line is located and the area obtained after the translation.
  • the distance for translating the area where the third lane line is located may be determined by the first distance.
  • as shown in the left figure of FIG. 9, the third lane line is at the left position in the figure, and a rectangular area is used to frame the third lane line to obtain the area where the third lane line is located, as shown in the middle figure of FIG. 9.
  • the resolution of the image is known to be 250*700, that is, there are 250 pixels in the horizontal direction. If the width of the lane is 3 meters, it can be obtained according to the internal parameter matrix and the external parameter matrix of the camera that a lane width of 3 meters corresponds to 70 pixels in the horizontal direction of the first image, that is, the first distance of the lane width in the first image.
  • the vehicle driving system translates the area where the third lane line is located to obtain the areas where the other lane lines are located. As shown on the right of Figure 9, the area where the third lane line is located and the area obtained after translation constitute three first areas.
  • in another possible implementation, the embodiment of the present application can estimate the positions of the other lane lines according to the position of the third lane line and the first distance in the first image, and use rectangular areas to frame the areas where the third lane line and the other lane lines are located, respectively, to obtain at least one first area.
  • An embodiment of the present application provides a lane line detection method, which can determine a first area according to a first distance and a third lane line with a better recognition effect, so that the first lane line determined in the first area is also relatively more accurate.
  • S501 includes:
  • S1001 Acquire a third lane line according to the first image.
• S1001 may correspond to the description of the third lane line in the noun explanation section; details are not repeated here.
  • S1002 Determine an area where the third lane line is located according to the third lane line.
  • a rectangular area is used to frame the area where the third lane line is located.
• the vehicle driving system may determine, according to the position of the third lane line shown in the left figure of FIG. 9, the area where the third lane line is located as shown in the middle figure of FIG. 9, that is, the rectangular area framing the third lane line.
  • S1003 Construct an integral map according to the first image.
  • a grayscale image of the first image is acquired, and an integral map is constructed according to the grayscale image of the first image.
  • S1003 may correspond to the description of the integral graph in the noun explanation part, and details are not repeated here.
• the ordinate of the integral map is the number of pixels of the image accumulated along the direction of the vertical axis, with one value for each horizontal position.
• the positions of the maximum values of the integral map are the positions where the lane-line pixels are concentrated, and the number of maximum values of the integral map is the same as the number of lane lines in the road.
• S1004 Acquire at least one maximum value of the integral map.
  • S1005 At the position corresponding to the maximum value, determine at least one first area parallel to the area where the third lane line is located.
  • a first area parallel to the area where the third lane line is located is respectively generated at the position of the maximum value.
  • the positions of three maxima are obtained respectively according to the integral map constructed from the grayscale image of the first image.
  • Three first regions parallel to the region where the third lane line is located are generated at the positions of the maximum values, as shown in the right figure of FIG. 9 .
• An embodiment of the present application provides a lane line detection method in which the first area is determined according to the third lane line and the maximum values of the integral map; since the maximum values mark where the lane-line pixels are concentrated, the first areas determined at the maximum values are also more accurate.
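• The following sketch illustrates one possible reading of S1003 to S1005: a column-wise integral map is built from the grayscale image, and its maxima mark the abscissas where lane-line pixels are concentrated. The function names and peak-picking parameters are hypothetical placeholders, not the patent's exact procedure.

```python
# Illustrative integral map and maxima search, assuming lane-line pixels
# are bright in the grayscale bird's-eye image.
import numpy as np

def integral_map(gray):
    """One value per abscissa: intensities summed along the vertical axis."""
    return gray.astype(np.float64).sum(axis=0)

def maxima(profile, min_separation, min_value):
    """Greedy peak picking: strongest columns first, kept only if at least
    `min_separation` pixels from already-kept peaks and above `min_value`."""
    peaks = []
    for x in np.argsort(profile)[::-1]:
        if profile[x] < min_value:
            break  # sorted descending, so all remaining values are too small
        if all(abs(int(x) - p) >= min_separation for p in peaks):
            peaks.append(int(x))
    return sorted(peaks)

# Toy image with three vertical lane lines; the three maxima are where first
# areas parallel to the third lane line's area would be generated.
gray = np.zeros((700, 250))
gray[:, [40, 110, 180]] = 255
print(maxima(integral_map(gray), min_separation=35, min_value=1.0))  # [40, 110, 180]
```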
  • S1004 includes:
• S1101 The third lane line is straightened into a fourth lane line parallel to the vertical axis. Then, according to the positions and directions by which the other pixels of the third lane line moved during the straightening, the pixels in the first image that have the same ordinates as those pixels are moved in the same way to obtain the second image.
• for example, the left figure of FIG. 12 shows the third lane line obtained from the first image. If the image has 250*700 pixels, the image contains 700 rows of pixels; suppose the third lane line also spans 700 rows, with one pixel per row. Taking the pixel of the first row of the third lane line as the reference point, the other pixels of the third lane line are moved so that their abscissas coincide with that of the reference point. Here, the other pixels of the third lane line refer to the pixels of rows 2 to 700 of the third lane line. The positions and directions of these movements are recorded; for example, the pixel of the second row of the third lane line moves two pixels along the positive semi-axis of the horizontal axis, and so on, yielding a fourth lane line parallel to the vertical axis.
• then, as shown in the middle figure of FIG. 12, the pixels of the first image that have the same ordinates as those moved pixels are moved by the same amounts in the same directions; for example, the pixels of the second row of the first image are moved two pixels along the positive semi-axis of the horizontal axis, and so on, to obtain the second image shown in the right figure of FIG. 12. A pixel of the first image having the same ordinate as another pixel may be understood as a pixel of the first image lying in the same row as that pixel.
  • S1101 is an optional step.
  • S1102 Generate an integral map according to the second image.
  • a grayscale image of the second image may be acquired, and an integral map is constructed according to the grayscale image of the second image.
  • S1103 Acquire at least one maximum value of the integral graph.
• This embodiment of the present application provides a lane line detection method in which the positions of the maximum values are determined from the straightened image. Because the lane-line pixels of the straightened second image are more concentrated in the vertical direction, the positions of the maximum values obtained in this way are more accurate, and therefore the first areas determined from those maximum values are also more accurate.
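• A minimal sketch of the straightening of S1101 follows, assuming the third lane line supplies one pixel per row (abscissa x_lane[r] in row r) as in the FIG. 12 example; the names and the wrap-around handling are illustrative assumptions.

```python
# Sketch of S1101. np.roll wraps pixels around the image border, which a real
# system would replace by padding; this is only to show the bookkeeping.
import numpy as np

def straighten(first_image, x_lane):
    """Shift each row so the third lane line becomes vertical through the
    reference point (its pixel in the first row); the same per-row shift is
    applied to every pixel sharing that row (i.e. the same ordinate)."""
    second_image = np.empty_like(first_image)
    shifts = x_lane[0] - x_lane           # movement along the horizontal axis
    for r in range(first_image.shape[0]):
        second_image[r] = np.roll(first_image[r], int(shifts[r]), axis=0)
    return second_image

# Example: a slanted third lane line; after straightening, every lane pixel
# sits in the reference column (80).
img = np.zeros((700, 250))
x_lane = np.linspace(80, 40, 700).astype(int)
img[np.arange(700), x_lane] = 255
print(np.flatnonzero(straighten(img, x_lane)[699]))  # -> [80]
```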
  • S502 includes: the vehicle driving system uses the RANSAC algorithm to fit the pixel points in at least one first area to obtain at least one first lane line.
• the vehicle driving system randomly samples the pixels in the at least one first area to obtain a subset of the pixels in the first area, fits this subset to obtain a corresponding lane line, and records the number of pixels lying on that lane line. The above steps are repeated to obtain multiple candidate lane lines, and the candidate with the largest number of pixels is selected, yielding at least one first lane line.
• using the RANSAC algorithm to fit the pixels in the at least one first area may mean fitting the pixels of the first areas in parallel, that is, the RANSAC algorithm fits the pixels of each first area simultaneously and separately.
• An embodiment of the present application provides a lane line detection method that uses the RANSAC algorithm to fit the first areas simultaneously, which can improve the efficiency of lane line detection.
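• A minimal RANSAC sketch for S502 is given below. The polynomial degree, iteration count and inlier tolerance are hypothetical, as is the x = f(y) parameterisation (a common choice for near-vertical lane lines); the patent only specifies random sampling, fitting, counting the pixels on each candidate line, and keeping the candidate with the most pixels.

```python
# Minimal RANSAC sketch under the assumptions stated above.
import numpy as np

def ransac_fit(points, degree=2, iters=200, tol=2.0, seed=0):
    """points: (N, 2) array of (x, y) pixels from one first area. Repeatedly
    sample, fit, count the pixels lying on the candidate lane line, and keep
    the candidate supported by the largest number of pixels."""
    rng = np.random.default_rng(seed)
    xs, ys = points[:, 0].astype(float), points[:, 1].astype(float)
    best, best_support = None, -1
    for _ in range(iters):
        idx = rng.choice(len(points), size=degree + 1, replace=False)
        if np.unique(ys[idx]).size <= degree:
            continue                      # degenerate sample, skip
        coeffs = np.polyfit(ys[idx], xs[idx], degree)          # x as f(y)
        support = int(np.sum(np.abs(np.polyval(coeffs, ys) - xs) < tol))
        if support > best_support:
            best, best_support = coeffs, support
    return best, best_support
```

Because each first area is fitted independently, such calls can run concurrently (for example, one worker per first area), matching the simultaneous, separate fitting described above.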
• the relationship between the first lane lines is then constrained according to the rules followed by lane lines, and a lane line detection result satisfying the constraints is obtained; this avoids problems caused by fitting each area separately, such as excessive lane line curvature, non-parallel lane lines or intersecting lane lines, thereby improving the accuracy of lane line detection.
  • S503 includes:
  • S1301 Determine lane lines satisfying the constraint conditions in the first area N times, and obtain multiple lane lines.
  • the constraint condition satisfied by the lane line may include a rule followed by the lane line.
  • the law followed by the lane line may include at least one of the following: the width between the pixels with the same ordinate in the two adjacent first lane lines satisfies the first range, the curvature of the first lane line satisfies the second range, The distance between two adjacent first lane lines satisfies the third range, and the curvature difference between the two adjacent first lane lines satisfies the fourth range.
• when the width between the pixels with the same ordinate in two adjacent first lane lines does not satisfy the first range, the two lane lines may be too close together or may intersect. Actual lane lines usually conform to vehicle driving rules, and situations such as lane lines being too close or intersecting do not occur, so the obtained lane line detection result can be judged inaccurate. Conversely, when the width between the pixels with the same ordinate in two adjacent first lane lines satisfies the first range, the lane line detection result can be judged accurate, and a second lane line conforming to the rules followed by lane lines is obtained.
• the first range can be set according to the actual application scenario; for example, it can include a value equal or close to the width of a vehicle, or a common lane width value. The first range is not specifically limited in this embodiment of the present application.
• when the curvature of the first lane line does not satisfy the second range, the curvature of the first lane line may be too large. On an actual road, lane lines can be divided by shape into straight lane lines and curved lane lines of different curvatures; lane lines usually conform to vehicle driving rules, and excessive curvature does not occur, so the obtained lane line detection result can be judged inaccurate. When the curvature of the first lane line satisfies the second range, the lane line detection result can be judged accurate, and a second lane line conforming to the rules followed by lane lines is obtained.
• the second range may be set according to the actual application scenario; for example, the second range may include a common lane line curvature value. The second range is not specifically limited in this embodiment of the present application.
  • the third range may be set according to actual application scenarios, for example, the third range may include a value equal to or similar to the width of the vehicle, or a common lane width value, which is not specifically limited in this embodiment of the present application.
• the fourth range may be set according to the actual application scenario; for example, the fourth range may include the curvature difference of common lane lines. The fourth range is not specifically limited in this embodiment of the present application.
  • N is a non-zero natural number, such as 1, 2, 3, etc.
• for example, the RANSAC algorithm is used to detect the pixels in the first area, and a first lane line satisfying the constraint condition is determined in the first area.
  • S1302 Determine a lane line with the largest number of pixels among the plurality of lane lines to obtain a second lane line.
• that is, among the multiple lane lines obtained over the N determinations, the lane line with the largest number of pixels is selected as the second lane line.
• An embodiment of the present application provides a lane line detection method that constrains the relationship between the first lane lines according to the rules followed by lane lines and selects, among the lane lines satisfying the constraint condition, the one with the largest number of pixels as the second lane line, so the lane line detection result is more accurate.
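• The constraint check of S1301 might be sketched as below. The four numeric ranges are illustrative placeholders only; as stated above, the embodiment sets them according to the actual application scenario and does not specifically limit them.

```python
# Illustrative constraint check; the bounds are assumptions for the sketch.
FIRST_RANGE  = (2.5, 4.5)    # width between same-ordinate pixels, in metres
SECOND_RANGE = (0.0, 0.01)   # curvature of one lane line, in 1/m
THIRD_RANGE  = (2.5, 4.5)    # distance between adjacent lane lines, in metres
FOURTH_RANGE = (0.0, 0.002)  # curvature difference of adjacent lane lines

def in_range(value, bounds):
    return bounds[0] <= value <= bounds[1]

def satisfies_constraints(row_widths, curvatures, line_distances):
    """row_widths: widths between same-ordinate pixels of adjacent lines;
    curvatures: one value per first lane line;
    line_distances: one value per adjacent pair of lines."""
    return (all(in_range(w, FIRST_RANGE) for w in row_widths)
            and all(in_range(c, SECOND_RANGE) for c in curvatures)
            and all(in_range(d, THIRD_RANGE) for d in line_distances)
            and all(in_range(abs(a - b), FOURTH_RANGE)
                    for a, b in zip(curvatures, curvatures[1:])))
```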
  • the vehicle driving system may mark the obtained second lane line in the first image, and then output it to the display screen in the vehicle driving system.
• if the first image was obtained by performing inverse perspective transformation on the road picture captured by the camera, the first image containing the lane line detection result can be subjected to perspective transformation and then output to the display screen in the vehicle driving system.
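• As a sketch of this output step, assuming the first image came from an inverse perspective mapping with a known 3x3 homography (the matrix name, colour and thickness are hypothetical), OpenCV can mark the second lane line and warp the annotated image back to the camera view:

```python
# Illustrative display step; requires OpenCV (cv2) and NumPy.
import cv2
import numpy as np

def render_result(first_image, second_lane_points, H_ipm):
    """Mark the second lane line in the first image, then apply the forward
    perspective transform so the result matches the camera view on the
    display screen of the vehicle driving system."""
    annotated = first_image.copy()
    pts = second_lane_points.astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(annotated, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    h, w = first_image.shape[:2]
    # Undo the inverse perspective mapping before display.
    return cv2.warpPerspective(annotated, np.linalg.inv(H_ipm), (w, h))
```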
• the vehicle driving system may combine the lane line detection result with environmental information about the vehicle's surroundings, state information of the vehicle itself and/or state information of other vehicles to derive a driving strategy (such as steering or making a U-turn) that ensures the safety of the vehicle.
  • the vehicle driving system can also send out warning information when the vehicle is about to deviate from its own lane (by means of screen display, voice broadcast or vibration, etc.), and the user can manually intervene according to the warning information to ensure the safety of the vehicle.
• the first image may also be a processed image; for example, the first image may be a grayscale image obtained by processing a road picture, in which case the step of performing grayscale processing on the first image may be omitted from the above steps. Details are not repeated here.
  • FIG. 14 shows a schematic structural diagram of a lane line detection device provided by an embodiment of the present application.
  • the lane line detection device includes: a processing unit 1401 .
  • the processing unit 1401 is used to complete the step of lane line detection.
• the processing unit 1401 is configured to support the lane line detection apparatus in performing S501 to S503 in the above embodiment, or S1001 to S1005, or S1101 to S1103, etc.
  • the lane line detection apparatus may further include: a communication unit 1402 and a storage unit 1403 .
  • the processing unit 1401, the communication unit 1402, and the storage unit 1403 are connected through a communication bus.
  • the storage unit 1403 may include one or more memories, and the memories may be devices in one or more devices or circuits for storing programs or data.
• the storage unit 1403 may exist independently, and is connected to the processing unit 1401 of the lane line detection apparatus through a communication bus.
  • the storage unit 1403 may also be integrated with the processing unit.
  • the lane line detection device can be used in communication equipment, circuits, hardware components or chips.
  • the communication unit 1402 may be an input or output interface, a pin, a circuit, or the like.
• the storage unit 1403 may store computer execution instructions of the method of the terminal device, so that the processing unit 1401 executes the method of the terminal device in the foregoing embodiments.
• the storage unit 1403 may be a register, a cache or a RAM, etc., and the storage unit 1403 may be integrated with the processing unit 1401.
  • the storage unit 1403 may be a ROM or other types of static storage devices that may store static information and instructions, and the storage unit 1403 may be independent of the processing unit 1401 .
• An embodiment of the present application provides a lane line detection device, which includes one or more modules for implementing the method of the steps shown in FIG. 4 to FIG. 13 above; the one or more modules may correspond to the steps of that method.
• for each step of the method, there may be a unit or module in the terminal device that performs that step.
  • a module that performs detection of lane lines may be referred to as a processing module.
  • a module that performs the steps of processing messages or data on the side of the lane line detection device may be referred to as a communication module.
  • FIG. 15 is a schematic structural diagram of a chip 150 provided by an embodiment of the present invention.
  • the chip 150 includes one or more (including two) processors 1510 and a communication interface 1530 .
  • the chip 150 shown in FIG. 15 further includes a memory 1540 , which may include read-only memory and random access memory, and provides operation instructions and data to the processor 1510 .
  • a portion of memory 1540 may also include non-volatile random access memory (NVRAM).
  • memory 1540 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set of them:
  • the corresponding operation is performed by calling the operation instruction stored in the memory 1540 (the operation instruction may be stored in the operating system).
  • a possible implementation manner is: the structure of the chips used by the terminal equipment, the wireless access network device or the session management network element is similar, and different devices may use different chips to realize their respective functions.
  • the processor 1510 controls the operation of the terminal device, and the processor 1510 may also be referred to as a central processing unit (central processing unit, CPU).
  • Memory 1540 may include read-only memory and random access memory, and provides instructions and data to processor 1510 .
  • a portion of memory 1540 may also include non-volatile random access memory (NVRAM).
• the processor 1510, the communication interface 1530, and the memory 1540 are coupled together through the bus system 1520, wherein the bus system 1520 may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus.
  • the various buses are labeled as bus system 1520 in FIG. 15 .
  • the above communication unit may be an interface circuit or a communication interface of the device for receiving signals from other devices.
  • the communication unit is an interface circuit or a communication interface used by the chip to receive or transmit signals from other chips or devices.
  • the methods disclosed in the above embodiments of the present invention may be applied to the processor 1510 or implemented by the processor 1510 .
  • the processor 1510 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method may be completed by an integrated logic circuit of hardware in the processor 1510 or an instruction in the form of software.
• the above-mentioned processor 1510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present invention may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 1540, and the processor 1510 reads the information in the memory 1540, and completes the steps of the above method in combination with its hardware.
  • the communication interface 1530 is configured to perform the steps of receiving and sending the terminal equipment, radio access network device or session management network element in the embodiments shown in FIG. 4-FIG. 13 .
  • the processor 1510 is configured to perform processing steps of the terminal device, the radio access network device or the session management network element in the embodiments shown in FIGS. 4-13 .
  • the instructions stored by the memory for execution by the processor may be implemented in the form of a computer program product.
  • the computer program product can be pre-written in the memory, or downloaded and installed in the memory in the form of software.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
• Computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
• the computer-readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
• The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives (SSDs)), and the like.
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media can include both computer storage media and communication media and also include any medium that can transfer a computer program from one place to another.
  • the storage medium can be any target medium that can be accessed by a computer.
• by way of example and not limitation, the computer-readable medium may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
• also, any connection is properly termed a computer-readable medium. For example, if software is transmitted from a website, server or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL or the wireless technologies such as infrared, radio and microwave are included in the definition of medium.
• Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A lane line detection method and apparatus, relating to the technical field of sensors, and usable for safety protection, assisted driving and autonomous driving. The method comprises the steps of: determining at least one first area according to a first image (S501); obtaining at least one first lane line according to the at least one first area (S502); and determining, according to the at least one first lane line, a second lane line satisfying constraint conditions (S503), the constraint conditions comprising a rule followed by lane lines. In this way, the relationship between the first lane lines is constrained according to the rule followed by lane lines and a lane line detection result satisfying the constraint conditions is obtained, so that problems in the recognized lane lines such as excessively large lane line curvature, non-parallel lane lines or intersecting lane lines can be avoided, thereby improving the accuracy of lane line detection. The advanced driver assistance system (ADAS) capability in autonomous driving or assisted driving is improved, and the lane line detection method and apparatus can be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long term evolution-vehicle (LTE-V) vehicle-to-vehicle communication, and vehicle-to-vehicle (V2V).
PCT/CN2020/122716 2020-10-22 2020-10-22 Procédé et appareil de détection de ligne de voie WO2022082571A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/122716 WO2022082571A1 (fr) 2020-10-22 2020-10-22 Procédé et appareil de détection de ligne de voie
CN202080004827.3A CN112654998B (zh) 2020-10-22 2020-10-22 一种车道线检测方法和装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/122716 WO2022082571A1 (fr) 2020-10-22 2020-10-22 Procédé et appareil de détection de ligne de voie

Publications (1)

Publication Number Publication Date
WO2022082571A1 (fr)

Family

ID=75368435

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122716 WO2022082571A1 (fr) 2020-10-22 2020-10-22 Procédé et appareil de détection de ligne de voie

Country Status (2)

Country Link
CN (1) CN112654998B (fr)
WO (1) WO2022082571A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911574A (zh) * 2024-03-18 2024-04-19 腾讯科技(深圳)有限公司 道路拉直数据处理方法、装置及电子设备

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311635B (zh) * 2022-07-26 2023-08-01 阿波罗智能技术(北京)有限公司 车道线处理方法、装置、设备及存储介质
CN117710795A (zh) * 2024-02-06 2024-03-15 成都同步新创科技股份有限公司 一种基于深度学习的机房线路安全性检测方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217427A (zh) * 2014-08-22 2014-12-17 南京邮电大学 一种交通监控视频中车道线定位方法
CN106529493A (zh) * 2016-11-22 2017-03-22 北京联合大学 一种基于透视图的鲁棒性多车道线检测方法
CN106682646A (zh) * 2017-01-16 2017-05-17 北京新能源汽车股份有限公司 一种车道线的识别方法及装置
JP6384182B2 (ja) * 2013-08-12 2018-09-05 株式会社リコー 道路上の線形指示標識の検出方法及び装置
CN109583365A (zh) * 2018-11-27 2019-04-05 长安大学 基于成像模型约束非均匀b样条曲线拟合车道线检测方法
CN110287779A (zh) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 车道线的检测方法、装置及设备

Also Published As

Publication number Publication date
CN112654998B (zh) 2022-04-15
CN112654998A (zh) 2021-04-13

Similar Documents

Publication Publication Date Title
US10599930B2 (en) Method and apparatus of detecting object of interest
CN112417967B (zh) 障碍物检测方法、装置、计算机设备和存储介质
CN111666921B (zh) 车辆控制方法、装置、计算机设备和计算机可读存储介质
WO2022082571A1 (fr) Procédé et appareil de détection de ligne de voie
US20210150231A1 (en) 3d auto-labeling with structural and physical constraints
US11531892B2 (en) Systems and methods for detecting and matching keypoints between different views of a scene
EP4152204A1 (fr) Procédé de détection de ligne de voie et appareil associé
CN111860227B (zh) 训练轨迹规划模型的方法、装置和计算机存储介质
WO2022104774A1 (fr) Procédé et appareil de détection de cible
US10891795B2 (en) Localization method and apparatus based on 3D color map
US11195064B2 (en) Cross-modal sensor data alignment
US11475628B2 (en) Monocular 3D vehicle modeling and auto-labeling using semantic keypoints
EP4307219A1 (fr) Procédé et appareil de détection de cible tridimensionnelle
CN112753038A (zh) 识别车辆变道趋势的方法和装置
WO2023179027A1 (fr) Procédé et appareil de détection d'obstacle routier, et dispositif et support de stockage
CN112800822A (zh) 利用结构约束和物理约束进行3d自动标记
WO2022082574A1 (fr) Procédé et appareil de détection de ligne de voie
WO2022204905A1 (fr) Procédé et appareil de détection d'obstacles
KR20230140654A (ko) 운전자 보조 시스템 및 운전자 보조 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20958154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20958154

Country of ref document: EP

Kind code of ref document: A1