CN113591720A - Lane departure detection method, apparatus and computer storage medium - Google Patents

Lane departure detection method, apparatus and computer storage medium

Info

Publication number
CN113591720A
Authority
CN
China
Prior art keywords
lane
vehicle
information
road
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110880287.1A
Other languages
Chinese (zh)
Inventor
胡博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202110880287.1A
Publication of CN113591720A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a lane departure detection method, a lane departure detection apparatus and a computer storage medium. The method comprises the following steps: acquiring at least two frames of images containing lane information of a road, wherein the at least two frames of images are captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle; stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road; acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane. In this way, lane departure can be detected in a timely and accurate manner, and the operation is convenient and reliable.

Description

Lane departure detection method, apparatus and computer storage medium
Technical Field
The present invention relates to the field of vehicle driving technologies, and in particular, to a lane departure detection method, apparatus, and computer storage medium.
Background
A conventional lane departure detection method performs lane-level matching between a high-precision map and the perception result of a single front-view frame captured by a single camera. Under normal conditions, such a single-frame front view yields a perception result covering only about ten meters of the current lane; because of the limited shooting angle, information about adjacent lanes cannot be detected accurately. Moreover, a multi-lane road usually contains several lanes with similar or even identical markings, so once the vehicle's positioning result is inaccurate, a conventional method cannot produce an accurate detection result. How to detect in a timely and accurate manner whether a vehicle has departed from its lane therefore remains an open problem.
Disclosure of Invention
In view of the above technical problems, the present application provides a lane departure detection method, apparatus, and computer storage medium that can detect in a timely and accurate manner whether a vehicle has departed from its lane, and that are convenient and reliable to operate.
In order to solve the above technical problem, the present application provides a lane departure detection method, including the steps of:
acquiring at least two frames of images containing lane information of a road; wherein the at least two frames of images are captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle;
stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road;
acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road;
and detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane.
Optionally, there are three image capturing devices, whose shooting areas respectively correspond to a region directly in front of the vehicle, a region to the left front of the vehicle, and a region to the right front of the vehicle, and the acquiring at least two frames of images containing lane information of a road includes:
controlling the image capturing devices to simultaneously capture images of the region directly in front of the vehicle, the region to the left front, and the region to the right front, respectively, to obtain at least three frames of images containing lane information of the road.
Optionally, the stitching the at least two frames of images laterally in order of their capture positions includes:
acquiring at least two frames of top-view maps corresponding to the at least two frames of images;
removing lateral offset information from the at least two frames of top-view maps, taking the driving direction of the vehicle as the longitudinal direction;
and stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image.
Optionally, the acquiring at least two frames of top-view maps corresponding to the at least two frames of images includes:
acquiring internal parameters, external parameters and distortion parameters of the image capturing devices; wherein the internal parameters include at least one of a focal length, an optical center and a distortion parameter, and/or the external parameters include at least one of a pitch angle, a yaw angle and a height above the ground;
and performing inverse perspective conversion on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capturing devices.
Optionally, the stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image includes:
acquiring pose information of the image capturing devices;
and stitching the at least two frames of top-view maps by sequentially offsetting and overlaying them according to the pose information, and/or cropping each frame of top-view map according to the pose information before stitching, to obtain a straight-stitched road image.
Optionally, the determining, based on the resulting straight-stitched road image, first lane information corresponding to the target lane in which the vehicle is located on the road includes:
performing visual perception on the straight-stitched road image to obtain a road feature point bitmap;
performing geometric recovery on the road feature point bitmap according to the pose information, the optical center and the pixel source of each frame of top-view map, to obtain a road feature image;
and acquiring, according to the road feature image and the pose information of the vehicle, the first lane information corresponding to the target lane in which the vehicle is located on the road.
Optionally, the acquiring, according to the positioning information of the vehicle, second lane information corresponding to the positioned lane of the vehicle on the road includes:
acquiring, according to the positioning information of the vehicle, local map information corresponding to the positioning information;
and determining, according to the local map information, the second lane information corresponding to the positioned lane of the vehicle on the road.
Optionally, the detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge whether the vehicle has departed from its lane based on the detection result, includes:
comparing whether the first lane information and the second lane information are consistent;
if they are consistent, judging that the target lane and the positioned lane are the same lane;
and if they are not consistent, judging that the target lane and the positioned lane are not the same lane.
Accordingly, the present application provides a lane departure detection apparatus for performing the above method, comprising a processor and a memory storing a computer program; the steps of the lane departure detection method are implemented when the processor runs the computer program.
Accordingly, the present application provides a computer storage medium having a computer program stored therein, which when executed by a processor, implements the steps of the above-described lane departure detection method.
As described above, the lane departure detection method, apparatus and computer storage medium of the present application comprise: acquiring at least two frames of images containing lane information of a road, the at least two frames of images being captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle; stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road; acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane. In this way, multiple frames of images containing lane information of the road are stitched together, the target lane of the vehicle is determined from the resulting straight-stitched road image, and lane departure is detected in combination with the positioning information of the vehicle, so that lane departure can be detected in a timely and accurate manner with convenient and reliable operation.
Drawings
Fig. 1 is a schematic flow chart of a lane departure detection method according to an embodiment of the present invention;
fig. 2 is a specific flowchart of a lane departure detection method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of images before stitching according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of image stitching according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating visual perception and geometric recovery in an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a process of detecting whether a vehicle deviates from a lane according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a lane departure detection apparatus according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named components, features, or elements in different embodiments of the disclosure may have the same meaning or may have different meanings, their particular meaning should be determined by their interpretation in the specific embodiment or from further context.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms; the terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when", "upon" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination: thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
It should be noted that step numbers such as S101 and S102 are used herein for the purpose of more clearly and briefly describing the corresponding contents, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S102 first and then S101 in specific implementations, but these steps should be within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for convenience of description of the present application and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
Referring to fig. 1, the lane departure detection method provided in the embodiment of the present application may be executed by the lane departure detection apparatus provided in the embodiment of the present application. The apparatus may be implemented in software and/or hardware and may be applied to a cloud server and/or a terminal device; in this embodiment, the method is applied to a vehicle-mounted terminal as an example. The lane departure detection method includes the following steps:
step S101: acquiring at least two frames of images including lane information of a road; wherein the at least two frames of images are simultaneously captured by at least two image capturing devices arranged side by side in a preset direction of the vehicle;
Optionally, the lane information includes, but is not limited to, lane line information, lane marking information, and speed limit information; the image capturing devices may be cameras, vehicle-mounted cameras, and the like, and their number may be two or more. The preset direction may be set according to actual requirements; for example, it may be the direction toward the front of the vehicle or toward the rear of the vehicle. If there are two image capturing devices, their shooting areas respectively correspond to the left and right areas in front of the vehicle; that is, the images captured by the two devices, once seamlessly stitched laterally, should cover the area in front of the vehicle, and the two shooting areas may partially overlap. If there are three image capturing devices, their shooting areas respectively correspond to a region directly in front of the vehicle, a region to the left front, and a region to the right front, and the acquiring at least two frames of images containing lane information of a road includes: controlling the image capturing devices to simultaneously capture images of the region directly in front of the vehicle, the region to the left front, and the region to the right front, respectively, to obtain at least three frames of images containing lane information of the road. It can be understood that, by photographing different areas in front of the vehicle with multiple image capturing devices, not only the information of the vehicle's current driving lane but also the information of the surrounding lanes can be obtained; compared with capturing with a single image capturing device, more useful information is obtained, which further improves the accuracy of lane departure detection.
Step S102: stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road;
Optionally, the stitching the at least two frames of images laterally in order of their capture positions includes: acquiring at least two frames of top-view maps corresponding to the at least two frames of images; removing lateral offset information from the at least two frames of top-view maps, taking the driving direction of the vehicle as the longitudinal direction; and stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image.
Optionally, the acquiring at least two frames of top-view maps corresponding to the at least two frames of images includes: acquiring internal parameters, external parameters and distortion parameters of the image capturing devices, wherein the internal parameters include at least one of a focal length, an optical center and a distortion parameter, and/or the external parameters include at least one of a pitch angle, a yaw angle and a height above the ground; and performing inverse perspective conversion on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capturing devices. It will be appreciated that, in order to capture information such as the lane lines of a road, the image capturing device is typically inclined toward the road surface rather than facing vertically downward (orthographic projection), so the image may need to be corrected into an orthographic form by means of a perspective transformation. The inverse perspective conversion may be performed on the multiple frames of images using an IPM (Inverse Perspective Mapping) algorithm, according to the internal parameters, external parameters and distortion parameters of the image capturing devices. In one embodiment, the internal parameters include at least one of a focal length, an optical center and a distortion parameter. In another embodiment, the external parameters include at least one of a pitch angle, a yaw angle and a height above the ground. In this way, by performing top-view conversion on at least two frames of images captured at the same time by different image capturing devices, orthographically projected top-view maps corresponding to the at least two frames of images can be obtained; that is, based on the parameters of the image capturing devices, actual road data with real dimensions can be obtained by measuring feature points in the images.
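As a concrete illustration, the inverse perspective conversion can be sketched with OpenCV as below. This is a minimal sketch under assumed calibration data: the intrinsic matrix K, the distortion coefficients and the four ground-plane correspondences are placeholders, not values from the patent; in practice the homography follows from the pitch angle, yaw angle and ground height described above.

```python
import cv2
import numpy as np

def to_top_view(img, K, dist_coeffs, src_pts, dst_pts, out_size=(400, 800)):
    """Undistort an image, then warp it to a bird's-eye (orthographic) view."""
    # Remove lens distortion using the internal and distortion parameters.
    undistorted = cv2.undistort(img, K, dist_coeffs)
    # Homography mapping four ground-plane points seen in the image to their
    # top-view positions; derived in practice from the external parameters.
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(undistorted, H, out_size)
```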
Optionally, since the lateral offset information of the road is not needed for visual perception, the at least two frames of top-view maps may be processed laterally before the images are stitched; that is, taking the driving direction of the vehicle as the longitudinal direction, the lateral offset information of a curved road may be removed.
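The lateral-offset removal can be pictured as shifting each row of the top-view map by the centerline offset at that row. The sketch below assumes a per-row offset in pixels is already available (for example from a fitted lane-line polynomial); the offsets would be retained for the later geometric recovery step.

```python
import numpy as np

def straighten(top_view: np.ndarray, row_offset_px: np.ndarray) -> np.ndarray:
    """Shift every row against the road centerline offset so the driving
    direction becomes a straight vertical axis (lateral offset removed)."""
    straightened = np.zeros_like(top_view)
    for row, dx in enumerate(row_offset_px):
        # Roll the row along the width axis by the negative offset.
        straightened[row] = np.roll(top_view[row], -int(round(dx)), axis=0)
    return straightened
```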
Optionally, since different images are captured by different image capturing devices whose mounting positions are fixed, the order of the capture positions of the at least two frames of images is fixed, and the at least two frames of top-view maps can therefore be stitched laterally in that order to obtain a straight-stitched road image. Optionally, the stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image includes: acquiring pose information of the image capturing devices; and stitching the at least two frames of top-view maps by sequentially offsetting and overlaying them according to the pose information, and/or cropping each frame of top-view map according to the pose information before stitching, to obtain a straight-stitched road image. Here, the pose information includes a position and an attitude, the position being three-dimensional coordinates in space and the attitude being a three-dimensional rotation. After the pose information of each image capturing device is calculated, the corresponding images are copied to the specific positions and stitched at the specific angles. The resulting straight-stitched road image contains not only the driving lane information but also the surrounding lane information, so that the wider road image contained in the multiple frames can be visually perceived at a larger physical scale.
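For the lateral stitching itself, a minimal sketch under simplifying assumptions follows: the three top-view maps are assumed to share the same scale and row alignment, and the overlap width derived from the camera poses is reduced to a single fixed pixel count. The offset-and-overlay and crop-then-stitch variants described above both come down to choosing which overlapped columns survive.

```python
import numpy as np

def stitch_lateral(left, center, right, overlap_px=40):
    """Concatenate left/center/right top-view maps in capture-position order,
    trimming the columns that the neighboring camera re-images."""
    left_part = left[:, :-overlap_px]    # drop columns also covered by center
    right_part = right[:, overlap_px:]   # drop columns also covered by center
    return np.hstack([left_part, center, right_part])
```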
Optionally, the determining, based on the resulting straight-stitched road image, first lane information corresponding to the target lane in which the vehicle is located on the road includes: performing visual perception on the straight-stitched road image to obtain a road feature point bitmap; performing geometric recovery on the road feature point bitmap according to the pose information, the optical center and the pixel source of each frame of top-view map, to obtain a road feature image; and acquiring, according to the road feature image and the pose information of the vehicle, the first lane information corresponding to the target lane in which the vehicle is located on the road. Here, the visual perception of the straight-stitched road image may be performed with a preset visual perception model; that is, the straight-stitched road image is used as the input of the visual perception model and the road feature point bitmap is its output, and the visual perception model may be trained in advance on non-real-time images from the same vehicle or from different vehicles. Visual perception of the straight-stitched road image yields a road feature point bitmap containing information such as lane lines, pedestrian crossings, speed limit signs and lane markings. Because the straight-stitched road image used for visual perception is obtained by laterally stitching at least two frames of top-view maps from which the lateral offset information of the road has been removed, the perceived road feature point bitmap lacks that lateral offset information. The road feature point bitmap therefore needs to be geometrically recovered: the pose information, the optical center and the pixel source of each frame of image are used to restore the lateral offset information of the road, yielding the road feature image. It can be understood that, from the road feature image, the number of lanes, the number of lane lines, the position and type of each lane line, and whether the lanes contain lane markings or speed limit signs can all be obtained; then, according to the road feature image and the pose information of the vehicle, the positional relationship between the target lane of the vehicle and each lane in the road can be obtained, that is, the positional relationship between the target lane and each lane line, for example whether the target lane is the lane between the first lane line and the second lane line. Specifically, if the number of lanes contained in the road feature image is smaller than the number of lanes in the road, the at least two frames of images were captured of only part of the lanes in the road; in this case, the first lane information corresponding to the target lane includes the number of surrounding lanes and/or the types of the surrounding lane lines, for example how many lanes lie on the left and right sides of the target lane, the types of the lane lines forming the target lane, and the number and types of the lane lines on its left and right sides.
If the number of lanes contained in the road feature image equals the number of lanes in the road, the at least two frames of images were captured of all the lanes in the road; in this case, which lane of the road the vehicle is in can be determined directly from the road feature image and the pose information of the vehicle, that is, the first lane information corresponding to the target lane is the lane identification of the target lane. In this way, the lane information corresponding to the target lane of the vehicle is determined from the straight-stitched road image, which is convenient to operate and highly accurate.
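When the feature image covers all lanes of the road, the target-lane determination reduces to locating the vehicle among the recovered lane lines. The sketch below is an assumed simplification, not spelled out in the patent: lane-line lateral positions are taken to be expressed in the vehicle frame (vehicle at x = 0, with at least one line on each side).

```python
def target_lane_index(lane_line_x: list) -> int:
    """Return the 0-based index (from the leftmost lane) of the lane the
    vehicle occupies, given lane-line x-positions in the vehicle frame."""
    xs = sorted(lane_line_x)
    # The vehicle sits between the rightmost line with x < 0 and the
    # leftmost line with x > 0, so count the lines to its left.
    return sum(1 for x in xs if x < 0) - 1
```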
Step S103: acquiring, according to the positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road;
Optionally, the acquiring the positioned-lane information of the vehicle on the road according to the positioning information of the vehicle includes: acquiring, according to the positioning information of the vehicle, local map information corresponding to the positioning information; and determining, according to the local map information, the second lane information corresponding to the positioned lane of the vehicle on the road. It can be understood that after the positioning information of the vehicle, such as its coordinates, is determined, the positioning information may be matched against existing map data to obtain the corresponding local map information, for example the map information within a radius of 50 or 100 meters centered on the vehicle; the second lane information of the positioned lane is then determined from the local map information and the positioning information of the vehicle. It should be noted that the second lane information may include one or more of a lane identification (i.e., which numbered lane), the number of surrounding lanes, and the types of the surrounding lane lines, for example how many lanes lie on the left and right sides of the positioned lane, the types of the lane lines forming the positioned lane, and the number and types of the lane lines on its left and right sides.
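A hedged sketch of this step is given below. The LaneInfo record and the nearest_lane map query are hypothetical stand-ins: the patent does not specify the HD-map interface, only that positioning coordinates are matched against local map data.

```python
from dataclasses import dataclass

@dataclass
class LaneInfo:
    lanes_left: int     # lanes to the left of the vehicle's lane
    lanes_right: int    # lanes to the right of the vehicle's lane
    line_types: tuple   # e.g. ("dashed", "solid") for the bounding lines

def second_lane_info(local_map, position) -> LaneInfo:
    """Derive the positioned-lane description from a local map query."""
    lane = local_map.nearest_lane(position)  # hypothetical map API
    return LaneInfo(lanes_left=lane.index,
                    lanes_right=lane.total_lanes - lane.index - 1,
                    line_types=(lane.left_line_type, lane.right_line_type))
```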
Step S104: detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane.
Optionally, the lane information includes at least one of a lane identification, a number of surrounding lanes, and a type of surrounding lane lines, and the detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge whether the vehicle has departed from its lane based on the detection result, includes: comparing whether the first lane information and the second lane information are consistent; if they are consistent, judging that the target lane and the positioned lane are the same lane; and if they are not consistent, judging that the target lane and the positioned lane are not the same lane. Specifically, if the lane information includes lane identifications, consistency of the first and second lane information means that the first lane identification and the second lane identification are the same. If the lane information includes the number of surrounding lanes, consistency means that the distribution and number of the surrounding lanes are the same; for example, if the road is known to have four lanes in total, and it is determined that the target lane has two lanes on its left and one on its right while the positioned lane also has two lanes on its left and one on its right, the first and second lane information are judged consistent. If the lane information includes surrounding lane line types, consistency means that the distribution and number of the surrounding lane line types are the same. It can be understood that when the target lane and the positioned lane are the same lane, the vehicle has not departed from its lane and the positioning of the vehicle is accurate; when they are not the same lane, the vehicle has departed from its lane and the positioning of the vehicle is inaccurate.
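Reusing the LaneInfo sketch above, the consistency check of step S104 can be rendered as a plain field-by-field comparison; this is an illustrative reduction of the comparison rules described in the paragraph above, not the patent's literal implementation.

```python
def is_same_lane(first: "LaneInfo", second: "LaneInfo") -> bool:
    """Consistent surrounding-lane counts and line types imply the target
    lane and the positioned lane are the same lane (no lane departure)."""
    return (first.lanes_left == second.lanes_left
            and first.lanes_right == second.lanes_right
            and first.line_types == second.line_types)
```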
In summary, the lane departure detection method provided by the above embodiment includes: acquiring at least two frames of images containing lane information of a road, the at least two frames being captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle; stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road; acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road; and detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane. In this way, multiple frames of images containing lane information of the road are stitched together, the target lane of the vehicle is determined from the resulting straight-stitched road image, and lane departure is detected in combination with the positioning information of the vehicle, so that lane departure can be detected in a timely and accurate manner with convenient and reliable operation.
The foregoing embodiments will now be described by way of a specific example based on the same inventive concept. In this example, the image capturing device is a camera.
Referring to fig. 2, a specific flowchart of the lane departure detection method according to the embodiment of the present invention is shown, which includes the following steps:
step S201: acquiring images acquired by a plurality of cameras on the road surface of the same road;
referring to fig. 3, in this embodiment, taking an example that a left camera, a main camera and a right camera are used to respectively acquire one frame of image on the road surface of the same road, fig. 3 (a) is taken by the left camera, fig. 3 (b) is taken by the middle camera, and fig. 3 (c) is taken by the right camera.
Step S202: converting each image into a top view, and then merging the top views to obtain a straight-stitched road image;
referring to fig. 4, (a) in fig. 4 is a top view image obtained by top-view converting (a) in fig. 3, (b) in fig. 4 is a top view image obtained by top-view converting (b) in fig. 3, and (c) in fig. 4 is a top view image obtained by top-view converting (c) in fig. 3. Then, the (a), (b) and (c) in fig. 4 are transversely spliced, i.e. merged, to obtain a road straight-spliced image, as shown in fig. 4 (d).
Step S203: visually perceiving the straight-stitched road image to obtain a road feature point bitmap;
here, the road straight figure may be input into a preset visual perception model to obtain a road feature point bitmap including information of a lane line and the like.
Step S204: geometrically recovering the road feature point bitmap to obtain a perception result of the lane lines around the vehicle;
here, the road feature point bitmap may be geometrically restored according to the pose information, the optical center, and the pixel source of each frame of the overhead image, so as to obtain the sensing result of the lane line around the vehicle.
Fig. 5 is a schematic diagram of visual perception and geometric recovery. For the curved road shown in image (a) of fig. 5, the road curvature information is first removed from the multiple frames captured by the vehicle-mounted camera while driving, and the frames are then straight-stitched in sequence to obtain image (b) of fig. 5. Visual perception is performed on image (b) of fig. 5 to obtain the road feature point bitmap, image (c) of fig. 5. Finally, the lateral curvature information in the road feature point bitmap is geometrically recovered according to the pose information, the optical center and the pixel source of each frame, yielding the feature image with the true curvature angle shown in image (d) of fig. 5.
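Geometric recovery is then the inverse of the straightening step sketched earlier: the stored per-row lateral offsets are re-applied to the perceived feature point bitmap. Again a hedged numpy sketch, assuming the offsets used by straighten() were retained.

```python
import numpy as np

def recover_curvature(feature_map: np.ndarray, row_offset_px: np.ndarray) -> np.ndarray:
    """Re-apply the per-row lateral offsets removed before stitching,
    restoring the road's true curvature in the feature point bitmap."""
    recovered = np.zeros_like(feature_map)
    for row, dx in enumerate(row_offset_px):
        recovered[row] = np.roll(feature_map[row], int(round(dx)), axis=0)
    return recovered
```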
Step S205: determining first lane information of the vehicle in the perception result according to the perception result of the lane lines around the vehicle and the pose information of the vehicle;
step S205: and detecting whether the vehicle deviates from a lane or not according to the first lane information and second lane information where the vehicle is located in the map data.
Here, the second lane information of the vehicle in the map data may be obtained from the positioning data of the vehicle: for example, the local high-precision map data around the vehicle's position is obtained first, and the lane position corresponding to the positioning data is then obtained by comparing the coordinates of the positioning data with the coordinates in the high-precision map data.
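The coordinate comparison against high-precision map data can be pictured as a nearest-centerline search; lane.center_xy below is a hypothetical polyline field, since the patent does not name the map schema.

```python
import numpy as np

def nearest_lane(lanes, position_xy):
    """Pick the lane whose centerline polyline passes closest to the
    positioned point."""
    def min_dist(lane):
        pts = np.asarray(lane.center_xy)  # (N, 2) centerline points
        return np.min(np.linalg.norm(pts - np.asarray(position_xy), axis=1))
    return min(lanes, key=min_dist)
```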
In the lane departure judging process, whether the images captured by the multiple cameras cover all lanes of the road can be judged from the real-time position information. If so, the lane position of the vehicle is analyzed directly from the captured images to obtain the current specific lane information; if not, the lane position of the vehicle is further determined by analyzing which lanes were captured, based on the lanes in the captured images or other information on the road.
Referring to fig. 6, a specific flow for detecting whether the vehicle has departed from its lane according to the embodiment of the present invention includes the following steps (a minimal code sketch of this decision flow follows the steps):
step S301: judging whether the lane number characteristics corresponding to the first lane information and the second lane information are consistent, if not, executing a step S302, otherwise, executing a step S304;
for example, if it is known that there are two lanes on the left side and one lane on the right side of the vehicle according to the first lane information, and it is known that there are two lanes on the left side and one lane on the right side of the vehicle according to the first lane information, it may be determined that the number characteristics of the lanes corresponding to the first lane information and the second lane information are the same, otherwise, it is determined that the number characteristics are not the same.
Step S302: judging whether the lane-line-type features corresponding to the first lane information and the second lane information are consistent; if not, executing step S303; otherwise, executing step S304;
it can be understood that when the first lane information is used to know that 1 lane is on both sides of the vehicle, and the second lane information is used to know that 2 lanes are on both sides of the vehicle, it indicates that the first lane information may only include partial information of a road, and at this time, the lane type characteristics corresponding to the first lane information and the second lane information may be compared, for example, whether the types of the surrounding lane lines are both a dotted line, a solid line, and the like.
Step S303: determining that the vehicle has departed from its lane;
Step S304: determining that the vehicle has not departed from its lane.
In summary, in the lane departure detection method provided by the above embodiment, the actual lane position of the vehicle is determined by merging the images acquired by multiple cameras, and this actual lane position is compared with the lane position obtained from the vehicle's positioning so as to judge whether the positioning is correct; the method is convenient to operate and highly accurate.
Based on the same inventive concept as the previous embodiments, an embodiment of the present invention provides a lane departure detection apparatus. As shown in fig. 7, the lane departure detection apparatus includes a processor 310 and a memory 311 storing a computer program. The single processor 310 illustrated in fig. 7 does not mean that the number of processors is one; it merely indicates the positional relationship of the processor 310 relative to other devices, and in practical applications the number of processors 310 may be one or more. The same applies to the memory 311 shown in fig. 7: it likewise only indicates the positional relationship of the memory 311 relative to other devices, and in practical applications the number of memories 311 may be one or more. The lane departure detection method applied to the above lane departure detection apparatus is implemented when the processor 310 runs the computer program.
The lane departure detection apparatus may further include at least one network interface 312. The various components of the lane departure detection apparatus are coupled together by a bus system 313. It will be appreciated that the bus system 313 is used to enable communications among these components. In addition to a data bus, the bus system 313 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 313 in fig. 7.
The memory 311 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferromagnetic random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface storage may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 311 described in the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 311 in the embodiment of the present invention is used to store various types of data to support the operation of the lane departure detection apparatus. Examples of such data include: any computer program for operating on the lane departure detection apparatus, such as an operating system and application programs; contact data; phonebook data; messages; pictures; videos; and the like. The operating system includes various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs may include various applications, such as a media player and a browser, for implementing various application services. The program implementing the method of the embodiment of the present invention may be included in an application program.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a ferromagnetic random access memory (FRAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be a device including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant. The computer program stored in the computer storage medium, when executed by a processor, implements the lane departure detection method applied to the above lane departure detection apparatus. For the specific steps performed when the computer program is executed by the processor, please refer to the description of the embodiment shown in fig. 1, which will not be repeated here.
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
As used herein, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, covering not only the elements listed but also other elements not expressly listed.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A lane departure detection method, characterized by comprising:
acquiring at least two frames of images containing lane information of a road; wherein the at least two frames of images are captured simultaneously by at least two image capturing devices arranged side by side in a preset direction of the vehicle;
stitching the at least two frames of images laterally in order of their capture positions, and determining, based on the resulting straight-stitched road image, first lane information corresponding to a target lane in which the vehicle is located on the road;
acquiring, according to positioning information of the vehicle, second lane information corresponding to a positioned lane of the vehicle on the road;
and detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge, based on the detection result, whether the vehicle has departed from its lane.
2. The lane departure detection method according to claim 1, wherein there are three image capturing devices whose shooting areas respectively correspond to a region directly in front of the vehicle, a region to the left front of the vehicle, and a region to the right front of the vehicle, and the acquiring at least two frames of images containing lane information of a road comprises:
controlling the image capturing devices to simultaneously capture images of the region directly in front of the vehicle, the region to the left front, and the region to the right front, respectively, to obtain at least three frames of images containing lane information of the road.
3. The lane departure detection method according to claim 1 or 2, wherein the stitching the at least two frames of images laterally in order of their capture positions comprises:
acquiring at least two frames of top-view maps corresponding to the at least two frames of images;
removing lateral offset information from the at least two frames of top-view maps, taking the driving direction of the vehicle as the longitudinal direction;
and stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image.
4. The lane departure detection method according to claim 3, wherein the acquiring at least two frames of top-view maps corresponding to the at least two frames of images comprises:
acquiring internal parameters, external parameters and distortion parameters of the image capturing devices; wherein the internal parameters include at least one of a focal length, an optical center and a distortion parameter, and/or the external parameters include at least one of a pitch angle, a yaw angle and a height above the ground;
and performing inverse perspective conversion on the at least two frames of images according to the internal parameters, external parameters and distortion parameters of the image capturing devices.
5. The lane departure detection method according to claim 3, wherein the stitching the at least two frames of top-view maps laterally in order of their capture positions to obtain a straight-stitched road image comprises:
acquiring pose information of the image capturing devices;
and stitching the at least two frames of top-view maps by sequentially offsetting and overlaying them according to the pose information, and/or cropping each frame of top-view map according to the pose information before stitching, to obtain a straight-stitched road image.
6. The method according to claim 1, wherein the determining, based on the resulting straight-stitched road image, the first lane information corresponding to the target lane in which the vehicle is located on the road comprises:
performing visual perception on the straight-stitched road image to obtain a road feature point bitmap;
performing geometric recovery on the road feature point bitmap according to the pose information, the optical center and the pixel source of each frame of top-view map, to obtain a road feature image;
and acquiring, according to the road feature image and the pose information of the vehicle, the first lane information corresponding to the target lane in which the vehicle is located on the road.
7. The lane departure detection method according to claim 1, wherein the acquiring, according to the positioning information of the vehicle, the second lane information corresponding to the positioned lane of the vehicle on the road comprises:
acquiring, according to the positioning information of the vehicle, local map information corresponding to the positioning information;
and determining, according to the local map information, the second lane information corresponding to the positioned lane of the vehicle on the road.
8. The method of claim 1, wherein the lane information includes at least one of a lane identification, a number of surrounding lanes, and a type of surrounding lane lines, and the detecting, according to the first lane information and the second lane information, whether the target lane and the positioned lane are the same lane, so as to judge whether the vehicle has departed from its lane based on the detection result, comprises:
comparing whether the first lane information and the second lane information are consistent;
if they are consistent, judging that the target lane and the positioned lane are the same lane;
and if they are not consistent, judging that the target lane and the positioned lane are not the same lane.
9. A lane departure detection apparatus, characterized by comprising: a processor and a memory storing a computer program which, when executed by the processor, implements the lane departure detection method of any of claims 1 to 8.
10. A computer storage medium, characterized in that a computer program is stored which, when executed by a processor, implements the lane departure detection method of any one of claims 1 to 8.
CN202110880287.1A 2021-08-02 2021-08-02 Lane departure detection method, apparatus and computer storage medium Pending CN113591720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110880287.1A CN113591720A (en) 2021-08-02 2021-08-02 Lane departure detection method, apparatus and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110880287.1A CN113591720A (en) 2021-08-02 2021-08-02 Lane departure detection method, apparatus and computer storage medium

Publications (1)

Publication Number Publication Date
CN113591720A 2021-11-02

Family

ID=78253683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110880287.1A Pending CN113591720A (en) 2021-08-02 2021-08-02 Lane departure detection method, apparatus and computer storage medium

Country Status (1)

Country Link
CN (1) CN113591720A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101992778A (en) * 2010-08-24 2011-03-30 上海科世达-华阳汽车电器有限公司 Lane deviation early warning and driving recording system and method
CN107292214A (en) * 2016-03-31 2017-10-24 比亚迪股份有限公司 Deviation detection method, device and vehicle
CN106355951A (en) * 2016-09-22 2017-01-25 深圳市元征科技股份有限公司 Device and method for controlling vehicle traveling
CN106347363A (en) * 2016-10-12 2017-01-25 深圳市元征科技股份有限公司 Lane keeping method and lane keeping device
US20190340732A1 (en) * 2016-12-29 2019-11-07 Huawei Technologies Co., Ltd. Picture Processing Method and Apparatus
CN107512264A (en) * 2017-07-25 2017-12-26 武汉依迅北斗空间技术有限公司 The keeping method and device of a kind of vehicle lane
CN110329253A (en) * 2018-03-28 2019-10-15 比亚迪股份有限公司 Lane Departure Warning System, method and vehicle
CN108537197A (en) * 2018-04-18 2018-09-14 吉林大学 A kind of lane detection prior-warning device and method for early warning based on deep learning
CN109887032A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of vehicle positioning method and system based on monocular vision SLAM
CN111553319A (en) * 2020-05-14 2020-08-18 北京百度网讯科技有限公司 Method and device for acquiring information
CN112200917A (en) * 2020-09-30 2021-01-08 北京零境科技有限公司 High-precision augmented reality method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117575878A (en) * 2023-11-16 2024-02-20 杭州众诚咨询监理有限公司 Intelligent management method and device for traffic facility asset data, electronic equipment and medium
CN117575878B (en) * 2023-11-16 2024-04-26 杭州众诚咨询监理有限公司 Intelligent management method and device for traffic facility asset data, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN111160172B (en) Parking space detection method, device, computer equipment and storage medium
WO2020102944A1 (en) Point cloud processing method and device and storage medium
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
CN110516665A (en) Identify the neural network model construction method and system of image superposition character area
CN111539484B (en) Method and device for training neural network
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN114898313B (en) Method, device, equipment and storage medium for generating bird's eye view of driving scene
CN111932627B (en) Marker drawing method and system
CN114842446A (en) Parking space detection method and device and computer storage medium
CN112753038A (en) Method and device for identifying lane change trend of vehicle
CN114494466B (en) External parameter calibration method, device and equipment and storage medium
CN113591720A (en) Lane departure detection method, apparatus and computer storage medium
CN112001357B (en) Target identification detection method and system
CN113284194A (en) Calibration method, device and equipment for multiple RS (remote sensing) equipment
CN117184075A (en) Vehicle lane change detection method and device and computer readable storage medium
CN112685527A (en) Method, device and electronic system for establishing map
JP2011170400A (en) Program, method, and apparatus for identifying facility
CN110827340B (en) Map updating method, device and storage medium
CN115719442A (en) Intersection target fusion method and system based on homography transformation matrix
CN115601336A (en) Method and device for determining target projection and electronic equipment
CN111460854A (en) Remote target detection method, device and system
CN113147746A (en) Method and device for detecting ramp parking space
CN115249345A (en) Traffic jam detection method based on oblique photography three-dimensional live-action map
CN114677458A (en) Road mark generation method and device for high-precision map, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination