CN116486377B - Method and device for generating drivable area - Google Patents

Method and device for generating drivable area

Info

Publication number
CN116486377B
CN116486377B (granted publication of application CN202310462185.7A; earlier published as CN116486377A)
Authority
CN
China
Prior art keywords
line segment
detection frame
determining
endpoint
obstacle
Prior art date
Legal status
Active
Application number
CN202310462185.7A
Other languages
Chinese (zh)
Other versions
CN116486377A (en)
Inventor
万韶华
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310462185.7A
Publication of CN116486377A
Application granted
Publication of CN116486377B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method and a device for generating a drivable area, and relates to the technical field of automatic driving. The method for generating the drivable area comprises the following steps: acquiring a target image acquired by a vehicle; performing obstacle recognition on the target image to obtain a target detection frame where an obstacle ground line is located and a line segment representation of the target detection frame; and generating a drivable area of the vehicle based on the line segment representation of the target detection frame. In the embodiments of the application, the drivable area is determined by performing obstacle recognition on images acquired in real time, which guarantees that the area is actually drivable; the line segment representation of the target detection frame is produced during obstacle recognition, the position of the obstacle ground line in the target image can be located accurately and rapidly based on the line segment representation, and the drivable area of the vehicle is therefore obtained with high efficiency and accuracy.

Description

Method and device for generating drivable area
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for generating a drivable area in automatic driving.
Background
With the development of intelligent vehicle technologies by automobile manufacturers, driver assistance technologies at various levels are advancing rapidly. Sensing obstacles in the surrounding environment based on sensor devices such as visual sensors, laser range finders, ultrasonic sensors and infrared sensors is one of the key technologies for automatic vehicle navigation, and also one of the key capabilities for automatic driving and automatic parking.
Whether a vehicle is parking automatically or parking autonomously, it needs to sense in real time the positions of obstacles in the surrounding environment and the free space of the drivable area. Local path planning relies mainly on this free space, so free space detection is a key technology in the autonomous parking process of the vehicle.
Disclosure of Invention
The embodiment of the application provides a method and a device for generating a drivable area.
An embodiment of a first aspect of the present application provides a method for generating a drivable region, including:
acquiring a target image acquired by a vehicle;
performing obstacle recognition on the target image to obtain a target detection frame where an obstacle ground line is located and a line segment representation of the target detection frame;
and generating a drivable area of the vehicle based on the line segment representation of the target detection frame.
In one embodiment of the present application, the generating the drivable area of the vehicle based on the line segment representation includes:
determining the offset corresponding to the target detection frame from the line segment representation;
determining a position parameter of a line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame;
determining a first global position of the line segment endpoint in the target image according to the offset and the position parameter;
and generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle ground line is located.
In one embodiment of the present application, the determining, according to the offset and the location parameter, a first global location of the line segment endpoint in the target image includes:
determining a second global position of the detection frame according to the offset;
and determining a first global position of the line segment endpoint according to the position parameter and the second global position.
In one embodiment of the present application, the determining, according to the location parameter and the second global location, the first global location of the line segment endpoint includes:
determining a first coordinate value of the line segment end point under a detection frame coordinate system according to the position parameter;
and determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
In one embodiment of the present application, the position parameter is a first coordinate value of the line segment endpoint in a detection frame coordinate system.
In one embodiment of the present application, the method further comprises:
acquiring a prediction endpoint in the target detection frame;
and performing straight line fitting according to the predicted end points, acquiring intersection points of the fitted straight line and the boundary of the target detection frame, and taking the intersection points as the line segment end points.
In one embodiment of the present application, the determining, according to the location parameter, a first coordinate value of the line segment endpoint in a detection frame coordinate system includes:
if the position parameter is a value of the line segment endpoint in a set boundary range, determining the boundary of the line segment endpoint according to the value, and determining an increment value corresponding to the line segment endpoint based on the boundary point value of the boundary and the value;
determining a second coordinate value of the line segment endpoint according to the boundary point value and the increment value;
and obtaining a first coordinate value of the line segment endpoint according to the area of the detection frame and the second coordinate value.
In one embodiment of the present application, the determining, according to the location parameter, a first coordinate value of the line segment endpoint in a detection frame coordinate system includes:
and if the position parameter is the offset angle of the line segment endpoint, performing trigonometric function operation on the offset angle to obtain a first coordinate value of the line segment endpoint.
In an embodiment of the present application, the identifying the obstacle for the target image, obtaining a target detection frame where the ground line of the obstacle is located and a segment representation of the target detection frame, includes:
inputting the target image into a pre-trained obstacle recognition model, and outputting a prediction label of a candidate detection frame and a line segment representation of the candidate detection frame by the obstacle recognition model;
and determining the target detection frame from the candidate detection frames according to the prediction labels of the candidate detection frames.
In one embodiment of the present application, the training process of the obstacle recognition model includes:
acquiring a sample image, wherein the sample image comprises a detection frame, a mark label of the detection frame and a reference line segment representation of the detection frame;
inputting the sample image into an obstacle recognition model to be trained for training, and obtaining a loss function of the obstacle recognition model;
and carrying out model parameter adjustment on the obstacle recognition model based on the loss function, and continuing training by using the next sample image until training is finished to obtain the obstacle recognition model.
In one embodiment of the present application, the inputting the sample image into an obstacle recognition model to be trained to train, to obtain a loss function of the obstacle recognition model, includes:
predicting the sample image by the obstacle recognition model to obtain a prediction endpoint in the detection frame;
acquiring line segment representations of the prediction endpoints under a detection frame coordinate system;
and obtaining the loss function based on the line segment representation of the prediction endpoint under the detection frame coordinate system, the prediction label, the marking label and the reference line segment representation.
In one embodiment of the present application, the inputting the sample image into an obstacle recognition model to be trained to train, to obtain a loss function of the obstacle recognition model, includes:
predicting the sample image by the obstacle recognition model to obtain a predicted line segment representation and a predicted label corresponding to the detection frame;
and obtaining the loss function according to the predicted line segment representation and the predicted label, and the marking label and the reference line segment representation.
In the embodiments of the application, obstacle recognition is performed on the target image acquired by the vehicle to obtain the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame, and the position of the obstacle ground line in the target image can be obtained based on the line segment representation. Since the obstacle ground line is the boundary between the obstacles and the drivable area, the drivable area of the vehicle is obtained once the obstacle ground line in the target image is obtained. The drivable area is determined by performing obstacle recognition on images acquired in real time, which guarantees that the area is actually drivable; the line segment representation of the target detection frame is produced in the obstacle recognition process, the position of the obstacle ground line in the target image can be located accurately and rapidly based on the line segment representation, and the drivable area is determined from the obstacle ground line, so the drivable area of the vehicle is obtained with high efficiency and accuracy.
An embodiment of a second aspect of the present application provides a device for generating a drivable area, including:
the first acquisition module is used for acquiring a target image acquired by the vehicle;
the second acquisition module is used for performing obstacle recognition on the target image and acquiring a target detection frame where the obstacle ground line is located and a line segment representation of the target detection frame;
and the area generating module is used for generating a drivable area of the vehicle based on the line segment representation of the target detection frame.
In one embodiment of the present application, the area generating module is configured to:
determining the offset corresponding to the target detection frame from the line segment representation;
determining a position parameter of a line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame;
determining a first global position of the line segment endpoint in the target image according to the offset and the position parameter;
and generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle ground line is located.
In one embodiment of the present application, the area generating module is configured to:
determining a second global position of the detection frame according to the offset;
and determining a first global position of the line segment endpoint according to the position parameter and the second global position.
In one embodiment of the present application, the area generating module is configured to:
determining a first coordinate value of the line segment end point under a detection frame coordinate system according to the position parameter;
and determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
In one embodiment of the present application, the position parameter is a first coordinate value of the line segment endpoint in a detection frame coordinate system.
In one embodiment of the present application, the apparatus further comprises:
acquiring a prediction endpoint in the target detection frame;
and performing straight line fitting according to the predicted end points, acquiring intersection points of the fitted straight line and the boundary of the target detection frame, and taking the intersection points as the line segment end points.
In one embodiment of the present application, the area generating module is configured to:
if the position parameter is a value of the line segment endpoint in a set boundary range, determining the boundary of the line segment endpoint according to the value, and determining an increment value corresponding to the line segment endpoint based on the boundary point value of the boundary and the value;
determining a second coordinate value of the line segment endpoint according to the boundary point value and the increment value;
and obtaining a first coordinate value of the line segment endpoint according to the area of the detection frame and the second coordinate value.
In one embodiment of the present application, the area generating module is configured to:
and if the position parameter is the offset angle of the line segment endpoint, performing trigonometric function operation on the offset angle to obtain a first coordinate value of the line segment endpoint.
In one embodiment of the present application, the second obtaining module is configured to:
inputting the target image into a pre-trained obstacle recognition model, and outputting a prediction label of a candidate detection frame and a line segment representation of the candidate detection frame by the obstacle recognition model;
and determining the target detection frame from the candidate detection frames according to the prediction labels of the candidate detection frames.
In one embodiment of the present application, the second obtaining module is configured to:
acquiring a sample image, wherein the sample image comprises a detection frame, a mark label of the detection frame and a reference line segment representation of the detection frame;
inputting the sample image into an obstacle recognition model to be trained for training, and obtaining a loss function of the obstacle recognition model;
and carrying out model parameter adjustment on the obstacle recognition model based on the loss function, and continuing training by using the next sample image until training is finished to obtain the obstacle recognition model.
In one embodiment of the present application, the second obtaining module is configured to:
predicting the sample image by the obstacle recognition model to obtain a prediction endpoint in the detection frame;
acquiring line segment representations of the prediction endpoints under a detection frame coordinate system;
and obtaining the loss function based on the line segment representation of the prediction endpoint under the detection frame coordinate system, the prediction label, the marking label and the reference line segment representation.
In one embodiment of the present application, the second obtaining module is configured to:
predicting the sample image by the obstacle recognition model to obtain a predicted line segment representation and a predicted label corresponding to the detection frame;
and obtaining the loss function according to the predicted line segment representation and the predicted label, and the marking label and the reference line segment representation.
An embodiment of a third aspect of the present application provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to execute the instructions to implement a method for generating a drivable area according to an embodiment of the first aspect of the present application.
An embodiment of a fourth aspect of the present application provides a vehicle, including or connected to an electronic device according to an embodiment of the third aspect of the present application.
Embodiments of a fifth aspect of the present application provide a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method provided by the embodiments of the first aspect of the present application.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a method for generating a drivable region according to an embodiment of the present application;
fig. 1a is a schematic diagram illustrating generation of a drivable area according to an embodiment of the present application;
fig. 2 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a line segment endpoint distribution according to an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of another line segment endpoint distribution according to an embodiment of the present disclosure;
FIG. 3c is a schematic diagram of another line segment endpoint distribution according to an embodiment of the present disclosure;
fig. 4 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
FIG. 4a is a schematic diagram of each boundary data range of a target detection frame according to an embodiment of the present disclosure;
fig. 5 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
FIG. 5a is a schematic diagram of another line segment endpoint distribution according to an embodiment of the present disclosure;
fig. 6 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
FIG. 6a is a schematic diagram illustrating determining line segment endpoints based on predicted endpoints according to an embodiment of the present application;
fig. 7 is a flowchart of another method for generating a drivable region according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a device for generating a drivable area according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 12 is a schematic functional block diagram of a vehicle according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present application. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the present application as detailed in the accompanying claims.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the application. As used in this application in the examples and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of embodiments of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the like or similar elements throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
It should be noted that the method for generating a drivable area according to any one of the embodiments of the present application may be performed alone or in combination with possible implementations in other embodiments, and may also be combined with any technical solution of the related art. A method for generating a drivable area and an apparatus thereof according to embodiments of the present application are described below with reference to the accompanying drawings.
The method and the device are applicable to generating the drivable area in the automatic parking process of a vehicle. Automatic parking relies mainly on the drivable area for path planning, so accurate acquisition of the drivable area is one of the key factors in successful automatic parking.
Fig. 1 is a flow chart of a method for generating a drivable region according to an embodiment of the present application. As shown in fig. 1, the method includes, but is not limited to, the steps of:
s101, acquiring a target image acquired by a vehicle.
In some implementations, the target image may be acquired using a camera device carried by the vehicle itself. It will be appreciated that the target image is an image used to assist autonomous parking of the vehicle, so the target image typically presents the environment surrounding the location where the vehicle is located.
In some implementations, in order to facilitate identification of the drivable area, the target image may be a Bird's Eye View (BEV) image. A BEV image lends itself well to ground plane estimation, road segmentation and 3D object detection, so the area near the vehicle can be presented completely in the target image and the influence of camera blind spots on generation of the drivable area is reduced.
S102, performing obstacle recognition on the target image, and acquiring the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame.
Since various types of obstacles exist in the environment around the vehicle, such as static obstacles (stationary vehicles, pillars, trees and fences) and dynamic obstacles (moving vehicles and pedestrians), all of these obstacles need to be avoided during normal autonomous parking of the vehicle.
In the obstacle recognition process, region features in the target image need to be extracted, and different features correspond to different types of detection frames; further, from all types of detection frames, the target detection frame in which the obstacle ground line is located is determined.
Alternatively, the region features may be based on the actual situation of the region; for example, a region where two obstacles adjoin corresponds to one type of region feature, and a region where an obstacle adjoins the road corresponds to another type of region feature.
In the embodiment of the application, the detection frame corresponding to a region where an obstacle adjoins the road is used as the target detection frame where the obstacle ground line is located. The target detection frame where the obstacle ground line is located assists the line segment representation of the obstacle ground line; the line segment representation of the target detection frame is in fact a line segment representation of the obstacle ground line within that detection frame, and it is used to determine the position of the ground line segment in the target image.
Alternatively, the line segment representation may be a position representation, a length representation, or a data representation under a particular rule.
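As a concrete illustration only, the detection output described above could be held in a small record such as the following Python sketch. The patent does not prescribe any data structure, so the field names and types here are assumptions introduced for illustration.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class GroundLineDetection:
    # One target detection frame together with the line segment
    # representation of the obstacle ground line inside it (illustrative
    # fields, not the patent's notation).
    box_offset: Tuple[float, float]       # offset locating the detection frame in the target image
    box_size: Tuple[float, float]         # width and height of the detection frame
    endpoint_params: Tuple[float, float]  # position parameters of the two line segment endpoints
    label: str = "ground_line"            # predicted label of the detection frame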
S103, generating a drivable area of the vehicle based on the line segment representation of the target detection frame.
After the line segment representation of the target detection frame is acquired, a drivable area of the vehicle is generated based on it. For example, the actual line segment position is located in the target image according to the line segment representation of each target detection frame, and the complete obstacle ground line is obtained from all the line segment positions in the target image. The obstacle ground line is the boundary between the drivable area and the obstacles, so it reflects both the positions of the obstacles and the drivable area in which the vehicle avoids them. Fig. 1a shows obstacles such as stationary vehicles, buildings and fences; all of the obstacles shown in fig. 1a need to be avoided when the vehicle parks autonomously. The intersection positions of all the obstacles with the road constitute the obstacle ground line. It is understood that the obstacle ground line is also the boundary between the drivable area and the obstacles, so the drivable area of the vehicle is generated once the entire obstacle ground line is obtained.
In the embodiments of the application, obstacle recognition is performed on the target image acquired by the vehicle to obtain the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame, and the position of the obstacle ground line in the target image can be obtained based on the line segment representation. Since the obstacle ground line is the boundary between the obstacles and the drivable area, the drivable area of the vehicle is obtained once the obstacle ground line in the target image is obtained. The drivable area is determined by performing obstacle recognition on images acquired in real time, which guarantees that the area is actually drivable; the line segment representation of the target detection frame is produced in the obstacle recognition process, the position of the obstacle ground line in the target image can be located accurately and rapidly based on the line segment representation, and the drivable area is determined from the obstacle ground line, so the drivable area of the vehicle is obtained with high efficiency and accuracy.
Fig. 2 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 2, the method includes, but is not limited to, the steps of:
S201, acquiring a target image acquired by a vehicle.
In the embodiment of the present application, the implementation manner of step S201 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S202, performing obstacle recognition on the target image, and acquiring the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame.
In the embodiment of the present application, the line segment representation may include an offset of the target detection frame and a position parameter corresponding to an end point of the line segment.
The implementation manner of step S202 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not repeated herein.
S203, determining the offset corresponding to the target detection frame from the line segment representation.
Optionally, determining an offset corresponding to the target detection frame where the line segment is located based on the line segment representation; the offset may reflect where the target detection frame is projected into the target image, that is, a position corresponding to the target detection frame in the target image may be determined based on the line segment representation.
S204, determining the position parameter of the line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame.
In some implementations, the line segment representations may reflect the location parameters at which the line segment endpoints are located.
Optionally, the position parameter may be a coordinate value of the line segment endpoint in the corresponding detection frame coordinate system, which is used to reflect the position information of the line segment endpoint, and the position parameter may be represented by an angle, a coordinate, specific data, and the like.
Alternatively, the line segment endpoints may be the intersections of the line segment with the boundaries of the target detection frame, that is, the line segment endpoints are predicted points on the boundary of the target detection frame.
S205, determining a first global position of the line segment endpoint in the target image according to the offset and the position parameter.
In some implementations, to facilitate the representation of the location of the target detection frame, an image coordinate system is constructed based on the target image, and the location of the target detection frame in the target image can be quickly located according to the offset corresponding to the target detection frame.
In other implementations, to facilitate the representation of the position of the line segment end points on the target detection frame, a detection frame coordinate system is constructed based on the target detection frame, and the position information of the line segment end points on the target detection frame is directly located according to the position parameters of the line segment end points.
Alternatively, the image coordinate system may be constructed with the upper left corner of the target image as the origin of coordinates, and the detection frame coordinate system may be constructed with the upper left corner of the detection frame as the origin of coordinates.
Further, a second global position of the target detection frame in the target image may be determined based on the offset of the target detection frame and the image coordinate system.
Further, based on the position parameter of the line segment endpoint on the corresponding target detection frame and the detection frame coordinate system, the first coordinate value of the line segment endpoint on the target detection frame, that is, the position of the line segment endpoint on the target detection frame, can be determined.
Alternatively, the sum of the first coordinate value corresponding to the line segment endpoint and the second global position of the target detection frame may be calculated, and this sum may be determined as the first global position of the line segment endpoint in the target image.
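A minimal Python sketch of this addition step follows, assuming that both the image coordinate system and the detection frame coordinate system take the upper-left corner as the origin, as described above; the function name and tuple layout are illustrative rather than the patent's notation.

def endpoint_global_position(box_offset, endpoint_local):
    # First global position of a line segment endpoint: the sum of its first
    # coordinate value (detection frame coordinate system) and the second
    # global position of the detection frame (image coordinate system).
    bx, by = box_offset
    ex, ey = endpoint_local
    return (bx + ex, by + ey)

# e.g. a detection frame located at (120, 340) with an endpoint at (0, 18)
# inside it gives a first global position of (120, 358).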
S206, generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle ground line is located.
Since the obstacle ground line is formed by all the line segments, once the first global positions corresponding to the line segment endpoints are known, that is, the first global position corresponding to each line segment in the obstacle ground line is known, the obstacle ground line in the target image, and therefore the drivable area of the vehicle in the target image, is generated from the first global positions of all the line segments.
Alternatively, the obstacle ground line may be obtained by connecting all line segment end points.
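The following Python sketch shows one possible way of chaining the line segment endpoints into the obstacle ground line; the left-to-right ordering is an assumption, since the patent only states that the endpoints are connected.

def build_obstacle_ground_line(segments):
    # 'segments' is assumed to be a list of ((x1, y1), (x2, y2)) endpoint
    # pairs already expressed in the image coordinate system; the chained
    # polyline is the boundary between the obstacles and the drivable area.
    ordered = sorted(segments, key=lambda seg: min(seg[0][0], seg[1][0]))
    points = []
    for p1, p2 in ordered:
        points.extend([p1, p2])
    return points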
In the embodiment of the application, when the line segment representation of the target detection frame is determined, the offset corresponding to the target detection frame is determined based on the line segment representation; the offset reflects the corresponding position of the target detection frame in the target image. Further, the position parameters of the line segment endpoints can be determined from the line segment representation to obtain the position information of the line segment endpoints within the target detection frame; the position parameters of the line segment endpoints are combined with the offset of the target detection frame to obtain the first global positions of the line segment endpoints in the target image, the line segments in each target detection frame are thereby positioned in the target image using the line segment representation, and the drivable area is generated based on the first global positions of all the line segment endpoints. In the embodiment of the application, the position parameters determined from the line segment representation are converted into the image coordinate system of the target image; because the same coordinate system is used as a reference, the positions of the line segment endpoints determined from the line segment representations are located in the target image more accurately, and the drivable area in the target image is obtained based on the positions of all the line segment endpoints in the target image, which ensures the accuracy of the drivable area. The line segment representation also reflects the position of the obstacle ground line segments in the target image more intuitively, which makes detection of the drivable area more convenient.
Fig. 3 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 3, the method includes, but is not limited to, the steps of:
s301, acquiring a target image acquired by a vehicle.
In this embodiment of the present application, the implementation manner of step S301 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S302, performing obstacle recognition on the target image, and acquiring the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame.
In this embodiment of the present application, the implementation manner of step S302 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S303, determining the offset corresponding to the target detection frame from the line segment representation.
In this embodiment of the present application, the implementation manner of step S303 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S304, determining the position parameter of the line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame.
In this embodiment of the present application, the implementation manner of step S304 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S305, determining a second global position of the detection frame according to the offset.
Because the offset reflects the position information of the detection frame in the target image, a second global position of the detection frame in the target image can be determined based on the offset; the second global position of the detection frame is used for assisting in determining the position of the line segment in the detection frame in the target image.
S306, the position parameter is the first coordinate value of the line segment end point under the detection frame coordinate system.
Alternatively, the line segment endpoints are points on the boundary of the corresponding target detection frame; for example, one line segment may include a line segment endpoint A and a line segment endpoint B. As shown in fig. 3a, the line segment endpoint A may be a point located on the upper boundary of the target detection frame, and the line segment endpoint B may be a point located on the lower boundary of the target detection frame. As shown in fig. 3b, the line segment endpoint A may be a point located on the upper boundary of the target detection frame and the line segment endpoint B may be a point located on the right boundary of the target detection frame. As shown in fig. 3c, the line segment endpoint A may be a point located on the left boundary of the target detection frame and the line segment endpoint B may be a point located on the lower boundary of the target detection frame. It will be appreciated that the two endpoints of a line segment are on two different boundaries.
After the position parameters of the line segment endpoints are obtained, they reflect the positions of the line segment endpoints on the target detection frame, and the first coordinate values of the line segment endpoints in the target detection frame coordinate system can be obtained from them; these provide the basis for subsequently obtaining the global coordinates of the line segment endpoints in the image coordinate system.
Optionally, the position parameter may directly be the coordinate value of a line segment endpoint in the target detection frame coordinate system; if the position parameter does not directly give this coordinate value, the position parameter may be mapped into the detection frame coordinate system to obtain the first coordinate value of the line segment endpoint.
In some implementations, the origin of the target detection frame coordinate system may be the upper left corner of the target detection frame, and it is understood that each target detection frame corresponds to a respective detection frame coordinate system.
S307, determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
After the first coordinate value of the line segment endpoint in the detection frame coordinate system is obtained, the sum of the first coordinate value and the second global position of the target detection frame to which the line segment endpoint belongs is determined; this sum reflects the position of the line segment endpoint in the target image and is therefore determined to be the first global position of the line segment endpoint.
In the embodiment of the application, the position parameter of the line segment endpoint is expressed in the detection frame coordinate system, that is, the position parameter is the first coordinate value of the line segment endpoint in the detection frame coordinate system. This coordinate value directly reflects the position of the line segment endpoint within the detection frame, and the first global position of the line segment endpoint is then determined by combining it with the second global position of the detection frame in the target image. Determining the first coordinate value in the detection frame coordinate system makes the position parameter more intuitive and convenient to present, and the accuracy is higher.
Fig. 4 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 4, the method includes, but is not limited to, the steps of:
s401, acquiring a target image acquired by a vehicle.
In this embodiment of the present application, the implementation manner of step S401 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S402, performing obstacle recognition on the target image, and acquiring the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame.
In the embodiment of the present application, the implementation manner of step S402 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S403, determining the offset corresponding to the target detection frame from the line segment representation.
In this embodiment of the present application, the implementation manner of step S403 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S404, determining the position parameter of the line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame.
In this embodiment of the present application, the implementation manner of step S404 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S405, determining a second global position of the detection frame according to the offset.
In this embodiment of the present application, the implementation manner of step S405 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S406, if the position parameter is the value of the line segment endpoint in the set boundary range, determining the boundary of the line segment endpoint according to the value, and determining the increment value corresponding to the line segment endpoint based on the boundary point value of the boundary and the value of the line segment endpoint.
Optionally, a one-dimensional data range may be set for each boundary of the target detection frame, and the line segment end points may correspond to the values of the one-dimensional data on the boundary.
In some implementations, the one-dimensional data range of each boundary of the target detection frame may be between 0 and 1; for example, as shown in fig. 4a, the four boundary ranges of the target detection frame are [0, 0.25], [0.25, 0.5], [0.5, 0.75] and [0.75, 1], respectively.
In this embodiment, the upper boundary data range of the target detection frame is [0, 0.25], the right boundary data range is [0.25, 0.5], the lower boundary data range is [0.5, 0.75], and the left boundary data range is [0.75, 1].
Because the line segment end points are points on the boundary of the target detection frame, when the position parameters are values of the line segment end points in a set boundary range, the boundary where the line segment end points are located can be determined based on the values, and the increment value corresponding to the line segment end points is determined based on the boundary point values of the boundary where the line segment end points are located and the values of the line segment end points.
For example, assuming that the line segment endpoints include an endpoint A and an endpoint B, it may be determined from the position parameters (0.07, 0.42) that the boundary where the line segment endpoint A is located is the upper boundary of the target detection frame and the boundary where the line segment endpoint B is located is the right boundary of the target detection frame. The boundary point value of the upper boundary of the target detection frame is 0, so according to the value 0.07 of the line segment endpoint A, the increment value corresponding to the line segment endpoint A is determined to be 0.07. The boundary point value of the right boundary of the target detection frame is 0.25, so according to the value 0.42 of the line segment endpoint B, the increment value corresponding to the line segment endpoint B is determined to be 0.17.
S407, determining a second coordinate value of the line segment endpoint according to the boundary point value and the increment value.
After the increment value of the line segment endpoint is obtained, the second coordinate value of the line segment endpoint can be determined based on the boundary point value of the boundary where the line segment endpoint is located and the increment value; the second coordinate value represents the position of the line segment endpoint in the coordinate system constructed for the target detection frame. The detection frame coordinate system is constructed with the upper left corner of the target detection frame as the origin.
For example, the increment value corresponding to the line segment endpoint a is 0.07, the boundary point value of the upper boundary where the line segment endpoint a is located is 0, and the second coordinate value of the line segment endpoint a may be determined to be (0.0,0.07); the increment value of the line segment endpoint B is 0.17, the boundary point value of the right boundary where the line segment endpoint B is located is 0.25, and the second coordinate value of the line segment endpoint B can be determined to be (0.25,0.17).
S408, according to the area of the detection frame and the second coordinate value, obtaining a first coordinate value of the line segment endpoint.
When the one-dimensional data range of each boundary of the target detection frame is set to 0 to 1, it can be understood that the range of each boundary is obtained by performing normalization processing based on the area of the target detection frame. Therefore, the inverse normalization processing can be performed on each boundary range of the target detection frame according to the area of the target detection frame.
Optionally, according to the area of the detection frame and the second coordinate value of the line segment endpoint, the position of the line segment endpoint in the coordinate system of the detection frame may be mapped to determine the first coordinate value of the line segment endpoint.
It should be noted that the first coordinate value and the second coordinate value of the line segment endpoint both represent the position of the line segment endpoint in the detection frame; the second coordinate value is the coordinate value when each boundary range of the detection frame is normalized to 0 to 1, and the first coordinate value is the coordinate value when each boundary range of the detection frame takes its actual value.
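A Python sketch of this decoding is given below. It assumes that each boundary's quarter of the [0, 1] range is scaled back to the corresponding side length of the detection frame; the patent states only that the inverse normalization is based on the area of the detection frame, so this per-side scaling is one plausible reading, and the function name is illustrative.

def decode_boundary_position(value, box_w, box_h):
    # The four boundaries share one range [0, 1]:
    # top [0, 0.25], right [0.25, 0.5], bottom [0.5, 0.75], left [0.75, 1].
    # The boundary is found from the value, the increment is the value minus
    # the boundary's starting point, and the increment is then scaled back to
    # detection frame coordinates (origin at the upper-left corner).
    edges = [
        ("top",    0.00, lambda t: (t * box_w, 0.0)),
        ("right",  0.25, lambda t: (box_w, t * box_h)),
        ("bottom", 0.50, lambda t: (t * box_w, box_h)),
        ("left",   0.75, lambda t: (0.0, t * box_h)),
    ]
    for name, start, to_xy in edges:
        if start <= value <= start + 0.25:
            increment = value - start        # position within this boundary's range
            fraction = increment / 0.25      # normalized position along the boundary
            return name, to_xy(fraction)     # first coordinate value of the endpoint
    raise ValueError("position parameter must lie in [0, 1]")

# e.g. value 0.07 -> ("top", (0.28 * box_w, 0.0)); value 0.42 -> ("right", (box_w, 0.68 * box_h))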
S409, determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
In this embodiment of the present application, the implementation manner of step S409 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiment of the application, the data range of each boundary of the target detection frame is normalized based on the area of the target detection frame, and the first coordinate value of the line segment endpoint is determined through these per-boundary data ranges, with a small amount of computation. The boundary of the target detection frame to which a line segment endpoint belongs can be located rapidly from the position parameter of the line segment endpoint, which makes it convenient to determine the position of the line segment endpoint within the target detection frame; the first global position of the line segment endpoint is then obtained by combining it with the second global position corresponding to the target detection frame. By setting a data range for each boundary of the target detection frame, the boundary where the line segment endpoint is located can be determined from its position parameter, and the line segment endpoint is located more accurately and conveniently.
Fig. 5 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 5, the method includes, but is not limited to, the steps of:
s501, acquiring a target image acquired by a vehicle.
In the embodiment of the present application, the implementation manner of step S501 may be implemented in any manner of each embodiment of the present disclosure, which is not limited herein, and is not described herein again.
S502, performing obstacle recognition on the target image, and acquiring the target detection frame where the obstacle ground line is located and the line segment representation of the target detection frame.
In this embodiment of the present application, the implementation manner of step S502 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S503, determining the offset corresponding to the target detection frame from the line segment representation.
In this embodiment of the present application, the implementation manner of step S503 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S504, determining the position parameter of the line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame.
In this embodiment of the present application, the implementation manner of step S504 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S505, determining a second global position of the detection frame according to the offset.
In the embodiment of the present application, the implementation manner of step S505 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S506, if the position parameter is the offset angle of the line segment end point, performing trigonometric function operation on the offset angle to obtain a first coordinate value of the line segment end point.
In some implementations, the center point of the target detection frame is taken as the origin of the coordinate system; alternatively, the coordinate system may be a rectangular coordinate system or a polar coordinate system.
In some implementations, the offset angle of the line segment end point may reflect an angle between a line formed by the line segment end point and the origin of the coordinate system and the set reference line. For example, in a rectangular coordinate system, the X-axis may be used as a reference line, and under the condition of knowing an offset angle (included angle), a line segment forming the offset angle with the reference line may be determined, where an intersection point of the line segment and the target detection frame is a line segment endpoint.
For example, in a polar coordinate system, an axis where 0 ° and 180 ° are located may be used as a reference line, and under the condition of a known offset angle (included angle), a line segment forming the offset angle with the reference line may be determined, where an intersection point of the line segment and the target detection frame is a line segment endpoint.
As shown in fig. 5a, line segment endpoint A and line segment endpoint B lie on the upper boundary and the left boundary of the target detection frame respectively; the offset angle of line segment endpoint A is α and the offset angle of line segment endpoint B is β. Since the boundary lengths of the target detection frame are known, the ordinate of line segment endpoint A (fixed by the upper boundary) and the abscissa of line segment endpoint B (fixed by the left boundary) are also known.
With the ordinate of line segment endpoint A and the offset angle α known, the length of the line segment between endpoint A and the origin of the coordinate system can be obtained from the sine function sin α; with that length and the offset angle α known, the abscissa of endpoint A is then determined from the cosine function cos α.
Likewise, with the abscissa of line segment endpoint B and the offset angle β known, the length of the line segment between endpoint B and the origin of the coordinate system is determined from the cosine function cos(-β); with that length and the offset angle β known, the ordinate of endpoint B is determined from the sine function sin(-β).
Further, first coordinate values respectively corresponding to the line segment end point A and the line segment end point B are determined.
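The recovery of a first coordinate value from an offset angle can be illustrated with a short sketch. The Python example below assumes the offset angle is measured counter-clockwise from the positive x-axis of a rectangular coordinate system whose origin is the centre of the target detection frame and whose y-axis points toward the upper boundary; the function name, angle convention and example frame size are illustrative assumptions, not values fixed by the embodiment. Intersecting the ray at the given angle with the frame boundary amounts to the sine/cosine recovery described above.

```python
import math

def endpoint_from_offset_angle(theta_deg, box_w, box_h):
    """Intersect a ray from the frame centre at the given offset angle with the
    frame boundary (centre-origin coordinate system, y-axis pointing up -
    an illustrative convention, not one fixed by the embodiment)."""
    theta = math.radians(theta_deg)
    dx, dy = math.cos(theta), math.sin(theta)
    half_w, half_h = box_w / 2.0, box_h / 2.0
    # Ray length at which the ray would leave the frame horizontally / vertically.
    tx = half_w / abs(dx) if dx else float("inf")
    ty = half_h / abs(dy) if dy else float("inf")
    r = min(tx, ty)            # length of the segment from the centre to the boundary
    return r * dx, r * dy      # first coordinate value of the line segment endpoint

# An angle of 60 degrees lands on the upper boundary (like endpoint A),
# while 200 degrees lands on the left boundary (like endpoint B).
print(endpoint_from_offset_angle(60, box_w=80, box_h=40))
print(endpoint_from_offset_angle(200, box_w=80, box_h=40))
```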
S507, determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
In the embodiment of the present application, the implementation manner of step S507 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
In the embodiment of the application, the line segment is represented by using the offset angle, and the position of the line segment endpoint is determined from the offset angle between the line segment endpoint and the reference line. In this way, the position of the obstacle grounding wire in the target image can be determined under different line segment representation modes, so the method has high flexibility.
Fig. 6 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 6, the method includes, but is not limited to, the steps of:
S601, acquiring a target image acquired by a vehicle.
In the embodiment of the present application, the implementation manner of step S601 may be implemented by any one of the methods in the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
S602, inputting the target image into a pre-trained obstacle recognition model, and outputting a prediction label of the candidate detection frame and a line segment representation of the candidate detection frame by the obstacle recognition model.
Alternatively, the target image is input into a pre-trained obstacle recognition model, which may output the predictive labels of the candidate detection boxes and the line segment representations of the candidate detection boxes.
S603, determining a target detection frame from the candidate detection frames according to the prediction labels of the candidate detection frames.
Optionally, a candidate detection frame may correspond to an area between the obstacle and the road, or to an area that is not between the obstacle and the road. The area between the obstacle and the road is the target area to be determined in the embodiment of the application.
Optionally, the prediction label may indicate whether its corresponding detection frame contains the grounding wire of an obstacle; that is, the prediction label of an area between the obstacle and the road is different from the prediction label of an area that is not between the obstacle and the road.
And determining a target detection frame of the area between the obstacle and the road from the candidate detection frames based on the prediction labels of the candidate detection frames.
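A minimal sketch of this selection step is given below, assuming the model returns one prediction label per candidate detection frame and that the label value 1 marks a frame containing an obstacle grounding wire; the function name and label encoding are illustrative assumptions.

```python
def select_target_boxes(candidate_boxes, predicted_labels, ground_line_label=1):
    """Keep only candidate detection frames whose prediction label marks them
    as an area between an obstacle and the road, i.e. frames containing an
    obstacle grounding wire (label encoding is an illustrative assumption)."""
    return [box for box, label in zip(candidate_boxes, predicted_labels)
            if label == ground_line_label]

# Boxes given as (x, y, w, h); only the first candidate is kept as a target detection frame.
print(select_target_boxes([(10, 20, 80, 40), (200, 50, 60, 30)], [1, 0]))
```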
S604, a drivable region of the vehicle is generated based on the line segment representation of the target detection frame.
In the embodiment of the present application, the implementation manner of step S604 may be implemented by any one of the embodiments of the present disclosure, which is not limited herein, and is not described herein again.
In some implementations, the target image is input into a pre-trained obstacle recognition model, and the line segment representation of the candidate detection frame output by the model may include the position parameters of predicted endpoints in the target detection frame. These predicted endpoints may lie inside the target detection frame rather than on its boundary, so to obtain endpoints on the boundary, a straight line is fitted through the predicted endpoints. The fitted straight line intersects the boundary of the target detection frame, and the intersection points can be used as line segment endpoints. As shown in fig. 6a, the target image is input into the pre-trained obstacle recognition model, which outputs the position parameters of the predicted endpoints in the target detection frame, that is, the coordinate values of the predicted endpoints p1 and p2 in the target detection frame; straight line fitting is performed based on the coordinates of p1 and p2, that is, the extension of the line connecting the predicted endpoints p1 and p2 is taken as the fitted straight line. The fitted straight line intersects the boundary of the target detection frame at points P1 and P2, and these intersection points P1 and P2 are the line segment endpoints.
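The fitting step can be sketched as follows, assuming the predicted endpoints are given in a detection frame coordinate system with the origin at the upper-left corner; the function name and the example coordinates are illustrative assumptions rather than an interface prescribed by the embodiment.

```python
def segment_endpoints_from_predictions(p1, p2, box_w, box_h):
    """Extend the line through two predicted endpoints until it meets the
    boundary of the target detection frame (frame coordinates: origin at the
    top-left corner, x in [0, box_w], y in [0, box_h]).  Returns the two
    intersection points P1, P2 used as line segment endpoints."""
    (x1, y1), (x2, y2) = p1, p2
    hits = []
    if x2 != x1:  # non-vertical fitted line: y = m * x + c
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        for x in (0.0, box_w):               # left and right boundaries
            y = m * x + c
            if 0.0 <= y <= box_h:
                hits.append((x, y))
        if m != 0:
            for y in (0.0, box_h):           # upper and lower boundaries
                x = (y - c) / m
                if 0.0 <= x <= box_w:
                    hits.append((x, y))
    else:          # vertical fitted line through x = x1
        hits = [(x1, 0.0), (x1, box_h)]
    # Deduplicate corner hits and keep the two extreme intersection points.
    hits = sorted(set((round(x, 6), round(y, 6)) for x, y in hits))
    return hits[0], hits[-1]

# Predicted endpoints p1, p2 inside the frame; P1, P2 are the fitted endpoints on its boundary.
print(segment_endpoints_from_predictions((10, 12), (50, 20), box_w=80, box_h=40))
```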
In the embodiment of the application, the obstacle recognition model is trained on sample images, so that its final output is more accurate and reliable. The model is then used to obtain the prediction label of each candidate detection frame and the line segment representation of each candidate detection frame, the target detection frame is selected from the candidate detection frames, and the drivable area of the vehicle is determined based on the line segment representation of the target detection frame. Because the target detection frame in the target image is identified by the trained obstacle recognition model, global analysis of the target image is avoided and the analysis is focused on the target detection frame, which reduces the amount of calculation; obtaining the drivable area from the line segment representation of the target detection frame is therefore faster and more accurate.
Fig. 7 is a flowchart of another method for generating a drivable region according to an embodiment of the present application. As shown in fig. 7, the method includes, but is not limited to, the steps of:
S701, acquiring a sample image, where the sample image includes a detection frame, a mark label of the detection frame and a reference line segment representation of the detection frame.
Optionally, the sample image may be an image of the surrounding environment of a vehicle during parking; the sample image includes a marked detection frame, the mark label of the detection frame, and the reference line segment representation of the detection frame.
Alternatively, all the detection frames in the sample image may be the same size; the mark label of the detection frame is used for reflecting the position information of the detection frame in the sample image.
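A simple way to picture the annotation carried by each sample image is the record below; the field names and types are illustrative assumptions about one possible data layout, not the format prescribed by the embodiment.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SampleAnnotation:
    """One annotated detection frame in a sample image (illustrative layout)."""
    box_xywh: Tuple[float, float, float, float]   # position and size of the detection frame in the sample image
    mark_label: int                                # mark label of the detection frame
    reference_segment: Tuple[Tuple[float, float], Tuple[float, float]]  # reference line segment representation

annotation = SampleAnnotation((120.0, 340.0, 80.0, 40.0), 1, ((0.0, 10.0), (80.0, 26.0)))
```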
S702, inputting the sample image into the obstacle recognition model to be trained for training, and obtaining a loss function of the obstacle recognition model to be trained.
In some implementations, the sample image is input into an obstacle recognition model, and the obstacle recognition model to be trained predicts the sample image to obtain a prediction endpoint in the detection frame and a prediction label of each detection frame.
Further, a segment representation of the predicted endpoint in the detection frame coordinate system is obtained. Optionally, after obtaining the predicted endpoint in the detection frame, obtaining a line segment representation of the predicted endpoint based on a detection frame coordinate system of the detection frame in which the predicted endpoint is located. Alternatively, the detection frame coordinate system may be constructed with the upper left corner of the detection frame as the origin. Wherein the line segment representation of the predicted endpoint may be a coordinate position representation of the predicted endpoint in a corresponding detection frame coordinate system.
Further, a loss function is derived based on the line segment representations of the prediction end points in the detection frame coordinate system, the prediction tags and the marking tags, and the reference line segment representations.
In some implementations, the loss function is obtained from the difference between the prediction label and the mark label, together with the difference between the line segment representation corresponding to the predicted endpoint and the reference line segment representation.
The loss function is used for reflecting the difference between the predicted value and the true value; alternatively, the loss function may be a mean square error loss function, a cross entropy loss function, or a mean absolute error loss function.
In other implementations, the sample image is input into an obstacle recognition model, and the obstacle recognition model predicts the sample image to obtain a prediction line segment representation and a prediction label corresponding to the detection frame.
Further, the loss function is derived from the predicted line segment representation and the prediction label, together with the mark label and the reference line segment representation.
In some implementations, the loss function is obtained from the difference between the prediction label and the mark label, together with the difference between the predicted line segment representation and the reference line segment representation.
The loss function is used for reflecting the difference between the predicted value and the true value; alternatively, the loss function may be a mean square error loss function, a cross entropy loss function, or a mean absolute error loss function.
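One possible form of such a loss, sketched in PyTorch, combines a classification term over the labels with a regression term over the line segment representations; cross entropy and mean squared error are chosen here only because the embodiment lists them among the alternatives, and the function name and tensor shapes are assumptions.

```python
import torch.nn.functional as F

def detection_loss(pred_labels, mark_labels, pred_segments, ref_segments):
    """Loss sketch: classification term (prediction label vs. mark label) plus
    regression term (predicted line segment representation vs. reference line
    segment representation).  Shapes assumed: pred_labels (N, num_classes)
    logits, mark_labels (N,) class indices, pred_segments / ref_segments (N, 4)
    endpoint coordinates in the detection frame coordinate system."""
    cls_loss = F.cross_entropy(pred_labels, mark_labels)   # label difference
    reg_loss = F.mse_loss(pred_segments, ref_segments)     # line segment difference
    return cls_loss + reg_loss
```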
S703, performing model parameter adjustment on the obstacle recognition model based on the loss function, and continuing training by using the next sample image until training is finished to obtain the obstacle recognition model.
Optionally, the model parameter adjustment is performed on the obstacle recognition model based on the loss function, so as to reduce the gap between the model predicted value and the true value.
Alternatively, the model parameter adjustment may employ manual adjustment, grid search, or random search.
After the model parameters of the obstacle recognition model are adjusted, the next sample image is input into the adjusted obstacle recognition model to obtain a new loss function, and iterative training is performed based on the new loss function until the loss function converges, at which point training of the obstacle recognition model is finished.
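The iterative training described here can be sketched as the loop below, reusing the hypothetical detection_loss from the previous sketch; the optimiser, learning rate, epoch count and convergence tolerance are assumptions, not values given by the embodiment.

```python
import torch

def train_obstacle_model(model, data_loader, epochs=10, lr=1e-4, tol=1e-4):
    """Training sketch: compute the loss on each sample batch, adjust the model
    parameters, and stop once the loss stops improving (assumed criterion)."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    previous = float("inf")
    for _ in range(epochs):
        for images, mark_labels, ref_segments in data_loader:
            pred_labels, pred_segments = model(images)
            loss = detection_loss(pred_labels, mark_labels, pred_segments, ref_segments)
            optimiser.zero_grad()
            loss.backward()        # model parameter adjustment based on the loss function
            optimiser.step()
        if abs(previous - loss.item()) < tol:   # loss treated as converged
            break
        previous = loss.item()
    return model
```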
In the embodiment of the application, when the obstacle recognition model is trained, different loss functions are built according to the different ways in which the position parameter can be represented, so the training is more targeted. When the trained model is used to acquire the line segment representation of the target detection frame, its generalization capability is stronger and accurate results can be obtained under the different representation modes, which ensures the accuracy of the generated drivable area.
Fig. 8 is a schematic structural diagram of a device for generating a travelable region according to an embodiment of the present application. As shown in fig. 8, the device 800 for generating a travelable region includes:
A first obtaining module 801, configured to obtain a target image collected by a vehicle;
a second obtaining module 802, configured to identify an obstacle in the target image, and obtain a target detection frame where the obstacle ground line is located and a segment representation of the target detection frame;
the area generating module 803 is configured to generate a drivable area of the vehicle based on the line segment representation of the target detection frame.
In some implementations, the region generation module 803 is configured to:
determining the offset corresponding to the target detection frame from the line segment representation;
determining a position parameter of a line segment endpoint from the line segment representation, wherein the line segment endpoint is a point on the boundary of the predicted target detection frame;
determining a first global position of a line segment endpoint in the target image according to the offset and the position parameter;
and generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle grounding wire is located.
In some implementations, the region generation module 803 is configured to:
determining a second global position of the detection frame according to the offset;
and determining a first global position of the line segment endpoint according to the position parameter and the second global position.
In some implementations, the region generation module 803 is configured to:
determining a first coordinate value of a line segment endpoint under a detection frame coordinate system according to the position parameter;
And determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
In some implementations, the location parameter is a first coordinate value of the line segment endpoint in a detection frame coordinate system.
In some implementations, the apparatus 800 is further configured to:
acquiring a prediction endpoint in a target detection frame;
and performing straight line fitting according to the predicted end points, acquiring intersection points of the fitted straight lines and the boundary of the target detection frame, and taking the intersection points as line segment end points.
In some implementations, the region generation module 803 is configured to:
if the position parameter is the value of the line segment endpoint in a set boundary range, determining the boundary where the line segment endpoint is located according to the value, and determining the increment value corresponding to the line segment endpoint based on the boundary point value of the boundary and the value;
determining a second coordinate value of the line segment end point according to the boundary point value and the increment value;
and obtaining a first coordinate value of the line segment endpoint according to the area of the detection frame and the second coordinate value.
In some implementations, the region generation module 803 is configured to:
and if the position parameter is the offset angle of the line segment endpoint, performing trigonometric function operation on the offset angle to obtain a first coordinate value of the line segment endpoint.
In some implementations, a second acquisition module 802 is configured to:
Inputting the target image into a pre-trained obstacle recognition model, and outputting a prediction label of a candidate detection frame and a line segment representation of the candidate detection frame by the obstacle recognition model;
and determining the target detection frame from the candidate detection frames according to the prediction labels of the candidate detection frames.
In some implementations, a second acquisition module 802 is configured to:
acquiring a sample image, wherein the sample image comprises a detection frame, a mark label of the detection frame and a reference line segment representation of the detection frame;
inputting the sample image into an obstacle recognition model to be trained for training to obtain a loss function of the obstacle recognition model;
and carrying out model parameter adjustment on the obstacle recognition model based on the loss function, and continuing training by using the next sample image until the training is finished to obtain the obstacle recognition model.
In some implementations, a second acquisition module 802 is configured to:
predicting the sample image by using the obstacle recognition model to obtain a prediction endpoint in the detection frame;
acquiring line segment representations of a prediction endpoint under a detection frame coordinate system;
the loss function is derived based on the line segment representation, the predictive and marker labels, and the reference line segment representation of the predicted endpoint in the detection frame coordinate system.
In some implementations, a second acquisition module 802 is configured to:
predicting the sample image by using the obstacle recognition model to obtain a predicted line segment representation and a predicted label corresponding to the detection frame;
and obtaining a loss function according to the predicted line segment representation and the predicted label, and the marked label and the reference line segment representation.
In the embodiment of the application, obstacle recognition is performed on the target image acquired by the vehicle to obtain the target detection frame where the obstacle grounding wire is located and the line segment representation of the target detection frame, and the position of the obstacle grounding wire in the target image can be obtained from that line segment representation. Since the obstacle grounding wire is the boundary between the obstacle and the drivable area, obtaining the obstacle grounding wire in the target image yields the drivable area of the vehicle. The drivable area is judged by performing obstacle recognition on images acquired in real time, which guarantees its feasibility; the line segment representation of the target detection frame produced during obstacle recognition allows the position of the obstacle grounding wire in the target image to be located accurately and rapidly, and the drivable area determined from the obstacle grounding wire is therefore obtained with high efficiency and accuracy.
Fig. 9 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 9, the electronic apparatus 900 includes a drivable region generating apparatus 800. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a cell phone, tablet computer, notebook computer, palm computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (ultra-mobile personal computer, UMPC), netbook or personal digital assistant (personal digital assistant, PDA), etc., and the non-mobile electronic device may be a network attached storage (Network Attached Storage, NAS), personal computer (personal computer, PC), television (TV), teller machine or self-service machine, etc., and the embodiments of the present application are not limited in particular.
There is also provided, in accordance with an embodiment of the present application, an electronic device including: a processor; a memory for storing the processor-executable instructions, wherein the processor is configured to execute the instructions to implement the method of generating a travelable region as described above.
In order to implement the above embodiment, the present application also proposes a storage medium.
Wherein the instructions in the storage medium, when executed by the processor of the electronic device, enable the electronic device to perform the method of generating a travelable region as described above.
To achieve the above embodiments, the present application also provides a computer program product.
Wherein the computer program product, when executed by a processor of an electronic device, enables the electronic device to perform the method as described above.
Fig. 10 is a block diagram of an electronic device, according to an example embodiment. The electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 10, the electronic device 1000 includes a processor 1001 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a memory 1006 into a random access memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are also stored. The processor 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: a memory 1006 including a hard disk and the like; and a communication section 1007 including a network interface card such as a local area network (LAN) card or a modem, the communication section 1007 performing communication processing via a network such as the Internet. A drive 1008 is also connected to the I/O interface 1005 as required.
In particular, according to embodiments of the present application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program embodied on a computer readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program can be downloaded and installed from the network through the communication section 1007. The above-described functions defined in the method of the present application are performed when the computer program is executed by the processor 1001.
In an exemplary embodiment, a storage medium is also provided, e.g., a memory, comprising instructions executable by the processor 1001 of the electronic device 1000 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Fig. 11 is a block diagram illustrating a structure of an electronic device according to an exemplary embodiment. The electronic device shown in fig. 11 is only an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present application. As shown in fig. 11, the electronic device 1100 includes a processor 1101 and a memory 1102. The memory 1102 is used for storing program codes, and the processor 1101 is connected to the memory 1102, and is used for reading the program codes from the memory 1102, so as to implement the method for generating a travelable region in the above embodiment.
Alternatively, the number of processors 1101 may be one or more.
Optionally, the electronic device may further include an interface 1103, and the number of the interfaces 1103 may be plural. The interface 1103 can be connected to an application program, and can receive data of an external device such as a sensor, and the like.
Fig. 12 is a functional block diagram of a vehicle 1200, according to an exemplary embodiment. For example, the vehicle 1200 may be a hybrid vehicle, or may be a non-hybrid vehicle, an electric vehicle, a fuel cell vehicle, or other type of vehicle. The vehicle 1200 may be an autonomous vehicle, a semi-autonomous vehicle, or a non-autonomous vehicle.
Referring to fig. 12, a vehicle 1200 may include various subsystems, such as an infotainment system 1210, a perception system 1220, a decision control system 1230, a drive system 1240, and a computing platform 1250. Wherein the vehicle 1200 may also include more or fewer subsystems, and each subsystem may include multiple components. In addition, interconnections between each subsystem and between each component of the vehicle 1200 may be achieved by wired or wireless means.
In some embodiments, the infotainment system 1210 may include a communication system, an entertainment system, a navigation system, and the like.
The sensing system 1220 may include a variety of sensors for sensing information of the environment surrounding the vehicle 1200. For example, the sensing system 1220 may include a global positioning system (which may be a GPS system, a beidou system, or other positioning system), an inertial measurement unit (inertial measurement unit, IMU), a lidar, millimeter wave radar, an ultrasonic radar, and a camera device.
Decision control system 1230 may include a computing system, a vehicle controller, a steering system, a throttle, and a braking system.
The drive system 1240 may include components that provide powered motion to the vehicle 1200. In one embodiment, the drive system 1240 may include an engine, an energy source, a transmission, and wheels. The engine may be one or a combination of an internal combustion engine, an electric motor, an air compression engine. The engine is capable of converting energy provided by the energy source into mechanical energy.
Some or all of the functions of the vehicle 1200 are controlled by a computing platform 1250. The computing platform 1250 may include at least one processor 1251 and a memory 1252, the processor 1251 may execute instructions 1253 stored in the memory 1252.
The processor 1251 may be any conventional processor, such as a commercially available CPU. The processor may also include, for example, a graphics processing unit (GPU), a field programmable gate array (FPGA), a system on chip (SoC), an application-specific integrated circuit (ASIC), or a combination thereof.
The memory 1252 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
In addition to instructions 1253, the memory 1252 may also store data such as road maps, route information, vehicle location, direction, speed, and the like. The data stored by memory 1252 may be used by computing platform 1250.
In an embodiment of the present application, the processor 1251 may execute the instructions 1253 to perform all or part of the steps of the method for generating a travelable region described above.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A method of generating a drivable region, comprising:
acquiring a target image acquired by a vehicle;
performing obstacle recognition on the target image to obtain a target detection frame where an obstacle grounding wire is located and line segment representations of the target detection frame; generating a drivable region of the vehicle based on the segment representation of the target detection frame;
the generating a drivable region of the vehicle based on the segment representation includes:
determining an offset corresponding to the target detection frame from the line segment representation, wherein the offset is used for reflecting the position where the target detection frame is projected to the target image;
determining a position parameter of a line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame, and the position parameter is a first coordinate value of the line segment endpoint under a detection frame coordinate system;
determining a first global position of the line segment endpoint in the target image according to the offset and the position parameter;
Generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle grounding wire is located;
the determining, according to the offset and the position parameter, a first global position of the line segment endpoint in the target image includes:
determining a second global position of the detection frame according to the offset;
determining a first global position of the line segment endpoint according to the position parameter and the second global position;
the determining, according to the location parameter and the second global location, a first global location of the line segment endpoint includes:
determining a first coordinate value of the line segment end point under a detection frame coordinate system according to the position parameter;
and determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
2. The method according to claim 1, wherein the method further comprises:
acquiring a prediction endpoint in the target detection frame;
and performing straight line fitting according to the predicted end points, acquiring intersection points of the fitted straight line and the boundary of the target detection frame, and taking the intersection points as the line segment end points.
3. The method of claim 1, wherein determining a first coordinate value of the line segment endpoint in a detection frame coordinate system based on the location parameter comprises:
if the position parameter is a value of the line segment endpoint in a set boundary range, determining the boundary of the line segment endpoint according to the value, and determining an increment value corresponding to the line segment endpoint based on the boundary point value of the boundary and the value;
determining a second coordinate value of the line segment endpoint according to the boundary point value and the increment value;
and obtaining a first coordinate value of the line segment endpoint according to the area of the detection frame and the second coordinate value.
4. The method of claim 1, wherein determining a first coordinate value of the line segment endpoint in a detection frame coordinate system based on the location parameter comprises:
and if the position parameter is the offset angle of the line segment endpoint, performing trigonometric function operation on the offset angle to obtain a first coordinate value of the line segment endpoint.
5. The method of any one of claims 1-4, wherein the performing obstacle recognition on the target image to obtain a target detection frame where an obstacle ground line is located and a line segment representation of the target detection frame includes:
Inputting the target image into a pre-trained obstacle recognition model, and outputting a prediction label of a candidate detection frame and a line segment representation of the candidate detection frame by the obstacle recognition model;
and determining the target detection frame from the candidate detection frames according to the prediction labels of the candidate detection frames.
6. The method of claim 5, wherein the training process of the obstacle recognition model comprises:
acquiring a sample image, wherein the sample image comprises a detection frame, a mark label of the detection frame and a reference line segment representation of the detection frame;
inputting the sample image into an obstacle recognition model to be trained for training, and obtaining a loss function of the obstacle recognition model;
and carrying out model parameter adjustment on the obstacle recognition model based on the loss function, and continuing training by using the next sample image until training is finished to obtain the obstacle recognition model.
7. The method of claim 6, wherein the inputting the sample image into an obstacle recognition model to be trained to train to obtain a loss function of the obstacle recognition model comprises:
Predicting the sample image by the obstacle recognition model to obtain a prediction endpoint in the detection frame;
acquiring line segment representations of the prediction endpoints under a detection frame coordinate system;
and obtaining the loss function based on the line segment representation of the prediction endpoint under the detection frame coordinate system, the prediction label, the marking label and the reference line segment representation.
8. The method of claim 6, wherein the inputting the sample image into an obstacle recognition model to be trained to train to obtain a loss function of the obstacle recognition model comprises:
predicting the sample image by the obstacle recognition model to obtain a predicted line segment representation and a predicted label corresponding to the detection frame;
and obtaining the loss function according to the predicted line segment representation and the predicted label, and the marking label and the reference line segment representation.
9. A device for generating a drivable region, comprising:
the first acquisition module is used for acquiring a target image acquired by the vehicle;
the second acquisition module is used for identifying the obstacle of the target image and acquiring a target detection frame where the obstacle grounding wire is and line segment representations of the target detection frame;
The area generation module is used for generating a drivable area of the vehicle based on the line segment representation of the target detection frame;
the region generation module is specifically configured to:
determining an offset corresponding to the target detection frame from the line segment representation, wherein the offset is used for reflecting the position where the target detection frame is projected to the target image;
determining a position parameter of a line segment endpoint from the line segment representation, wherein the line segment endpoint is a predicted point on the boundary of the target detection frame, and the position parameter is a first coordinate value of the line segment endpoint under a detection frame coordinate system;
determining a first global position of the line segment endpoint in the target image according to the offset and the position parameter;
generating a drivable area of the vehicle based on the first global position corresponding to the detection frame where the obstacle grounding wire is located;
the region generation module is further specifically configured to:
determining a second global position of the detection frame according to the offset;
determining a first coordinate value of the line segment end point under a detection frame coordinate system according to the position parameter;
and determining a sum value of the first coordinate value and the second global position, and determining the sum value as the first global position of the line segment endpoint.
10. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 8.
11. A vehicle, characterized by comprising the electronic device of claim 10, or being connected to the electronic device of claim 10.
12. A non-transitory computer readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 8.
CN202310462185.7A 2023-04-26 2023-04-26 Method and device for generating drivable area Active CN116486377B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310462185.7A CN116486377B (en) 2023-04-26 2023-04-26 Method and device for generating drivable area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310462185.7A CN116486377B (en) 2023-04-26 2023-04-26 Method and device for generating drivable area

Publications (2)

Publication Number Publication Date
CN116486377A CN116486377A (en) 2023-07-25
CN116486377B true CN116486377B (en) 2023-12-26

Family

ID=87215285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310462185.7A Active CN116486377B (en) 2023-04-26 2023-04-26 Method and device for generating drivable area

Country Status (1)

Country Link
CN (1) CN116486377B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863429B (en) * 2023-07-26 2024-05-31 小米汽车科技有限公司 Training method of detection model, and determination method and device of exercisable area

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020062940A (en) * 2018-10-16 2020-04-23 日立オートモティブシステムズ株式会社 Vehicle control device
JP2021144677A (en) * 2020-03-11 2021-09-24 ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド Obstacle detection method, device, electronic apparatus, storage medium, and computer program
WO2021226776A1 (en) * 2020-05-11 2021-11-18 华为技术有限公司 Vehicle drivable area detection method, system, and automatic driving vehicle using system
CN114663397A (en) * 2022-03-22 2022-06-24 小米汽车科技有限公司 Method, device, equipment and storage medium for detecting travelable area
CN115019275A (en) * 2022-06-16 2022-09-06 阿里巴巴(中国)有限公司 Heuristic determination and model training methods, electronic device, and computer storage medium
CN115398272A (en) * 2020-04-30 2022-11-25 华为技术有限公司 Method and device for detecting passable area of vehicle
CN115605930A (en) * 2021-04-27 2023-01-13 华为技术有限公司(Cn) Automatic parking method, device and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020062940A (en) * 2018-10-16 2020-04-23 日立オートモティブシステムズ株式会社 Vehicle control device
JP2021144677A (en) * 2020-03-11 2021-09-24 ベイジン バイドゥ ネットコム サイエンス アンド テクノロジー カンパニー リミテッド Obstacle detection method, device, electronic apparatus, storage medium, and computer program
CN115398272A (en) * 2020-04-30 2022-11-25 华为技术有限公司 Method and device for detecting passable area of vehicle
WO2021226776A1 (en) * 2020-05-11 2021-11-18 华为技术有限公司 Vehicle drivable area detection method, system, and automatic driving vehicle using system
CN115605930A (en) * 2021-04-27 2023-01-13 华为技术有限公司(Cn) Automatic parking method, device and system
CN114663397A (en) * 2022-03-22 2022-06-24 小米汽车科技有限公司 Method, device, equipment and storage medium for detecting travelable area
CN115019275A (en) * 2022-06-16 2022-09-06 阿里巴巴(中国)有限公司 Heuristic determination and model training methods, electronic device, and computer storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of laser point cloud in path detection for driverless vehicles; Zhang Yongbo; Li Bijun; Chen Cheng; Bulletin of Surveying and Mapping (11); 70-73+78 *

Also Published As

Publication number Publication date
CN116486377A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN108362295B (en) Vehicle path guiding apparatus and method
US11030803B2 (en) Method and apparatus for generating raster map
JP7239703B2 (en) Object classification using extraterritorial context
CN112740268B (en) Target detection method and device
US20220057806A1 (en) Systems and methods for obstacle detection using a neural network model, depth maps, and segmentation maps
US11144770B2 (en) Method and device for positioning vehicle, device, and computer readable storage medium
CN110390240B (en) Lane post-processing in an autonomous vehicle
RU2743895C2 (en) Methods and systems for computer to determine presence of objects
JP7481534B2 (en) Vehicle position determination method and system
CN116486377B (en) Method and device for generating drivable area
US20210364637A1 (en) Object localization using machine learning
US11820397B2 (en) Localization with diverse dataset for autonomous vehicles
CN111316328A (en) Method for maintaining lane line map, electronic device and storage medium
CN116626670B (en) Automatic driving model generation method and device, vehicle and storage medium
CN115223015B (en) Model training method, image processing method, device and vehicle
US11733373B2 (en) Method and device for supplying radar data
US11846523B2 (en) Method and system for creating a localization map for a vehicle
US11544899B2 (en) System and method for generating terrain maps
US20210279465A1 (en) Streaming object detection within sensor data
Linfeng et al. One estimation method of road slope and vehicle distance
CN116630923B (en) Marking method and device for vanishing points of roads and electronic equipment
CN116767224B (en) Method, device, vehicle and storage medium for determining a travelable region
CN116740681B (en) Target detection method, device, vehicle and storage medium
CN117128976B (en) Method and device for acquiring road center line, vehicle and storage medium
CN116503482A (en) Vehicle position acquisition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant