CN116563817B - Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium - Google Patents

Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Info

Publication number
CN116563817B
Authority
CN
China
Prior art keywords
obstacle
looking
frame
direction vector
movement direction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310395895.2A
Other languages
Chinese (zh)
Other versions
CN116563817A (en)
Inventor
胡禹超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HoloMatic Technology Beijing Co Ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202310395895.2A priority Critical patent/CN116563817B/en
Publication of CN116563817A publication Critical patent/CN116563817A/en
Application granted granted Critical
Publication of CN116563817B publication Critical patent/CN116563817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 - Obstacle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure disclose an obstacle information generation method, an obstacle information generation device, an electronic device, and a computer-readable medium. One embodiment of the method comprises the following steps: acquiring a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; determining a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector, and first obstacle detection frame feature point coordinates; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information; and generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates. This embodiment can improve the accuracy of the generated obstacle information.

Description

Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a method, an apparatus, an electronic device, and a computer readable medium for generating obstacle information.
Background
The obstacle information generation method is a technique for determining obstacle information in an image. Currently, when generating obstacle information (for example, when the obstacle is another vehicle, the obstacle information may be distance information of the other vehicle, speed information of the other vehicle, or the like), the following method is generally adopted: two road images of the obstacle vehicle corresponding to the same frame are captured by two adjacent vehicle-mounted cameras (for example, a forward-looking camera and a right forward-looking camera), and obstacle detection is performed on both road images, so that the obstacle information extracted from the two road images can complement each other and the situation in which the obstacle is incompletely displayed in a single road image is avoided. Obstacle information can then be generated from the geometric relationship between the obstacle and the vehicle-mounted cameras.
However, the inventors found that when the obstacle information generation is performed in the above manner, there are often the following technical problems:
because the common field of view between two adjacent vehicle-mounted cameras is limited, the obstacle region in the two captured road images may be truncated at the image boundary; if the truncated part of the obstacle region in the two road images is too large, complete obstacle information is difficult to generate even when the obstacle information extracted from the two road images is combined, so the accuracy of the generated obstacle information is reduced.
The above information disclosed in this Background section is only for enhancement of understanding of the background of the inventive concept and, therefore, may contain information that does not constitute prior art already known in this country to a person of ordinary skill in the art.
Disclosure of Invention
The disclosure is in part intended to introduce concepts in a simplified form that are further described below in the detailed description. The disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an obstacle information generation method, apparatus, electronic device, and computer-readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an obstacle information generating method, the method including: acquiring a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; determining a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector and a first obstacle detection frame feature point coordinate, wherein the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector are unit vectors; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward looking obstacle detection information, and the right forward looking obstacle detection information; and generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame characteristic point coordinates.
In a second aspect, some embodiments of the present disclosure provide an obstacle information generating apparatus, the apparatus including: an acquisition unit configured to acquire a forward-looking road image captured by a forward-looking camera of a current vehicle and a right forward-looking road image captured by a right forward-looking camera; a determining unit configured to determine a history frame obstacle movement direction vector, a current frame obstacle movement direction vector, and first obstacle detection frame feature point coordinates, wherein the history frame obstacle movement direction vector and the current frame obstacle movement direction vector are both unit vectors; an identification processing unit configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; a first generation unit configured to generate an adjusted obstacle movement direction vector and an adjusted detection frame feature point coordinate based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information; and a second generation unit configured to generate obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors causes the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following beneficial effects: by the obstacle information generation method of some embodiments of the present disclosure, the accuracy of the generated obstacle information may be improved. Specifically, the cause of the decrease in accuracy of the generated obstacle information is that: because the common field of view between two adjacent vehicle-mounted cameras is limited, the obstacle region in the two captured road images may be truncated at the image boundary, and if the truncated part of the obstacle region in the two road images is too large, complete obstacle information is difficult to generate even when the obstacle information extracted from the two road images is combined. Based on this, the obstacle information generation method of some embodiments of the present disclosure first acquires a forward-looking road image captured by a forward-looking camera of the current vehicle and a right forward-looking road image captured by a right forward-looking camera. Then, a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector, and first obstacle detection frame feature point coordinates are determined. The historical frame obstacle movement direction vector serves as prior information about the obstacle vehicle, while the current frame obstacle movement direction vector and the first obstacle detection frame feature point coordinates make it possible to associate the obstacle features in the two road images even when the truncated part between the road images is too large. Then, the forward-looking road image and the right forward-looking road image are subjected to recognition processing to generate forward-looking obstacle detection information and right forward-looking obstacle detection information.
Then, an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates are generated based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information. Therefore, a more accurate adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates can be obtained. Finally, obstacle information is generated based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates. Thus, even in the case where the truncated portion of the obstacle region in the two road images is excessively large, complete obstacle information can be generated, and the accuracy of the generated obstacle information is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of some embodiments of an obstacle information generation method according to the present disclosure;
fig. 2 is a schematic structural view of some embodiments of an obstacle information generating device according to the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a," "an," and "a plurality of" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of an obstacle information generation method according to the present disclosure. The obstacle information generation method comprises the following steps:
Step 101, a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera are acquired.
In some embodiments, the execution subject of the obstacle information generation method may acquire the forward-looking road image photographed by the forward-looking camera of the current vehicle and the right forward-looking road image photographed by the right forward-looking camera in a wired manner or a wireless manner. Wherein the acquisition time points of the forward-looking road image and the right forward-looking road image may be the same. Secondly, the same obstacle vehicle exists in the obtained forward-looking road image and the right forward-looking road image.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G connections, Wi-Fi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other now known or later developed wireless connections.
Step 102, determining a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector and first obstacle detection frame feature point coordinates.
In some embodiments, the executing body may determine the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, and the first obstacle detection frame feature point coordinates. Wherein, the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector can be unit vectors. The history frame obstacle movement direction vector may be a movement direction vector of the obstacle vehicle generated at a certain time in the history. The current frame obstacle movement direction vector may be a movement direction vector of the obstacle vehicle at the current time. The first obstacle detection frame feature point coordinates may be lower left corner coordinates of the obstacle head/tail detection frame. Here, the history frame obstacle movement direction vector and the current frame obstacle movement direction vector may be three-dimensional vectors in a preset map coordinate system. The first obstacle detection frame feature point coordinates may be three-dimensional coordinates in a map coordinate system.
In some optional implementations of some embodiments, the determining, by the executing body, the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, and the first obstacle detection frame feature point coordinates may include the following steps:
the first step is to obtain the current frame obstacle course angle and the historical frame road image sequence. Wherein, each road image in the history frame road image sequence may be a road image corresponding to each time point within a period of time (for example, 3 seconds) before the current frame as the termination time.
And secondly, detecting the obstacle direction of each historical frame road image in the historical frame road image sequence to obtain a detection direction vector sequence. And detecting the obstacle direction of each historical frame road image in the historical frame road image sequence through a preset obstacle direction detection algorithm. The detection direction vector may be used to characterize the direction of travel of the obstacle vehicle at a certain moment.
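The per-frame direction detection in the second step can be sketched as a simple map over the timestamped history sequence. This is an illustrative shape only; `detect_direction` stands in for whichever obstacle direction detection algorithm (for example, an OBB-based detector) is actually used:

```python
# Illustrative shape of step 2: run a direction detector over each
# history-frame road image to obtain a timestamped sequence of
# detection direction vectors. detect_direction is a stand-in for the
# patent's preset obstacle direction detection algorithm.
def build_direction_sequence(history_frames, detect_direction):
    """history_frames: list of (timestamp, image) pairs."""
    return [(ts, detect_direction(img)) for ts, img in history_frames]

frames = [(0.0, "img0"), (1.0, "img1")]
seq = build_direction_sequence(frames, lambda img: (1.0, 0.0, 0.0))
```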
As an example, the obstacle direction detection algorithm may include, but is not limited to, at least one of: the OBB (oriented bounding box) algorithm, the G-CRF (Gaussian Conditional Random Field) model, the DenseCRF (fully-connected conditional random field) model, and the like.
And thirdly, selecting a detection direction vector meeting a preset time condition from the detection direction vector sequence as a historical frame obstacle movement direction vector. The preset time condition may be that a time difference between a time corresponding to the detected direction vector in the detected direction vector sequence and the current time is the smallest.
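The selection rule of the third step, taking the detection direction vector whose corresponding time is closest to the current time, can be sketched as follows (the timestamped-tuple representation is an assumption for illustration):

```python
# Pick the detection direction vector whose timestamp minimises the
# time difference to the current time (the preset time condition).
def select_history_direction(detection_sequence, current_time):
    """detection_sequence: list of (timestamp, direction_vector) tuples."""
    timestamp, vector = min(
        detection_sequence,
        key=lambda entry: abs(current_time - entry[0]),
    )
    return vector

seq = [(1.0, (1.0, 0.0, 0.0)), (2.5, (0.9, 0.1, 0.0)), (2.9, (0.8, 0.2, 0.0))]
closest = select_history_direction(seq, 3.0)  # the 2.9 s entry is nearest
```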
And fourth, generating a current frame obstacle movement direction vector by using the current frame obstacle course angle. Wherein, the current frame obstacle movement direction vector may be constructed in the map coordinate system along the current frame obstacle course angle direction, taking the center of the observed obstacle as the starting point. Here, the 2-norm of the current frame obstacle movement direction vector may be equal to 1. In practice, the current frame obstacle movement direction vector may be used to characterize the initial movement direction of the obstacle.
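The construction in the fourth step, a unit vector along the current frame obstacle course angle, might look like the following sketch; the zero z-component (flat ground) is an illustrative simplification, not stated by the text:

```python
import math

# Build a unit direction vector (2-norm equal to 1) in the map frame
# from a heading (course) angle in radians; z = 0 assumes flat ground.
def direction_from_heading(heading_rad):
    v = (math.cos(heading_rad), math.sin(heading_rad), 0.0)
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)

v = direction_from_heading(math.pi / 2)  # heading due "left" in map axes
```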
And fifthly, performing obstacle detection on the forward-looking road image to obtain detected obstacle feature point coordinates. Wherein, the forward-looking road image may be subjected to obstacle detection through a preset obstacle detection algorithm to obtain a detected obstacle head/tail detection frame. Then, the lower-left corner point coordinates of the detected obstacle head/tail detection frame may be determined as the detected obstacle feature point coordinates. Here, the detected obstacle feature point coordinates may be two-dimensional coordinates in the image coordinate system of the forward-looking road image (i.e., the forward-looking road image coordinate system).
As an example, the obstacle detection algorithm described above may include, but is not limited to, at least one of: the MRF (Markov Random Field) model, the SPP (Spatial Pyramid Pooling) model, the FCN (Fully Convolutional Network) model, and the like.
And sixthly, projecting the detected obstacle characteristic point coordinates to a map coordinate system of the current vehicle to generate first obstacle detection frame characteristic point coordinates. The detected obstacle characteristic point coordinates can be converted into a map coordinate system from a forward-looking road image coordinate system in a coordinate conversion mode. Here, the first obstacle detection frame feature point coordinates may be three-dimensional coordinates in a map coordinate system. In practice, the first obstacle detection box feature point coordinates may also be used to characterize the initial position of the lower left corner of the obstacle detection box.
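The coordinate conversion of the sixth step, lifting a 2-D image point into the 3-D map frame, is not specified by the text; one common sketch back-projects the pixel ray onto a ground plane. The intrinsics `K`, the 4x4 camera-to-map transform `T_map_cam`, and the flat-ground assumption are all stand-ins from calibration, not the patent's method:

```python
import numpy as np

# Back-project a pixel (e.g. the lower-left corner of the head/tail
# detection box) onto the ground plane, then express it in the map
# frame. Assumes calibrated intrinsics K and extrinsics T_map_cam.
def pixel_to_map(uv, K, T_map_cam, ground_z=0.0):
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    R, t = T_map_cam[:3, :3], T_map_cam[:3, 3]
    ray_map = R @ ray_cam
    # Intersect the ray from the camera centre with the plane z = ground_z.
    s = (ground_z - t[2]) / ray_map[2]
    return t + s * ray_map

# Toy case: camera 2 m above the origin, optical axis pointing straight
# down (R = diag(1, -1, -1) is a proper rotation); the principal pixel
# back-projects to the map origin.
T = np.eye(4)
T[:3, :3] = np.diag([1.0, -1.0, -1.0])
T[2, 3] = 2.0
p_map = pixel_to_map((0.0, 0.0), np.eye(3), T)
```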
And 103, performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information.
In some embodiments, the execution body may perform recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information. The forward-looking obstacle detection information may characterize obstacle information detected from the forward-looking road image. The right forward looking obstacle detection information may characterize obstacle information detected from the right forward looking road image. Here, the forward-looking obstacle detection information and the right forward-looking obstacle detection information may correspond to the same obstacle.
In some optional implementations of some embodiments, the executing body performs recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information, and may include the steps of:
first, obstacle detection is performed on the forward-looking road image to generate forward-looking obstacle detection information. The forward-looking obstacle detection information may include a forward-looking obstacle wheel grounding point coordinate. The forward looking obstacle wheel ground point coordinates may characterize the position coordinates of the contact point of the outside of the obstacle wheel with the ground in the detected forward looking road image coordinate system. Here, the obstacle detection may be performed by the obstacle detection algorithm described above.
And secondly, performing obstacle detection on the right forward-looking road image to generate right forward-looking obstacle detection information. Wherein, the right forward-looking obstacle detection information may include right forward-looking obstacle wheel grounding point coordinates, an obstacle detection frame, and second obstacle detection frame feature point coordinates corresponding to the lower-left corner position of the obstacle detection frame. The right forward-looking obstacle wheel grounding point coordinates may characterize the position coordinates, in the right forward-looking road image coordinate system, of the detected contact point between the outside of the obstacle wheel and the ground. The obstacle detection frame may be a detection frame of the obstacle head/tail in the image coordinate system of the right forward-looking road image. Here, the obstacle detection may be performed by the obstacle detection algorithm described above.
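The two detection outputs can be pictured as simple records; the field names are illustrative, and the wheel grounding-point fields are optional because later steps branch on whether they were detected (non-null) in each image:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative containers for the recognition outputs of step 103.
@dataclass
class ForwardViewDetection:
    wheel_ground_point: Optional[Tuple[float, float]]  # image coords, may be absent

@dataclass
class RightForwardViewDetection:
    wheel_ground_point: Optional[Tuple[float, float]]
    detection_box: Tuple[float, float, float, float]   # (u_min, v_min, u_max, v_max)
    box_feature_point: Tuple[float, float]             # lower-left corner of the box

det = RightForwardViewDetection(None, (10.0, 20.0, 50.0, 60.0), (10.0, 60.0))
```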
And 104, generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward looking obstacle detection information and the right forward looking obstacle detection information.
In some embodiments, the execution body may generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information.
In some optional implementations of some embodiments, the executing body may generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information, and may include:
first, determining a time difference between the time points corresponding to the obstacle moving direction vector of the history frame and the obstacle moving direction vector of the current frame, and obtaining an obstacle detection time difference. Wherein the obstacle detection time difference value may be determined by a time difference between the time points corresponding to the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector.
And secondly, projecting the first obstacle detection frame characteristic point coordinates to an image coordinate system of the right forward-looking road image to obtain projected right forward-looking characteristic point coordinates. The first obstacle detection frame feature point coordinates can be converted from a map coordinate system to an image coordinate system of the right front view road image in a coordinate conversion mode, and the projected right front view feature point coordinates are obtained.
And thirdly, constructing a first constraint equation and a second constraint equation based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the obstacle detection time difference value, the projected right front view characteristic point coordinates, the second obstacle detection frame characteristic point coordinates and a preset obstacle vehicle angular speed threshold value. Wherein the first constraint equation may be used to constrain a direction difference between the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector to be within a certain range. The second constraint equation may be used to characterize a distance value constraint between the projected right forward looking feature point coordinates and the second obstacle detection box feature point coordinates.
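The first two constraints can be sketched as follows: the first bounds the angle between the historical and adjusted direction vectors by the maximum yaw the obstacle could accumulate over the detection time difference (angular velocity threshold times time difference), and the second measures the pixel distance between the projected feature point and the second detection frame feature point. Function and variable names are illustrative:

```python
import numpy as np

# First constraint: the direction difference (via arccos of the
# normalised dot product) must not exceed omega_max * dt.
def first_constraint_ok(v_hist, v_adj, omega_max, dt):
    cos_angle = float(np.dot(v_hist, v_adj) /
                      (np.linalg.norm(v_hist) * np.linalg.norm(v_adj)))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return angle <= omega_max * dt

# Second constraint: distance between the projected right forward-looking
# feature point and the second detection frame feature point.
def second_constraint_residual(projected_uv, box_feature_uv):
    return float(np.linalg.norm(np.asarray(projected_uv) - np.asarray(box_feature_uv)))
```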
And a fourth step of constructing a third constraint equation based on the current frame obstacle movement direction vector, the right forward looking obstacle wheel grounding point coordinates, and the first obstacle detection frame feature point coordinates in response to determining that the right forward looking obstacle detection information includes that the right forward looking obstacle wheel grounding point coordinates are not null. The right front view obstacle detection information includes that the coordinates of the wheel grounding point of the right front view obstacle are not null, and the characterization can detect the coordinates of the wheel grounding point of the right front view obstacle from the right front view road image. Thus, the third constraint equation may be constructed after the right forward looking obstacle wheel ground point coordinates are detected. Here, the third constraint equation may be used to characterize a distance constraint between the right forward looking obstacle wheel ground point coordinates and the first target straight line.
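A point-to-line distance of the kind the third constraint describes is conveniently written with cross products in homogeneous image coordinates (the text later notes that the multiplication in this constraint is a cross product). This sketch uses the standard projective-geometry identities, not the patent's exact formula:

```python
import numpy as np

# In homogeneous image coordinates, the line through two points is their
# cross product, and a point p lies on line l exactly when p . l = 0.
def line_through(p1, p2):
    return np.cross(np.array([*p1, 1.0]), np.array([*p2, 1.0]))

# Euclidean distance from an image point to the line a*u + b*v + c = 0.
def point_line_distance(p, line):
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

l = line_through((0.0, 0.0), (1.0, 0.0))   # the u-axis
d = point_line_distance((0.5, 2.0), l)     # perpendicular distance 2.0
```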
And fifthly, adjusting the first obstacle detection frame characteristic point coordinate and the current frame obstacle movement direction vector based on the first constraint equation, the second constraint equation and the third constraint equation to generate an adjusted obstacle movement direction vector and an adjusted detection frame characteristic point coordinate. The adjusted obstacle moving direction vector and the adjusted detection frame feature point coordinates can be in a preset map coordinate system. The adjustment processing can be performed on the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector by the following formula:
wherein,representing coordinates. />Representing a map coordinate system. />Representing the right forward looking road image coordinate system. />And representing the coordinates of the characteristic points of the first obstacle detection frame in the map coordinate system. />And the coordinates of the feature points of the adjusted detection frame obtained by adjusting the coordinates of the feature points of the first obstacle detection frame are shown. />Representing the vector. />Representing the current frame obstacle movement direction vector in the map coordinate system. />And an adjusted obstacle movement direction vector obtained by adjusting the current frame obstacle movement direction vector. / >Representing a minimum objective function. />Representing a first constraint equation. />Representing a second constraint equation. />Representing a third constraint equation. />Representing an inverse cosine trigonometric function. />Representing the transpose of the matrix. />Representing historical frame obstacle movement direction vectors in a map coordinate system. />Representing a 2-norm.And a direction difference between the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector is represented. />Representing the first target straight line. />Representing a preset obstacle vehicle angular velocity threshold. />Representing the above-mentioned obstacle detection time difference. />And representing the coordinates of the characteristic points of the second obstacle detection frame in the right front view road image coordinate system. />Representing the right forward looking obstacle wheel ground point coordinates in the right forward looking road image coordinate system. />Representing a preset projection function for converting coordinates to a right forward looking road image coordinate system. />Representing constraints.
Here, the multiplication sign in the third constraint equation denotes the cross product.
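Although the original formula symbols are rendered only descriptively above, the adjustment it describes, minimizing an objective over the feature point and the unit movement direction vector subject to the three constraints, can be sketched as a small constrained optimization. All names below are illustrative: the projection is reduced to a simple orthographic drop of the z-coordinate, and `scipy.optimize.minimize` stands in for whatever solver an actual implementation would use.

```python
import numpy as np
from scipy.optimize import minimize

def cross2(a, b):
    # scalar cross product of two 2-D vectors
    return a[0] * b[1] - a[1] * b[0]

def adjust(p0, d0, d_hist, dt, omega_max, q_img, wheel_img, project):
    """Refine the detection-frame feature point p0 and the unit movement
    direction vector d0 (both in the map frame) under three constraints.
    q_img / wheel_img are 2-D image-frame observations; project maps a
    map-frame point into the image frame (hypothetical signature)."""
    def unpack(x):
        p, d = x[:3], x[3:]
        return p, d / np.linalg.norm(d)   # keep the direction a unit vector

    def objective(x):
        p, d = unpack(x)
        # stay close to the current-frame estimates
        return np.sum((p - p0) ** 2) + np.sum((d - d0) ** 2)

    def first(x):    # direction change bounded by omega_max * dt
        _, d = unpack(x)
        return omega_max * dt - np.arccos(np.clip(d @ d_hist, -1.0, 1.0))

    def second(x):   # projected feature point must match the 2-D detection
        p, _ = unpack(x)
        return project(p) - q_img

    def third(x):    # wheel ground point lies on the projected motion line
        p, d = unpack(x)
        a, b = project(p), project(p + d)
        return cross2(b - a, wheel_img - a)

    cons = [{"type": "ineq", "fun": first},
            {"type": "eq", "fun": second},
            {"type": "eq", "fun": third}]
    res = minimize(objective, np.concatenate([p0, d0]), constraints=cons)
    return unpack(res.x)
```

When the initial estimates already satisfy all three constraints, the solver leaves them essentially unchanged, which matches the intent of the adjustment step.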
Optionally, the step in which the execution body generates the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information and the right forward-looking obstacle detection information may further include the following steps:
And a first step of constructing a fourth constraint equation based on the forward-looking obstacle wheel grounding point coordinates, the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector, in response to determining that the forward-looking obstacle wheel grounding point coordinates included in the forward-looking obstacle detection information are not null. If these coordinates are not null, the forward-looking obstacle wheel grounding point was successfully detected from the forward-looking road image, and the detected coordinates can therefore be used to construct the fourth constraint equation. Here, the fourth constraint equation may be used to characterize a distance constraint between the forward-looking obstacle wheel grounding point coordinates and the second target straight line.
And a second step of adjusting the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector based on the first constraint equation, the second constraint equation, the third constraint equation and the fourth constraint equation, to generate an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates. The first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector can be adjusted by the following formula:
wherein the symbols in the above formula respectively denote: the fourth constraint equation; the forward looking road image coordinate system; the forward-looking obstacle wheel grounding point coordinates in the forward looking road image coordinate system; and a preset projection function for converting coordinates into the forward looking road image coordinate system.
Here, the multiplication signs in the third constraint equation and the fourth constraint equation denote the cross product.
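The case analysis above, in which the third constraint is built only when the right forward-looking wheel grounding point is detected and the fourth only when the forward-looking one is, can be sketched as a small helper. `None` models a missed detection, and the returned names are purely illustrative labels, not identifiers from the original method.

```python
def select_constraints(front_wheel_pt, right_wheel_pt):
    """Decide which constraints can be built for this frame, given that
    either wheel grounding point may be missing from the detections."""
    names = ["first", "second"]          # always constructible
    if right_wheel_pt is not None:
        names.append("third")            # right forward-looking point seen
    if front_wheel_pt is not None:
        names.append("fourth")           # forward-looking point seen
    return names
```

This mirrors the document's branching: the optimization always uses the first two constraints, and adds the third and fourth only when the corresponding observation exists.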
Step 105, generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
In some embodiments, the execution body may generate the obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
In some optional implementations of some embodiments, the executing body may generate the obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates, and may include:
and a first step of generating first obstacle frame intersection point coordinates based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates. The forward-looking obstacle detection information may further include a forward-looking obstacle detection frame. Here, the forward-looking obstacle detection frame may be a two-dimensional detection frame of the obstacle vehicle detected in the forward-looking road image coordinate system. Next, the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates are projected from the map coordinate system into the forward-looking road image coordinate system, to obtain a projected movement vector and projected detection frame feature point coordinates. Then, in the forward-looking road image coordinate system, an extension line is drawn from the projected detection frame feature point coordinates along the direction of the projected movement vector, and the coordinates of the intersection point between the extension line and the left edge line of the forward-looking obstacle detection frame are determined as the first obstacle frame intersection point coordinates.
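For a vertical left edge, the extension-line intersection described in this step reduces to intersecting a ray with the line x = box_left_x. A minimal sketch with hypothetical names; the degenerate case of a ray parallel to the edge returns None:

```python
def extend_to_left_edge(start, direction, box_left_x):
    """Intersect the ray start + t * direction with the vertical line
    x = box_left_x (the left edge of the 2-D detection frame)."""
    sx, sy = start
    dx, dy = direction
    if abs(dx) < 1e-9:
        return None  # ray parallel to the edge: no unique intersection
    t = (box_left_x - sx) / dx
    return (box_left_x, sy + t * dy)
```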
And secondly, performing back projection processing on the line equation connecting the adjusted detection frame feature point coordinates and the first obstacle frame intersection point coordinates, to generate a back-projected line segment equation. First, a ground manifold equation preset in the map coordinate system may be acquired. Here, the ground manifold equation may be used to characterize the ground within a range around the current vehicle. Next, the line equation of the line (i.e., the above extension line) between the projected detection frame feature point coordinates corresponding to the adjusted detection frame feature point coordinates and the first obstacle frame intersection point coordinates may be determined. Finally, the line equation is back-projected onto the curved surface where the ground manifold equation lies by using a back-projection transformation algorithm, to obtain the back-projected line segment equation.
And thirdly, back-projecting the first obstacle frame intersection point coordinates into the map coordinate system to obtain back-projected frame intersection point coordinates. The first obstacle frame intersection point coordinates can be back-projected from the forward-looking road image coordinate system into the map coordinate system through a coordinate conversion algorithm, to obtain the back-projected frame intersection point coordinates.
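Under the simplifying assumption that the ground manifold is locally the plane z = ground_z in the map frame, back-projecting an image point amounts to intersecting the camera ray with that plane. The pinhole-camera sketch below is only an illustration of this idea, not the document's actual coordinate conversion algorithm; K, R and t are assumed intrinsics and camera-to-map pose.

```python
import numpy as np

def backproject_to_ground(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) onto the plane z = ground_z in the map
    frame, assuming a pinhole camera with intrinsics K and pose (R, t)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_map = R @ ray_cam            # ray direction in map coordinates
    origin = np.asarray(t, float)    # camera centre in map coordinates
    s = (ground_z - origin[2]) / ray_map[2]
    return origin + s * ray_map
```

For example, a camera two units above the ground looking straight down maps image points back to z = 0 by scaling the ray until it reaches the plane.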
And fourthly, generating second obstacle frame intersection point coordinates based on the back-projected frame intersection point coordinates and the back-projected line segment equation. First, a perpendicular line equation of the back-projected line segment equation may be determined at the back-projected frame intersection point coordinates. Next, the perpendicular line equation may be projected into the right forward-looking road image coordinate system to obtain a projected perpendicular line equation. Finally, the coordinates of the intersection point between the projected perpendicular line equation and the obstacle detection frame are determined as the second obstacle frame intersection point coordinates. Here, the second obstacle frame intersection point coordinates may characterize the lower right corner coordinates of the obstacle detection frame.
And fifthly, generating the obstacle information based on the first obstacle frame intersection point coordinates and the second obstacle frame intersection point coordinates. First, the second obstacle frame intersection point coordinates may be back-projected onto the curved surface where the ground manifold equation lies, to obtain back-projected frame vertex coordinates. Secondly, since the bottom surface of the three-dimensional circumscribed frame of the vehicle is rectangular, the adjusted detection frame feature point coordinates correspond to the lower left corner of the bottom surface, the second obstacle frame intersection point coordinates correspond to the lower right corner of the bottom surface, and the first obstacle frame intersection point coordinates correspond to the upper left corner of the bottom surface. Therefore, the position coordinates of the upper right corner of the bottom surface of the three-dimensional circumscribed frame can be determined according to a preset rectangular prior frame. Here, the coordinates of each vertex of the three-dimensional circumscribed frame can also be determined through the geometric positional relationship between the bottom vertex coordinates and the top vertex coordinates of the three-dimensional circumscribed frame. Finally, the forward-looking obstacle detection information, the right forward-looking obstacle detection information and the coordinates of each vertex of the three-dimensional circumscribed frame may be determined as the obstacle information.
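Because the bottom face is rectangular, the fourth corner follows from the other three by vector addition. A minimal sketch of recovering the upper right corner from the lower left, lower right and upper left corners (names are illustrative):

```python
import numpy as np

def complete_rectangle(lower_left, lower_right, upper_left):
    """Return the fourth (upper right) corner of a rectangle whose other
    three corners are given, using upper_right = lower_right + (upper_left - lower_left)."""
    ll = np.asarray(lower_left, float)
    lr = np.asarray(lower_right, float)
    ul = np.asarray(upper_left, float)
    return lr + (ul - ll)
```

The top vertices of the three-dimensional circumscribed frame would then be obtained by offsetting each bottom vertex by the obstacle height, as the geometric relationship in the text describes.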
The above two formulas and their related contents are an invention point of the embodiments of the present disclosure, and solve the technical problem mentioned in the background art, namely: "because two adjacent vehicle-mounted cameras do not share a field of view, the obstacle region in the two captured road images may be truncated in the images; if the truncated portion of the obstacle region in the two road images is too large, it is difficult to generate complete obstacle information even if the obstacle information extracted from the two road images is combined, thereby reducing the accuracy of the generated obstacle information." If this truncation factor is addressed, the accuracy of the generated obstacle information can be improved. To achieve this, first, the forward-looking obstacle wheel grounding point coordinates, the right forward-looking obstacle wheel grounding point coordinates, the obstacle detection frame and the second obstacle detection frame feature point coordinates corresponding to the lower left corner of the obstacle detection frame are generated as basic parameters for generating the vertex coordinates of the three-dimensional circumscribed obstacle frame. Secondly, the introduced historical frame obstacle movement direction vector can serve in the formula as prior data on the obstacle movement direction, so as to facilitate solving the objective.
Meanwhile, an obstacle detection time difference value is generated, which can be used in the first constraint equation to bound the error introduced between the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector. In addition, considering that the right forward-looking obstacle wheel grounding point coordinates or the forward-looking obstacle wheel grounding point coordinates may not be recognized, different constraint equations can be constructed for the different cases. For example, if the forward-looking obstacle wheel grounding point coordinates are not detected but the right forward-looking obstacle wheel grounding point coordinates are detected, a third constraint equation can be constructed to constrain the distance between the right forward-looking obstacle wheel grounding point coordinates and the first target straight line. Thus, even if the truncated portions of the obstacle region in the two road images are too large, the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector can be adjusted by using the constraint equations to generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates, improving their accuracy. Accordingly, the vertex coordinates of the three-dimensional circumscribed frame of the obstacle vehicle can be determined through geometric relationships. Further, the accuracy of the generated obstacle information can be improved.
Alternatively, the executing body may further send the obstacle information to a display terminal of the current vehicle for display.
The above embodiments of the present disclosure have the following beneficial effects: the obstacle information generation method of some embodiments of the present disclosure can improve the accuracy of the generated obstacle information. Specifically, the accuracy of the generated obstacle information decreases because two adjacent vehicle-mounted cameras do not share a field of view, so the obstacle region in the two captured road images may be truncated; if the truncated portion of the obstacle region in the two road images is too large, it is difficult to generate complete obstacle information even if the obstacle information extracted from the two road images is combined. Based on this, the obstacle information generation method of some embodiments of the present disclosure first acquires a forward-looking road image captured by a forward-looking camera of the current vehicle and a right forward-looking road image captured by a right forward-looking camera. Then, a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector and first obstacle detection frame feature point coordinates are determined. The introduced historical frame obstacle movement direction vector can serve as prior information on the obstacle vehicle, while the introduced current frame obstacle movement direction vector and first obstacle detection frame feature point coordinates can be used to associate the obstacle features in the two road images when the truncated portion between the road images is too large. Then, the forward-looking road image and the right forward-looking road image are subjected to recognition processing to generate forward-looking obstacle detection information and right forward-looking obstacle detection information.
Next, an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates are generated based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information and the right forward-looking obstacle detection information. In this way, a more accurate adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates can be obtained. Finally, obstacle information is generated based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates. Thus, even when the truncated portion of the obstacle region in the two road images is too large, complete obstacle information can be generated, improving the accuracy of the generated obstacle information.
With further reference to fig. 2, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an obstacle information generating device, which correspond to those method embodiments shown in fig. 1, and which are particularly applicable in various electronic apparatuses.
As shown in fig. 2, the obstacle information generating apparatus 200 of some embodiments includes: an acquisition unit 201, a determination unit 202, an identification processing unit 203, a first generation unit 204, and a second generation unit 205. Wherein the acquiring unit 201 is configured to acquire a forward-looking road image captured by a forward-looking camera of the current vehicle and a right forward-looking road image captured by a right forward-looking camera; a determining unit 202 configured to determine a history frame obstacle movement direction vector, a current frame obstacle movement direction vector, and first obstacle detection frame feature point coordinates, wherein the history frame obstacle movement direction vector and the current frame obstacle movement direction vector are unit vectors; an identification processing unit 203 configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; a first generation unit 204 configured to generate an adjusted obstacle movement direction vector and an adjusted detection frame feature point coordinate based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information; the second generation unit 205 is configured to generate obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
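The five-unit apparatus can be sketched as a thin pipeline in which each unit is an injected callable. This is only an illustration of the data flow between units 201-205; all names are hypothetical, not the actual implementation.

```python
class ObstacleInfoPipeline:
    """Minimal sketch of the five-unit apparatus; each stage is a
    callable supplied by the caller (all names are illustrative)."""

    def __init__(self, acquire, determine, recognize, adjust, generate):
        self.acquire = acquire      # unit 201: fetch the two road images
        self.determine = determine  # unit 202: direction vectors + feature point
        self.recognize = recognize  # unit 203: per-image obstacle detection
        self.adjust = adjust        # unit 204: constrained adjustment
        self.generate = generate    # unit 205: final obstacle information

    def run(self):
        front_img, right_img = self.acquire()
        hist_dir, cur_dir, feat_pt = self.determine()
        front_det, right_det = self.recognize(front_img, right_img)
        adj_dir, adj_pt = self.adjust(hist_dir, cur_dir, front_det, right_det)
        return self.generate(adj_dir, adj_pt)
```

Wiring the stages with stubs shows the intended order of operations without committing to any particular detector or solver.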
It will be appreciated that the elements described in the apparatus 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations, features and resulting benefits described above for the method are equally applicable to the apparatus 200 and the units contained therein, and are not described in detail herein.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means 301 (e.g., a central processing unit, a graphics processor, etc.) that may perform various suitable actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 308 including, for example, magnetic tape, hard disk, etc.; and communication means 309. The communication means 309 may allow the electronic device 300 to communicate with other devices wirelessly or by wire to exchange data. While fig. 3 shows an electronic device 300 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 3 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 309, or from storage device 308, or from ROM 302. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 301.
It should be noted that, in some embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera; determining a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector and a first obstacle detection frame feature point coordinate, wherein the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector are unit vectors; performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information; generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the history frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward looking obstacle detection information, and the right forward looking obstacle detection information; and generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame characteristic point coordinates.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, a determination unit, an identification processing unit, a first generation unit, and a second generation unit. The names of these units do not constitute limitations on the unit itself in some cases, and the acquisition unit may also be described as "a unit that acquires a forward-looking road image taken by a forward-looking camera of the current vehicle and a right forward-looking road image taken by a right forward-looking camera", for example.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and a description of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the invention, for example, technical solutions in which the above features are interchanged with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. An obstacle information generation method, comprising:
acquiring a forward-looking road image shot by a forward-looking camera of a current vehicle and a right forward-looking road image shot by a right forward-looking camera;
determining a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector and first obstacle detection frame feature point coordinates, wherein the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector are unit vectors;
performing recognition processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information;
generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward looking obstacle detection information and the right forward looking obstacle detection information;
generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates;
the identifying the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information includes:
Performing obstacle detection on the forward-looking road image to generate forward-looking obstacle detection information, wherein the forward-looking obstacle detection information comprises forward-looking obstacle wheel grounding point coordinates;
performing obstacle detection on the right forward-looking road image to generate right forward-looking obstacle detection information, wherein the right forward-looking obstacle detection information comprises right forward-looking obstacle wheel ground point coordinates, an obstacle detection frame, and second obstacle detection frame feature point coordinates corresponding to the lower-left corner of the obstacle detection frame;
wherein the generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information comprises:
determining the time difference between the time points corresponding to the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector to obtain an obstacle detection time difference value;
projecting the first obstacle detection frame feature point coordinates into the image coordinate system of the right forward-looking road image to obtain projected right forward-looking feature point coordinates;
constructing a first constraint equation and a second constraint equation based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the obstacle detection time difference value, the projected right forward-looking feature point coordinates, the second obstacle detection frame feature point coordinates, and a preset obstacle vehicle angular velocity threshold, wherein the first constraint equation constrains the direction difference between the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector to lie within a certain range, and the second constraint equation characterizes a distance constraint between the projected right forward-looking feature point coordinates and the second obstacle detection frame feature point coordinates;
in response to determining that the right forward-looking obstacle wheel ground point coordinates included in the right forward-looking obstacle detection information are not empty, constructing a third constraint equation based on the current frame obstacle movement direction vector, the right forward-looking obstacle wheel ground point coordinates, and the first obstacle detection frame feature point coordinates, wherein the third constraint equation characterizes a distance constraint between the right forward-looking obstacle wheel ground point coordinates and a first target straight line;
and adjusting the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector based on the first constraint equation, the second constraint equation, and the third constraint equation to generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
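By way of illustration only (this sketch is not part of the claims; all function names and numeric values below are hypothetical), the three constraints recited above can be written as residual functions: a bound on the heading change derived from the angular velocity threshold and the detection time difference, a point-to-point distance between the projected and detected feature points in the right forward-looking image, and a point-to-line distance between the wheel ground point and the first target straight line.

```python
import math

def angle_between(u, v):
    # angle between two unit vectors; clamp the dot product for numerical safety
    dot = max(-1.0, min(1.0, u[0] * v[0] + u[1] * v[1]))
    return math.acos(dot)

def point_line_distance(p, anchor, direction):
    # distance from point p to the line through `anchor` along unit vector
    # `direction` (magnitude of the 2D cross product of the offset with the direction)
    dx, dy = p[0] - anchor[0], p[1] - anchor[1]
    return abs(dx * direction[1] - dy * direction[0])

def residuals(d_hist, d_cur, dt, omega_max, p_proj, p_det, p_wheel, p_frame):
    # first constraint: heading change bounded by angular-velocity threshold * time difference
    r1 = max(0.0, angle_between(d_hist, d_cur) - omega_max * dt)
    # second constraint: projected feature point should coincide with the detected
    # lower-left corner point of the right forward-looking detection frame
    r2 = math.hypot(p_proj[0] - p_det[0], p_proj[1] - p_det[1])
    # third constraint: wheel ground point should lie on the line through the
    # first detection-frame feature point along the current movement direction
    r3 = point_line_distance(p_wheel, p_frame, d_cur)
    return (r1, r2, r3)

# hypothetical values: heading changed by 0.02 rad over a 0.1 s time difference
r1, r2, r3 = residuals((1.0, 0.0), (math.cos(0.02), math.sin(0.02)),
                       0.1, 0.5, (100.0, 200.0), (103.0, 204.0),
                       (2.0, 1.0), (0.0, 0.0))
```

An adjustment step of the kind recited in the claim would then reduce these residuals jointly, for example by nonlinear least squares over the feature point coordinates and the movement direction.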
2. The method of claim 1, wherein the method further comprises:
and sending the obstacle information to a display terminal of the current vehicle for display.
3. The method of claim 1, wherein the determining the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, and the first obstacle detection frame feature point coordinates comprises:
acquiring a current frame obstacle course angle and a historical frame road image sequence;
performing obstacle direction detection on each historical frame road image in the historical frame road image sequence to obtain a detection direction vector sequence;
selecting, from the detection direction vector sequence, a detection direction vector that meets a preset time condition as the historical frame obstacle movement direction vector;
generating the current frame obstacle movement direction vector by using the current frame obstacle course angle;
performing obstacle detection on the forward-looking road image to obtain detected obstacle feature point coordinates;
and projecting the detected obstacle feature point coordinates into a preset map coordinate system to generate the first obstacle detection frame feature point coordinates.
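As a purely illustrative sketch (the claim does not specify these computations, and the time condition below is a hypothetical recency rule), the current frame direction vector can be obtained from the course angle, and the historical frame vector selected from a timestamped detection direction vector sequence:

```python
import math

def heading_to_direction(course_angle):
    # the claims require the movement direction vectors to be unit vectors,
    # so a course angle (in radians) maps directly to (cos, sin)
    return (math.cos(course_angle), math.sin(course_angle))

def pick_history_vector(timestamped_vectors, t_now, max_age):
    # hypothetical preset time condition: the most recent detection direction
    # vector whose age does not exceed max_age seconds
    candidates = [(t, v) for t, v in timestamped_vectors if t_now - t <= max_age]
    if not candidates:
        return None
    return max(candidates, key=lambda tv: tv[0])[1]
```

For example, a course angle of 0 rad yields the unit vector (1.0, 0.0), and the selection helper returns None when no stored vector is recent enough.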
4. The method of claim 1, wherein the generating the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information further comprises:
in response to determining that the forward-looking obstacle wheel ground point coordinates included in the forward-looking obstacle detection information are not empty, constructing a fourth constraint equation based on the forward-looking obstacle wheel ground point coordinates, the first obstacle detection frame feature point coordinates, and the current frame obstacle movement direction vector, wherein the fourth constraint equation characterizes a distance constraint between the forward-looking obstacle wheel ground point coordinates and a second target straight line;
and adjusting the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector based on the first constraint equation, the second constraint equation, the third constraint equation, and the fourth constraint equation to generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
5. The method of claim 1, wherein the generating obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates comprises:
generating first obstacle frame intersection point coordinates based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates;
performing back-projection processing on the equation of the line connecting the adjusted detection frame feature point coordinates and the first obstacle frame intersection point coordinates to generate a back-projected line segment equation;
back-projecting the first obstacle frame corner point coordinates into the map coordinate system to obtain back-projected frame intersection point coordinates;
generating second obstacle frame intersection point coordinates based on the back-projected frame intersection point coordinates and the back-projected line segment equation;
and generating the obstacle information based on the first obstacle frame intersection point coordinates and the second obstacle frame intersection point coordinates.
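For illustration only (not part of the claims): the frame intersection point coordinates recited above ultimately come from intersecting straight lines in the map plane, e.g. the line through the adjusted feature point along the adjusted movement direction with another frame edge line. A minimal 2D line intersection helper (a hypothetical sketch, not the patent's implementation) might look like:

```python
def line_intersection(p1, d1, p2, d2):
    # solve p1 + t * d1 == p2 + s * d2 for t using the 2D cross product
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel (or nearly parallel) lines: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

For example, intersecting the x-axis (point (0, 0), direction (1, 0)) with the vertical line x = 2 (point (2, -1), direction (0, 1)) yields (2.0, 0.0).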
6. An obstacle information generating device comprising:
an acquisition unit configured to acquire a forward-looking road image captured by a forward-looking camera of a current vehicle and a right forward-looking road image captured by a right forward-looking camera;
a determining unit configured to determine a historical frame obstacle movement direction vector, a current frame obstacle movement direction vector, and first obstacle detection frame feature point coordinates, wherein the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector are both unit vectors;
an identification processing unit configured to perform identification processing on the forward-looking road image and the right forward-looking road image to generate forward-looking obstacle detection information and right forward-looking obstacle detection information;
a first generation unit configured to generate an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information;
a second generation unit configured to generate obstacle information based on the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates;
wherein the performing identification processing on the forward-looking road image and the right forward-looking road image to generate the forward-looking obstacle detection information and the right forward-looking obstacle detection information comprises:
performing obstacle detection on the forward-looking road image to generate the forward-looking obstacle detection information, wherein the forward-looking obstacle detection information comprises forward-looking obstacle wheel ground point coordinates;
performing obstacle detection on the right forward-looking road image to generate the right forward-looking obstacle detection information, wherein the right forward-looking obstacle detection information comprises right forward-looking obstacle wheel ground point coordinates, an obstacle detection frame, and second obstacle detection frame feature point coordinates corresponding to the lower-left corner of the obstacle detection frame;
wherein the generating an adjusted obstacle movement direction vector and adjusted detection frame feature point coordinates based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the forward-looking obstacle detection information, and the right forward-looking obstacle detection information comprises:
determining the time difference between the time points corresponding to the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector to obtain an obstacle detection time difference value;
projecting the first obstacle detection frame feature point coordinates into the image coordinate system of the right forward-looking road image to obtain projected right forward-looking feature point coordinates;
constructing a first constraint equation and a second constraint equation based on the historical frame obstacle movement direction vector, the current frame obstacle movement direction vector, the obstacle detection time difference value, the projected right forward-looking feature point coordinates, the second obstacle detection frame feature point coordinates, and a preset obstacle vehicle angular velocity threshold, wherein the first constraint equation constrains the direction difference between the historical frame obstacle movement direction vector and the current frame obstacle movement direction vector to lie within a certain range, and the second constraint equation characterizes a distance constraint between the projected right forward-looking feature point coordinates and the second obstacle detection frame feature point coordinates;
in response to determining that the right forward-looking obstacle wheel ground point coordinates included in the right forward-looking obstacle detection information are not empty, constructing a third constraint equation based on the current frame obstacle movement direction vector, the right forward-looking obstacle wheel ground point coordinates, and the first obstacle detection frame feature point coordinates, wherein the third constraint equation characterizes a distance constraint between the right forward-looking obstacle wheel ground point coordinates and a first target straight line;
and adjusting the first obstacle detection frame feature point coordinates and the current frame obstacle movement direction vector based on the first constraint equation, the second constraint equation, and the third constraint equation to generate the adjusted obstacle movement direction vector and the adjusted detection frame feature point coordinates.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-5.
CN202310395895.2A 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium Active CN116563817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310395895.2A CN116563817B (en) 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium


Publications (2)

Publication Number Publication Date
CN116563817A (en) 2023-08-08
CN116563817B (en) 2024-02-20

Family

ID=87488855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310395895.2A Active CN116563817B (en) 2023-04-14 2023-04-14 Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium

Country Status (1)

Country Link
CN (1) CN116563817B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08124080A (en) * 1994-10-20 1996-05-17 Honda Motor Co Ltd Obstacle detector of vehicle
JP2004309273A (en) * 2003-04-04 2004-11-04 Sumitomo Electric Ind Ltd Distance detector and obstacle supervising apparatus for vehicle
JP2007069806A (en) * 2005-09-08 2007-03-22 Clarion Co Ltd Obstacle detecting device for vehicle
JP2018097776A (en) * 2016-12-16 2018-06-21 株式会社デンソーテン Obstacle detection device and obstacle detection method
WO2019144286A1 (en) * 2018-01-23 2019-08-01 深圳市大疆创新科技有限公司 Obstacle detection method, mobile platform, and computer readable storage medium
WO2020103427A1 (en) * 2018-11-23 2020-05-28 华为技术有限公司 Object detection method, related device and computer storage medium
WO2020206708A1 (en) * 2019-04-09 2020-10-15 广州文远知行科技有限公司 Obstacle recognition method and apparatus, computer device, and storage medium
KR20200123513A (en) * 2019-04-19 2020-10-30 주식회사 아이유플러스 Method And Apparatus for Displaying 3D Obstacle by Combining Radar And Video
KR20210040312A (en) * 2020-05-29 2021-04-13 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Obstacle detection method and device, apparatus, and storage medium
KR20210042274A (en) * 2020-05-20 2021-04-19 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for detecting obstacle, electronic device, storage medium and program
CN113205088A (en) * 2021-07-06 2021-08-03 禾多科技(北京)有限公司 Obstacle image presentation method, electronic device, and computer-readable medium
WO2021223116A1 (en) * 2020-05-06 2021-11-11 上海欧菲智能车联科技有限公司 Perceptual map generation method and apparatus, computer device and storage medium
WO2022083402A1 (en) * 2020-10-22 2022-04-28 腾讯科技(深圳)有限公司 Obstacle detection method and apparatus, computer device, and storage medium
CN114419604A (en) * 2022-03-28 2022-04-29 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium
CN115257727A (en) * 2022-09-27 2022-11-01 禾多科技(北京)有限公司 Obstacle information fusion method and device, electronic equipment and computer readable medium
WO2022252380A1 (en) * 2021-06-04 2022-12-08 魔门塔(苏州)科技有限公司 Multi-frame fusion method and apparatus for grounding contour line of stationary obstacle, and medium
CN115468578A (en) * 2022-11-03 2022-12-13 广汽埃安新能源汽车股份有限公司 Path planning method and device, electronic equipment and computer readable medium
CN115546293A (en) * 2022-12-02 2022-12-30 广汽埃安新能源汽车股份有限公司 Obstacle information fusion method and device, electronic equipment and computer readable medium
CN115817463A (en) * 2023-02-23 2023-03-21 禾多科技(北京)有限公司 Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109521756B (en) * 2017-09-18 2022-03-08 阿波罗智能技术(北京)有限公司 Obstacle motion information generation method and apparatus for unmanned vehicle


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Road Surface Obstacle Detection Based on Transfer Learning; Zhao Yonggang et al.; 《客车技术与研究》 (Bus & Coach Technology and Research); Vol. 43, No. 04; full text *

Also Published As

Publication number Publication date
CN116563817A (en) 2023-08-08

Similar Documents

Publication Publication Date Title
EP3627180B1 (en) Sensor calibration method and device, computer device, medium, and vehicle
CN111325796B (en) Method and apparatus for determining pose of vision equipment
CN111079619B (en) Method and apparatus for detecting target object in image
CN113607185B (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN112116655B (en) Target object position determining method and device
CN114399588B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115761702B (en) Vehicle track generation method, device, electronic equipment and computer readable medium
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN116182878B (en) Road curved surface information generation method, device, equipment and computer readable medium
CN115817463B (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
CN116164770B (en) Path planning method, path planning device, electronic equipment and computer readable medium
CN114445597B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN112232451B (en) Multi-sensor data fusion method and device, electronic equipment and medium
WO2024060708A1 (en) Target detection method and apparatus
CN115468578B (en) Path planning method and device, electronic equipment and computer readable medium
CN116311155A (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116563817B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN116805331A (en) Method, device, equipment and storage medium for calculating vehicle orientation angle
WO2022194158A1 (en) Target tracking method and apparatus, device, and medium
CN112880675B (en) Pose smoothing method and device for visual positioning, terminal and mobile robot
CN115565158A (en) Parking space detection method and device, electronic equipment and computer readable medium
CN116563818B (en) Obstacle information generation method, obstacle information generation device, electronic device, and computer-readable medium
CN115222815A (en) Obstacle distance detection method, obstacle distance detection device, computer device, and storage medium
CN111383337B (en) Method and device for identifying objects
CN116740682B (en) Vehicle parking route information generation method, device, electronic equipment and readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant