CN114723640A - Obstacle information generation method and device, electronic equipment and computer readable medium - Google Patents
- Publication number
- CN114723640A (application number CN202210559337.0A)
- Authority
- CN
- China
- Prior art keywords
- frame
- obstacle
- barrier
- coordinate
- point coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/00, G06T5/80 — Image data processing or generation: image enhancement or restoration; geometric correction
- G06N3/00, G06N3/02, G06N3/04, G06N3/045 — Computing arrangements based on biological models: neural networks; architecture; combinations of networks
- G06T3/00, G06T3/08 — Geometric image transformations in the plane of the image: projecting images onto non-planar surfaces, e.g. geodetic screens
- G06T2207/30248, G06T2207/30252, G06T2207/30261 — Indexing scheme for image analysis; subject of image: vehicle exterior; vicinity of vehicle; obstacle
Abstract
Embodiments of the present disclosure disclose an obstacle information generation method and apparatus, an electronic device, and a computer readable medium. One embodiment of the method comprises: performing feature extraction on a target road image to obtain an obstacle feature information set; and, for each obstacle feature information in the obstacle feature information set, executing the following obstacle information generation steps: generating an obstacle frame corner coordinate set; obtaining a projected obstacle frame corner coordinate set; generating a first circumscribed rectangular frame; sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence; correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a corrected frame sampling point coordinate sequence; generating a second circumscribed rectangular frame; and generating obstacle information. This embodiment can improve the efficiency of generating obstacle information.
Description
Technical Field
Embodiments of the present disclosure relate to the technical field of computers, and in particular to an obstacle information generation method and apparatus, an electronic device, and a computer readable medium.
Background
The generation of obstacle information is of great significance to the field of automatic driving. At present, obstacle information is generally generated as follows: first, a distorted road image captured by a monocular camera is subjected to distortion removal processing to obtain an undistorted road image; then, a neural network is used to extract obstacle information from the undistorted road image.
However, generating obstacle information in the above manner often has the following technical problem: the distortion removal processing of a distorted road image consumes a large amount of computing resources, which reduces the efficiency of generating obstacle information.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose an obstacle information generating method, apparatus, electronic device and computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide an obstacle information generation method, including: performing feature extraction on a target road image to obtain an obstacle feature information set, wherein the target road image is a distorted image, and each obstacle feature information in the obstacle feature information set comprises obstacle attribute information and an obstacle frame coordinate sequence set; and, for each obstacle feature information in the obstacle feature information set, executing the following obstacle information generation steps: generating an obstacle frame corner coordinate set based on the obstacle attribute information included in the obstacle feature information; projecting each obstacle frame corner coordinate in the obstacle frame corner coordinate set to obtain a projected obstacle frame corner coordinate set; generating a first circumscribed rectangular frame based on the projected obstacle frame corner coordinate set; sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, so as to obtain a frame sampling point coordinate sequence set; correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a corrected frame sampling point coordinate sequence, so as to obtain a corrected frame sampling point coordinate sequence set; generating a second circumscribed rectangular frame based on the corrected frame sampling point coordinate sequence set; and generating obstacle information based on the first circumscribed rectangular frame and the second circumscribed rectangular frame.
In a second aspect, some embodiments of the present disclosure provide an obstacle information generating apparatus, including: a feature extraction unit configured to perform feature extraction on a target road image to obtain an obstacle feature information set, wherein the target road image is a distorted image, and each obstacle feature information in the obstacle feature information set comprises obstacle attribute information and an obstacle frame coordinate sequence set; and a generating unit configured to execute, for each obstacle feature information in the obstacle feature information set, the following obstacle information generation steps: generating an obstacle frame corner coordinate set based on the obstacle attribute information included in the obstacle feature information; projecting each obstacle frame corner coordinate in the obstacle frame corner coordinate set to obtain a projected obstacle frame corner coordinate set; generating a first circumscribed rectangular frame based on the projected obstacle frame corner coordinate set; sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, so as to obtain a frame sampling point coordinate sequence set; correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a corrected frame sampling point coordinate sequence, so as to obtain a corrected frame sampling point coordinate sequence set; generating a second circumscribed rectangular frame based on the corrected frame sampling point coordinate sequence set; and generating obstacle information based on the first circumscribed rectangular frame and the second circumscribed rectangular frame.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effects: the obstacle information generation method of some embodiments of the present disclosure can improve the efficiency of generating obstacle information. Specifically, the reason the efficiency of generating obstacle information is reduced is that the distortion removal processing of a distorted road image consumes a large amount of computing resources. Based on this, the obstacle information generation method of some embodiments of the present disclosure first obtains, by feature extraction, an obstacle feature information set from the distorted image (the target road image). Then, for each obstacle feature information in the set, a corresponding obstacle frame corner coordinate set is generated and projected to obtain a projected obstacle frame corner coordinate set, which is used to generate a first circumscribed rectangular frame. Next, through the sampling processing and the correction processing, a second circumscribed rectangular frame is generated. Finally, the obstacle information is generated from the first circumscribed rectangular frame and the second circumscribed rectangular frame. In this way, the obstacle information can be extracted from the road image without performing distortion removal processing on the distorted road image, avoiding the occupation of a large amount of computing resources. Thus, the efficiency of generating the obstacle information can be improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a flow diagram of some embodiments of an obstacle information generation method according to the present disclosure;
fig. 2 is a schematic structural diagram of some embodiments of an obstacle information generating apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will appreciate that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of an obstacle information generating method according to the present disclosure. The process 100 of the obstacle information generating method includes the following steps:
In some embodiments, an execution body of the obstacle information generation method may perform feature extraction on the target road image to obtain an obstacle feature information set. The target road image may be a distorted image. Each obstacle feature information in the obstacle feature information set may include obstacle attribute information and an obstacle frame coordinate sequence set. The target road image may be a road image captured by a vehicle-mounted monocular camera and acquired in advance in a wired or wireless manner. The feature extraction may be performed on the target road image through a preset feature extraction algorithm to obtain the obstacle feature information set. Each obstacle feature information in the set may characterize one obstacle in the target road image. The obstacle attribute information may include, but is not limited to, at least one of: obstacle type, obstacle size, obstacle movement speed, and the like. Each obstacle frame coordinate in an obstacle frame coordinate sequence may be a frame coordinate of the minimum circumscribed rectangle of the area in which the obstacle is located in the target road image. Each obstacle frame coordinate sequence may correspond to one side of the minimum circumscribed rectangle.
By way of example, the above feature extraction algorithm may include, but is not limited to, at least one of: the FCN (Fully Convolutional Networks) model, the ResNet (Residual Neural Network) model, the VGG (Visual Geometry Group Network) model, the GoogLeNet model, and the like.
Step 1021: generating an obstacle frame corner coordinate set based on the obstacle attribute information included in the obstacle feature information.
In some embodiments, the execution subject may generate the set of corner point coordinates of the border of the obstacle in various ways based on the obstacle attribute information included in the obstacle feature information.
In some optional implementations of some embodiments, the obstacle attribute information may further include: an obstacle center point coordinate, an obstacle length value, an obstacle width value, an obstacle height value, and an obstacle yaw angle. Generating the obstacle frame corner coordinate set based on the obstacle attribute information included in the obstacle feature information may include the following step:
generating the obstacle frame corner coordinate set by using the obstacle center point coordinate, the obstacle length value, the obstacle width value, and the obstacle height value included in the obstacle attribute information. The obstacle center point coordinate may be the three-dimensional coordinate of the center point of the obstacle. The obstacle yaw angle may be the yaw angle of the obstacle with respect to the vehicle-mounted monocular camera. The obstacle frame corner coordinates in the obstacle frame corner coordinate set may be the eight vertex coordinates of the minimum circumscribed cuboid of the obstacle in the camera coordinate system. Thus, starting from the obstacle center point coordinate, the three-dimensional coordinate of each obstacle frame corner can be determined through the obstacle length value, the obstacle width value, and the obstacle height value, so as to obtain the obstacle frame corner coordinate set.
Step 1022: performing projection processing on each obstacle frame corner coordinate in the obstacle frame corner coordinate set to obtain a projected obstacle frame corner coordinate set.
In some embodiments, the execution body may perform projection processing on each obstacle frame corner coordinate in the obstacle frame corner coordinate set in various ways to obtain the projected obstacle frame corner coordinate set.
In some optional implementations of some embodiments, performing projection processing on each obstacle frame corner coordinate in the obstacle frame corner coordinate set to obtain the projected obstacle frame corner coordinate set may include the following step:
projecting each obstacle frame corner coordinate in the obstacle frame corner coordinate set onto a target image based on the obstacle center point coordinate and the obstacle yaw angle to obtain the projected obstacle frame corner coordinate set. The target image may be a distortion-free blank image with the same size and resolution as the target road image. Each projected obstacle frame corner coordinate can be obtained by projecting the corresponding obstacle frame corner coordinate into the image coordinate system of the target image through the following formula:

$$ s = \Big[ K \big( R(\varphi)\,(\mathbf{p} - \mathbf{c}) + \mathbf{c} \big) \Big]_{3}, \qquad \begin{bmatrix} u \\ v \end{bmatrix} = \frac{1}{s} \Big[ K \big( R(\varphi)\,(\mathbf{p} - \mathbf{c}) + \mathbf{c} \big) \Big]_{1:2} $$

Here, $(u, v)$ are the abscissa and ordinate of the projected obstacle frame corner coordinate; $K$ is the internal reference (intrinsic) matrix of the vehicle-mounted monocular camera; $R(\varphi)$ is the rotation matrix for the obstacle yaw angle $\varphi$; $\mathbf{c}$ is the obstacle center point coordinate; $\mathbf{p} = (x, y, z)^{T}$ is the obstacle frame corner coordinate with abscissa, ordinate, and vertical coordinate values $x$, $y$, $z$; $s$ is a conversion parameter introduced to shorten the formula; $[\cdot]_{3}$ takes the 3rd element of the vector in brackets; and $[\cdot]_{1:2}$ takes the 1st and 2nd elements.
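A minimal sketch of this projection step, assuming a pinhole camera with intrinsic matrix `K` and a yaw rotation about the camera's vertical axis (the rotation convention and function name are assumptions, not taken from the patent):

```python
import numpy as np

def project_corner(p, center, yaw, K):
    """Project one 3D obstacle frame corner into the image plane of the
    (undistorted) target image: rotate about the obstacle center by the
    yaw angle, apply the camera intrinsic matrix, then divide by depth."""
    center = np.asarray(center, dtype=float)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])            # assumed: yaw about camera y-axis
    cam = R @ (np.asarray(p, dtype=float) - center) + center
    q = K @ cam                              # homogeneous image coordinates
    return q[:2] / q[2]                      # elements 1-2 divided by element 3
```

A point at the obstacle center projects to the principal point offset by the center's lateral position; yaw has no effect there, since the rotation pivots about the center itself.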
Step 1023: generating a first circumscribed rectangular frame based on the projected obstacle frame corner coordinate set.
In some embodiments, the execution body may generate the first circumscribed rectangular frame in various ways based on the projected obstacle frame corner coordinate set.
In some optional implementations of some embodiments, the execution body may determine the minimum circumscribed rectangle of the projected obstacle frame corner coordinates in the projected obstacle frame corner coordinate set as the first circumscribed rectangular frame.
Step 1024: sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, so as to obtain a frame sampling point coordinate sequence set.
In some embodiments, the execution body may perform sampling processing on each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set through a preset sampling algorithm to generate a frame sampling point coordinate sequence, so as to obtain the frame sampling point coordinate sequence set.
By way of example, the sampling algorithm may include, but is not limited to, at least one of: the UFLD (Ultra Fast Structure-aware Deep Lane Detection) algorithm and the LaneNet lane line detection network model.
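For illustration only, a simple stand-in for the sampling processing is even linear sampling along each side of the frame (the patent cites learned lane detection models; this sketch assumes plain interpolation):

```python
import numpy as np

def sample_border(p0, p1, n=8):
    """Evenly sample n coordinates along one side of the obstacle frame,
    from endpoint p0 to endpoint p1 (assumed straight side)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * np.asarray(p0, dtype=float) + t * np.asarray(p1, dtype=float)
```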
Step 1025: correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a corrected frame sampling point coordinate sequence, so as to obtain a corrected frame sampling point coordinate sequence set.
In some embodiments, the execution body may correct each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set through a preset correction algorithm to generate a corrected frame sampling point coordinate sequence, so as to obtain the corrected frame sampling point coordinate sequence set.
By way of example, the correction algorithm may include, but is not limited to, a camera distortion model.
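A hedged sketch of one common correction algorithm, a two-parameter radial distortion model inverted by fixed-point iteration (the patent does not specify its distortion model; the coefficients `k1`, `k2` and the iteration count are assumptions):

```python
def correct_point(u, v, K, k1, k2, iters=8):
    """Undo two-parameter radial lens distortion on a pixel coordinate by
    fixed-point iteration on the normalized image coordinates."""
    fx, fy, cx, cy = K[0][0], K[1][1], K[0][2], K[1][2]
    xd, yd = (u - cx) / fx, (v - cy) / fy    # normalized distorted coords
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale        # invert x_d = x * scale(r)
    return cx + fx * x, cy + fy * y
```

With zero distortion coefficients the point is returned unchanged; with small coefficients the iteration converges in a few steps.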
Step 1026: generating a second circumscribed rectangular frame based on the corrected frame sampling point coordinate sequence set.
In some embodiments, the execution body may generate the second circumscribed rectangular frame in various ways based on the corrected frame sampling point coordinate sequence set.
In some optional implementations of some embodiments, the execution body may determine, as the second circumscribed rectangular frame, the maximum inscribed rectangle of the area enclosed by the corrected frame sampling point coordinates in the corrected frame sampling point coordinate sequences of the corrected frame sampling point coordinate sequence set.
In some embodiments, the execution body may generate the obstacle information based on the obstacle attribute information included in the obstacle feature information, the first circumscribed rectangular frame, and the second circumscribed rectangular frame.
In some optional implementations of some embodiments, generating the obstacle information based on the obstacle attribute information included in the obstacle feature information, the first circumscribed rectangular frame, and the second circumscribed rectangular frame may include the following steps:
performing coordinate correction processing on the obstacle center point coordinate in the obstacle attribute information based on the first circumscribed rectangular frame, the second circumscribed rectangular frame, a preset maximum-vertex covariance matrix, and a preset minimum-vertex covariance matrix, so as to generate the obstacle information. First, the vertex of the first circumscribed rectangular frame whose sum of abscissa and ordinate is smallest may be determined as the first-rectangle minimum vertex coordinate. Second, the vertex of the first circumscribed rectangular frame whose sum of abscissa and ordinate is largest may be determined as the first-rectangle maximum vertex coordinate. Third, the vertex of the second circumscribed rectangular frame whose sum of abscissa and ordinate is smallest may be determined as the second-rectangle minimum vertex coordinate. Fourth, the vertex of the second circumscribed rectangular frame whose sum of abscissa and ordinate is largest may be determined as the second-rectangle maximum vertex coordinate. Finally, coordinate correction processing may be performed on the obstacle center point coordinate in the obstacle attribute information through the following formula to generate a target obstacle center coordinate as the obstacle information:

$$ \mathbf{e}_{\min} = \mathbf{a}_{\min} - \mathbf{b}_{\min}, \quad \mathbf{e}_{\max} = \mathbf{a}_{\max} - \mathbf{b}_{\max}, \quad \mathbf{e}_{\min} \sim \mathcal{N}(\mathbf{0}, \Sigma_{\min}), \quad \mathbf{e}_{\max} \sim \mathcal{N}(\mathbf{0}, \Sigma_{\max}) $$

$$ \mathbf{c}^{*} = \arg\min_{\mathbf{c}} \left( \mathbf{e}_{\min}^{T} \, \Sigma_{\min}^{-1} \, \mathbf{e}_{\min} + \mathbf{e}_{\max}^{T} \, \Sigma_{\max}^{-1} \, \mathbf{e}_{\max} \right) $$

Here, $\mathbf{e}_{\min}$ is the minimum vertex distance error matrix between the first-rectangle minimum vertex coordinate $\mathbf{a}_{\min}$ and the second-rectangle minimum vertex coordinate $\mathbf{b}_{\min}$; $\mathbf{e}_{\max}$ is the maximum vertex distance error matrix between the first-rectangle maximum vertex coordinate $\mathbf{a}_{\max}$ and the second-rectangle maximum vertex coordinate $\mathbf{b}_{\max}$; $\mathcal{N}$ denotes a normal distribution; $\Sigma_{\min}$ is the preset covariance matrix of the circumscribed-rectangle minimum vertex coordinate; $\Sigma_{\max}$ is the preset covariance matrix of the circumscribed-rectangle maximum vertex coordinate; $\mathbf{c}$ is the obstacle center point coordinate; $\mathbf{c}^{*}$ is the target obstacle center coordinate; $(\cdot)^{T}$ denotes a transposed matrix; and $\Sigma^{-1}$ denotes an inverse matrix.
The above formula and its related contents are an inventive point of the embodiments of the present disclosure: they improve the accuracy of the generated obstacle information while improving the efficiency of generating it. In practice, in an undistorted image, the projection of the obstacle's three-dimensional frame lies on the boundary of its two-dimensional minimum circumscribed rectangular frame. The neural network, however, can only infer that rectangular frame from the distorted image; after distortion removal the frame is no longer rectangular but deformed to some extent, and the actual two-dimensional frame should be the maximum inscribed rectangle of the deformed frame. Because it is difficult to obtain an analytical form for the distortion removal of a straight line segment, it is likewise difficult to obtain an analytical distortion-removed form of the two-dimensional frame. Therefore, the obstacle center point coordinate, the obstacle yaw angle, and each obstacle frame corner coordinate in the obstacle frame corner coordinate set are first input into the coordinate projection formula to obtain the projected obstacle frame corner coordinate set, from which the first circumscribed rectangular frame is generated. Then, through the sampling processing and the correction processing, corrected frame sampling point coordinates can be obtained and used to determine the maximum inscribed rectangle of the deformed frame, from which the second circumscribed rectangular frame is generated. The second circumscribed rectangular frame can thus approximately replace the analytical distortion-removed form of the two-dimensional frame.
Finally, the obstacle center point coordinate output by the network model is corrected through the coordinate correction formula, using the distance errors between the vertex coordinates of the first and second circumscribed rectangular frames together with the covariance matrices as constraints, which improves the correction accuracy of the objective function. Thus, the accuracy of the generated obstacle information can be improved.
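One plausible closed-form reading of this correction, treating both vertex errors as Gaussian observations of a shared center shift and combining them by generalized least squares (the sign convention and the reduction to two dimensions are assumptions, not taken from the patent):

```python
import numpy as np

def correct_center(center_xy, rect1, rect2, cov_min, cov_max):
    """Covariance-weighted correction of the obstacle center from the
    vertex errors between the two rectangles. Each rect is a pair
    (min_vertex, max_vertex); the shift minimizing
    (e_min - d)' W_min (e_min - d) + (e_max - d)' W_max (e_max - d)
    over d is subtracted from the center."""
    e_min = np.asarray(rect1[0], dtype=float) - np.asarray(rect2[0], dtype=float)
    e_max = np.asarray(rect1[1], dtype=float) - np.asarray(rect2[1], dtype=float)
    w_min = np.linalg.inv(np.asarray(cov_min, dtype=float))
    w_max = np.linalg.inv(np.asarray(cov_max, dtype=float))
    shift = np.linalg.solve(w_min + w_max, w_min @ e_min + w_max @ e_max)
    return np.asarray(center_xy, dtype=float) - shift
```

With identity covariances the shift is simply the mean of the two vertex errors, so equal errors of (1, 1) pull the center back by exactly (1, 1).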
Optionally, the execution body may further send the generated obstacle information to a display terminal for display.
The above embodiments of the present disclosure have the following advantages: by the obstacle information generation method of some embodiments of the present disclosure, the efficiency of generating obstacle information can be improved. Specifically, the efficiency of generating obstacle information is reduced because distortion removal processing of a distorted road image consumes a large amount of computing resources. Based on this, in the obstacle information generation method of some embodiments of the present disclosure, an obstacle feature information set is first obtained from the distorted target road image by feature extraction. Then, for each obstacle feature information in the obstacle feature information set, a corresponding obstacle frame corner point coordinate set is generated and projected to obtain a projected obstacle frame corner point coordinate set, which can be used to generate a first circumscribed rectangular frame. Next, through sampling processing and correction processing, a second circumscribed rectangular frame can be generated. Finally, obstacle information can be generated based on the first circumscribed rectangular frame and the second circumscribed rectangular frame. In this way, obstacle information can be extracted from the road image without performing distortion removal processing on the distorted road image, which avoids occupying a large amount of computing resources and thus improves the efficiency of generating obstacle information.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an obstacle information generating apparatus. These apparatus embodiments correspond to the method embodiments shown in fig. 1, and the apparatus may specifically be applied to various electronic devices.
As shown in fig. 2, the obstacle information generating apparatus 200 of some embodiments includes: a feature extraction unit 201 and a generating unit 202. The feature extraction unit 201 is configured to perform feature extraction on a target road image to obtain an obstacle feature information set, where the target road image is a distorted image, and the obstacle feature information in the obstacle feature information set includes obstacle attribute information and an obstacle frame coordinate sequence set. The generating unit 202 is configured to execute the following obstacle information generating steps for each obstacle feature information in the obstacle feature information set: generating an obstacle frame corner point coordinate set based on the obstacle attribute information included in the obstacle feature information; projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set to obtain a projected obstacle frame corner point coordinate set; generating a first circumscribed rectangular frame based on the projected obstacle frame corner point coordinate set; sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, obtaining a frame sampling point coordinate sequence set; correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a correction frame sampling point coordinate sequence, obtaining a correction frame sampling point coordinate sequence set; generating a second circumscribed rectangular frame based on the correction frame sampling point coordinate sequence set; and generating obstacle information based on the first circumscribed rectangular frame and the second circumscribed rectangular frame.
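The sampling and correction steps performed by the generating unit, followed by the maximum inscribed rectangle that yields the second circumscribed rectangular frame, can be sketched as follows. The fixed-point inversion of the radial model and the edge-based inscribed-rectangle approximation are assumptions for illustration, not the disclosure's exact processing.

```python
import numpy as np

def sample_edge(p0, p1, n=16):
    """Uniformly sample n frame points on the segment from p0 to p1."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1.0 - t) * np.asarray(p0, dtype=float) + t * np.asarray(p1, dtype=float)

def correct_points(uv, fx, fy, cx, cy, k1=0.0, k2=0.0, iters=8):
    """Undistort pixel coordinates by fixed-point inversion of the radial model
    u = fx * x * (1 + k1*r^2 + k2*r^4) + cx (assumed distortion model)."""
    x = (uv[:, 0] - cx) / fx
    y = (uv[:, 1] - cy) / fy
    xu, yu = x.copy(), y.copy()
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        d = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = x / d, y / d
    return np.stack([fx * xu + cx, fy * yu + cy], axis=1)

def max_inscribed_rect(top, bottom, left, right):
    """Approximate maximum inscribed axis-aligned rectangle of the deformed
    frame, given corrected sample points of its four edges."""
    return (float(left[:, 0].max()), float(top[:, 1].max()),
            float(right[:, 0].min()), float(bottom[:, 1].min()))
```

When the distortion coefficients are zero, the correction is the identity; with nonzero coefficients, the corrected edge samples bow, and the inscribed rectangle is taken inside the innermost corrected samples of each edge.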
It will be appreciated that the units described in the apparatus 200 correspond to the respective steps in the method described with reference to fig. 1. Thus, the operations, features, and resulting advantages described above with respect to the method are also applicable to the apparatus 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: perform feature extraction on a target road image to obtain an obstacle feature information set, where the target road image is a distorted image, and the obstacle feature information in the obstacle feature information set includes obstacle attribute information and an obstacle frame coordinate sequence set; and for each obstacle feature information in the obstacle feature information set, execute the following obstacle information generation steps: generating an obstacle frame corner point coordinate set based on the obstacle attribute information included in the obstacle feature information; projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set to obtain a projected obstacle frame corner point coordinate set; generating a first circumscribed rectangular frame based on the projected obstacle frame corner point coordinate set; sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, obtaining a frame sampling point coordinate sequence set; correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a correction frame sampling point coordinate sequence, obtaining a correction frame sampling point coordinate sequence set; generating a second circumscribed rectangular frame based on the correction frame sampling point coordinate sequence set; and generating obstacle information based on the first circumscribed rectangular frame and the second circumscribed rectangular frame.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes a feature extraction unit and a generation unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the feature extraction unit may also be described as a "unit that performs feature extraction on a target road image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.
Claims (10)
1. An obstacle information generation method comprising:
extracting features of a target road image to obtain an obstacle feature information set, wherein the target road image is a distorted image, and the obstacle feature information in the obstacle feature information set comprises obstacle attribute information and an obstacle frame coordinate sequence set;
for each obstacle feature information in the obstacle feature information set, performing the following obstacle information generation steps:
generating an obstacle frame corner point coordinate set based on obstacle attribute information included in the obstacle feature information;
projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set to obtain a projected obstacle frame corner point coordinate set;
generating a first circumscribed rectangular frame based on the projected obstacle frame corner point coordinate set;
sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, obtaining a frame sampling point coordinate sequence set;
correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a correction frame sampling point coordinate sequence, obtaining a correction frame sampling point coordinate sequence set;
generating a second circumscribed rectangular frame based on the correction frame sampling point coordinate sequence set;
and generating obstacle information based on the obstacle attribute information included in the obstacle feature information, the first circumscribed rectangular frame, and the second circumscribed rectangular frame.
2. The method of claim 1, wherein the method further comprises:
and sending the generated obstacle information to a display terminal for displaying.
3. The method of claim 1, wherein the obstacle attribute information includes an obstacle center point coordinate, an obstacle length value, an obstacle width value, an obstacle height value, and an obstacle yaw angle; and
the generating an obstacle frame corner point coordinate set based on the obstacle attribute information included in the obstacle feature information comprises:
generating the obstacle frame corner point coordinate set by using the obstacle center point coordinate, the obstacle length value, the obstacle width value, and the obstacle height value included in the obstacle attribute information.
4. The method according to claim 3, wherein the projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set to obtain a projected obstacle frame corner point coordinate set comprises:
projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set onto a target image based on the obstacle center point coordinate and the obstacle yaw angle to obtain the projected obstacle frame corner point coordinate set, wherein the target image is an undistorted blank image.
5. The method of claim 3, wherein the generating a first circumscribed rectangular frame based on the projected obstacle frame corner point coordinate set comprises:
determining the minimum circumscribed rectangle of the projected obstacle frame corner point coordinates in the projected obstacle frame corner point coordinate set as the first circumscribed rectangular frame.
6. The method of claim 1, wherein the generating a second circumscribed rectangular frame based on the correction frame sampling point coordinate sequence set comprises:
determining, as the second circumscribed rectangular frame, the maximum inscribed rectangle of the area enclosed by the correction frame sampling point coordinates in each correction frame sampling point coordinate sequence in the correction frame sampling point coordinate sequence set.
7. The method according to claim 3, wherein the generating obstacle information based on the obstacle attribute information included in the obstacle feature information, the first circumscribed rectangular frame, and the second circumscribed rectangular frame comprises:
performing coordinate correction processing on the obstacle center point coordinate in the obstacle attribute information based on the first circumscribed rectangular frame, the second circumscribed rectangular frame, a preset maximum covariance matrix, and a preset minimum covariance matrix to generate the obstacle information.
8. An obstacle information generating apparatus comprising:
a feature extraction unit configured to perform feature extraction on a target road image to obtain an obstacle feature information set, wherein the target road image is a distorted image, and the obstacle feature information in the obstacle feature information set comprises obstacle attribute information and an obstacle frame coordinate sequence set;
a generating unit configured to perform the following obstacle information generating steps for each obstacle feature information in the obstacle feature information set:
generating an obstacle frame corner point coordinate set based on obstacle attribute information included in the obstacle feature information;
projecting each obstacle frame corner point coordinate in the obstacle frame corner point coordinate set to obtain a projected obstacle frame corner point coordinate set;
generating a first circumscribed rectangular frame based on the projected obstacle frame corner point coordinate set;
sampling each obstacle frame coordinate in each obstacle frame coordinate sequence in the obstacle frame coordinate sequence set to generate a frame sampling point coordinate sequence, obtaining a frame sampling point coordinate sequence set;
correcting each frame sampling point coordinate in each frame sampling point coordinate sequence in the frame sampling point coordinate sequence set to generate a correction frame sampling point coordinate sequence, obtaining a correction frame sampling point coordinate sequence set;
generating a second circumscribed rectangular frame based on the correction frame sampling point coordinate sequence set;
and generating obstacle information based on the first circumscribed rectangular frame and the second circumscribed rectangular frame.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210559337.0A CN114723640B (en) | 2022-05-23 | 2022-05-23 | Obstacle information generation method and device, electronic equipment and computer readable medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114723640A true CN114723640A (en) | 2022-07-08 |
CN114723640B CN114723640B (en) | 2022-09-27 |
Family
ID=82231398
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210559337.0A Active CN114723640B (en) | 2022-05-23 | 2022-05-23 | Obstacle information generation method and device, electronic equipment and computer readable medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114723640B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115817463A (en) * | 2023-02-23 | 2023-03-21 | 禾多科技(北京)有限公司 | Vehicle obstacle avoidance method and device, electronic equipment and computer readable medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080189036A1 (en) * | 2007-02-06 | 2008-08-07 | Honeywell International Inc. | Method and system for three-dimensional obstacle mapping for navigation of autonomous vehicles |
CN109740443A (en) * | 2018-12-12 | 2019-05-10 | 歌尔股份有限公司 | Detect the method, apparatus and sports equipment of barrier |
CN111443704A (en) * | 2019-12-19 | 2020-07-24 | 苏州智加科技有限公司 | Obstacle positioning method and device for automatic driving system |
CN113762134A (en) * | 2021-09-01 | 2021-12-07 | 沈阳工业大学 | Method for detecting surrounding obstacles in automobile parking based on vision |
CN113963330A (en) * | 2021-10-21 | 2022-01-21 | 京东鲲鹏(江苏)科技有限公司 | Obstacle detection method, obstacle detection device, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806 Patentee after: Heduo Technology (Guangzhou) Co.,Ltd. Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd. |