CN114359513A - Method and device for determining position of obstacle and electronic equipment - Google Patents

Method and device for determining position of obstacle and electronic equipment

Info

Publication number
CN114359513A
CN114359513A (application CN202111629325.2A)
Authority
CN
China
Prior art keywords: obstacle, determining, point cloud data, matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111629325.2A
Other languages
Chinese (zh)
Inventor
丁东鹏
袁庭荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Apollo Zhixing Technology Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd, Apollo Zhixing Technology Guangzhou Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202111629325.2A priority Critical patent/CN114359513A/en
Publication of CN114359513A publication Critical patent/CN114359513A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a method and a device for determining the position of an obstacle, and an electronic device, and relates to artificial intelligence fields such as environment perception and automatic driving. The scheme is as follows: when the position of the obstacle is determined, the scene image of the road to be detected is first identified to determine a first obstacle included in the scene image; the scene point cloud data of the road to be detected is identified to determine a second obstacle included in the point cloud data; and the first obstacle is matched with the second obstacle, and the position of the first obstacle is determined according to the matching result. In this way, the position of the first obstacle can be effectively determined in combination with the scene point cloud data, and the accuracy of the determined position of the first obstacle is improved.

Description

Method and device for determining position of obstacle and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for determining a position of an obstacle, and an electronic device.
Background
In an automatic driving scenario, accurately identifying the position of an obstacle in a road is important to improve safety of automatic driving.
In the prior art, when the position of an obstacle in a road is identified, a monocular camera arranged at the front end of a vehicle is generally used for acquiring a scene image of the road, and the position of the obstacle in the road is determined based on the acquired scene image.
However, because a monocular camera lacks depth information, the accuracy of the position of the obstacle determined in this way is low.
Disclosure of Invention
The disclosure provides a method and a device for determining a position of an obstacle and electronic equipment, which improve the accuracy of the position of the obstacle determined based on a scene image.
According to a first aspect of the present disclosure, there is provided a method of determining a position of an obstacle, which may include:
the method comprises the steps of identifying a scene image of a road to be detected and determining a first obstacle included in the scene image.
And identifying the scene point cloud data of the road to be detected, and determining a second obstacle included in the scene point cloud data.
And matching the first obstacle with the second obstacle, and determining the position of the first obstacle according to the matching result.
According to a second aspect of the present disclosure, there is provided an obstacle position determination apparatus, which may include:
the first determining unit is used for identifying a scene image of a road to be detected and determining a first obstacle included in the scene image.
And the second determining unit is used for identifying the scene point cloud data of the road to be detected and determining a second obstacle included in the scene point cloud data.
And the processing unit is used for matching the first obstacle with the second obstacle and determining the position of the first obstacle according to a matching result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining a position of an obstacle of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to execute the method for determining a position of an obstacle according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the method of determining a position of an obstacle according to the first aspect described above.
According to the technical scheme, the accuracy of the position of the obstacle determined based on the scene image is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart of a method for determining a position of an obstacle according to a first embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a method for determining a position of a first obstacle according to a matching result according to a second embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an obstacle position determination apparatus according to a third embodiment of the present disclosure;
fig. 4 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In embodiments of the present disclosure, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural. In the text of the present disclosure, the character "/" generally indicates that the objects before and after it are in an "or" relationship. In addition, in the embodiments of the present disclosure, "first", "second", "third", "fourth", "fifth", and "sixth" are used only to distinguish different objects and have no other special meaning.
The technical scheme provided by the embodiment of the disclosure can be applied to the technical field of artificial intelligence such as environment perception and automatic driving. Taking an automatic driving scene as an example, accurately identifying the position of an obstacle in a road is important for improving the safety of automatic driving.
In the prior art, the position of an obstacle in a road is usually determined based on a scene image acquired by a monocular camera at the front end of a vehicle. However, the accuracy of the determined position of the obstacle is low due to the lack of depth information of the monocular camera.
In order to improve the accuracy of the position of the obstacle determined based on the scene image, the position of the obstacle identified based on the scene image may be determined jointly with scene point cloud data of the obstacle acquired by other heterogeneous sensors, such as a laser radar or a millimeter wave radar, or the obstacle identified based on the point cloud, so as to correct the position of the obstacle determined based on the scene image, thereby improving the accuracy of the position of the obstacle determined based on the scene image.
However, when the position of an obstacle identified from the scene image is determined jointly with scene point cloud data, two issues arise. First, because obstacle identification based on point cloud data differs from obstacle identification based on image data, the obstacles identified from the two sources do not correspond completely and cannot be fully matched. Second, if the scene image data and the scene point cloud data are fused blindly, the computational cost of fusing the data is large.
Therefore, when the position of an obstacle determined from the scene image is determined jointly with point cloud data, in order to use the identification information of the obstacle effectively and avoid the computational cost of fusing image data and point cloud data blindly, the obstacle identified from the scene image can first be matched against the obstacle identified from the scene point cloud data, and the position of the obstacle determined according to the matching result. In this way, the identification information of the obstacle is used effectively, the position of the obstacle determined from the road image is corrected, and its accuracy is improved.
Based on the technical concept, embodiments of the present disclosure provide a method for determining a position of an obstacle, and the method for determining a position of an obstacle provided by the present disclosure will be described in detail through specific embodiments. It is to be understood that the following detailed description may be combined with other embodiments, and that the same or similar concepts or processes may not be repeated in some embodiments.
Example one
Fig. 1 is a flowchart illustrating a method for determining a position of an obstacle according to a first embodiment of the present disclosure, where the method for determining a position of an obstacle may be implemented by software and/or a hardware device, for example, the hardware device may be a terminal or a server. For example, referring to fig. 1, the method for determining the position of the obstacle may include:
s101, identifying a scene image of a road to be detected, and determining a first obstacle included in the scene image.
For example, the scene image of the road to be detected may be received from another electronic device, such as a scene image collected by a camera; it may be retrieved from local storage; or it may be acquired in other ways, set according to actual needs. The embodiment of the present disclosure does not specifically limit the method for acquiring the scene image of the road to be detected.
After a scene image of a road to be detected is obtained, the scene image can be identified by adopting an image identification technology, and an obstacle in the scene image is determined; for example, in the embodiment of the present disclosure, the obstacle determined based on the scene image may be recorded as a first obstacle, and the obstacle determined based on the scene point cloud data may be recorded as a second obstacle.
Because a monocular camera lacks depth information, the position of an obstacle determined from the scene image alone contains error. To improve accuracy, scene point cloud data of the road to be detected can additionally be collected by a heterogeneous sensor, such as a laser radar or a millimeter-wave radar, and a second obstacle included in the scene point cloud data determined, that is, the following step S102 is executed. In this way, the position of the first obstacle identified from the scene image can be determined jointly with the second obstacle included in the scene point cloud data, so as to correct the position of the first obstacle.
S102, identifying the scene point cloud data of the road to be detected, and determining a second obstacle included in the scene point cloud data.
For example, the number of the first obstacles may be one or multiple, and may be specifically set according to actual needs, where the number of the first obstacles is not specifically limited in the embodiments of the present disclosure.
For example, the scene point cloud data of the road to be detected may be received from another electronic device, such as point cloud data acquired by a laser radar or a millimeter-wave radar; it may be retrieved from local storage; or it may be acquired in other ways, set according to actual needs. The embodiment of the present disclosure does not specifically limit the method for acquiring the scene point cloud data of the road to be detected.
In combination with the description in S101 above, to improve the accuracy of the obstacle position determined from the scene image, the position of the first obstacle identified from the scene image may be determined jointly with the scene point cloud data, so as to correct it. It should be noted that, because obstacle identification based on point cloud data differs from obstacle identification based on image data, the obstacles identified from the two sources do not correspond completely and cannot be fully matched; in this case, blindly fusing the scene image data and the scene point cloud data would incur a large computational cost.
Therefore, when the position of the obstacle determined based on the scene image is determined together with the scene point cloud data, the first obstacle identified based on the scene image and the second obstacle identified based on the scene point cloud data may be matched first, and the position of the obstacle may be determined according to the matching result, so that the position of the first obstacle determined based on the scene image may be effectively corrected, that is, the following S103 may be performed to improve the accuracy of the position of the first obstacle determined based on the scene image.
S103, matching the first obstacle with the second obstacle, and determining the position of the first obstacle according to the matching result.
For example, the number of the second obstacles may be one or multiple, and may be specifically set according to actual needs, where the number of the second obstacles is not specifically limited in the embodiments of the present disclosure.
For example, the first obstacle and the second obstacle may be matched with a data matching algorithm, such as a one-to-one matching algorithm, a multi-hypothesis matching algorithm, or a Random Finite Set (RFS) tracking matching algorithm, set according to actual needs. As an example, the one-to-one matching algorithm may be the Hungarian matching algorithm, and the multi-hypothesis matching algorithm may be a multiple hypothesis tracking algorithm; the specific choice can be made according to actual needs.
When the first obstacle and the second obstacle are matched, assume that the first obstacle identified from the scene image includes obstacle a, obstacle b, obstacle c, and obstacle d, and that the second obstacle identified from the point cloud data includes obstacle a, obstacle b, and obstacle c. The matching result is then: among the first obstacles, those successfully matched with the second obstacle are obstacle a, obstacle b, and obstacle c; the obstacle not successfully matched with the second obstacle is obstacle d.
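As an illustrative sketch only (the patent does not prescribe an implementation), one-to-one matching of this kind can be realized with the Hungarian algorithm over a cost matrix built from the overlap (IoU) of the obstacles' bounding boxes after both are expressed in the image plane; all names and the threshold below are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_obstacles(image_boxes, cloud_boxes, iou_threshold=0.3):
    """One-to-one (Hungarian) matching over a 1 - IoU cost matrix.

    Returns (matched index pairs, indices of image obstacles left unmatched).
    """
    cost = np.ones((len(image_boxes), len(cloud_boxes)))
    for i, a in enumerate(image_boxes):
        for j, b in enumerate(cloud_boxes):
            cost[i, j] = 1.0 - iou(a, b)
    rows, cols = linear_sum_assignment(cost)
    # keep only assignments whose overlap clears the threshold
    matched = [(int(i), int(j)) for i, j in zip(rows, cols)
               if cost[i, j] <= 1.0 - iou_threshold]
    matched_rows = {i for i, _ in matched}
    unmatched = [i for i in range(len(image_boxes)) if i not in matched_rows]
    return matched, unmatched
```

With four image detections a–d and three point-cloud detections overlapping a–c, this returns a, b, c as matched pairs and d as the unmatched "third obstacle" of the example above.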
It can be seen that, in the embodiment of the present disclosure, when determining the position of the obstacle, the scene image of the road to be detected may first be identified to determine a first obstacle included in the scene image; the scene point cloud data of the road to be detected is identified to determine a second obstacle included in the point cloud data; and the first obstacle is matched with the second obstacle and the position of the first obstacle determined according to the matching result. In this way, the position of the first obstacle can be effectively determined in combination with the scene point cloud data, improving the accuracy of the determined position of the first obstacle.
Example two
Fig. 2 is a flowchart of a method for determining a position of a first obstacle according to a matching result according to a second embodiment of the present disclosure, which may also be performed by software and/or hardware means. For example, referring to fig. 2, the method may include:
s201, aiming at a third obstacle which is not successfully matched with the second obstacle in the first obstacles, determining target point cloud data matched with the third obstacle from the scene point cloud data according to the characteristics of the third obstacle.
The target point cloud data may be understood as point cloud data of a third obstacle.
It can be understood that, although the obstacles identified from the scene point cloud data do not include the third obstacle, this may merely reflect differences between the identification technologies, while the actually acquired scene point cloud data of the road to be detected still contains point cloud data of the third obstacle. Therefore, when determining the position of the third obstacle, in order to use that data effectively, the point cloud data matching the third obstacle can be screened out of the scene point cloud data of the road to be detected. For ease of distinction, in the embodiment of the present disclosure, the screened-out point cloud data matching the third obstacle is regarded as the target point cloud data.
For example, when determining target point cloud data matched with the third obstacle from the scene point cloud data according to the characteristics of the third obstacle, the scene point cloud data of the road to be detected may be projected onto the scene image, and initial point cloud data corresponding to the third obstacle preliminarily screened out using the two-dimensional bounding box identified by the camera; in this way, point cloud data not belonging to the third obstacle is effectively removed, reducing the data volume. The target point cloud data matched with the third obstacle is then determined from the screened initial point cloud data according to the characteristics of the third obstacle.
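A minimal numpy sketch of this projection-and-screening step, under the assumption of a pinhole camera model with known intrinsics `K` and lidar-to-camera extrinsics `T_cam_from_lidar` (all names illustrative):

```python
import numpy as np

def project_points(points_lidar, K, T_cam_from_lidar):
    """Project 3-D lidar points into the image plane with a pinhole model."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0          # keep only points in front of the camera
    uv = (K @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide -> pixel coordinates
    return uv, pts_cam[in_front]

def points_in_bbox(uv, pts_cam, bbox):
    """Keep 3-D points whose projection falls inside the 2-D bounding box."""
    x1, y1, x2, y2 = bbox
    mask = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
           (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return pts_cam[mask]
```

The points surviving `points_in_bbox` play the role of the "initial point cloud data" of the third obstacle; points projecting outside the box are discarded up front, which is the data-volume reduction described above.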
For example, when target point cloud data matched with a third obstacle is determined from the initial point cloud data according to the characteristics of the third obstacle, the initial point cloud data may be clustered to obtain a plurality of clusters; determining a cluster matched with the characteristics of the third obstacle from the plurality of clusters; determining a cluster matching the feature of the third obstacle as the target point cloud data.
For example, the initial point cloud data may be clustered with an existing clustering algorithm; for its specific implementation, reference may be made to existing clustering algorithms, the details of which are not repeated here.
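As one concrete possibility (the patent leaves the clustering algorithm open), a simple Euclidean clustering pass over the initial point cloud could look like the following; `radius` and `min_points` are illustrative parameters:

```python
import numpy as np

def euclidean_cluster(points, radius=0.5, min_points=3):
    """Greedy Euclidean clustering: grow a cluster from each unvisited
    point by repeatedly absorbing neighbours within `radius`; clusters
    smaller than `min_points` are dropped as noise."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            idx = frontier.pop()
            d = np.linalg.norm(points - points[idx], axis=1)
            for n in np.nonzero(d <= radius)[0]:
                if n in unvisited:
                    unvisited.remove(n)
                    cluster.add(int(n))
                    frontier.append(int(n))
        if len(cluster) >= min_points:
            clusters.append(points[sorted(cluster)])
    return clusters
```

Each returned cluster is a candidate for the third obstacle; the cluster whose features (e.g. size, extent) best match the third obstacle would then be taken as the target point cloud data.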
After the target point cloud data matching the third obstacle is determined, the position of the third obstacle may be determined together with the target point cloud data on the basis of the image data of the third obstacle in the scene image, that is, the following S202 is performed:
s202, determining the position of a third obstacle according to the image data of the third obstacle in the scene image and the target point cloud data.
For example, when the position of the third obstacle is determined according to the image data of the third obstacle and the target point cloud data in the scene image, the image data of the third obstacle and the target point cloud data corresponding to the third obstacle may be fused to obtain fused data; and determining the position of the third obstacle according to the fusion data.
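One way such a fusion could work (an assumption for illustration, not the patent's specified formula) is to take the depth from the target point cloud's centroid and refine the lateral position by back-projecting the image bounding-box centre at that depth:

```python
import numpy as np

def fuse_position(cluster_points_cam, bbox, K):
    """Fuse image data with target point cloud data: depth comes from the
    cluster centroid; the lateral (x, y) position is the bounding-box
    centre back-projected through the intrinsics K at that depth."""
    centroid = cluster_points_cam.mean(axis=0)
    depth = centroid[2]
    u = (bbox[0] + bbox[2]) / 2.0
    v = (bbox[1] + bbox[3]) / 2.0
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

This combines the monocular image's accurate bearing with the point cloud's direct range measurement, which is exactly the missing-depth problem the method sets out to fix.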
It can be seen that, in the embodiment of the present disclosure, when the position of the obstacle is determined in combination with point cloud data, for a third obstacle among the first obstacles that is not successfully matched with the second obstacle, target point cloud data matched with the third obstacle can be determined in a targeted manner from the scene point cloud data of the road to be detected according to the characteristics of the third obstacle; the image data of the third obstacle and the target point cloud data are then fused to determine the position of the third obstacle. This improves the accuracy of the determined position of the third obstacle while reducing the volume of data to be fused.
The second embodiment shown in fig. 2 has described in detail how, when determining the position of the first obstacle according to the matching result of the first obstacle and the second obstacle, the scene point cloud data can be effectively combined to accurately determine the position of a third obstacle among the first obstacles that is not successfully matched with the second obstacle. Next, it is described in detail how to accurately determine, by effectively combining the scene point cloud data, the position of a fourth obstacle among the first obstacles that is successfully matched with the second obstacle.
For example, for a fourth obstacle successfully matched with the second obstacle in the first obstacle, when the position of the fourth obstacle is accurately determined by combining the point cloud data, image identification information of the fourth obstacle in the scene image and point cloud identification information of the fourth obstacle in the point cloud data may be directly fused to obtain fused information; and determining the position of the fourth obstacle according to the fusion information. Therefore, the position of the fourth obstacle is determined by combining the point cloud identification information of the fourth obstacle, and the accuracy of the determined position of the fourth obstacle can be effectively improved.
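The patent does not spell out the fusion of the two sets of identification information; a minimal sketch, assuming it reduces to confidence-weighted averaging of the image-derived and point-cloud-derived position estimates (weights are illustrative, with the lidar weighted higher because it measures depth directly), might be:

```python
import numpy as np

def fuse_matched(image_pos, cloud_pos, w_image=0.2, w_cloud=0.8):
    """Confidence-weighted fusion of the two position estimates for an
    obstacle matched in both the image and the point cloud."""
    w = np.array([w_image, w_cloud], dtype=float)
    w = w / w.sum()                      # normalise the weights
    return w[0] * np.asarray(image_pos, dtype=float) \
         + w[1] * np.asarray(cloud_pos, dtype=float)
```

With an image estimate of 12 m and a lidar estimate of 10 m depth, the fused depth lands near the lidar value, reflecting its higher trust.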
Based on any of the above embodiments, the position of the first obstacle can be accurately determined by the technical scheme of the disclosure. Furthermore, the travel speed of the vehicle that acquired the scene images can be determined from the positions of the first obstacle at different moments, so that the travel speed of the acquisition vehicle can serve as a reference for the operation of the vehicle.
For example, when the travel speed of the acquisition vehicle is determined from the positions of the first obstacle at different moments, assume the moments include a first moment and a second moment. The travel distance of the acquisition vehicle over the time period between the first moment and the second moment can be obtained from the determined positions of the first obstacle at the first moment and at the second moment; the travel speed of the acquisition vehicle is then calculated from that travel distance and the length of the time period.
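Assuming the obstacle is static, the change in its determined position relative to the vehicle between the two moments equals the distance the acquisition vehicle travelled; a sketch under that assumption (names illustrative):

```python
import math

def travel_speed(pos_t1, pos_t2, t1, t2):
    """Speed of the acquisition vehicle over [t1, t2], inferred from the
    change in a static obstacle's position relative to the vehicle."""
    distance = math.hypot(pos_t2[0] - pos_t1[0], pos_t2[1] - pos_t1[1])
    return distance / (t2 - t1)
```

For a moving obstacle this quantity would instead be the relative speed, so a static reference (or ego-motion compensation) is implied.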
EXAMPLE III
Fig. 3 is a schematic structural diagram of an obstacle position determination apparatus 30 according to a third embodiment of the present disclosure, for example, please refer to fig. 3, where the obstacle position determination apparatus 30 may include:
the first determining unit 301 is configured to identify a scene image of a road to be detected, and determine a first obstacle included in the scene image.
The second determining unit 302 is configured to identify the scene point cloud data of the road to be detected, and determine a second obstacle included in the scene point cloud data.
The matching unit 303 is configured to match the first obstacle with the second obstacle, and determine the position of the first obstacle according to a matching result.
Optionally, the matching unit 303 includes a first matching module and a second matching module.
And the first matching module is used for determining target point cloud data matched with a third obstacle from the scene point cloud data according to the characteristics of the third obstacle aiming at the third obstacle which is not successfully matched with the second obstacle in the first obstacle.
And the second matching module is used for determining the position of a third obstacle according to the image data of the third obstacle in the scene image and the target point cloud data.
Optionally, the first matching module includes a first matching submodule and a second matching submodule.
And the first matching submodule is used for clustering the scene point cloud data to obtain a plurality of clusters.
And the second matching submodule is used for determining a cluster matched with the characteristics of the third obstacle from the plurality of clusters as the target point cloud data.
Optionally, the second matching module includes a third matching submodule and a fourth matching submodule.
And the third matching submodule is used for fusing the image data of a third obstacle in the scene image and the target point cloud data to obtain fused data.
And the fourth matching submodule is used for determining the position of the third obstacle according to the fusion data.
Optionally, the matching unit 303 includes a third matching module and a fourth matching module.
And the third matching module is used for fusing the image identification information of the fourth obstacle in the scene image and the point cloud identification information of the fourth obstacle in the point cloud data aiming at the fourth obstacle successfully matched with the second obstacle in the first obstacle to obtain fused information.
And the fourth matching module is used for determining the position of the fourth obstacle according to the fusion information.
Alternatively, the obstacle position determination device 30 includes a third determination unit.
And the third determining unit is used for determining the running speed of the acquisition vehicle of the scene image according to the position of the first obstacle at different moments.
The device 30 for determining the position of the obstacle provided in the embodiment of the present disclosure may implement the technical solution of the method for determining the position of the obstacle shown in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the method, and are not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
Fig. 4 is a schematic block diagram of an electronic device 40 provided by an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the device 40 includes a computing unit 401 that can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the device 40 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in device 40 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 40 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 401 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 401 executes the respective methods and processes described above, such as the method for determining the position of an obstacle. For example, in some embodiments, the method for determining the position of an obstacle may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 40 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the method for determining the position of an obstacle described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for determining the position of an obstacle.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability in traditional physical host and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method of determining a location of an obstacle, comprising:
identifying a scene image of a road to be detected, and determining a first obstacle included in the scene image;
identifying scene point cloud data of the road to be detected, and determining a second obstacle included in the scene point cloud data;
and matching the first obstacle with the second obstacle, and determining the position of the first obstacle according to the matching result.
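As a non-limiting illustration of the three steps recited in claim 1, the matching between image-detected and point-cloud-detected obstacles can be sketched as a 2-D box overlap (IoU) test, assuming the point-cloud obstacles have already been projected into the image plane. All function names and the IoU threshold below are hypothetical; the claim does not prescribe a particular matching rule.

```python
# Hedged sketch: greedy IoU matching of first obstacles (image detections)
# against second obstacles (projected point-cloud detections).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_obstacles(image_boxes, cloud_boxes, threshold=0.5):
    """Greedily pair image detections with point-cloud detections.

    Returns (matches, unmatched_image_indices); an unmatched image
    obstacle corresponds to the "third obstacle" of claims 2-4.
    """
    matches, used = [], set()
    for i, ib in enumerate(image_boxes):
        best, best_iou = None, threshold
        for j, cb in enumerate(cloud_boxes):
            if j in used:
                continue
            v = iou(ib, cb)
            if v > best_iou:
                best, best_iou = j, v
        if best is not None:
            used.add(best)
            matches.append((i, best))
    matched_imgs = {m[0] for m in matches}
    unmatched = [i for i in range(len(image_boxes)) if i not in matched_imgs]
    return matches, unmatched
```

A production system would typically use an optimal assignment (e.g., Hungarian algorithm) rather than this greedy loop; the sketch only illustrates the matched/unmatched split that the dependent claims build on.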
2. The method of claim 1, wherein said determining a location of the first obstacle from the matching result comprises:
for a third obstacle which is not successfully matched with the second obstacle in the first obstacles, determining target point cloud data matched with the third obstacle from the scene point cloud data according to the characteristics of the third obstacle;
and determining the position of the third obstacle according to the image data of the third obstacle in the scene image and the target point cloud data.
3. The method of claim 2, wherein the determining target point cloud data from the scene point cloud data that matches the third obstacle according to the features of the third obstacle comprises:
clustering the scene point cloud data to obtain a plurality of clusters;
determining a cluster of the plurality of clusters that matches a feature of the third obstacle as the target point cloud data.
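The clustering-and-selection of claim 3 can be sketched as follows, under the illustrative assumptions that the point cloud is clustered by a simple single-linkage distance threshold and that the "feature" used for selection is an expected point count. Both assumptions are hypothetical; a real system would more likely use Euclidean clustering or DBSCAN and richer obstacle features (size, shape, position).

```python
# Hedged sketch of claim 3: cluster the scene point cloud, then pick the
# cluster best matching a feature of the unmatched (third) obstacle.

def cluster_points(points, eps=1.0):
    """Group 2-D points into clusters by single-linkage with threshold eps."""
    clusters = []
    for p in points:
        # Find all existing clusters within eps of the new point and merge them.
        hits = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps * eps
                       for q in c)]
        merged = [p]
        for c in hits:
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters

def select_target_cluster(clusters, expected_size):
    """Return the cluster whose point count is closest to expected_size
    (a stand-in for matching 'a feature of the third obstacle')."""
    return min(clusters, key=lambda c: abs(len(c) - expected_size))
```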
4. The method of claim 2 or 3, wherein the determining the location of the third obstacle from the image data of the third obstacle in the scene image and the target point cloud data comprises:
fusing the image data of the third obstacle in the scene image and the target point cloud data to obtain fused data;
determining a position of the third obstacle based on the fused data.
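One minimal way to realize the fusion of claim 4 is to take the centroid of the matched target point cloud as the obstacle position, with the image data serving to confirm which cluster belongs to the obstacle. This is a deliberate simplification for illustration; the claim does not fix a particular fusion rule.

```python
# Hedged sketch of claim 4: derive the third obstacle's position from the
# target point cloud data (here, simply the 2-D centroid of the cluster).

def obstacle_position(cluster):
    """Centroid of the matched point-cloud cluster as the obstacle position."""
    n = len(cluster)
    return (sum(p[0] for p in cluster) / n,
            sum(p[1] for p in cluster) / n)
```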
5. The method according to any one of claims 1-4, wherein said determining the position of the first obstacle from the matching result comprises:
for a fourth obstacle successfully matched with the second obstacle in the first obstacle, fusing image identification information of the fourth obstacle in the scene image and point cloud identification information of the fourth obstacle in the point cloud data to obtain fused information;
and determining the position of the fourth obstacle according to the fusion information.
6. The method according to any one of claims 1-5, further comprising:
and determining the running speed of the vehicle that acquired the scene image according to the positions of the first obstacle at different moments.
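Claim 6 derives the collecting vehicle's speed from the obstacle's positions at different moments. For a static obstacle observed in the vehicle frame, the speed is the magnitude of the obstacle's apparent displacement divided by the elapsed time. The sketch below assumes two timestamped 2-D positions in metres and seconds; the static-obstacle assumption and all names are illustrative.

```python
# Hedged sketch of claim 6: vehicle speed from two observations of the
# same (assumed static) obstacle at times t0 and t1.
import math

def vehicle_speed(pos_t0, pos_t1, t0, t1):
    """Speed estimate (m/s) from the obstacle's apparent displacement."""
    dx = pos_t1[0] - pos_t0[0]
    dy = pos_t1[1] - pos_t0[1]
    return math.hypot(dx, dy) / (t1 - t0)
```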
7. An obstacle position determination apparatus comprising:
the first determining unit is used for identifying a scene image of a road to be detected and determining a first obstacle included in the scene image;
the second determining unit is used for identifying the scene point cloud data of the road to be detected and determining a second obstacle included in the scene point cloud data;
and the matching unit is used for matching the first obstacle with the second obstacle and determining the position of the first obstacle according to a matching result.
8. The apparatus of claim 7, wherein the matching unit comprises a first matching module and a second matching module;
the first matching module is used for determining target point cloud data matched with a third obstacle from the scene point cloud data according to the characteristics of the third obstacle aiming at the third obstacle which is not successfully matched with the second obstacle in the first obstacle;
the second matching module is used for determining the position of the third obstacle according to the image data of the third obstacle in the scene image and the target point cloud data.
9. The apparatus of claim 8, wherein the first matching module comprises a first matching submodule and a second matching submodule;
the first matching sub-module is used for clustering the scene point cloud data to obtain a plurality of clusters;
the second matching submodule is configured to determine, as the target point cloud data, a cluster that matches a feature of the third obstacle among the plurality of clusters.
10. The apparatus of claim 8 or 9, wherein the second matching module comprises a third matching submodule and a fourth matching submodule;
the third matching submodule is used for fusing the image data of the third obstacle in the scene image and the target point cloud data to obtain fused data;
and the fourth matching submodule is used for determining the position of the third obstacle according to the fusion data.
11. The apparatus according to any one of claims 7-10, wherein the matching unit comprises a third matching module and a fourth matching module;
the third matching module is configured to, for a fourth obstacle that is successfully matched with the second obstacle in the first obstacle, fuse image identification information of the fourth obstacle in the scene image and point cloud identification information of the fourth obstacle in the point cloud data to obtain fusion information;
and the fourth matching module is used for determining the position of the fourth obstacle according to the fusion information.
12. The apparatus according to any one of claims 7-11, further comprising a third determining unit;
the third determining unit is used for determining the running speed of the vehicle that acquired the scene image according to the positions of the first obstacle at different moments.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining the location of an obstacle of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of determining the position of an obstacle according to any one of claims 1-6.
15. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method of determining the position of an obstacle according to any one of claims 1 to 6.
CN202111629325.2A 2021-12-28 2021-12-28 Method and device for determining position of obstacle and electronic equipment Pending CN114359513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111629325.2A CN114359513A (en) 2021-12-28 2021-12-28 Method and device for determining position of obstacle and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111629325.2A CN114359513A (en) 2021-12-28 2021-12-28 Method and device for determining position of obstacle and electronic equipment

Publications (1)

Publication Number Publication Date
CN114359513A true CN114359513A (en) 2022-04-15

Family

ID=81102812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111629325.2A Pending CN114359513A (en) 2021-12-28 2021-12-28 Method and device for determining position of obstacle and electronic equipment

Country Status (1)

Country Link
CN (1) CN114359513A (en)

Similar Documents

Publication Publication Date Title
CN110827325B (en) Target tracking method and device, electronic equipment and storage medium
CN111753765A (en) Detection method, device and equipment of sensing equipment and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN112785625A (en) Target tracking method and device, electronic equipment and storage medium
EP4145408A1 (en) Obstacle detection method and apparatus, autonomous vehicle, device and storage medium
CN113762272A (en) Road information determination method and device and electronic equipment
CN115690739A (en) Multi-sensor fusion obstacle existence detection method and automatic driving vehicle
CN115147831A (en) Training method and device of three-dimensional target detection model
CN114283398A (en) Method and device for processing lane line and electronic equipment
CN113722342A (en) High-precision map element change detection method, device and equipment and automatic driving vehicle
CN113126120A (en) Data annotation method, device, equipment, storage medium and computer program product
CN115830268A (en) Data acquisition method and device for optimizing perception algorithm and storage medium
CN114359513A (en) Method and device for determining position of obstacle and electronic equipment
CN114445802A (en) Point cloud processing method and device and vehicle
CN114549584A (en) Information processing method and device, electronic equipment and storage medium
CN113723405A (en) Method and device for determining area outline and electronic equipment
CN112988932A (en) High-precision map labeling method, device, equipment, readable storage medium and product
CN112507957A (en) Vehicle association method and device, road side equipment and cloud control platform
CN114049615B (en) Traffic object fusion association method and device in driving environment and edge computing equipment
CN114694138B (en) Road surface detection method, device and equipment applied to intelligent driving
CN114677570B (en) Road information updating method, device, electronic equipment and storage medium
CN113361379B (en) Method and device for generating target detection system and detecting target
JP2023535661A (en) Vehicle lane crossing recognition method, device, electronic device, storage medium and computer program
CN117876992A (en) Obstacle detection method, device, equipment and automatic driving vehicle
CN116229418A (en) Information fusion method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination