CN113466815A - Object identification method, device, equipment and storage medium - Google Patents

Object identification method, device, equipment and storage medium

Info

Publication number
CN113466815A
CN113466815A (application CN202110729559.8A)
Authority
CN
China
Prior art keywords
map
laser radar
target
point cloud
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110729559.8A
Other languages
Chinese (zh)
Inventor
张时嘉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110729559.8A priority Critical patent/CN113466815A/en
Publication of CN113466815A publication Critical patent/CN113466815A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414Discriminating targets with respect to background clutter

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The present disclosure provides an object identification method, device, apparatus, and storage medium. In the method, data unrelated to the target object is filtered out of lidar point cloud data using a map of the area where the vehicle is located, and the filtered point cloud data is input into an object recognition network. Because the network operates on the filtered data, it can accurately recognize the target object, improving the recognition accuracy of the network. The object identification method provided by the embodiments of the present disclosure therefore yields accurate identification results.

Description

Object identification method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to an object recognition method, device, apparatus, and storage medium.
Background
With the development of autonomous driving technology, expectations for autonomous vehicles continue to rise. In related autonomous driving systems, the vehicle is equipped with a camera that captures images, and target objects, such as people or other vehicles, are identified from those images.
This existing, image-only approach to object identification is limited.
Disclosure of Invention
In view of the above, the present disclosure provides an object identification method, device, apparatus and storage medium to solve the above technical problems.
To achieve the above purpose, the technical solution adopted by the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, an object identification method is provided, including:
acquiring lidar point cloud data collected by a lidar on a vehicle, and a map of the area where the vehicle is located;
determining a first position of a non-target area in the map other than a target area, wherein the target area is an area where a target object is estimated to be located;
converting the first position into a second position in a lidar coordinate system;
filtering, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map;
inputting the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object from the filtered lidar point cloud data and outputs a recognition result for the target object.
In an alternative embodiment, the determining a first location of a non-target area other than a target area in the map comprises:
determining the non-target objects in the map that belong to a preset category, the target objects not belonging to the preset category;
determining the non-target area in the map where the non-target object is located;
determining the first location of the non-target area in the map.
In an optional embodiment, the filtering data used for characterizing non-target objects in the lidar point cloud data according to the second position includes:
filtering data at the second location in the lidar point cloud data.
In an alternative embodiment, the determining a first location of a non-target area other than a target area in the map comprises:
determining the first location in the map that indicates a preset height;
determining the first location as a location in the map where an edge of the non-target area is located.
In an optional embodiment, the filtering data used for characterizing non-target objects in the lidar point cloud data according to the second position includes:
determining a set of data above or below the second location in the lidar point cloud data;
filtering the data set in the lidar point cloud data.
In an alternative embodiment, the converting the first position to a second position in a lidar coordinate system includes:
determining the position of the vehicle in a map coordinate system according to the positioning result of a positioning device on the vehicle;
determining the position of the laser radar in the map coordinate system according to the position of the vehicle in the map coordinate system and the position of the laser radar on the vehicle;
determining a conversion relation between the map coordinate system and the laser radar coordinate system according to the position of the laser radar in the map coordinate system;
and converting the first position into the second position according to the conversion relation.
According to a second aspect of the embodiments of the present disclosure, there is provided an object recognition apparatus including:
an information acquisition module configured to acquire lidar point cloud data collected by a lidar on a vehicle, and a map of the area where the vehicle is located;
a first position determination module configured to determine a first position of a non-target area in the map other than a target area, the target area being an area where a target object is estimated to be located;
a second position determination module configured to convert the first position to a second position in a lidar coordinate system;
a data filtering module configured to filter, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map;
a network using module configured to input the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object according to the filtered lidar point cloud data, and outputs a recognition result of the target object.
In an alternative embodiment, the second position determination module includes:
a first position determination submodule configured to determine a position of the vehicle in a map coordinate system according to a positioning result of a positioning device on the vehicle;
a second position determination sub-module configured to determine a position of the lidar in the map coordinate system based on the position of the vehicle in the map coordinate system and the position of the lidar on the vehicle;
a conversion relation determination sub-module configured to determine a conversion relation between the map coordinate system and the lidar coordinate system according to the position of the lidar in the map coordinate system;
a position conversion submodule configured to convert the first position into the second position according to the conversion relationship.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring lidar point cloud data collected by a lidar on a vehicle, and a map of the area where the vehicle is located;
determining a first position of a non-target area in the map other than a target area, wherein the target area is an area where a target object is estimated to be located;
converting the first position into a second position in a lidar coordinate system;
filtering, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map;
inputting the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object from the filtered lidar point cloud data and outputs a recognition result for the target object.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring lidar point cloud data collected by a lidar on a vehicle, and a map of the area where the vehicle is located;
determining a first position of a non-target area in the map other than a target area, wherein the target area is an area where a target object is estimated to be located;
converting the first position into a second position in a lidar coordinate system;
filtering, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map;
inputting the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object from the filtered lidar point cloud data and outputs a recognition result for the target object.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the embodiment of the disclosure provides a novel object identification method, which is characterized in that data irrelevant to a target object in laser radar point cloud data are filtered by combining a map of an area where a vehicle is located, and the filtered laser radar point cloud data are input into an object identification network, so that the object identification network can accurately identify the target object according to the filtered laser radar point cloud data, and the object identification accuracy of the object identification network is improved. Therefore, the object identification method provided by the embodiment of the disclosure has the characteristic of accurate identification result.
Drawings
FIG. 1 shows a flow diagram of an object identification method according to an example embodiment of the present disclosure;
FIG. 2 illustrates a block diagram of an object recognition device according to an exemplary embodiment of the present disclosure;
fig. 3 shows a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in detail below with reference to specific embodiments shown in the drawings. These embodiments do not limit the disclosure, and structural, methodological, or functional changes made by those skilled in the art according to these embodiments are included in the scope of the disclosure.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
Fig. 1 shows a flow chart of an object identification method according to an exemplary embodiment of the present disclosure. The method of the embodiment can be applied to a terminal device (such as a vehicle-mounted terminal, a smart phone or a tablet computer) or a server (such as a server cluster formed by one or more servers) with a data processing function. As shown in fig. 1, the method comprises the following steps S101-S105:
in step S101, laser radar point cloud data collected by a laser radar on a vehicle and a map of an area where the vehicle is located are acquired.
The vehicle is equipped with a lidar, which collects the lidar point cloud data.
The lidar point cloud data may include, for each point, its coordinates in the lidar coordinate system and reflected-intensity information.
The vehicle may be equipped with a positioning device and map software, through which it obtains the map of the area where it is located.
In step S102, a first position of a non-target area in the map is determined; the non-target area is the part of the map other than the target area, which is the area where the target object is estimated to be located.
The non-target area in the map is the area where non-target objects are estimated to be located. The target object is an object that needs to be identified; a non-target object is an object that does not need to be identified.
Determining the first position of the non-target area in the map may be understood as determining the coordinates of that region in the map coordinate system; the coordinates may include an X-axis coordinate and a Y-axis coordinate.
In an alternative embodiment, a category to which non-target objects belong (hereinafter referred to as a preset category) is set; for example, the preset category may be persons, flower beds, or buildings. The target object does not belong to the preset category.
When this step is performed, the non-target objects in the map belonging to the preset category may be determined, the non-target area of the map where those non-target objects are located may be determined, and the first position of that non-target area in the map may be determined.
Non-target objects of the preset category may be identified from the map using image recognition techniques or similar methods.
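As a minimal sketch of this alternative, assuming a hypothetical grid-style semantic map (the cell layout, labels, and function names below are illustrative assumptions, not part of the disclosure), selecting the first positions of preset-category regions could look like:

```python
# Hypothetical semantic map: each (x, y) cell in the map coordinate
# system carries a category label. Layout and labels are assumptions.
MAP_CELLS = {
    (0, 0): "road",
    (1, 0): "road",
    (2, 0): "building",    # preset (non-target) category
    (2, 1): "flower_bed",  # preset (non-target) category
}

PRESET_CATEGORIES = {"building", "flower_bed"}

def first_positions(map_cells, preset):
    """Return the map-frame cells (first positions) of non-target areas,
    i.e. cells whose category belongs to a preset category."""
    return {cell for cell, category in map_cells.items() if category in preset}
```

In this sketch the returned set of cells plays the role of the first position, expressed as X/Y coordinates in the map coordinate system.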
In an alternative embodiment, a preset height is set, which may be considered the height of an edge of the non-target area in the map. The preset height may be set as desired, for example 3 m, 4 m, or 5 m.
In performing this step, a first location in the map indicating a preset height may be determined, the first location being determined as a location of an edge of the non-target area in the map.
In one case, the preset height includes a first preset height, treated as the height of the lower edge of a non-target area in the map. The area of the map above the first preset height is then determined to be a non-target area, and objects within it are determined to be non-target objects; that is, objects in the map higher than the first preset height are non-target objects.
In another case, the preset height includes a second preset height, treated as the height of the upper edge of a non-target area in the map. The area of the map below the second preset height is then determined to be a non-target area, and objects within it are determined to be non-target objects; that is, objects in the map lower than the second preset height are non-target objects.
The preset heights may also include both a first preset height and a second preset height. The areas of the map above the first preset height and below the second preset height are then determined to be non-target areas, and objects within these two areas are determined to be non-target objects.
For example, if the first preset height is 3 m and the second preset height is ground level, the areas of the map higher than 3 m and lower than ground level are determined to be non-target areas, and objects within these two areas are determined to be non-target objects.
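Using the illustrative thresholds above (3 m upper bound, ground level as the lower bound), the classification of a map height as inside or outside a non-target area can be sketched as follows; the function name and default values are assumptions for illustration only:

```python
def in_non_target_band(height, first_preset_height=3.0, second_preset_height=0.0):
    """True if a map height falls in a non-target area: above the first
    preset height (the lower edge of the upper non-target area) or below
    the second preset height (the upper edge of the lower non-target area)."""
    return height > first_preset_height or height < second_preset_height
```

Anything between the two thresholds is left for the target area.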
In step S103, the first position is converted into a second position in the lidar coordinate system.
The first position is the position of the non-target object in the map coordinate system.
The map coordinate system is different from the laser radar coordinate system, and a first position in the map coordinate system needs to be converted into a second position in the laser radar coordinate system.
Obtaining the second position may be understood as: and obtaining coordinates of the position of the non-target object in the laser radar coordinate system, wherein the coordinates can comprise coordinates on an X axis and coordinates on a Y axis.
In an alternative embodiment, converting the first position into the second position in the lidar coordinate system may include: a first step of determining the position of the vehicle in the map coordinate system according to the positioning result of a positioning device on the vehicle; a second step of determining the position of the lidar in the map coordinate system according to the position of the vehicle in the map coordinate system and the position of the lidar on the vehicle; a third step of determining a conversion relation between the map coordinate system and the lidar coordinate system according to the position of the lidar in the map coordinate system; and a fourth step of converting the first position into the second position according to the determined conversion relation.
In this way, an applicable conversion relation between the map coordinate system and the lidar coordinate system is obtained.
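The four-step conversion can be sketched in 2D with a rigid transform; the pose representation (x, y, yaw), the function names, and the sample values below are illustrative assumptions rather than the disclosure's exact formulation:

```python
import math

def lidar_pose_in_map(vehicle_pose, lidar_offset):
    """Steps one and two: compose the vehicle's map-frame pose (x, y, yaw)
    with the lidar's mounting offset on the vehicle, giving the lidar's
    pose in the map coordinate system."""
    vx, vy, yaw = vehicle_pose
    ox, oy = lidar_offset
    lx = vx + ox * math.cos(yaw) - oy * math.sin(yaw)
    ly = vy + ox * math.sin(yaw) + oy * math.cos(yaw)
    return lx, ly, yaw

def map_to_lidar(first_position, lidar_pose):
    """Steps three and four: apply the inverse of the lidar's map-frame
    pose to a map-frame first position, yielding the second position in
    the lidar coordinate system."""
    lx, ly, yaw = lidar_pose
    dx, dy = first_position[0] - lx, first_position[1] - ly
    return (dx * math.cos(-yaw) - dy * math.sin(-yaw),
            dx * math.sin(-yaw) + dy * math.cos(-yaw))
```

For example, with the vehicle at (10, 5) facing along the X axis and the lidar mounted 1 m ahead of the vehicle origin, a map point at (14, 5) converts to (3, 0) in the lidar frame.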
In step S104, according to the second position, data used for representing a non-target object in the laser radar point cloud data is filtered, where the non-target object is an object in a non-target area in the map.
In an alternative embodiment, the first position is a position of a non-target area where a non-target object belonging to a preset category is located in a map coordinate system, and the second position is a position obtained by performing coordinate system conversion on the first position.
The area at the second position in the lidar coordinate system is a non-target area. When this step is performed, the data at the second position in the lidar point cloud data may be filtered out, removing data unrelated to the target object and reducing useless data in the point cloud.
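A minimal sketch of this position-based filtering, assuming the second positions are represented as a set of grid cells in the lidar frame (the cell representation and function name are assumptions for illustration):

```python
def filter_non_target(cloud, second_positions, cell=1.0):
    """Drop points whose (x, y) cell in the lidar frame coincides with a
    second position, i.e. a converted non-target area. `cloud` is a list
    of (x, y, z) points; `second_positions` is a set of integer cells."""
    kept = []
    for x, y, z in cloud:
        if (int(x // cell), int(y // cell)) not in second_positions:
            kept.append((x, y, z))
    return kept
```

Points outside every non-target cell pass through unchanged, so only data unrelated to the target object is removed.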
In an alternative embodiment, a preset height is set, which may be considered to be the height of the edge of the non-target area in the map.
The first position is a position of an edge of the non-target area in a map coordinate system. The second position is obtained by converting the coordinate system of the first position, and the second position is the position of the edge of the non-target area under the laser radar coordinate system.
In performing this step, a data set in the lidar point cloud data that is above or below the second location may be determined and filtered.
For example, if the preset height is treated as the height of the lower edge of the non-target area, the set of data above the second position in the lidar point cloud data is determined and filtered out, removing data at higher positions from the point cloud.
As another example, if the preset height is treated as the height of the upper edge of the non-target area, the set of data below the second position in the lidar point cloud data is determined and filtered out, removing data at lower positions from the point cloud.
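The two height-based cases above can be sketched in one helper; the parameter names are assumptions, and both edges are optional so that either case (or both together) can be applied:

```python
def filter_by_height(cloud, lower_edge=None, upper_edge=None):
    """Remove points above `lower_edge` (the converted lower edge of an
    upper non-target area) and/or below `upper_edge` (the converted upper
    edge of a lower non-target area). z is the lidar-frame height of a
    point; `cloud` is a list of (x, y, z) points."""
    return [p for p in cloud
            if (lower_edge is None or p[2] <= lower_edge)
            and (upper_edge is None or p[2] >= upper_edge)]
```

With `lower_edge=3.0` and `upper_edge=0.0`, this keeps only points between ground level and 3 m, matching the earlier map example.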
In step S105, the filtered lidar point cloud data is input into an object recognition network, so that the object recognition network recognizes the target object according to the filtered lidar point cloud data, and outputs a recognition result of the target object.
The input of the object recognition network is lidar point cloud data and its output is the recognition result for the target object; that is, the network has the function of recognizing the target object from lidar point cloud data and outputting the recognition result.
The recognition result may indicate either that no target object was recognized, or that a target object was recognized together with its position in the lidar coordinate system.
The network training can be carried out, so that the object recognition network has the function of recognizing the target objects in a specific class. In practical application, the object identification network can identify target objects of a specific category according to the laser radar point cloud data. The specific category may be people, flower beds, buildings, etc.
Many types of object recognition network are suitable, for example the VoxelNet and PointPillars networks.
The inputs of the VoxelNet and PointPillars networks consist only of lidar point cloud data; object recognition can be completed from the point cloud alone, without images. This removes the dependence of the recognition process on images and cameras, and avoids failures to detect target objects caused by images that cannot be acquired or by images and point cloud data that are out of sync.
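An end-to-end sketch of the filter-then-recognize flow, with the recognition network replaced by a stand-in callable (actual VoxelNet/PointPillars interfaces and weights are outside the scope of this sketch, and all names below are assumptions):

```python
def recognize_objects(cloud, is_non_target, network):
    """Filter out map-derived non-target points, then run a
    point-cloud-only recognition network on the remaining points."""
    filtered = [p for p in cloud if not is_non_target(p)]
    return network(filtered)

def dummy_network(points):
    """Stand-in for a network such as VoxelNet or PointPillars:
    reports one 'detection' per remaining point."""
    return [{"category": "object", "position": p} for p in points]
```

For instance, a predicate that rejects points above 3 m would leave only low-lying points for the network to process.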
In this embodiment, data unrelated to the target object is filtered out of the lidar point cloud data, and the filtered point cloud is input into the object recognition network, so that the network can accurately recognize the target object from the filtered data, improving the accuracy with which the network recognizes the target object.
The object identification method provided by the embodiments of the present disclosure can further be used to track an object and obtain information such as its position, speed, and orientation; based on this information, it can be determined whether the vehicle should avoid the object, a safe driving route can be planned for the vehicle, and the vehicle can then be controlled to drive along the planned route.
The embodiments of the present disclosure provide a novel object identification method: data unrelated to the target object is filtered out of the lidar point cloud data using a map of the area where the vehicle is located, and the filtered point cloud is input into an object recognition network. The network can then accurately recognize the target object from the filtered data, which improves its recognition accuracy. The object identification method provided by the embodiments of the present disclosure therefore yields accurate recognition results.
FIG. 2 illustrates a block diagram of an object recognition device according to an exemplary embodiment of the present disclosure; the device of the embodiment can be applied to a terminal device (such as a vehicle-mounted terminal, a smart phone or a tablet computer) or a server (such as a server cluster formed by one or more servers) with a data processing function. As shown in fig. 2, the apparatus includes:
the information acquisition module 21 is configured to acquire laser radar point cloud data acquired by a laser radar on a vehicle and a map of an area where the vehicle is located;
a first position determination module 22 configured to determine a first position of a non-target area in the map other than a target area, the target area being an area where a target object is estimated to be located;
a second position determination module 23 configured to convert the first position into a second position in a lidar coordinate system;
a data filtering module 24 configured to filter, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map;
a network using module 25 configured to input the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object according to the filtered lidar point cloud data, and outputs a recognition result of the target object.
In an alternative embodiment, on the basis of the object recognition apparatus shown in fig. 2, the second position determining module 23 may include:
a first position determination submodule configured to determine a position of the vehicle in a map coordinate system according to a positioning result of a positioning device on the vehicle;
a second position determination sub-module configured to determine a position of the lidar in the map coordinate system based on the position of the vehicle in the map coordinate system and the position of the lidar on the vehicle;
a conversion relation determination sub-module configured to determine a conversion relation between the map coordinate system and the lidar coordinate system according to the position of the lidar in the map coordinate system;
a position conversion submodule configured to convert the first position into the second position according to the conversion relationship.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the object recognition device can be applied to network equipment. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and is formed by reading corresponding computer program instructions in the nonvolatile memory into the memory for operation through the processor of the device where the software implementation is located as a logical means. From a hardware aspect, as shown in fig. 3, the hardware structure diagram of the electronic device where the object identification apparatus of the present disclosure is located is shown, except for the processor, the network interface, the memory, and the nonvolatile memory shown in fig. 3, the device where the apparatus is located in the embodiment may also generally include other hardware, such as a forwarding chip responsible for processing a packet, and the like; the device may also be a distributed device in terms of hardware structure, and may include multiple interface cards to facilitate expansion of message processing at the hardware level.
The disclosed embodiments also provide a computer-readable storage medium on which a computer program is stored, the program implementing the following object identification method when executed by a processor:
acquiring lidar point cloud data collected by a lidar on a vehicle, and a map of the area where the vehicle is located; determining a first position of a non-target area in the map other than a target area, wherein the target area is an area where a target object is estimated to be located; converting the first position into a second position in a lidar coordinate system; filtering, according to the second position, data in the lidar point cloud data that characterizes non-target objects, the non-target objects being objects within the non-target area in the map; and inputting the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object from the filtered lidar point cloud data and outputs a recognition result for the target object.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An object identification method, characterized in that the method comprises:
acquiring laser radar point cloud data acquired by a laser radar on a vehicle and a map of an area where the vehicle is located;
determining a first position of a non-target area except a target area in the map, wherein the target area is an area where an estimated target object is located;
converting the first position into a second position under a laser radar coordinate system;
according to the second position, filtering data used for representing non-target objects in the laser radar point cloud data, wherein the non-target objects are objects in the non-target area in the map;
inputting the filtered laser radar point cloud data into an object identification network, enabling the object identification network to identify the target object according to the filtered laser radar point cloud data, and outputting an identification result of the target object.
2. The method of claim 1, wherein determining the first location of the non-target area other than the target area in the map comprises:
determining non-target objects in the map that belong to a preset category, wherein the target object does not belong to the preset category;
determining the non-target area in the map where the non-target object is located;
determining the first location of the non-target area in the map.
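Claim 2's category-based selection could be sketched as below; the map record layout and the preset categories shown are hypothetical examples, not specified by the patent:

```python
# Hypothetical map element records: (category, region). Regions here are
# (xmin, xmax, ymin, ymax) tuples in map coordinates.
PRESET_NON_TARGET_CATEGORIES = {"building", "vegetation"}

def non_target_regions(map_elements):
    """Return the map regions (the "first locations") of elements whose
    category belongs to the preset non-target set."""
    return [region for category, region in map_elements
            if category in PRESET_NON_TARGET_CATEGORIES]

elements = [("building", (0, 5, 0, 5)), ("road", (5, 20, -3, 3))]
regions = non_target_regions(elements)  # only the building region remains
```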
3. The method of claim 2, wherein the filtering data characterizing non-target objects in the lidar point cloud data according to the second location comprises:
filtering data at the second location in the lidar point cloud data.
4. The method of claim 1, wherein determining the first location of the non-target area other than the target area in the map comprises:
determining, in the map, the first location indicating a preset altitude;
taking the first location as a location of an edge of the non-target area in the map.
5. The method of claim 4, wherein the filtering data characterizing non-target objects in the lidar point cloud data according to the second location comprises:
determining a set of data above or below the second location in the lidar point cloud data;
filtering the data set in the lidar point cloud data.
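Claims 4 and 5 describe treating the converted position as an altitude threshold and dropping the point set above or below it (e.g. removing ground returns below a boundary height). A minimal sketch, with illustrative names and values:

```python
import numpy as np

def filter_by_altitude(points_xyz, z_threshold, drop_below=True):
    """Treat the converted 'second location' as a height threshold in the
    lidar frame and drop the set of points above or below it."""
    if drop_below:
        return points_xyz[points_xyz[:, 2] >= z_threshold]
    return points_xyz[points_xyz[:, 2] < z_threshold]

# Example: a return near the ground plane is filtered out.
pts = np.array([[0.0, 0.0, -1.8], [0.0, 0.0, 0.5]])
kept = filter_by_altitude(pts, z_threshold=-1.5)
```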
6. The method of claim 1, wherein converting the first position to a second position in a lidar coordinate system comprises:
determining the position of the vehicle in a map coordinate system according to the positioning result of a positioning device on the vehicle;
determining the position of the laser radar in the map coordinate system according to the position of the vehicle in the map coordinate system and the position of the laser radar on the vehicle;
determining a conversion relation between the map coordinate system and the laser radar coordinate system according to the position of the laser radar in the map coordinate system;
and converting the first position into the second position according to the conversion relation.
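The conversion chain of claim 6 (vehicle pose from the positioning device, lidar mounting pose on the vehicle, then map-to-lidar conversion) can be sketched with 2-D homogeneous transforms; all poses and the point below are illustrative values, not from the patent:

```python
import numpy as np

def se2(x, y, yaw):
    """3x3 homogeneous 2-D pose (translation plus heading)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

T_map_vehicle = se2(100.0, 50.0, np.pi / 2)    # vehicle pose from the positioning device
T_vehicle_lidar = se2(1.2, 0.0, 0.0)           # lidar mounting pose on the vehicle
T_map_lidar = T_map_vehicle @ T_vehicle_lidar  # lidar pose in the map coordinate system
T_lidar_map = np.linalg.inv(T_map_lidar)       # conversion relation: map -> lidar

first_position = np.array([105.0, 52.0, 1.0])   # a map point, homogeneous coords
second_position = T_lidar_map @ first_position  # the same point in the lidar frame
```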
7. An object recognition device, comprising:
an information acquisition module configured to acquire laser radar point cloud data acquired by a laser radar on a vehicle and a map of an area where the vehicle is located;
a first position determination module configured to determine a first position of a non-target area other than a target area in the map, wherein the target area is an area where an estimated target object is located;
a second position determination module configured to convert the first position to a second position in a lidar coordinate system;
a data filtering module configured to filter data in the lidar point cloud data that is used to characterize a non-target object according to the second location, the non-target object being an object within the non-target area in the map;
a network using module configured to input the filtered lidar point cloud data into an object recognition network, so that the object recognition network recognizes the target object according to the filtered lidar point cloud data, and outputs a recognition result of the target object.
8. The apparatus of claim 7, wherein the second position determining module comprises:
a first position determination submodule configured to determine a position of the vehicle in a map coordinate system according to a positioning result of a positioning device on the vehicle;
a second position determination sub-module configured to determine a position of the lidar in the map coordinate system based on the position of the vehicle in the map coordinate system and the position of the lidar on the vehicle;
a conversion relation determination sub-module configured to determine a conversion relation between the map coordinate system and the lidar coordinate system according to the position of the lidar in the map coordinate system;
a position conversion submodule configured to convert the first position into the second position according to the conversion relationship.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring laser radar point cloud data acquired by a laser radar on a vehicle and a map of an area where the vehicle is located;
determining a first position of a non-target area except a target area in the map, wherein the target area is an area where an estimated target object is located;
converting the first position into a second position under a laser radar coordinate system;
according to the second position, filtering data used for representing non-target objects in the laser radar point cloud data, wherein the non-target objects are objects in the non-target area in the map;
inputting the filtered laser radar point cloud data into an object identification network, enabling the object identification network to identify the target object according to the filtered laser radar point cloud data, and outputting an identification result of the target object.
10. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out:
acquiring laser radar point cloud data acquired by a laser radar on a vehicle and a map of an area where the vehicle is located;
determining a first position of a non-target area except a target area in the map, wherein the target area is an area where an estimated target object is located;
converting the first position into a second position under a laser radar coordinate system;
according to the second position, filtering data used for representing non-target objects in the laser radar point cloud data, wherein the non-target objects are objects in the non-target area in the map;
inputting the filtered laser radar point cloud data into an object identification network, enabling the object identification network to identify the target object according to the filtered laser radar point cloud data, and outputting an identification result of the target object.
CN202110729559.8A 2021-06-29 2021-06-29 Object identification method, device, equipment and storage medium Pending CN113466815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110729559.8A CN113466815A (en) 2021-06-29 2021-06-29 Object identification method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113466815A true CN113466815A (en) 2021-10-01

Family

ID=77873810


Country Status (1)

Country Link
CN (1) CN113466815A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530380A (en) * 2016-09-20 2017-03-22 长安大学 Ground point cloud segmentation method based on three-dimensional laser radar
CN108345008A (en) * 2017-01-23 2018-07-31 郑州宇通客车股份有限公司 A kind of target object detecting method, point cloud data extracting method and device
KR20190043035A (en) * 2017-10-17 2019-04-25 현대자동차주식회사 Apparatus for aggregating object based on Lidar data, system having the same and method thereof
CN109840448A (en) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 Information output method and device for automatic driving vehicle
US20190178989A1 (en) * 2017-12-11 2019-06-13 Automotive Research & Testing Center Dynamic road surface detecting method based on three-dimensional sensor
CN110457407A (en) * 2018-05-02 2019-11-15 北京京东尚科信息技术有限公司 Method and apparatus for handling point cloud data
CN110969174A (en) * 2018-09-29 2020-04-07 深圳市布谷鸟科技有限公司 Target identification method, device and system based on laser radar
CN111487641A (en) * 2020-03-19 2020-08-04 福瑞泰克智能系统有限公司 Method and device for detecting object by using laser radar, electronic equipment and storage medium
CN111679882A (en) * 2020-06-09 2020-09-18 成都民航空管科技发展有限公司 Scene target integrated display method and system in ATC system
CN111950426A (en) * 2020-08-06 2020-11-17 东软睿驰汽车技术(沈阳)有限公司 Target detection method and device and delivery vehicle
US20200393566A1 (en) * 2019-06-14 2020-12-17 DeepMap Inc. Segmenting ground points from non-ground points to assist with localization of autonomous vehicles
CN112465908A (en) * 2020-11-30 2021-03-09 深圳市优必选科技股份有限公司 Object positioning method and device, terminal equipment and storage medium
CN112711034A (en) * 2020-12-22 2021-04-27 中国第一汽车股份有限公司 Object detection method, device and equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination