CN112668371B - Method and device for outputting information - Google Patents

Method and device for outputting information

Info

Publication number
CN112668371B
Authority
CN
China
Prior art keywords
obstacle
point
point cloud
determining
interval
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910983046.2A
Other languages
Chinese (zh)
Other versions
CN112668371A (en)
Inventor
杨磊
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd filed Critical Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201910983046.2A
Publication of CN112668371A
Application granted
Publication of CN112668371B

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application discloses a method and a device for outputting information. One embodiment of the method comprises: acquiring image data and point cloud data collected by a vehicle during driving; identifying an obstacle in the image data and marking the obstacle with a labeling frame; determining the position information of the point cloud point corresponding to each pixel point in the labeling frame according to the point cloud data; determining the center position of the obstacle according to the position information of each point cloud point; and outputting the center position. This embodiment can improve data processing efficiency at low cost.

Description

Method and device for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
The end logistics distribution robot can greatly reduce material distribution costs and improve distribution efficiency. However, because of its small body and for cost reasons, the computing power of its on-board computing unit is limited. As a result, end logistics distribution robots cannot run complex algorithms to identify obstacles in their driving environment. Providing a clustering algorithm that is less time-consuming and does not demand large computational resources is therefore critical for end logistics distribution robots.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, including: acquiring image data and point cloud data collected by a vehicle during driving; identifying an obstacle in the image data, and marking the obstacle with a labeling frame; determining the position information of the point cloud point corresponding to each pixel point in the labeling frame according to the point cloud data; determining the center position of the obstacle according to the position information of each point cloud point; and outputting the center position.
In some embodiments, determining the center position of the obstacle according to the position information of the point cloud points includes: determining the number of point cloud points located in preset position intervals according to the position information of the point cloud points; determining a target position interval from the position intervals according to the number of point cloud points included in each position interval; and determining the center position of the obstacle according to the point cloud points included in the target position interval.
In some embodiments, determining the center position of the obstacle according to the point cloud points included in the target position interval includes: determining an envelope frame enveloping each point cloud point in the target position interval; and taking the position of the center point of the envelope frame as the center position of the obstacle.
In some embodiments, the above method further comprises: tracking the track of the obstacle according to the center position of the obstacle.
In some embodiments, the above method further comprises: and determining the speed of the obstacle according to the center position of the obstacle.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: an acquisition unit configured to acquire image data and point cloud data collected by a vehicle during driving; an identification unit configured to identify an obstacle in the image data and to mark the obstacle with a labeling frame; a dividing unit configured to determine, according to the point cloud data, the position information of the point cloud point corresponding to each pixel point in the labeling frame; a determining unit configured to determine the center position of the obstacle based on the position information of each point cloud point; and an output unit configured to output the center position.
In some embodiments, the determining unit includes: the quantity determining module is configured to determine the quantity of the point cloud points positioned in preset different position intervals according to the position information of the point cloud points; the interval determining module is configured to determine a target position interval from the position intervals according to the number of the point cloud points included in the position intervals; and a position determining module configured to determine a center position of the obstacle based on the point cloud point included in the target position section.
In some embodiments, the position determining module is further configured to: determine an envelope frame enveloping each point cloud point in the target position interval; and take the position of the center point of the envelope frame as the center position of the obstacle.
In some embodiments, the apparatus further comprises: and a trajectory tracking unit configured to track the obstacle according to a center position of the obstacle.
In some embodiments, the apparatus further comprises: and a speed determining unit configured to determine a speed of the obstacle based on a center position of the obstacle.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the first aspect.
The method and apparatus for outputting information provided by the embodiments of the present application first acquire image data and point cloud data collected by a vehicle during driving. Then, an obstacle in the image data is identified and marked with a labeling frame. The position information of the point cloud point corresponding to each pixel point in the labeling frame is determined according to the point cloud data. Finally, the center position of the obstacle is determined according to the position information of these point cloud points, and the center position is output. The method of this embodiment can quickly determine the center position of an obstacle from the position information of its point cloud points, and can therefore improve data processing efficiency at low cost.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for outputting information according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for outputting information according to the present application;
FIG. 4 is a flow chart of one embodiment of determining a center position of an obstacle in a method for outputting information according to the present application;
FIG. 5 is a schematic diagram of an application scenario of the embodiment of FIG. 4;
FIG. 6 is a schematic structural diagram of one embodiment of an apparatus for outputting information according to the present application;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods for outputting information or the apparatus for outputting information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a vehicle 101, end-logistics distribution robots 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the vehicles 101, the end-logistics distribution robots 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The vehicle 101 and the end logistics distribution robots 102, 103 may be equipped with data acquisition devices, such as lidar sensors, depth cameras, monocular cameras or binocular cameras, etc., to acquire information of the driving environment during driving. The vehicle 101 and the end logistics distribution robots 102, 103 may send their own location or status to the server 105 over the network 104 in real time to receive information sent by the server 105.
It should be noted that the method for outputting information provided in the embodiments of the present application is generally performed by the vehicle 101 or the end logistics distribution robots 102, 103. Accordingly, the apparatus for outputting information is typically provided in the vehicle 101 or the end logistics distribution robots 102, 103.
It should be understood that the number of vehicles, end logistics distribution robots, networks and servers in fig. 1 is merely illustrative. There may be any number of vehicles, end logistics distribution robots, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for outputting information according to the present application is shown. The method for outputting information of the present embodiment includes the steps of:
step 201, acquiring image data and point cloud data acquired by a vehicle in a driving process.
In the present embodiment, the execution body of the method for outputting information (for example, the vehicle 101 or the end logistics distribution robots 102, 103 shown in fig. 1) may acquire image data and point cloud data collected by the vehicle during driving through a wired or wireless connection. An image acquisition device and a point cloud acquisition device may be installed on the execution body. The image acquisition device may include a monocular camera, a binocular camera, or the like. The point cloud acquisition device may include a lidar sensor, a depth camera, and the like. It is understood that the image data and the point cloud data include information of the driving environment and of obstacles. The pixel points in the image data are in one-to-one correspondence with the point cloud points in the point cloud data; that is, the image acquisition device and the point cloud acquisition device are calibrated in advance.
Step 202, identifying an obstacle in the image data and marking the obstacle with a marking frame.
After the image data is acquired, the execution body may identify an obstacle in the image data. Specifically, the execution body may employ an obstacle recognition algorithm or a pre-trained obstacle recognition model to recognize an obstacle in the image data. After identifying the obstacle included in the image data, the execution body may mark the identified obstacle with a labeling frame. The labeling frame may be of various types, such as rectangular, oval, and the like. In some application scenarios, the execution body may employ different types of labeling frames to mark different types of obstacles.
And 203, determining the position information of the point cloud point corresponding to each pixel point in the labeling frame according to the point cloud data.
In this embodiment, the point cloud data may include laser point cloud data and depth point cloud data. The laser point cloud data includes the coordinates and intensity of each point cloud point, and the depth point cloud data includes the distance between each point cloud point and the execution body. The execution body may determine, according to the point cloud data, the position information of the point cloud point corresponding to each pixel point in the labeling frame.
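The pixel-to-point correspondence described above can be sketched as follows. This is a minimal illustration only, assuming a pre-calibrated pinhole camera whose depth map is aligned with the image; the function name and parameters are hypothetical and not taken from the patent.

```python
import numpy as np

def box_pixels_to_points(depth, box, fx, fy, cx, cy):
    """Back-project every pixel inside a labeling frame to a 3D point.

    depth: HxW depth map aligned with the image (metres).
    box:   (u_min, v_min, u_max, v_max) labeling frame in pixel coordinates.
    fx, fy, cx, cy: pinhole camera intrinsics (assumed known from calibration).
    Returns an Nx3 array of (x, y, z) point cloud points.
    """
    u_min, v_min, u_max, v_max = box
    # Pixel grid covering the labeling frame.
    vs, us = np.mgrid[v_min:v_max, u_min:u_max]
    z = depth[v_min:v_max, u_min:u_max]
    # Standard pinhole back-projection.
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no valid depth reading
```

The result is one 3D position per pixel in the labeling frame, matching the one-to-one pixel/point correspondence assumed in step 201.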
Step 204, determining the center position of the obstacle according to the position information of the cloud points.
After the execution body determines the position information of each point cloud point in the labeling frame, it can determine the center position of the obstacle from this position information. For example, the execution body may calculate the average value of the position information, determine the point cloud points whose position information equals the average value, and take the center position of those point cloud points as the center position of the obstacle. Alternatively, the execution body may take the center position of the point cloud points whose position information falls in a preset position interval as the center position of the obstacle.
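The second alternative above (taking the center of the point cloud points whose position information falls in a preset position interval) can be sketched as below. This is an illustrative sketch assuming points are given as an N×3 array and the interval is applied to the depth coordinate; the names are not from the patent.

```python
import numpy as np

def center_in_interval(points, z_min, z_max):
    """Center of the point cloud points whose depth lies in a preset interval.

    points: Nx3 array of (x, y, z) point cloud points.
    z_min, z_max: bounds of the preset position interval (assumed parameters).
    Returns the centroid of the selected points, or None if the interval is empty.
    """
    mask = (points[:, 2] >= z_min) & (points[:, 2] < z_max)
    if not mask.any():
        return None
    return points[mask].mean(axis=0)
```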
Step 205, outputting the center position.
After determining the center position of the obstacle, the execution body may output the center position for subsequent processing, for example, predicting the speed of the obstacle, tracking the trajectory of the obstacle, or planning the driving strategy of the execution body itself.
In some optional implementations of the present embodiment, the method may further include the following steps not shown in fig. 2: and tracking the track of the obstacle according to the central position of the obstacle.
In this implementation, the execution body can acquire the image data and the point cloud data in real time while the vehicle is driving, so that the center position of the obstacle can be determined in real time. The execution body can then track the obstacle according to its center position at each moment.
In some optional implementations of the present embodiment, the method may further include the following steps not shown in fig. 2: the speed of the obstacle is determined based on the center position of the obstacle.
In this implementation, the execution body may calculate the travel speed of the obstacle from its center positions at adjacent moments. The resulting travel speed may be used to guide the travel speed and direction of the execution body itself.
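A minimal sketch of this speed estimate, assuming center positions at two adjacent moments and a known time gap; the helper name and signature are illustrative only.

```python
def obstacle_speed(center_prev, center_curr, dt):
    """Velocity and speed from two consecutive obstacle center positions.

    center_prev, center_curr: (x, y, z) center positions at adjacent moments.
    dt: time gap between the two moments in seconds.
    Returns (velocity_vector, speed_magnitude).
    """
    # Finite-difference velocity between the two center positions.
    v = tuple((c - p) / dt for p, c in zip(center_prev, center_curr))
    speed = sum(comp * comp for comp in v) ** 0.5
    return v, speed
```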
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of fig. 3, an end logistics distribution robot is delivering a package. During delivery, it can collect image data and point cloud data of the driving environment through a binocular camera mounted on the robot, and determine the center position of an obstacle in the driving environment through the processing of steps 202 to 204. It then determines its own driving speed and driving direction according to the center position.
The method for outputting information provided in the above embodiment of the present application first acquires image data and point cloud data collected by a vehicle during driving. Then, an obstacle in the image data is identified and marked with a labeling frame. The position information of the point cloud point corresponding to each pixel point in the labeling frame is determined according to the point cloud data. Finally, the center position of the obstacle is determined according to the position information of these point cloud points, and the center position is output. The method of this embodiment can quickly determine the center position of an obstacle from the position information of the point cloud points corresponding to its pixel points, and can therefore improve data processing efficiency at low cost.
With continued reference to fig. 4, a flow 400 of one embodiment of determining a center position of an obstacle in a method for outputting information according to the present application is shown. As shown in fig. 4, the method for outputting information of the present embodiment can determine the center position of an obstacle by:
step 401, determining the number of pixel points located in preset different position intervals according to the position information of each cloud point.
The execution body may divide the point cloud points according to their position information. Specifically, the execution body may assign the point cloud points to different position intervals according to the position information, and then count the number of point cloud points located in each position interval. For example, the execution body may count the number of point cloud points whose x-coordinate and y-coordinate lie in the intervals 0–d, d–2d, 2d–3d, and so on.
Step 402, determining a target location section from the location sections according to the number of the point cloud points included in the location sections.
After determining the number of point cloud points included in each position interval, the execution body may determine a target position interval from the position intervals. The execution body may take the position interval containing the largest number of point cloud points as the target position interval. Alternatively, the execution body may take the top N position intervals by point cloud point count (N being a natural number greater than 1) as target position intervals. The execution body may then determine the center position of the obstacle from the point cloud points included in the target position interval.
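Steps 401–402 can be sketched as a simple depth histogram. This sketch assumes the position intervals are uniform bins of width d over the depth coordinate and picks the single most populated interval; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def target_depth_interval(points, bin_width):
    """Divide point cloud points into preset depth intervals and select the
    interval containing the most points.

    points: Nx3 array of (x, y, z) point cloud points.
    bin_width: width d of each position interval (assumed parameter).
    Returns (z_lo, z_hi), the bounds of the most populated interval.
    """
    z = points[:, 2]
    # Interval index for each point cloud point (step 401).
    bins = (z // bin_width).astype(int)
    idx, counts = np.unique(bins, return_counts=True)
    # Interval with the largest number of point cloud points (step 402).
    best = idx[np.argmax(counts)]
    return best * bin_width, (best + 1) * bin_width
```

This corresponds to the depth statistical histogram described for fig. 5 below, where the most populated distance interval is taken as the target.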
Step 403, determining an envelope frame of each cloud point in the envelope target position interval.
The execution body may determine an envelope frame capable of enveloping the point cloud points in the target position interval. It will be appreciated that the envelope frame is a three-dimensional volumetric frame. Specifically, the execution body may calculate the minimum circumscribed sphere of the point cloud points in the target position interval and use it as the envelope frame.
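A sketch of the envelope-frame center. The description above uses a minimum circumscribed sphere; the axis-aligned box below is a simpler stand-in that likewise yields a single center point, and is used purely for illustration.

```python
import numpy as np

def envelope_center(points):
    """Center of an axis-aligned envelope box around the given point cloud
    points — a simplified substitute for the minimum circumscribed sphere.

    points: Nx3 array of (x, y, z) point cloud points in the target interval.
    Returns the (x, y, z) center of the envelope box.
    """
    lo = points.min(axis=0)  # one corner of the envelope box
    hi = points.max(axis=0)  # the opposite corner
    return (lo + hi) / 2.0
```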
Step 404, taking the position of the center point of the envelope frame as the center position of the obstacle.
The execution body may calculate a center point of the envelope frame and then take a position of the center point as a center position of the obstacle.
With continued reference to fig. 5, a schematic diagram of one application scenario of the embodiment of fig. 4 is shown. In fig. 5, the obstacle is a pedestrian in front of the end logistics distribution robot. The robot collects image data and point cloud data of the pedestrian through its binocular camera and marks a labeling frame on the pedestrian in the image. The point cloud data is then divided to obtain a depth statistical histogram, which represents the number of point cloud points located in each distance interval. The center position of the pedestrian is then calculated from the point cloud points included in the interval containing the largest number of point cloud points.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an apparatus for outputting information, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for outputting information of the present embodiment includes: an acquisition unit 601, an identification unit 602, a division unit 603, a determination unit 604, and an output unit 605.
The acquisition unit 601 is configured to acquire image data and point cloud data acquired by a vehicle during traveling.
The identification unit 602 is configured to identify an obstacle in the image data and to annotate the obstacle with a callout box.
The dividing unit 603 is configured to determine, according to the point cloud data, position information of a point cloud point corresponding to each pixel point in the labeling frame.
The determining unit 604 is configured to determine the center position of the obstacle according to the position information of each point cloud point.
The output unit 605 is configured to output the center position.
In some alternative implementations of the present embodiment, the determining unit 604 may further include: the system comprises a quantity determining module, an interval determining module and a position determining module.
The quantity determining module is configured to determine the quantity of the point cloud points located in preset different position intervals according to the position information of the point cloud points.
And the interval determining module is configured to determine a target position interval from the position intervals according to the number of the point cloud points included in the position intervals.
And the position determining module is configured to determine the central position of the obstacle according to the point cloud points included in the target position interval.
In some optional implementations of the present embodiment, the position determining module may be further configured to: determine an envelope frame enveloping each point cloud point in the target position interval; and take the position of the center point of the envelope frame as the center position of the obstacle.
In some alternative implementations of the present embodiment, the apparatus 600 may further include a trajectory tracking unit, not shown in fig. 6, configured to track the obstacle according to a center position of the obstacle.
In some alternative implementations of the present embodiment, the apparatus 600 may further include a speed determining unit, not shown in fig. 6, configured to determine the speed of the obstacle according to the center position of the obstacle.
It should be understood that the units 601 to 605 described in the apparatus 600 for outputting information correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above with respect to the method for outputting information are equally applicable to the apparatus 600 and the units contained therein, and are not described in detail herein.
Referring now to fig. 7, a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only one example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processor, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 7 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 709, or installed from storage 708, or installed from ROM 702. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 701. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring image data and point cloud data acquired by a vehicle in the driving process; identifying an obstacle in the image data and marking the obstacle by adopting a marking frame; determining position information of point cloud points corresponding to each pixel point in the labeling frame according to the point cloud data; determining the center position of the obstacle according to the position information of each cloud point; and outputting the central position.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, for example described as: a processor including an acquisition unit, an identification unit, a dividing unit, a determining unit, and an output unit. In some cases the names of these units do not limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires image data and point cloud data collected by a vehicle during driving".
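As an illustrative sketch only (the class and method names below are hypothetical and not taken from the patent), the five units described above could be composed in a processor as a simple pipeline:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch of the unit layout described above: each unit is a
# plain callable, and the processor chains them in the claimed order.
@dataclass
class Processor:
    acquire: Callable[[], Any]        # acquisition unit: image + point cloud data
    identify: Callable[[Any], Any]    # identification unit: obstacle labeling frames
    divide: Callable[[Any], Any]      # dividing unit: per-pixel point cloud positions
    determine: Callable[[Any], Any]   # determining unit: obstacle central position
    output: Callable[[Any], None]     # output unit

    def run(self) -> None:
        data = self.acquire()
        boxes = self.identify(data)
        points = self.divide(boxes)
        center = self.determine(points)
        self.output(center)
```

Any concrete implementation would substitute real perception components for the callables; the sketch only fixes the data flow between the units.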
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the technical features above, and also encompasses other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the invention, for example solutions in which the features above are replaced with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A method for outputting information, comprising:
acquiring image data and point cloud data acquired by a vehicle in the driving process;
identifying an obstacle in the image data, and labeling the obstacle with a labeling frame;
determining the position information of the point cloud point corresponding to each pixel point in the labeling frame according to the point cloud data;
determining the central position of the obstacle according to the position information of each point cloud point;
outputting the central position;
wherein determining the central position of the obstacle according to the position information of each point cloud point comprises:
determining the number of point cloud points located in each of preset position intervals according to the position information of the point cloud points;
taking the position interval containing the largest number of point cloud points as the target position interval, or taking the position intervals ranked in the top N by number of point cloud points as target position intervals, where N is a natural number greater than 1;
determining an envelope frame enveloping each point cloud point in the target position interval;
and taking the position of the central point of the envelope frame as the central position of the obstacle.
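The center-determination steps of claim 1 can be sketched as follows. This is a non-authoritative illustration: the claim only speaks of "preset position intervals", so the choice of interval axis (distance from the sensor) and the bucket width `interval_size` are assumptions introduced here.

```python
import numpy as np

def obstacle_center(points, interval_size=0.5):
    """Estimate an obstacle's central position from the point cloud points
    that correspond to pixels inside its labeling frame.

    points: (N, 3) array of 3D point positions.
    interval_size: width of each preset position interval (an assumed
    parameter; the claim does not fix the interval layout).
    """
    points = np.asarray(points, dtype=float)
    # Bucket the points into preset position intervals; here the interval
    # axis is assumed to be Euclidean distance from the sensor origin.
    dist = np.linalg.norm(points, axis=1)
    bucket = np.floor(dist / interval_size).astype(int)
    # Target interval = the interval holding the largest number of point
    # cloud points (the top-N variant would keep the N best buckets).
    ids, counts = np.unique(bucket, return_counts=True)
    target = ids[np.argmax(counts)]
    in_target = points[bucket == target]
    # Envelope frame = axis-aligned bounding box of the target-interval
    # points; its central point is taken as the obstacle's central position.
    lo, hi = in_target.min(axis=0), in_target.max(axis=0)
    return (lo + hi) / 2.0
```

Bucketing by distance discards stray background points that fall inside the 2D labeling frame but lie far behind the obstacle, which is why the envelope frame is computed only over the dominant interval.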
2. The method of claim 1, wherein the method further comprises:
and tracking the trajectory of the obstacle according to the central position of the obstacle.
3. The method of claim 1, wherein the method further comprises:
and determining the speed of the obstacle according to the central position of the obstacle.
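Claims 2 and 3 follow the obstacle's central position over time. A minimal sketch of the speed estimate is a finite difference over successive central positions; the two-point difference scheme and the function names are illustrative choices, not mandated by the patent.

```python
import numpy as np

def obstacle_speed(centers, timestamps):
    """Per-step speed of an obstacle from its successive central positions.

    centers: (T, 3) central positions from T frames.
    timestamps: (T,) acquisition times in seconds.
    Returns (T-1,) speeds in distance units per second.
    """
    centers = np.asarray(centers, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    disp = np.diff(centers, axis=0)  # displacement between consecutive frames
    dt = np.diff(t)                  # elapsed time between consecutive frames
    return np.linalg.norm(disp, axis=1) / dt
```

A practical tracker would additionally associate detections across frames and smooth the estimate (e.g. with a filter), but the speed itself reduces to this ratio.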
4. An apparatus for outputting information, comprising:
an acquisition unit configured to acquire image data and point cloud data acquired by a vehicle during traveling;
an identification unit configured to identify an obstacle in the image data, and to label the obstacle with a labeling frame;
a dividing unit configured to determine, according to the point cloud data, the position information of the point cloud point corresponding to each pixel point in the labeling frame;
a determining unit configured to determine the central position of the obstacle according to the position information of each point cloud point;
an output unit configured to output the center position;
wherein the determining unit includes:
a quantity determining module configured to determine the number of point cloud points located in each of preset position intervals according to the position information of the point cloud points;
an interval determining module configured to take the position interval containing the largest number of point cloud points as the target position interval, or to take the position intervals ranked in the top N by number of point cloud points as target position intervals, where N is a natural number greater than 1;
a position determining module configured to determine an envelope frame enveloping each point cloud point in the target position interval, and to take the position of the central point of the envelope frame as the central position of the obstacle.
5. The apparatus of claim 4, wherein the apparatus further comprises:
and a trajectory tracking unit configured to track the trajectory of the obstacle according to the central position of the obstacle.
6. The apparatus of claim 4, wherein the apparatus further comprises:
a speed determining unit configured to determine a speed of the obstacle according to a center position of the obstacle.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-3.
8. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-3.
CN201910983046.2A 2019-10-16 2019-10-16 Method and device for outputting information Active CN112668371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910983046.2A CN112668371B (en) 2019-10-16 2019-10-16 Method and device for outputting information


Publications (2)

Publication Number Publication Date
CN112668371A CN112668371A (en) 2021-04-16
CN112668371B true CN112668371B (en) 2024-04-09

Family

ID=75400299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910983046.2A Active CN112668371B (en) 2019-10-16 2019-10-16 Method and device for outputting information

Country Status (1)

Country Link
CN (1) CN112668371B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113126120B (en) * 2021-04-25 2023-08-25 北京百度网讯科技有限公司 Data labeling method, device, equipment, storage medium and computer program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013065120A1 (en) * 2011-11-01 2013-05-10 Aisin Seiki Co., Ltd. Obstacle alert device
CN108256577A (en) * 2018-01-18 2018-07-06 东南大学 A kind of barrier clustering method based on multi-line laser radar
CN108985171A (en) * 2018-06-15 2018-12-11 上海仙途智能科技有限公司 Estimation method of motion state and state estimation device
CN109344804A (en) * 2018-10-30 2019-02-15 百度在线网络技术(北京)有限公司 A kind of recognition methods of laser point cloud data, device, equipment and medium
CN109360239A (en) * 2018-10-24 2019-02-19 长沙智能驾驶研究院有限公司 Obstacle detection method, device, computer equipment and storage medium
CN109840448A (en) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 Information output method and device for automatic driving vehicle
CN110068814A (en) * 2019-03-27 2019-07-30 东软睿驰汽车技术(沈阳)有限公司 A kind of method and device measuring obstacle distance

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145680B (en) * 2017-06-16 2022-05-27 阿波罗智能技术(北京)有限公司 Method, device and equipment for acquiring obstacle information and computer storage medium
JP6911777B2 (en) * 2018-01-23 2021-07-28 トヨタ自動車株式会社 Motion trajectory generator


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A combined clustering and image mapping based point cloud segmentation for 3D object detection; Fangchao Hu et al.; 2018 Chinese Control And Decision Conference (CCDC); entire document *
3D-lidar-based maritime target detection for unmanned surface vehicles; 李小毛, 张鑫, 王文涛, 瞿栋, 祝川; Journal of Shanghai University (Natural Science Edition), Issue 01; entire document *
Obstacle detection method for intelligent vehicles based on information fusion; 陆峰, 徐友春, 李永乐, 王德宇, 谢德胜; Journal of Computer Applications, Issue S2; entire document *
Obstacle detection method in agricultural navigation based on point cloud maps; 姬长英, 沈子尧, 顾宝兴, 田光兆, 张杰; Transactions of the Chinese Society of Agricultural Engineering, Issue 07; entire document *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant