CN112668371A - Method and apparatus for outputting information - Google Patents

Method and apparatus for outputting information

Info

Publication number
CN112668371A
Authority
CN
China
Prior art keywords
obstacle
point cloud
point
determining
cloud points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910983046.2A
Other languages
Chinese (zh)
Other versions
CN112668371B
Inventor
杨磊 (Yang Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201910983046.2A
Publication of CN112668371A
Application granted
Publication of CN112668371B
Active legal status
Anticipated expiration legal status

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiment of the application discloses a method and an apparatus for outputting information. One embodiment of the method comprises: acquiring image data and point cloud data collected by a vehicle during driving; identifying an obstacle in the image data, and marking the obstacle with a labeling frame; determining, according to the point cloud data, the position information of the point cloud points corresponding to each pixel point in the labeling frame; determining the center position of the obstacle according to the position information of each point cloud point; and outputting the center position. This embodiment can improve data processing efficiency at low cost.

Description

Method and apparatus for outputting information
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for outputting information.
Background
The end logistics distribution robot can greatly reduce material distribution cost and improve distribution efficiency. However, its vehicle body is small, and for cost reasons its on-board computing unit has limited computing capability. As a result, the end logistics distribution robot cannot run complex algorithms to identify obstacles in its driving environment. A clustering algorithm that consumes little time and does not require substantial computing resources is therefore very important for the end logistics distribution robot.
Disclosure of Invention
The embodiment of the application provides a method and a device for outputting information.
In a first aspect, an embodiment of the present application provides a method for outputting information, including: acquiring image data and point cloud data collected by a vehicle during driving; identifying an obstacle in the image data, and marking the obstacle with a labeling frame; determining the position information of the point cloud points corresponding to each pixel point in the labeling frame according to the point cloud data; determining the center position of the obstacle according to the position information of each point cloud point; and outputting the center position.
In some embodiments, determining the center position of the obstacle according to the position information of each point cloud point includes: determining the number of point cloud points located in different preset position intervals according to the position information of each point cloud point; determining a target position interval from the position intervals according to the number of point cloud points included in each position interval; and determining the center position of the obstacle according to the point cloud points included in the target position interval.
In some embodiments, determining the center position of the obstacle according to the point cloud points included in the target position interval includes: determining an envelope frame enveloping each point cloud point in the target position interval; and taking the position of the center point of the envelope frame as the center position of the obstacle.
In some embodiments, the above method further comprises: and tracking the track of the obstacle according to the central position of the obstacle.
In some embodiments, the above method further comprises: and determining the speed of the obstacle according to the central position of the obstacle.
In a second aspect, an embodiment of the present application provides an apparatus for outputting information, including: an acquisition unit configured to acquire image data and point cloud data collected by a vehicle during driving; an identification unit configured to identify an obstacle in the image data and mark the obstacle with a labeling frame; a dividing unit configured to determine the position information of the point cloud points corresponding to each pixel point in the labeling frame according to the point cloud data; a determining unit configured to determine the center position of the obstacle according to the position information of each point cloud point; and an output unit configured to output the center position.
In some embodiments, the determining unit includes: a number determining module configured to determine the number of point cloud points located in different preset position intervals according to the position information of each point cloud point; an interval determining module configured to determine a target position interval from the position intervals according to the number of point cloud points included in each position interval; and a position determining module configured to determine the center position of the obstacle according to the point cloud points included in the target position interval.
In some embodiments, the position determining module is further configured to: determine an envelope frame enveloping each point cloud point in the target position interval; and take the position of the center point of the envelope frame as the center position of the obstacle.
In some embodiments, the above apparatus further comprises: a trajectory tracking unit configured to track the obstacle according to the center position of the obstacle.
In some embodiments, the above apparatus further comprises: a speed determination unit configured to determine a speed of the obstacle according to a center position of the obstacle.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method as described in any one of the embodiments of the first aspect.
According to the method and apparatus for outputting information provided by the above embodiments of the application, image data and point cloud data collected by a vehicle during driving are first acquired. Then, an obstacle in the image data is identified and marked with a labeling frame. The position information of the point cloud points corresponding to each pixel point in the labeling frame is determined according to the point cloud data. Finally, the center position of the obstacle is determined according to the position information of the point cloud points corresponding to the pixel points, and the center position is output. The method of this embodiment can rapidly determine the center position of an obstacle from the position information of its point cloud points, thereby improving data processing efficiency at low cost.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present application may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, in accordance with the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for outputting information according to the present application;
FIG. 4 is a flow diagram of one embodiment of determining a center position of an obstacle in a method for outputting information according to the present application;
FIG. 5 is a schematic diagram of an application scenario of the embodiment shown in FIG. 4;
FIG. 6 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present application;
FIG. 7 is a block diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the present method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include a vehicle 101, end logistics distribution robots 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the vehicle 101, the end logistics distribution robots 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The vehicle 101 and the end logistics distribution robots 102 and 103 may be mounted with data acquisition devices, such as laser radar sensors, depth cameras, monocular cameras, binocular cameras, etc., to acquire information about their driving environments while driving. The vehicle 101 and the end logistics distribution robots 102 and 103 can transmit their positions or states to the server 105 through the network 104 in real time, and receive information transmitted by the server 105.
It should be noted that the method for outputting information provided in the embodiment of the present application is generally performed by the vehicle 101 or the end logistics distribution robot 102, 103. Accordingly, a device for outputting information is generally provided in the vehicle 101 or the end logistics distribution robot 102, 103.
It should be understood that the number of vehicles, end logistics distribution robots, networks and servers in fig. 1 is merely illustrative. There may be any number of vehicles, end logistics distribution robots, networks, and servers, as desired for the implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present application is shown. The method for outputting information of the embodiment comprises the following steps:
Step 201, acquiring image data and point cloud data collected by a vehicle during driving.
In the present embodiment, an execution subject of the method for outputting information (e.g., the vehicle 101 or the end logistics distribution robots 102 and 103 shown in fig. 1) may acquire, through a wired or wireless connection, image data and point cloud data collected by the vehicle during driving. The execution subject may be provided with an image acquisition device and a point cloud acquisition device. The image acquisition device may include a monocular camera, a binocular camera, and the like. The point cloud acquisition device may include a lidar sensor, a depth camera, and the like. It is understood that the image data and the point cloud data include information about the driving environment and about obstacles. The pixel points in the image data correspond one-to-one to the point cloud points in the point cloud data; that is, the image acquisition device and the point cloud acquisition device are calibrated in advance.
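The one-to-one pixel/point correspondence above presupposes a joint calibration of the two sensors. The application does not prescribe an implementation; the following is only an illustrative sketch of projecting lidar points into a pinhole camera image, with the extrinsics R, t, the intrinsics K, and the function name all being assumptions for illustration.

```python
import numpy as np

def project_points_to_pixels(points_xyz, R, t, K):
    """Project 3-D point cloud points into the image plane of a
    pre-calibrated camera (pinhole model, no lens distortion assumed).

    points_xyz: (N, 3) points in the lidar frame.
    R, t:       rotation (3, 3) and translation (3,) from lidar to camera.
    K:          (3, 3) camera intrinsic matrix.
    Returns (N, 2) integer pixel coordinates and a mask marking points
    in front of the camera (positive depth).
    """
    cam = points_xyz @ R.T + t       # lidar frame -> camera frame
    in_front = cam[:, 2] > 0         # only points with positive depth project validly
    uvw = cam @ K.T                  # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]    # perspective division
    return uv.astype(int), in_front
```

With this correspondence established offline, each pixel inside a labeling frame can be traced back to a point cloud point.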
Step 202, identifying an obstacle in the image data, and marking the obstacle with a labeling frame.
After acquiring the image data, the execution subject may identify an obstacle in the image data. Specifically, the execution subject may employ an obstacle recognition algorithm or a pre-trained obstacle recognition model to recognize obstacles in the image data. After identifying the obstacles included in the image data, the execution subject may mark each identified obstacle with a labeling frame. The labeling frame may be of various types, such as rectangular or elliptical. In some application scenarios, the execution subject may employ different types of labeling frames for different types of obstacles.
Step 203, determining the position information of the point cloud points corresponding to each pixel point in the labeling frame according to the point cloud data.
In this embodiment, the point cloud data may include laser point cloud data and depth point cloud data. The laser point cloud data includes the coordinates and intensity of each point cloud point, and the depth point cloud data includes the distance between each point cloud point and the execution subject. The execution subject can determine the position information of the point cloud point corresponding to each pixel point in the labeling frame according to the point cloud data.
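Given the pre-calibrated pixel/point correspondence, gathering the 3-D positions of the point cloud points whose pixels fall inside the labeling frame can be sketched as follows. The function name and array layout are assumptions for illustration, not part of the embodiment.

```python
import numpy as np

def points_in_label_frame(pixel_uv, points_xyz, box):
    """Select the 3-D positions of the point cloud points whose
    corresponding pixels lie inside a rectangular labeling frame.

    pixel_uv:   (N, 2) pixel coordinates, one per point cloud point
                (the sensors are assumed calibrated in advance).
    points_xyz: (N, 3) point cloud point positions.
    box:        (u_min, v_min, u_max, v_max) labeling frame in pixels.
    """
    u_min, v_min, u_max, v_max = box
    u, v = pixel_uv[:, 0], pixel_uv[:, 1]
    inside = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)
    return points_xyz[inside]
```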
Step 204, determining the center position of the obstacle according to the position information of each point cloud point.
After determining the position information of each point cloud point in the labeling frame, the execution subject can determine the center position of the obstacle from that information. For example, the execution subject may calculate the average value of the position information and determine the point cloud points whose position information equals the average value; the center position of those point cloud points is then taken as the center position of the obstacle. Alternatively, the execution subject may take the center position of the point cloud points whose position information falls in a preset position interval as the center position of the obstacle.
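The averaging example above can be sketched minimally; this is an illustrative reading of the step, not a prescribed implementation, and the embodiment of FIG. 4 refines it with a histogram over position intervals.

```python
import numpy as np

def obstacle_center_by_mean(points_xyz):
    """A minimal centering rule: average the positions of all point
    cloud points found inside the labeling frame.

    points_xyz: (N, 3) positions of the obstacle's point cloud points.
    Returns the (3,) mean position used as the obstacle center.
    """
    return points_xyz.mean(axis=0)
```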
Step 205, outputting the center position.
After determining the center position of the obstacle, the execution subject may output the center position for subsequent processing, for example, predicting the speed of the obstacle, tracking the trajectory of the obstacle, or planning the driving strategy of the execution subject itself.
In some optional implementations of this embodiment, the method may further include the following steps not shown in fig. 2: and tracking the obstacle according to the center position of the obstacle.
In this implementation, the execution subject can acquire image data and point cloud data in real time while the vehicle is driving, so the center position of the obstacle can be determined in real time. The execution subject can then track the obstacle according to its center position at each moment.
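One simple way to realize the per-moment tracking described above is nearest-center association between frames. The gating threshold and function name are illustrative assumptions; the application does not specify a tracking algorithm.

```python
import numpy as np

def associate_track(track_last_center, candidate_centers, max_dist):
    """A minimal tracking step: associate a track with the candidate
    obstacle center nearest to its last known center, provided the
    distance is within the (hypothetical) gating threshold max_dist.

    Returns the index of the matched candidate, or None if no
    candidate is close enough.
    """
    candidates = np.asarray(candidate_centers, dtype=float)
    dists = np.linalg.norm(candidates - np.asarray(track_last_center, dtype=float), axis=1)
    i = int(np.argmin(dists))                  # nearest candidate
    return i if dists[i] <= max_dist else None  # gate by distance
```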
In some optional implementations of this embodiment, the method may further include the following steps not shown in fig. 2: and determining the speed of the obstacle according to the central position of the obstacle.
In this implementation, the execution subject may further calculate the traveling speed of the obstacle according to its center positions at adjacent times. The resulting traveling speed may be used to guide the traveling speed and direction of the execution subject itself.
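The speed estimate from center positions at adjacent times amounts to a finite difference; a minimal sketch follows, where the time step `dt` is an assumed input not named in the application.

```python
import numpy as np

def obstacle_speed(center_prev, center_curr, dt):
    """Estimate the obstacle's speed from its center positions at two
    adjacent times: displacement divided by the time step dt (seconds).
    Returns the scalar speed; the velocity vector also gives direction.
    """
    velocity = (np.asarray(center_curr, dtype=float)
                - np.asarray(center_prev, dtype=float)) / dt
    return float(np.linalg.norm(velocity))
```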
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present embodiment. In the application scenario of fig. 3, an express parcel is loaded into the end logistics distribution robot. During distribution, the robot can acquire image data and point cloud data of the driving environment through an on-board binocular camera, and determine the center position of an obstacle in the driving environment through the processing of steps 202 to 204. It then determines its own driving speed and driving direction according to the center position.
The method for outputting information provided by the above embodiment of the present application may first acquire image data and point cloud data collected by a vehicle during driving. Then, an obstacle in the image data is identified and marked with a labeling frame. The position information of the point cloud points corresponding to each pixel point in the labeling frame is determined according to the point cloud data. Finally, the center position of the obstacle is determined according to the position information of the point cloud points corresponding to the pixel points, and the center position is output. According to the method, the center position of the obstacle can be determined quickly from the position information corresponding to each pixel point of the obstacle, so data processing efficiency can be improved at low cost.
With continued reference to FIG. 4, a flow 400 of one embodiment of determining a center position of an obstacle in a method for outputting information according to the present application is shown. As shown in fig. 4, the method for outputting information of the present embodiment may determine the center position of the obstacle by:
step 401, determining the number of pixel points located in different preset position intervals according to the position information of the cloud points of each point.
The execution subject can divide the point cloud points according to their position information. Specifically, the execution subject can divide the point cloud points into different position intervals according to the position information, and then count the number of point cloud points in each position interval. For example, the execution subject may count the number of point cloud points whose x and y coordinates lie between 0 and d, between d and 2d, between 2d and 3d, and so on.
Step 402, determining a target position interval from the position intervals according to the number of point cloud points included in each position interval.
After determining the number of point cloud points included in each position interval, the execution subject may determine a target position interval from among the position intervals. The execution subject may take the position interval including the largest number of point cloud points as the target position interval. Alternatively, the execution subject may take the position intervals whose point cloud point counts rank in the top N (N being a natural number greater than 1) as target position intervals. The execution subject may then determine the center position of the obstacle according to the point cloud points included in the target position interval.
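Steps 401 and 402 amount to a histogram over the preset position intervals followed by selecting the interval holding the most points. A minimal sketch, assuming one-dimensional distance intervals of equal width `d` (the interval layout is an assumption; the application only requires preset position intervals):

```python
import numpy as np

def target_interval(distances, d, n_bins):
    """Count point cloud points per preset distance interval
    [0, d), [d, 2d), ... and return the index of the interval with the
    most points, together with the per-interval counts.

    distances: (N,) distances of the point cloud points.
    d:         interval width; n_bins: number of preset intervals.
    """
    counts, _ = np.histogram(distances, bins=n_bins, range=(0.0, n_bins * d))
    return int(np.argmax(counts)), counts  # densest interval as target
```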
Step 403, determining an envelope frame enveloping each point cloud point in the target position interval.
The execution subject may determine an envelope frame capable of enveloping each point cloud point in the target position interval. It will be appreciated that the envelope frame is a three-dimensional solid frame. Specifically, the execution subject may calculate the minimum circumscribed sphere of the point cloud points in the target position interval and use that sphere as their envelope.
Step 404, taking the position of the center point of the envelope frame as the center position of the obstacle.
The execution subject may calculate the center point of the envelope frame and then take the position of that center point as the center position of the obstacle.
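Steps 403 and 404 can be sketched with an axis-aligned envelope frame, one simple choice of envelope; the description also mentions a minimum circumscribed sphere as an alternative, and this sketch is illustrative rather than the prescribed implementation.

```python
import numpy as np

def center_from_envelope(points_xyz):
    """Compute an axis-aligned envelope frame around the point cloud
    points of the target position interval and take the frame's center
    point as the obstacle's center position.

    points_xyz: (N, 3) positions of the points in the target interval.
    """
    lo = points_xyz.min(axis=0)  # one corner of the envelope frame
    hi = points_xyz.max(axis=0)  # opposite corner
    return (lo + hi) / 2.0       # center point of the envelope frame
```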
With continued reference to FIG. 5, a schematic diagram of one application scenario of the embodiment shown in FIG. 4 is shown. In fig. 5, the obstacle is a pedestrian in front of the end logistics distribution robot. The robot acquires image data and point cloud data of the pedestrian through a binocular camera, and a labeling frame is marked on the pedestrian in the image. The point cloud data are then divided to obtain a depth statistical histogram representing the number of point cloud points in each distance interval. The center position of the pedestrian is then calculated from the point cloud points included in the interval with the largest number of point cloud points.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the apparatus 600 for outputting information of the present embodiment includes: an acquisition unit 601, a recognition unit 602, a division unit 603, a determination unit 604, and an output unit 605.
An acquisition unit 601 configured to acquire image data and point cloud data collected by the vehicle during driving.
An identification unit 602 configured to identify an obstacle in the image data and mark the obstacle with a labeling frame.
The dividing unit 603 is configured to determine, according to the point cloud data, position information of a point cloud point corresponding to each pixel point in the labeling frame.
The determining unit 604 is configured to determine the center position of the obstacle according to the position information of each point cloud point.
An output unit 605 configured to output the center position.
In some optional implementations of this embodiment, the determining unit 604 may further include one or more units not shown in fig. 6: the device comprises a number determination module, an interval determination module and a position determination module.
A number determining module configured to determine the number of point cloud points located in different preset position intervals according to the position information of each point cloud point.
An interval determining module configured to determine a target position interval from the position intervals according to the number of point cloud points included in each position interval.
A position determining module configured to determine the center position of the obstacle according to the point cloud points included in the target position interval.
In some optional implementations of this embodiment, the position determining module may be further configured to: determine an envelope frame enveloping each point cloud point in the target position interval; and take the position of the center point of the envelope frame as the center position of the obstacle.
In some optional implementations of this embodiment, the apparatus 600 may further include a trajectory tracking unit, not shown in fig. 6, configured to track the obstacle according to the center position of the obstacle.
In some optional implementations of this embodiment, the apparatus 600 may further include a speed determination unit, not shown in fig. 6, configured to determine the speed of the obstacle according to the center position of the obstacle.
It should be understood that units 601 to 605 recited in the apparatus 600 for outputting information correspond to respective steps in the method described with reference to fig. 2. Thus, the operations and features described above for the method for outputting information are equally applicable to the apparatus 600 and the units included therein and will not be described in detail here.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing device (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage device 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire image data and point cloud data collected by a vehicle during driving; identify an obstacle in the image data, and mark the obstacle with a labeling frame; determine, according to the point cloud data, the position information of the point cloud points corresponding to each pixel point in the labeling frame; determine the center position of the obstacle according to the position information of each point cloud point; and output the center position.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a recognition unit, a division unit, a determination unit, and an output unit. The names of these units do not in some cases constitute a limitation on the units themselves, and for example, the acquisition unit may also be described as a "unit that acquires image data and point cloud data acquired while the vehicle is traveling".
The foregoing description presents only preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (12)

1. A method for outputting information, comprising:
acquiring image data and point cloud data collected by a vehicle during driving;
identifying an obstacle in the image data, and marking the obstacle by using a marking frame;
determining the position information of point cloud points corresponding to each pixel point in the marking frame according to the point cloud data;
determining the center position of the obstacle according to the position information of each point cloud point;
and outputting the central position.
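Outside the claim language, the correspondence in claim 1 between pixel points in the marking frame and point cloud points can be illustrated with a minimal projection sketch (not part of the claims; the intrinsic matrix K, the lidar-to-camera extrinsics R and t, and the function name are assumptions chosen for illustration):

```python
import numpy as np

def points_in_marking_frame(points_lidar, K, R, t, bbox):
    """Return the point cloud points (in the camera frame) whose image
    projection falls inside the marking frame bbox.

    points_lidar: (N, 3) array of point cloud points in the lidar frame.
    K: (3, 3) camera intrinsic matrix (assumed known from calibration).
    R, t: rotation (3, 3) and translation (3,) from lidar to camera frame.
    bbox: (x_min, y_min, x_max, y_max) marking frame in pixel coordinates.
    """
    # Transform the points into the camera frame.
    pts_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection to pixel coordinates.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    x_min, y_min, x_max, y_max = bbox
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return pts_cam[inside]
```

Points behind the camera are discarded before projection, since the pinhole model would otherwise map them to spurious pixel coordinates.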
2. The method of claim 1, wherein the determining the center position of the obstacle according to the position information of each point cloud point comprises:
determining the number of point cloud points in different preset position intervals according to the position information of the point cloud points;
determining a target position interval from each position interval according to the number of point cloud points included in each position interval;
and determining the central position of the obstacle according to the point cloud points included in the target position interval.
3. The method of claim 2, wherein said determining a center position of the obstacle from point cloud points included in the target location interval comprises:
determining an envelope frame enveloping each point cloud point in the target position interval;
and taking the position of the central point of the envelope frame as the central position of the obstacle.
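The interval counting of claim 2 and the envelope frame of claim 3 amount to a one-dimensional histogram over distance followed by an axis-aligned bounding box; a minimal sketch (the interval width and the use of Euclidean distance to the sensor are assumptions for illustration, not claim limitations):

```python
import numpy as np

def obstacle_center(points, interval=1.0):
    """Estimate the obstacle center from the point cloud points inside
    the marking frame.

    points: (N, 3) array of point cloud points; `interval` is the width
    of the preset position intervals (an assumed value).
    """
    # Count points per preset distance interval (a 1-D histogram).
    dist = np.linalg.norm(points, axis=1)
    bins = (dist // interval).astype(int)
    # Target interval: the one containing the most points; stray
    # background or foreground points fall into other intervals.
    target = np.bincount(bins).argmax()
    target_pts = points[bins == target]
    # Axis-aligned envelope frame of the target points; its center
    # point is taken as the obstacle center.
    lo, hi = target_pts.min(axis=0), target_pts.max(axis=0)
    return (lo + hi) / 2.0
```

Picking the most populated interval filters out point cloud points that fall inside the marking frame but belong to the background rather than the obstacle.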
4. The method according to any one of claims 1-3, wherein the method further comprises:
and tracking the obstacle according to the central position of the obstacle.
5. The method according to any one of claims 1-3, wherein the method further comprises:
and determining the speed of the obstacle according to the central position of the obstacle.
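The speed determination of claim 5 follows directly from two successive center positions and the time between frames; a minimal sketch (the helper name and arguments are illustrative only):

```python
import numpy as np

def obstacle_speed(center_prev, center_curr, dt):
    """Speed estimate from two successive obstacle center positions;
    dt is the time elapsed between the two frames, in seconds."""
    displacement = np.asarray(center_curr) - np.asarray(center_prev)
    return float(np.linalg.norm(displacement) / dt)
```

The same successive center positions can also serve the trajectory tracking of claim 4, e.g. by nearest-neighbor association of centers across frames.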
6. An apparatus for outputting information, comprising:
an acquisition unit configured to acquire image data and point cloud data acquired by a vehicle during travel;
an identification unit configured to identify an obstacle in the image data and to mark the obstacle with a mark frame;
the dividing unit is configured to determine position information of point cloud points corresponding to all pixel points in the marking frame according to the point cloud data;
a determining unit configured to determine a center position of the obstacle according to position information of each point cloud point;
an output unit configured to output the center position.
7. The apparatus of claim 6, wherein the determining unit comprises:
the quantity determining module is configured to determine the quantity of the point cloud points in different preset position intervals according to the position information of the point cloud points;
an interval determination module configured to determine a target position interval from each position interval according to the number of point cloud points included in each position interval;
a position determination module configured to determine a center position of the obstacle according to the point cloud points included in the target position interval.
8. The apparatus of claim 7, wherein the location determination module is further configured to:
determining an envelope frame enveloping each point cloud point in the target position interval;
and taking the position of the central point of the envelope frame as the central position of the obstacle.
9. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a trajectory tracking unit configured to track the obstacle according to a center position of the obstacle.
10. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a speed determination unit configured to determine a speed of the obstacle according to a center position of the obstacle.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
CN201910983046.2A 2019-10-16 2019-10-16 Method and device for outputting information Active CN112668371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910983046.2A CN112668371B (en) 2019-10-16 2019-10-16 Method and device for outputting information

Publications (2)

Publication Number Publication Date
CN112668371A true CN112668371A (en) 2021-04-16
CN112668371B CN112668371B (en) 2024-04-09

Family

ID=75400299

Country Status (1)

Country Link
CN (1) CN112668371B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013065120A1 (en) * 2011-11-01 2013-05-10 アイシン精機株式会社 Obstacle alert device
CN108256577A (en) * 2018-01-18 2018-07-06 东南大学 A kind of barrier clustering method based on multi-line laser radar
CN108985171A (en) * 2018-06-15 2018-12-11 上海仙途智能科技有限公司 Estimation method of motion state and state estimation device
US20180365503A1 (en) * 2017-06-16 2018-12-20 Baidu Online Network Technology (Beijing) Co., Ltd. Method and Apparatus of Obtaining Obstacle Information, Device and Computer Storage Medium
CN109344804A (en) * 2018-10-30 2019-02-15 百度在线网络技术(北京)有限公司 A kind of recognition methods of laser point cloud data, device, equipment and medium
CN109360239A (en) * 2018-10-24 2019-02-19 长沙智能驾驶研究院有限公司 Obstacle detection method, device, computer equipment and storage medium
CN109840448A (en) * 2017-11-24 2019-06-04 百度在线网络技术(北京)有限公司 Information output method and device for automatic driving vehicle
US20190224847A1 (en) * 2018-01-23 2019-07-25 Toyota Jidosha Kabushiki Kaisha Motion trajectory generation apparatus
CN110068814A (en) * 2019-03-27 2019-07-30 东软睿驰汽车技术(沈阳)有限公司 A kind of method and device measuring obstacle distance

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FANGCHAO HU et al.: "A combined clustering and image mapping based point cloud segmentation for 3D object detection", 2018 Chinese Control And Decision Conference (CCDC)
姬长英; 沈子尧; 顾宝兴; 田光兆; 张杰: "Obstacle detection method in agricultural navigation based on point cloud maps", Transactions of the Chinese Society of Agricultural Engineering, no. 07
李小毛; 张鑫; 王文涛; 瞿栋; 祝川: "Maritime target detection for unmanned surface vehicles based on 3D lidar", Journal of Shanghai University (Natural Science Edition), no. 01
陆峰; 徐友春; 李永乐; 王德宇; 谢德胜: "Obstacle detection method for intelligent vehicles based on information fusion", Journal of Computer Applications, no. 2

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113126120A (en) * 2021-04-25 2021-07-16 北京百度网讯科技有限公司 Data annotation method, device, equipment, storage medium and computer program product
CN113126120B (en) * 2021-04-25 2023-08-25 北京百度网讯科技有限公司 Data labeling method, device, equipment, storage medium and computer program product

Similar Documents

Publication Publication Date Title
US11776155B2 (en) Method and apparatus for detecting target object in image
CN109901567B (en) Method and apparatus for outputting obstacle information
CN110654381B (en) Method and device for controlling a vehicle
CN110717918B (en) Pedestrian detection method and device
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN112630799B (en) Method and apparatus for outputting information
CN110696826B (en) Method and device for controlling a vehicle
CN111353453B (en) Obstacle detection method and device for vehicle
CN115761702B (en) Vehicle track generation method, device, electronic equipment and computer readable medium
CN112622923B (en) Method and device for controlling a vehicle
CN110654380A (en) Method and device for controlling a vehicle
CN110110696B (en) Method and apparatus for processing information
CN112558036B (en) Method and device for outputting information
CN112668371B (en) Method and device for outputting information
CN115061386B (en) Intelligent driving automatic simulation test system and related equipment
CN111383337B (en) Method and device for identifying objects
CN113902047B (en) Image element matching method, device, equipment and storage medium
CN115512336A (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN115565374A (en) Logistics vehicle driving optimization method and device, electronic equipment and readable storage medium
CN108960160A (en) The method and apparatus of structural state amount are predicted based on unstructured prediction model
CN110120075B (en) Method and apparatus for processing information
CN112526477B (en) Method and device for processing information
CN113096436B (en) Indoor parking method and device
CN112560324B (en) Method and device for outputting information
CN115848358B (en) Vehicle parking method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant