CN109215136B - Real data enhancement method and device and terminal - Google Patents
- Publication number
- CN109215136B (application CN201811045664.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- new obstacle
- new
- obstacle
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
Abstract
The invention provides a real data enhancement method, a device and a terminal. The method includes: acquiring a multi-frame point cloud, wherein the point cloud includes a plurality of initial obstacles; deleting the initial obstacles to form a plurality of position holes, and filling the position holes to form a real point cloud background; placing a new obstacle in the real point cloud background, wherein the new obstacle has marking data; and adjusting the new obstacle according to its marking data to obtain the layout data of the new obstacle. The amount of real data is increased, and the diversity of the real data is improved.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a real data enhancement method, a real data enhancement device and a real data enhancement terminal.
Background
In traffic simulation, the positions of obstacles in labeled data are extracted and used as placement positions. Because the amount of labeled data is limited, the requirement for diverse real data cannot be met. At present, the amount of real data is usually increased by scaling or rotating a frame of image to obtain more real data. However, such enhancement does not modify the content of the real data, so real data cannot be generated in large quantities.
Disclosure of Invention
The embodiment of the invention provides a method, a device and a terminal for enhancing real data, which are used for at least solving the technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a method for enhancing real data, including:
acquiring a multi-frame point cloud, wherein the point cloud comprises a plurality of initial obstacles;
deleting the initial obstacles to form a plurality of position holes, and filling the position holes to form a real point cloud background;
re-placing a new obstacle in the real point cloud background, wherein the new obstacle has marking data;
and adjusting the new obstacle according to the marking data of the new obstacle to obtain the layout data of the new obstacle.
With reference to the first aspect, in a first implementation manner of the first aspect, before the adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle, the embodiment of the present invention further includes:
and performing area division on each point cloud to generate a plurality of preset areas, so that each preset area includes a new obstacle and the new obstacle can be adjusted within the preset area.
With reference to the first aspect, in a second implementation manner of the first aspect, an embodiment of the present invention adjusts the new obstacle according to the labeling data of the new obstacle to obtain layout data of the new obstacle, and includes:
extracting position data in the marking data of the new obstacle, and adjusting the position of the new obstacle according to the position data;
and the position data obtained after adjustment is used as the layout data of the new obstacle.
With reference to the first aspect, in a third implementation manner of the first aspect, the adjusting the new obstacle according to the labeled data of the new obstacle to obtain layout data of the new obstacle includes:
extracting position data in the marking data of the new obstacle, and replacing the category of the new obstacle according to the position data;
and taking the category obtained after replacement as the layout data of the new obstacle.
With reference to the first aspect, in a fourth implementation manner of the first aspect, the adjusting the new obstacle according to the marking data of the new obstacle to obtain the layout data of the new obstacle includes:
extracting position data in the labeling data of the new obstacles, and calculating the space between the adjacent new obstacles;
and adding at least one additional obstacle in the space between the adjacent new obstacles, wherein the marking data corresponding to the additional obstacle is used as the layout data of the new obstacle.
With reference to the first aspect, in a fifth implementation manner of the first aspect, the adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle includes:
extracting the obstacle category in the labeling data of the new obstacle, and adjusting the orientation of the new obstacle according to the new obstacle category;
the adjusted orientation of the new obstacle is used as layout data of the new obstacle.
One of the above technical solutions has the following advantages or beneficial effects: the initial obstacles are deleted, and the holes they leave behind are filled with the surrounding environment to form a real point cloud background. A new obstacle is then placed in the real point cloud background, its marking data is acquired, and the new obstacle is adjusted according to that marking data to obtain more layout data. The amount of real data is increased, and the diversity of the real data is improved.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is a flowchart of a real data enhancement method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a real data enhancement apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a real data enhancement terminal according to an embodiment of the present invention.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
Example one
In a specific embodiment, as shown in fig. 1, a real data enhancement method is provided, which includes:
step S100: acquiring a multi-frame point cloud, wherein the point cloud comprises a plurality of initial obstacles.
When the collection vehicle moves along its route, the surrounding initial obstacles can be scanned by radar to obtain a multi-frame point cloud. The motion rule of the collection vehicle may be, for example, moving on a main road or on a specified auxiliary road; both are within the protection scope of this embodiment. Alternatively, the multi-frame point cloud may be obtained directly from an external source.
Step S200: and deleting the initial barrier to form a plurality of position holes, and filling the position holes to form a real point cloud background.
In each frame of the point cloud, a point cloud coordinate system is established with the collection vehicle as the origin, so each initial obstacle has coordinates relative to the collection vehicle. The absolute coordinates of an initial obstacle are obtained from the absolute coordinates of the collection vehicle and the relative coordinates of the obstacle. The initial obstacle is then labeled according to its absolute coordinates to obtain its marking data, which includes not only the position data of the initial obstacle but also its type, identification number and orientation.
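The relative-to-absolute coordinate conversion described above can be sketched in two dimensions as follows. This is only an illustration: the function name, the use of a vehicle yaw angle, and the 2D simplification are all assumptions, since the patent does not specify the transform.

```python
import numpy as np

def to_absolute(vehicle_xy, vehicle_yaw, relative_xy):
    """Transform an obstacle's coordinates from the vehicle frame to the world frame.

    vehicle_xy:  (x, y) absolute position of the collection vehicle
    vehicle_yaw: heading of the vehicle in radians (assumed available from pose data)
    relative_xy: (x, y) of the obstacle relative to the vehicle
    """
    c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
    rot = np.array([[c, -s], [s, c]])  # 2D rotation by the vehicle heading
    return np.asarray(vehicle_xy) + rot @ np.asarray(relative_xy)
```

With the vehicle at (10, 0) heading along the x-axis, an obstacle 1 m ahead and 2 m to the left ends up at absolute position (11, 2).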
Each initial obstacle is deleted according to its position data, leaving a position hole. The holes are then filled using the surrounding environment to form a real point cloud background.
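A minimal sketch of deleting obstacle points and filling each position hole from the surrounding environment might look like the following. The nearest-neighbour height fill is an assumed stand-in: the patent does not specify how the surrounding environment is used to fill a hole.

```python
import numpy as np

def remove_and_fill(points, obstacle_mask):
    """Delete obstacle points and fill the resulting holes from the surroundings.

    points:        (N, 3) array of x, y, z coordinates
    obstacle_mask: boolean mask marking points that belong to initial obstacles
    """
    background = points[~obstacle_mask]
    holes = points[obstacle_mask]
    filled = []
    for x, y, _ in holes:
        # Borrow the height of the nearest remaining background point,
        # a simple stand-in for filling "from the surrounding environment".
        d2 = (background[:, 0] - x) ** 2 + (background[:, 1] - y) ** 2
        z = background[np.argmin(d2), 2]
        filled.append([x, y, z])
    return np.vstack([background, np.array(filled).reshape(-1, 3)])
```

An obstacle point sitting 1.5 m above a flat ground is replaced by a point at ground height, so the hole no longer betrays where the obstacle stood.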
Step S300: and re-placing a new obstacle in the real point cloud background, wherein the new obstacle has marking data. The marking data of the new obstacle comprises position data, type, identification number, orientation and the like of the new obstacle.
Step S400: and adjusting the new obstacle according to the marking data of the new obstacle to acquire the layout data of the new obstacle.
Adjusting the new obstacle according to its marking data may proceed as follows. New obstacles can be added or deleted according to the requirements of different scenes, and the data obtained after addition or deletion is the layout data of the new obstacles. Alternatively, the position, orientation, identification number, type and the like of a new obstacle can be changed, and the marking data of the changed obstacle is its layout data. The amount of real data is thereby increased, and the diversity of the real data is improved.
In one embodiment, before adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle, the method further includes:
and carrying out area division on each point cloud to generate a plurality of preset areas so as to enable the preset areas to include new obstacles and enable the new obstacles to be adjusted in the preset areas.
Because the point cloud comprises a plurality of preset areas, each containing a new obstacle, the adjustable moving range of each new obstacle is delimited, which facilitates subsequent adjustment. The size of a preset area can be adapted to the size of the new obstacle; both options are within the protection scope of this embodiment.
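The division into preset areas could, for instance, be a regular grid over the point-cloud extent. The cell-based scheme and the function name are assumptions; the patent does not fix the shape of a preset area.

```python
def divide_regions(extent, cell):
    """Partition a point-cloud extent into axis-aligned preset regions.

    extent: (xmin, ymin, xmax, ymax) bounding box of the point cloud
    cell:   side length of each preset region
    Returns a list of (xmin, ymin, xmax, ymax) cells covering the extent.
    """
    xmin, ymin, xmax, ymax = extent
    regions = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            # Clip the last row/column so cells never spill past the extent.
            regions.append((x, y, min(x + cell, xmax), min(y + cell, ymax)))
            x += cell
        y += cell
    return regions
```

A 10 m x 10 m extent with 5 m cells yields four preset regions; each region can then hold one new obstacle.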
In one embodiment, adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle includes:
and extracting position data in the marking data of the new obstacle, adjusting the position of the new obstacle according to the position data, and taking the position data obtained after adjustment as layout data of the new obstacle.
And changing the positions of the new obstacles in the preset area to obtain position data of a plurality of new obstacles, and taking the newly obtained position data as layout data of the new obstacles. Since the position of the new obstacle is changed in the preset area, collision with the new obstacle in another area is avoided.
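Confining a position change to the obstacle's preset area might be sketched as below. The uniform sampling and the `margin` parameter are assumptions introduced for illustration.

```python
import random

def place_in_region(region, margin):
    """Draw a new position for an obstacle inside its preset region.

    region: (xmin, ymin, xmax, ymax) of the preset area containing the obstacle
    margin: half-extent of the obstacle, kept clear of the region border so the
            moved obstacle cannot collide with obstacles in neighbouring regions
    """
    xmin, ymin, xmax, ymax = region
    return (random.uniform(xmin + margin, xmax - margin),
            random.uniform(ymin + margin, ymax - margin))
```

Every sampled position stays at least `margin` away from the region border, which is what keeps obstacles in adjacent regions from overlapping.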
In one embodiment, adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle includes:
and extracting position data in the marking data of the new obstacle, replacing the type of the new obstacle according to the position data, and taking the type obtained after replacement as layout data of the new obstacle.
The position of the new obstacle is determined, and then the new obstacle is replaced by combining the position with the scene. In one example, in a road assistance scene, a bicycle-type new obstacle is replaced with a barrier sign-type obstacle, and the replaced barrier sign type is used as layout data of the new obstacle.
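The scene-dependent category replacement could be expressed as a lookup table. Everything here beyond the bicycle-to-roadblock-sign example is a hypothetical illustration; the patent names no other pairs.

```python
# Hypothetical scene-dependent replacement table; the patent only gives the
# auxiliary-road example (bicycle -> roadblock sign).
REPLACEMENTS = {
    "auxiliary_road": {"bicycle": "roadblock_sign"},
    "main_road": {"pedestrian": "car"},  # invented pair, for illustration only
}

def replace_category(category, scene):
    """Return the replacement category for this scene, or the original one."""
    return REPLACEMENTS.get(scene, {}).get(category, category)
```

Categories without an entry for the current scene pass through unchanged, so only scene-appropriate substitutions are made.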
In one embodiment, adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle includes:
extracting position data in the marking data of the new obstacle, and calculating the space between the adjacent new obstacles;
and adding at least one additional obstacle in the space between the adjacent new obstacles, and adding marking data corresponding to the obstacles as layout data of the new obstacles.
And calculating the space distance between two new obstacles according to the position data of two adjacent new obstacles, and adding at least one additional obstacle in the space distance. The type of the added obstacle can be selected according to the space size, and the added obstacle is prevented from colliding with two adjacent new obstacles. And adding marking data corresponding to the obstacles as layout data of the new obstacles.
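In one dimension, the spacing computation and a collision-free insertion might look like this. The interval representation, the `clearance` parameter, and the midpoint placement are assumptions made for the sketch.

```python
def insert_between(a, b, new_half_len, clearance):
    """Place an additional obstacle midway between two adjacent new obstacles.

    a, b:         (centre, half_length) of two obstacles along one axis
    new_half_len: half length of the obstacle to insert
    clearance:    minimum gap to keep on each side, so the added obstacle
                  cannot collide with either neighbour
    Returns the centre of the inserted obstacle, or None if the gap is too small.
    """
    (ax, ah), (bx, bh) = sorted([a, b])
    gap = (bx - bh) - (ax + ah)  # free space between the two obstacles
    if gap < 2 * new_half_len + 2 * clearance:
        return None              # not enough room to insert safely
    return (ax + ah + bx - bh) / 2  # midpoint of the free interval
```

Two obstacles whose facing edges are 8 m apart leave room for a 2 m obstacle with 0.5 m clearance on each side; a 1 m gap does not.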
In one embodiment, adjusting the new obstacle according to the labeling data of the new obstacle to obtain the layout data of the new obstacle includes:
and extracting the obstacle type in the marking data of the new obstacle, adjusting the direction of the new obstacle according to the new obstacle type, and taking the adjusted direction of the new obstacle as the layout data of the new obstacle.
And changing the orientation of the new obstacle according to the category and the scene of the new obstacle. For example, the rotation angle of a new obstacle of the automobile type cannot exceed a threshold value, otherwise a traffic regulation is violated. The adjusted orientation of the new obstacle is used as layout data of the new obstacle.
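The category-dependent rotation limit can be sketched as a clamp on the requested rotation. The limit values below are hypothetical; the patent only states that a car's rotation must not exceed some threshold.

```python
# Hypothetical per-category rotation limits in radians; the patent does not
# give concrete values.
MAX_ROTATION = {"car": 0.3, "pedestrian": 3.14}

def adjust_orientation(category, yaw, delta):
    """Rotate a new obstacle by delta, clamped to the category's legal limit."""
    limit = MAX_ROTATION.get(category, 0.0)     # unknown categories: no rotation
    delta = max(-limit, min(limit, delta))      # clamp to respect traffic rules
    return yaw + delta
```

A requested 1 rad rotation of a car is clamped to 0.3 rad, while a pedestrian can be turned freely within its much larger limit.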
Example two
In another specific embodiment, as shown in fig. 2, a real data enhancement apparatus is included, comprising:
the point cloud obtaining module 10 is used for obtaining a multi-frame point cloud, wherein the point cloud comprises a plurality of initial obstacles;
the point cloud background forming module 20 is configured to delete the initial obstacle to form a plurality of position holes, and fill the position holes to form a real point cloud background;
the obstacle setting module 30 is used for re-placing a new obstacle in the real point cloud background, wherein the new obstacle has real marking data;
and the real data acquisition module 40 is configured to adjust the new obstacle according to the labeled data of the new obstacle, so as to acquire layout data of the new obstacle.
In one embodiment, the apparatus further comprises:
and the point cloud area division module is used for performing area division on each point cloud to generate a plurality of preset areas, so that each preset area includes a new obstacle and the new obstacle can be adjusted within the preset area.
In one embodiment, the real data acquisition module 40 includes:
and the position data adding unit is used for extracting position data in the marking data of the new obstacle, adjusting the position of the new obstacle according to the position data, and taking the position data obtained after adjustment as layout data of the new obstacle.
In one embodiment, the real data acquiring module 40 further includes:
and the category adding unit is used for extracting the position data in the marking data of the new obstacle, replacing the category of the new obstacle according to the position data, and taking the category obtained after replacement as the layout data of the new obstacle.
In one embodiment, the real data acquiring module 40 further includes:
and the marking data adding unit is used for extracting position data from the marking data of the new obstacles, calculating the space between adjacent new obstacles, adding at least one additional obstacle in the space between the adjacent new obstacles, and using the marking data corresponding to the added obstacle as the layout data of the new obstacle.
In one embodiment, the real data acquiring module 40 further includes:
and the orientation data adding unit is used for extracting the obstacle type in the labeling data of the new obstacle, adjusting the orientation of the new obstacle according to the new obstacle type, and taking the adjusted orientation of the new obstacle as the layout data of the new obstacle.
EXAMPLE III
An embodiment of the present invention provides a real data enhancement terminal, as shown in fig. 3, including:
a memory 400 and a processor 500, the memory 400 having stored therein a computer program operable on the processor 500. The processor 500, when executing the computer program, implements the real data enhancement method in the above embodiments. The number of the memory 400 and the processor 500 may be one or more.
A communication interface 600 for the memory 400 and the processor 500 to communicate with the outside.
If the memory 400, the processor 500, and the communication interface 600 are implemented independently, the memory 400, the processor 500, and the communication interface 600 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 400, the processor 500, and the communication interface 600 are integrated on a single chip, the memory 400, the processor 500, and the communication interface 600 may complete communication with each other through an internal interface.
Example four
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the real data enhancement method according to any one of embodiments included in the first aspect.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present invention, and these should be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (12)
1. A method for enhancing real data, comprising:
acquiring a multi-frame point cloud, wherein the point cloud comprises a plurality of initial obstacles;
deleting the initial obstacles to form a plurality of position holes, and filling the position holes to form a real point cloud background;
re-placing a new obstacle in the real point cloud background, wherein the new obstacle has marking data;
adjusting the new obstacle according to the marking data of the new obstacle to obtain the layout data of the new obstacle;
before adjusting the new obstacle according to the labeled data of the new obstacle to obtain the layout data of the new obstacle, the method further includes:
and carrying out area division on each point cloud to generate a plurality of preset areas so as to enable the preset areas to include the new obstacles and enable the new obstacles to be adjusted in the preset areas.
2. The method of claim 1, wherein adjusting the new obstacle according to the labeling data of the new obstacle to obtain layout data of the new obstacle comprises:
extracting position data in the marking data of the new obstacle, and adjusting the position of the new obstacle according to the position data;
and the position data obtained after adjustment is used as the layout data of the new obstacle.
3. The method of claim 1, wherein adjusting the new obstacle according to the labeling data of the new obstacle to obtain layout data of the new obstacle comprises:
extracting position data in the marking data of the new obstacle, and replacing the category of the new obstacle according to the position data;
and taking the category obtained after replacement as the layout data of the new obstacle.
4. The method of claim 1, wherein adjusting the new obstacle according to the labeling data of the new obstacle to obtain layout data of the new obstacle comprises:
extracting position data in the labeling data of the new obstacles, and calculating the space between the adjacent new obstacles;
and adding at least one additional obstacle in the space between the adjacent new obstacles, wherein the marking data corresponding to the additional obstacle is used as the layout data of the new obstacle.
5. The method of claim 1, wherein adjusting the new obstacle according to the labeling data of the new obstacle to obtain layout data of the new obstacle comprises:
extracting the obstacle category in the labeling data of the new obstacle, and adjusting the orientation of the new obstacle according to the new obstacle category;
the adjusted orientation of the new obstacle is used as layout data of the new obstacle.
6. A real data enhancement apparatus, comprising:
the point cloud acquisition module is used for acquiring a multi-frame point cloud, wherein the point cloud comprises a plurality of initial obstacles;
the point cloud background forming module is used for deleting the initial barrier to form a plurality of position holes, and filling the position holes to form a real point cloud background;
the obstacle setting module is used for putting a new obstacle again in the real point cloud background, and the new obstacle has real marking data;
the real data acquisition module is used for adjusting the new obstacle according to the marking data of the new obstacle so as to acquire the layout data of the new obstacle;
the device further comprises:
and the point cloud area division module is used for carrying out area division on each point cloud to generate a plurality of preset areas, so as to enable the preset areas to include the new obstacles and enable the new obstacles to be adjusted in the preset areas.
7. The apparatus of claim 6, wherein the real data acquisition module comprises:
and the position data adding unit is used for extracting position data in the marking data of the new obstacle, adjusting the position of the new obstacle according to the position data, and taking the position data obtained after adjustment as layout data of the new obstacle.
8. The apparatus of claim 6, wherein the real data acquisition module further comprises:
and the category adding unit is used for extracting the position data in the marking data of the new obstacle, replacing the category of the new obstacle according to the position data, and taking the category obtained after replacement as the layout data of the new obstacle.
9. The apparatus of claim 6, wherein the real data acquisition module further comprises:
and the marking data adding unit is used for extracting position data from the marking data of the new obstacle, calculating a space between the adjacent new obstacles, adding at least one additional obstacle in the space between the adjacent new obstacles, and taking the marking data corresponding to the added obstacle as layout data of the new obstacle.
10. The apparatus of claim 6, wherein the real data acquisition module further comprises:
and the orientation data adding unit is used for extracting the obstacle type in the labeling data of the new obstacle, adjusting the orientation of the new obstacle according to the new obstacle type, and taking the adjusted orientation of the new obstacle as the layout data of the new obstacle.
11. A real data enhancement terminal, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811045664.4A CN109215136B (en) | 2018-09-07 | 2018-09-07 | Real data enhancement method and device and terminal |
EP19185787.9A EP3621040B1 (en) | 2018-09-07 | 2019-07-11 | Data augmentation method, device and terminal |
US16/514,507 US11205289B2 (en) | 2018-09-07 | 2019-07-17 | Method, device and terminal for data augmentation |
JP2019133292A JP7227867B2 (en) | 2018-09-07 | 2019-07-19 | REAL DATA EXTENSION METHOD, DEVICE AND TERMINAL |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811045664.4A CN109215136B (en) | 2018-09-07 | 2018-09-07 | Real data enhancement method and device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109215136A CN109215136A (en) | 2019-01-15 |
CN109215136B true CN109215136B (en) | 2020-03-20 |
Family
ID=64987804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811045664.4A Active CN109215136B (en) | 2018-09-07 | 2018-09-07 | Real data enhancement method and device and terminal |
Country Status (4)
Country | Link |
---|---|
US (1) | US11205289B2 (en) |
EP (1) | EP3621040B1 (en) |
JP (1) | JP7227867B2 (en) |
CN (1) | CN109215136B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109934141B (en) * | 2019-03-01 | 2021-05-04 | 北京百度网讯科技有限公司 | Method and device for marking data |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW200945245A (en) * | 2008-03-12 | 2009-11-01 | Harris Corp | Registration of 3D point cloud data by creation of filtered density images |
CN104252716A (en) * | 2014-10-10 | 2014-12-31 | 江苏恒天先进制造科技有限公司 | Museum three-dimensional digital modeling system based on reverse engineering and use method thereof |
TW201643063A (en) * | 2015-06-04 | 2016-12-16 | Univ Nat Formosa | Method to reconstruct the car accident site by three-dimensional animation |
CN107871129A (en) * | 2016-09-27 | 2018-04-03 | 北京百度网讯科技有限公司 | Method and apparatus for handling cloud data |
CN108492356A (en) * | 2017-02-13 | 2018-09-04 | 苏州宝时得电动工具有限公司 | Augmented reality system and its control method |
Family Cites Families (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2190596C (en) | 1994-05-19 | 2002-03-26 | Theodore M. Lachinski | Method for collecting and processing visual and spatial position information |
JP4165722B2 (en) * | 1998-02-26 | 2008-10-15 | 株式会社バンダイナムコゲームス | Image generating apparatus and information storage medium |
JP3300334B2 (en) | 1999-04-16 | 2002-07-08 | 松下電器産業株式会社 | Image processing device and monitoring system |
JP2001013645A (en) | 1999-07-01 | 2001-01-19 | Konica Corp | Film unit with lens |
FR2853121B1 (en) | 2003-03-25 | 2006-12-15 | Imra Europe Sa | DEVICE FOR MONITORING THE SURROUNDINGS OF A VEHICLE |
JP2005202922A (en) | 2003-12-18 | 2005-07-28 | Nissan Motor Co Ltd | Drive assisting device and drive assisting program |
JP5025940B2 (en) * | 2005-10-26 | 2012-09-12 | 任天堂株式会社 | Information processing program, information processing apparatus, information processing system, and information processing method |
US20080243378A1 (en) | 2007-02-21 | 2008-10-02 | Tele Atlas North America, Inc. | System and method for vehicle navigation and piloting including absolute and relative coordinates |
UA101493C2 (en) | 2008-03-11 | 2013-04-10 | Инсайт Корпорейшн | Azetidine and cyclobutane derivatives as jak inhibitors |
US8611585B2 (en) | 2008-04-24 | 2013-12-17 | GM Global Technology Operations LLC | Clear path detection using patch approach |
US8126642B2 (en) | 2008-10-24 | 2012-02-28 | Gray & Company, Inc. | Control and systems for autonomously driven vehicles |
CN101441076B (en) | 2008-12-29 | 2010-06-02 | 东软集团股份有限公司 | Method and device for detecting barrier |
US8260539B2 (en) * | 2010-05-12 | 2012-09-04 | GM Global Technology Operations LLC | Object and vehicle detection and tracking using 3-D laser rangefinder |
WO2012001755A1 (en) | 2010-07-02 | 2012-01-05 | 株式会社ソニー・コンピュータエンタテインメント | Information processing system, information processing device, and information processing method |
JP5206752B2 (en) | 2010-08-30 | 2013-06-12 | 株式会社デンソー | Driving environment recognition device |
JP5535025B2 (en) | 2010-10-08 | 2014-07-02 | 三菱電機株式会社 | Outdoor feature detection system, program for outdoor feature detection system, and recording medium for program for outdoor feature detection system |
JP5316572B2 (en) | 2011-03-28 | 2013-10-16 | トヨタ自動車株式会社 | Object recognition device |
US8605998B2 (en) | 2011-05-06 | 2013-12-10 | Toyota Motor Engineering & Manufacturing North America, Inc. | Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping |
CN103258338A (en) | 2012-02-16 | 2013-08-21 | 克利特股份有限公司 | Method and system for driving simulated virtual environments with real data |
CN102663196B (en) | 2012-04-17 | 2014-04-16 | 中南大学 | Automobile crane hoisting simulation method on basis of virtual reality |
US9255989B2 (en) | 2012-07-24 | 2016-02-09 | Toyota Motor Engineering & Manufacturing North America, Inc. | Tracking on-road vehicles with sensors of different modalities |
US9056395B1 (en) * | 2012-09-05 | 2015-06-16 | Google Inc. | Construction zone sign detection using light detection and ranging |
JP5949353B2 (en) | 2012-09-07 | 2016-07-06 | 株式会社Ihi | Analysis apparatus and analysis method |
JP2014106585A (en) * | 2012-11-26 | 2014-06-09 | Sony Corp | Information processing device, terminal device, information processing method and program |
US9082014B2 (en) | 2013-03-14 | 2015-07-14 | The Nielsen Company (Us), Llc | Methods and apparatus to estimate demography based on aerial images |
US9523772B2 (en) * | 2013-06-14 | 2016-12-20 | Microsoft Technology Licensing, Llc | Object removal using lidar-based classification |
CN104376297B (en) | 2013-08-12 | 2017-06-23 | 株式会社理光 | The detection method and device of the line style Warning Mark on road |
JP5919243B2 (en) | 2013-10-18 | 2016-05-18 | 本田技研工業株式会社 | Driving training apparatus and driving training method |
CN103914830B (en) | 2014-02-22 | 2017-02-01 | 小米科技有限责任公司 | Straight line detection method and device |
CN104899855A (en) | 2014-03-06 | 2015-09-09 | 株式会社日立制作所 | Three-dimensional obstacle detection method and apparatus |
CN104020674B (en) | 2014-04-21 | 2017-01-25 | 华南农业大学 | Matlab simulation visualized platform of vehicle Bug obstacle avoidance algorithm |
CN104183014B (en) * | 2014-08-13 | 2017-01-18 | 浙江大学 | An information labeling method having high fusion degree and oriented to city augmented reality |
KR101687073B1 (en) | 2014-10-22 | 2016-12-15 | 주식회사 만도 | Apparatus for esimating tunnel height and method thereof |
CN104331910B (en) | 2014-11-24 | 2017-06-16 | 沈阳建筑大学 | A kind of track obstacle detecting system based on machine vision |
CN104457569B (en) | 2014-11-27 | 2017-06-16 | 大连理工大学 | A kind of large-scale composite board geometric parameter vision measuring method |
CN104950883A (en) | 2015-05-14 | 2015-09-30 | 西安电子科技大学 | Mobile robot route planning method based on distance grid map |
CN104933708A (en) | 2015-06-07 | 2015-09-23 | 浙江大学 | Barrier detection method in vegetation environment based on multispectral and 3D feature fusion |
CN104931977B (en) | 2015-06-11 | 2017-08-25 | 同济大学 | A kind of obstacle recognition method for intelligent vehicle |
US9959765B2 (en) | 2015-07-20 | 2018-05-01 | Dura Operating Llc | System and method for providing alert to a vehicle or an advanced driver assist system based on vehicle dynamics input |
US10410513B2 (en) | 2015-07-20 | 2019-09-10 | Dura Operating, Llc | Fusion of non-vehicle-to-vehicle communication equipped vehicles with unknown vulnerable road user |
US20170092000A1 (en) * | 2015-09-25 | 2017-03-30 | Moshe Schwimmer | Method and system for positioning a virtual object in a virtual simulation environment |
US10745003B2 (en) * | 2015-11-04 | 2020-08-18 | Zoox, Inc. | Resilient safety system for a robotic vehicle |
JP6464075B2 (en) | 2015-11-11 | 2019-02-06 | 日本電信電話株式会社 | What-if simulation apparatus, method, and program |
US10557940B2 (en) | 2015-11-30 | 2020-02-11 | Luminar Technologies, Inc. | Lidar system |
JP2017113306A (en) * | 2015-12-24 | 2017-06-29 | 株式会社コロプラ | Program and computer |
CN105761308B (en) * | 2016-02-29 | 2018-09-07 | 武汉大学 | A kind of occlusion area building facade method for reconstructing of ground LiDAR and image data fusion |
US10690495B2 (en) | 2016-03-14 | 2020-06-23 | Canon Kabushiki Kaisha | Ranging apparatus and moving object capable of high-accuracy ranging |
CN105844600B (en) * | 2016-04-27 | 2018-03-16 | 北京航空航天大学 | A kind of extraterrestrial target three-dimensional point cloud fairing denoising method |
CN105957145A (en) * | 2016-04-29 | 2016-09-21 | 百度在线网络技术(北京)有限公司 | Road barrier identification method and device |
EP3467788B1 (en) * | 2016-05-27 | 2022-08-03 | Rakuten Group, Inc. | Three-dimensional model generation system, three-dimensional model generation method, and program |
JP6088094B1 (en) * | 2016-06-20 | 2017-03-01 | 株式会社Cygames | System for creating a mixed reality environment |
CN106204457A (en) | 2016-07-19 | 2016-12-07 | 科盾科技股份有限公司 | A kind of method for capture target and catching device |
US11243080B2 (en) | 2016-07-26 | 2022-02-08 | Nissan Motor Co., Ltd. | Self-position estimation method and self-position estimation device |
CN107818293A (en) * | 2016-09-14 | 2018-03-20 | 北京百度网讯科技有限公司 | Method and apparatus for handling cloud data |
CN106462757B (en) | 2016-09-26 | 2019-09-06 | 深圳市锐明技术股份有限公司 | A kind of rapid detection method and device of pairs of lane line |
JP6548691B2 (en) | 2016-10-06 | 2019-07-24 | 株式会社アドバンスド・データ・コントロールズ | Image generation system, program and method, simulation system, program and method |
RO132599A2 (en) | 2016-11-23 | 2018-05-30 | Centrul It Pentru Ştiinţă Şi Tehnologie S.R.L. | Modular equipment for road inspection, passable way and adjacent area included, meant to be mounted to ordinary vehicles |
CN106707293B (en) | 2016-12-01 | 2019-10-29 | 百度在线网络技术(北京)有限公司 | Obstacle recognition method and device for vehicle |
CN106599832A (en) | 2016-12-09 | 2017-04-26 | 重庆邮电大学 | Method for detecting and recognizing various types of obstacles based on convolution neural network |
CN108268518A (en) | 2016-12-30 | 2018-07-10 | 乐视汽车(北京)有限公司 | The device for the grid map that generation controls for unmanned vehicle navigation |
CN106845412B (en) | 2017-01-20 | 2020-07-10 | 百度在线网络技术(北京)有限公司 | Obstacle identification method and device, computer equipment and readable medium |
CN106919908B (en) | 2017-02-10 | 2020-07-28 | 百度在线网络技术(北京)有限公司 | Obstacle identification method and device, computer equipment and readable medium |
CN106997049B (en) | 2017-03-14 | 2020-07-03 | 奇瑞汽车股份有限公司 | Method and device for detecting barrier based on laser point cloud data |
CN107103627B (en) | 2017-04-27 | 2020-12-11 | 深圳市天双科技有限公司 | Method for calibrating external parameters of automobile panoramic camera based on lane line |
US10803663B2 (en) * | 2017-08-02 | 2020-10-13 | Google Llc | Depth sensor aided estimation of virtual reality environment boundaries |
US11023596B2 (en) * | 2017-08-30 | 2021-06-01 | Go Ghost, LLC | Non-rasterized image streaming system that uses ray tracing samples |
US10771350B2 (en) * | 2017-09-26 | 2020-09-08 | Siemens Aktiengesellschaft | Method and apparatus for changeable configuration of objects using a mixed reality approach with augmented reality |
CN107657237B (en) | 2017-09-28 | 2020-03-31 | 东南大学 | Automobile collision detection method and system based on deep learning |
CN107659774B (en) | 2017-09-30 | 2020-10-13 | 北京拙河科技有限公司 | Video imaging system and video processing method based on multi-scale camera array |
CN107678306B (en) | 2017-10-09 | 2021-04-16 | 驭势(上海)汽车科技有限公司 | Dynamic scene information recording and simulation playback method, device, equipment and medium |
CN107832806A (en) | 2017-12-06 | 2018-03-23 | 四川知创空间孵化器管理有限公司 | A kind of car plate type identification method and system |
US10169680B1 (en) * | 2017-12-21 | 2019-01-01 | Luminar Technologies, Inc. | Object identification and labeling tool for training autonomous vehicle controllers |
CN108156419A (en) | 2017-12-22 | 2018-06-12 | 湖南源信光电科技股份有限公司 | More focal length lens linkage imaging camera machine system based on multiple features combining and Camshift algorithms |
CN108010360A (en) | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN107993512A (en) | 2018-01-03 | 2018-05-04 | 深圳市欣横纵技术股份有限公司 | One seed nucleus security Table Top Tool systems and application method |
CN108256506B (en) | 2018-02-14 | 2020-11-24 | 北京市商汤科技开发有限公司 | Method and device for detecting object in video and computer storage medium |
DE112018006982B4 (en) * | 2018-03-05 | 2021-07-08 | Mitsubishi Electric Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING PROGRAM |
US11094112B2 (en) * | 2018-09-06 | 2021-08-17 | Foresight Ai Inc. | Intelligent capturing of a dynamic physical environment |
CN109146898B (en) | 2018-09-07 | 2020-07-24 | 百度在线网络技术(北京)有限公司 | Simulation data volume enhancing method and device and terminal |
US10769846B2 (en) * | 2018-10-11 | 2020-09-08 | GM Global Technology Operations LLC | Point cloud data compression in an autonomous vehicle |
US11295517B2 (en) * | 2019-11-15 | 2022-04-05 | Waymo Llc | Generating realistic point clouds |
- 2018-09-07: CN application CN201811045664.4A, patent CN109215136B (active)
- 2019-07-11: EP application EP19185787.9A, patent EP3621040B1 (active)
- 2019-07-17: US application US16/514,507, patent US11205289B2 (active)
- 2019-07-19: JP application JP2019133292, patent JP7227867B2 (active)
Also Published As
Publication number | Publication date |
---|---|
JP7227867B2 (en) | 2023-02-22 |
US20200082584A1 (en) | 2020-03-12 |
US11205289B2 (en) | 2021-12-21 |
EP3621040A1 (en) | 2020-03-11 |
CN109215136A (en) | 2019-01-15 |
EP3621040B1 (en) | 2023-09-06 |
JP2020042790A (en) | 2020-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146898B (en) | Simulation data volume enhancing method and device and terminal | |
CN109255181B (en) | Obstacle distribution simulation method and device based on multiple models and terminal | |
EP3876141A1 (en) | Object detection method, related device and computer storage medium | |
US20180060655A1 (en) | Method and apparatus for object identification and location correlation based on images | |
US11144770B2 (en) | Method and device for positioning vehicle, device, and computer readable storage medium | |
CN112258519B (en) | Automatic extraction method and device for way-giving line of road in high-precision map making | |
CN112862890B (en) | Road gradient prediction method, device and storage medium | |
JPWO2017130285A1 (en) | Vehicle determination device, vehicle determination method, and vehicle determination program | |
CN115147328A (en) | Three-dimensional target detection method and device | |
CN109215136B (en) | Real data enhancement method and device and terminal | |
CN112150550B (en) | Fusion positioning method and device | |
CN116071284A (en) | Traffic marker detection method and training method of traffic marker detection model | |
CN112902911B (en) | Ranging method, device, equipment and storage medium based on monocular camera | |
CN109598199B (en) | Lane line generation method and device | |
US20200080835A1 (en) | Method, device, apparatus and storage medium for detecting a height of an obstacle | |
CN116681965A (en) | Training method of target detection model and target detection method | |
CN115035495A (en) | Image processing method and device | |
CN115937007B (en) | Wind shear identification method and device, electronic equipment and medium | |
CN115527205A (en) | Lane line marking related method, vehicle-mounted device and storage medium | |
CN114779271B (en) | Target detection method and device, electronic equipment and storage medium | |
CN109636841B (en) | Lane line generation method and device | |
US20200307627A1 (en) | Road boundary determination | |
CN114543820A (en) | Road matching method and related device | |
CN117437610A (en) | Vehicle 3D target detection method, system and equipment based on monocular vision | |
CN117593704A (en) | Vehicle target detection system and method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||