CN108280886A - Laser point cloud annotation method, device and readable storage medium - Google Patents
Laser point cloud annotation method, device and readable storage medium Download PDF Info
- Publication number
- CN108280886A CN108280886A CN201810075279.8A CN201810075279A CN108280886A CN 108280886 A CN108280886 A CN 108280886A CN 201810075279 A CN201810075279 A CN 201810075279A CN 108280886 A CN108280886 A CN 108280886A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- frame
- laser point
- laser
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
An embodiment of the present invention provides a laser point cloud annotation method, a device, and a readable storage medium, belonging to the field of recognition technology. The method includes: obtaining at least two frames of laser point cloud data, including a first frame of laser point cloud data and a second frame of laser point cloud data; overlappingly displaying, in a three-dimensional scene, the first object point cloud data of a first object and the second object point cloud data of a second object; creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data; and using the first selection box to mark the first object point cloud data with a first label, and using the second selection box to mark the second object point cloud data with a second label. This method makes it possible to annotate multiple frames of laser point cloud data simultaneously, without annotating all laser point cloud data one by one, thereby improving the speed and efficiency of laser point annotation.
Description
Technical field
The present invention relates to the field of recognition technology, and in particular to a laser point cloud annotation method, a device, and a readable storage medium.
Background technology
With the development of autonomous driving technology, recognizing target objects around a vehicle (such as vehicles, pedestrians, tricycles, and bicycles) has become particularly important. At present, a common approach is to detect the target objects around the vehicle by lidar (for example, an 8-line, 16-line, 32-line, or 64-line lidar): the lidar emits laser beams into the surroundings, and when a beam hits an object, a laser point cloud is returned, from which the surrounding target objects and their size, position, movement speed, and so on are identified.
At present, the main way of identifying target objects from a laser point cloud is as follows: the received laser point cloud is first annotated point by point manually to obtain laser point cloud sample data corresponding to target objects; machine learning is then performed on the sample data to obtain an object recognition model; and the target object corresponding to a laser point cloud is identified by the object recognition model.
Because the laser point cloud is currently annotated point by point manually, and the number of laser points contained in a point cloud is huge, this annotation approach is slow. Moreover, the point cloud also contains a large number of laser points that do not belong to any target object, and annotators may still have to process these points, which is extremely inefficient.
Summary of the invention
In view of this, embodiments of the present invention aim to provide a laser point cloud annotation method, a device, and a readable storage medium to address the above problems.
In a first aspect, an embodiment of the present invention provides a laser point cloud annotation method. The method includes: obtaining at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, where the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object; overlappingly displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object; creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data; and using the first selection box to mark the first object point cloud data with a first label, and using the second selection box to mark the second object point cloud data with a second label.
Further, when the first object and the second object are different objects, the first label and the second label are different; when the first object and the second object are the same object, the first label and the second label are identical.
Further, the first label includes a first number, or includes the first number and a first type of the first object; the second label includes a second number, or includes the second number and a second type of the second object.
Further, creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data includes: obtaining a selection box creation request input by a user; and, based on the selection box creation request, creating the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
Further, creating the first selection box for the first object point cloud data and the second selection box for the second object point cloud data based on the selection box creation request includes: creating an object selection box for the first object point cloud data and the second object point cloud data based on the selection box creation request; and dividing the object selection box into the first selection box corresponding to the first object point cloud data and the second selection box corresponding to the second object point cloud data.
Further, overlappingly displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object includes: building a three-dimensional scene based on the first object point cloud data and the second object point cloud data, and establishing a three-dimensional coordinate system corresponding to the three-dimensional scene; converting the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system; and placing each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
In a second aspect, an embodiment of the present invention provides a laser point cloud annotation device. The device includes: a point cloud data acquisition module, configured to obtain at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, where the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object; a display module, configured to overlappingly display, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object; a selection box creation module, configured to create a first selection box for the first object point cloud data and a second selection box for the second object point cloud data; and an annotation module, configured to use the first selection box to mark the first object point cloud data with a first label, and to use the second selection box to mark the second object point cloud data with a second label.
Further, the selection box creation module includes: a request acquisition unit, configured to obtain a selection box creation request input by a user; and a creation unit, configured to create, based on the selection box creation request, the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
Further, the display module includes: a three-dimensional scene establishing unit, configured to build a three-dimensional scene based on the first object point cloud data and the second object point cloud data, and to establish a three-dimensional coordinate system corresponding to the three-dimensional scene; a coordinate conversion unit, configured to convert the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system; and a display unit, configured to place each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
In a third aspect, an embodiment of the present invention provides a readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method described above.
The beneficial effects of the embodiments of the present invention are as follows:
An embodiment of the present invention provides a laser point cloud annotation method, a device, and a readable storage medium. The method first obtains at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, where the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object; then overlappingly displays, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object; then creates a first selection box for the first object point cloud data and a second selection box for the second object point cloud data; and finally uses the first selection box to mark the first object point cloud data with a first label, and the second selection box to mark the second object point cloud data with a second label. This method makes it possible to annotate multiple frames of laser point cloud data simultaneously, without annotating all laser point cloud data one by one, thereby improving the speed and efficiency of laser point annotation.
Other features and advantages of the present invention will be set forth in the subsequent description, and will in part become apparent from the description or be understood by implementing the embodiments of the present invention. The objectives and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
Description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and therefore should not be construed as limiting its scope. Those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 shows a structural block diagram of an electronic equipment that can be applied in the embodiments of the present application;
Fig. 2 is a flow chart of a laser point cloud annotation method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of laser point cloud data displayed in a three-dimensional scene, provided by an embodiment of the present invention;
Fig. 4 is a flow chart of step S120 in a laser point cloud annotation method provided by an embodiment of the present invention;
Fig. 5 is another schematic diagram of laser point cloud data displayed in a three-dimensional scene, provided by an embodiment of the present invention;
Fig. 6 is a structural block diagram of a laser point cloud annotation device provided by an embodiment of the present invention.
Specific implementation mode
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that similar labels and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish the description and are not to be understood as indicating or implying relative importance.
Please refer to Fig. 1, which shows a structural block diagram of an electronic equipment 100 that can be applied in the embodiments of the present application. The electronic equipment 100 may be a terminal device, which includes a laser point cloud annotation device, a memory 101, a storage controller 102, a processor 103, a peripheral interface 104, an input/output unit 105, an audio unit 106, and a display unit 107.
The memory 101, the storage controller 102, the processor 103, the peripheral interface 104, the input/output unit 105, the audio unit 106, and the display unit 107 are electrically connected to each other, directly or indirectly, to realize the transmission or interaction of data. For example, these elements can be electrically connected to each other through one or more communication buses or signal lines. The laser point cloud annotation device includes at least one software function module that can be stored in the memory 101 in the form of software or firmware, or solidified in the operating system (OS) of the laser point cloud annotation device. The processor 103 is configured to execute the executable modules stored in the memory 101, such as the software function modules or computer programs included in the laser point cloud annotation device.
The memory 101 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like. The memory 101 is used to store a program, and the processor 103 executes the program after receiving an execution instruction. The method defined by the flow disclosed in any embodiment of the present invention can be applied to the processor 103, or implemented by the processor 103.
The processor 103 may be an integrated circuit chip with signal processing capability. The processor 103 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute each method, step, and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor 103 may be any conventional processor.
The peripheral interface 104 couples various input/output devices to the processor 103 and the memory 101. In some embodiments, the peripheral interface 104, the processor 103, and the storage controller 102 can be implemented in a single chip. In some other examples, they can each be implemented by an independent chip.
The input/output unit 105 is used to provide input data to the user to realize the interaction between the user and the server (or local terminal). The input/output unit 105 may be, but is not limited to, a mouse, a keyboard, and the like.
The audio unit 106 provides an audio interface to the user, and may include one or more microphones, one or more loudspeakers, and an audio circuit.
The display unit 107 provides an interactive interface (for example, a user operation interface) between the electronic equipment 100 and the user, or is used to display image data for the user's reference. In this embodiment, the display unit 107 may be a liquid crystal display or a touch display. If it is a touch display, it may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display can sense touch operations generated simultaneously at one or more positions on the touch display, and hand the sensed touch operations over to the processor 103 for calculation and processing.
It can be understood that the structure shown in Fig. 1 is only illustrative; the electronic equipment 100 may also include more or fewer components than those shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 may be implemented in hardware, software, or a combination thereof.
Please refer to Fig. 2, which is a flow chart of a laser point cloud annotation method provided by an embodiment of the present invention. The method includes the following steps:
Step S110: Obtain at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data.
Laser point cloud data refers to the spatial coordinates of each sampled point on an object surface, obtained by laser under the same spatial reference frame; what is obtained is a massive set of points expressing the spatial distribution and surface characteristics of the object, and this point set is called a "point cloud". The attributes of a point cloud include spatial resolution, positional accuracy, surface normals, and so on.
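As an illustrative sketch only (the patent does not prescribe a data layout; the array structure and the attribute name below are assumptions), a frame of laser point cloud data is commonly held as an N×3 array of spatial coordinates, with per-point attributes such as return intensity stored in parallel:

```python
import numpy as np

# A hypothetical frame of laser point cloud data: one row per laser point,
# columns are the x, y, z coordinates in the sensor's coordinate system.
frame = np.array([
    [1.2, 0.4, 0.1],
    [1.3, 0.5, 0.1],
    [8.7, 2.1, 0.0],
], dtype=np.float64)

# Optional per-point attribute (e.g. return intensity), aligned row by row.
intensity = np.array([0.9, 0.8, 0.2])

assert frame.shape == (3, 3)       # 3 laser points, 3 coordinates each
assert intensity.shape == (3,)     # one attribute value per point
```

This row-per-point layout is what the later steps (coordinate conversion, selection-box filtering) operate on.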
In the field of autonomous driving, in order to identify target objects around the vehicle (such as vehicles, pedestrians, tricycles, and bicycles) during driving, data samples of multiple target objects need to be obtained in advance for machine training and learning, so that surrounding objects can be identified automatically during driving.
In the process of obtaining data samples of target objects, in order to improve sample acquisition efficiency, multiple frames of laser point cloud data need to be obtained first. Specifically, the multiple frames of laser point cloud data include at least two frames of laser point cloud data, namely a first frame of laser point cloud data and a second frame of laser point cloud data, where the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object.
Specifically, as shown in Fig. 3, the laser point cloud data can be collected by laser point cloud equipment, which can be integrated on a worker's backpack or on a movable collection platform. For example, when laser point cloud data of target objects on a road needs to be collected, the laser point cloud equipment traverses the entire scene within a specific range as the worker moves. During this process, the laser point cloud equipment collects the laser point cloud data in the entire scene at set time intervals, with a preset interval (for example, 0.1 s) between the collection times of adjacent frames, so that multiple frames of laser point cloud data can be obtained. Each collected frame of laser point cloud data is expressed in the spatial coordinate system of the laser point cloud equipment at the moment of collection, and that spatial coordinate system differs at different collection moments. Of course, multiple frames of laser point cloud data collected in advance can also be stored in a server and called directly when needed later.
In addition, as an alternative, during vehicle travel, a vehicle-mounted laser scanner can be used to collect multiple frames of laser point clouds of each target object on the road at a preset collection frequency (for example, an interval of 0.1 s), and the collected multiple frames of laser point cloud data are then sent to a server for storage. When a target object needs to be annotated, its laser point cloud data can be obtained from the server, displayed in a three-dimensional scene, and then annotated.
The first frame of laser point cloud data may be the laser point cloud data collected by the laser point cloud equipment at a first moment, and the second frame of laser point cloud data may be the laser point cloud data collected at a second moment, where the interval between the first moment and the second moment may be 0.1 s; of course, collection can continue for a third frame, a fourth frame, and further frames of laser point cloud data. The first object and the second object may be different target objects, in which case the first object point cloud data and the second object point cloud data are laser point cloud data corresponding to different target objects; the first object and the second object may also be the same target object, in which case the first object point cloud data and the second object point cloud data are laser point cloud data of that object collected at different moments.
Step S120: Overlappingly display, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object.
In order to annotate the target objects, a three-dimensional scene of the target objects also needs to be built: the laser point cloud data under different spatial coordinate systems need to be repositioned to generate a three-dimensional graphic under one unified coordinate system.
Specifically, please refer to Fig. 4; step S120 includes:
Step S121: Build a three-dimensional scene based on the first object point cloud data and the second object point cloud data, and establish a three-dimensional coordinate system corresponding to the three-dimensional scene.
Creating a three-dimensional scene refers to performing three-dimensional visualization of the point cloud data of the target objects on a terminal device, and can be done with relevant tools such as ArcGIS. Of course, in the embodiments of the present invention, the three-dimensional scene can be built by, but is not limited to, WebGL technology, OSG (OpenSceneGraph), or STK (Satellite Tool Kit); the present application does not strictly limit the way the three-dimensional scene is built.
Step S122: Convert the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system.
The laser point cloud data were collected per unit time, so the coordinate systems of different frames of laser point cloud data are independent of each other. Therefore, each frame of laser point cloud data needs to be registered and normalized into a unified coordinate system; that is, the coordinates of each laser point in the first object point cloud data and the second object point cloud data are converted into three-dimensional coordinates in the three-dimensional coordinate system. The registration of laser point cloud data can be completed with the Cyclone software tool.
Of course, pairwise registration between point cloud data actually unifies the point cloud data under two coordinate systems into the same coordinate system through a coordinate transformation. The transformation relationship between point clouds includes 3 rotation parameters and 3 translation parameters; for convenience of calculation, the three rotation parameters are usually expressed as a 3×3 rotation matrix R, and the three translation parameters as a three-dimensional translation vector T. The purpose of point cloud registration is to find the rigid transformation (R, T) that satisfies the conditions. The registration method may be a point cloud registration method based on discrete features (points, lines, surfaces) or the iterative closest point (ICP) registration method, which will not be elaborated here.
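A minimal numeric sketch of applying the rigid transformation described above (illustrative only; the rotation angle and the sample point are invented for the example): once registration has produced the rotation matrix R and translation vector T, every laser point p of a frame is mapped into the unified coordinate system as R·p + T:

```python
import numpy as np

def apply_rigid_transform(points, R, T):
    """Map an (N, 3) array of laser points into the unified coordinate
    system using rotation matrix R (3x3) and translation vector T (3,)."""
    # Row-vector convention: (R @ p)^T == p^T @ R^T for each point p.
    return points @ R.T + T

# Example transform: rotate 90 degrees about the z-axis, then shift along x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T = np.array([5.0, 0.0, 0.0])

frame = np.array([[1.0, 0.0, 0.0]])          # one laser point on the x-axis
unified = apply_rigid_transform(frame, R, T)
# (1, 0, 0) rotates to (0, 1, 0), then translates to (5, 1, 0).
assert np.allclose(unified, [[5.0, 1.0, 0.0]])
```

Applying each frame's own (R, T) in this way is what normalizes all frames into the single three-dimensional coordinate system of the scene.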
Step S123: Place each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
Each laser point of the first object point cloud data and the second object point cloud data is placed into the three-dimensional scene for display according to its three-dimensional coordinates. As shown in Fig. 3, the presentation of the point cloud data of each target object in the three-dimensional scene can be seen. It can also be seen from Fig. 3 that the generated three-dimensional scene shows the movement track of the same object across multiple frames of laser point cloud data.
Step S130: Create a first selection box for the first object point cloud data and a second selection box for the second object point cloud data.
After the first object point cloud data and the second object point cloud data are presented in the three-dimensional scene, the point cloud data corresponding to each target object in the three-dimensional scene can be annotated.
When the laser point cloud is placed into the three-dimensional scene, the laser points fed back by the same target object are relatively concentrated and can show the general outline of that target object. Annotators can therefore judge the laser points belonging to the target type more intuitively and quickly, and can mark out the laser points of the target type rapidly and in a targeted manner, without having to go through all laser points one by one before the points of the target type are marked out, thereby improving the speed and efficiency of laser point annotation.
In practical applications, the laser points belonging to the same target object are numerous and relatively concentrated; if annotators mark each laser point one by one after determining that multiple laser points belong to the same target object, the speed is slow. Therefore, to further increase the speed of laser point annotation, the technical solution of the present invention marks multiple laser points of the same type uniformly in one or several operations: a three-dimensional selection box is generated according to the laser points to be marked, the multiple laser points to be marked uniformly are selected by the three-dimensional selection box having a certain solid space, and the laser points falling into the three-dimensional selection box are marked uniformly.
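The box-based unified marking can be sketched as follows (a simplified illustration, not the patent's implementation: the selection box is assumed here to be axis-aligned, and the label values and sample points are invented). Every laser point whose coordinates fall inside the three-dimensional selection box receives the same label in a single operation:

```python
import numpy as np

def label_points_in_box(points, box_min, box_max, labels, label):
    """Assign `label` to every point of `points` ((N, 3) array) that falls
    inside the axis-aligned 3-D box spanned by box_min and box_max."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    labels[inside] = label       # one operation marks all enclosed points
    return labels

points = np.array([[1.0, 1.0, 0.5],    # inside the selection box
                   [1.2, 0.8, 0.4],    # inside the selection box
                   [9.0, 9.0, 2.0]])   # outside the selection box
labels = np.zeros(len(points), dtype=int)   # 0 = not yet annotated

# A first selection box enclosing the first target object.
labels = label_points_in_box(points,
                             np.array([0.0, 0.0, 0.0]),
                             np.array([2.0, 2.0, 1.0]),
                             labels, label=1)
assert labels.tolist() == [1, 1, 0]
```

This is why box selection is faster than point-by-point annotation: the cost of one marking operation no longer grows with the number of laser points in the object.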
For example, when a first selection box needs to be created for the first object point cloud data and a second selection box for the second object point cloud data, the annotator can input a selection box creation request according to his or her annotation needs; the terminal device then obtains the selection box creation request input by the user, and, based on the selection box creation request, creates the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
Specifically, annotation personnel may select the first object point cloud data to be annotated with the mouse, and then input corresponding instructions on the interface of the three-dimensional scene displayed by the terminal device, such as the dimension information of the first selection box. If the first selection box defaults to a three-dimensional rectangular box, corresponding length, width and height information can be input; after the confirmation instruction clicked by the user is obtained, the first selection box is created, and the laser point cloud data within the first selection box is the first object point cloud data. On the display interface the first selection box appears only as a two-dimensional plane figure, as shown in Figure 5. If the first selection box is a three-dimensional polygonal box, the user can click with the mouse in the interface of the three-dimensional scene displayed by the terminal device: the first click defines one vertex of the first selection box, the second click defines another vertex, and the two points are connected. After multiple mouse clicks, and upon obtaining the user's confirmation instruction, a closed figure forming the first selection box is obtained, thereby creating the polygonal first selection box. The second selection box can likewise be created by the above method, which is not repeated here.
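Once the clicked vertices close into a polygon, deciding which laser points fall inside it is a standard point-in-polygon test. The sketch below uses ray casting and assumes the clicked vertices and the laser points have already been projected to 2D screen coordinates; it is illustrative only, the patent does not specify the containment test:

```python
def point_in_polygon(pt, vertices):
    """Ray-casting test: does 2D point pt lie inside the closed polygon
    defined by the clicked vertices (in click order)?"""
    x, y = pt
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # closing edge back to the first click
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray from pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]  # four mouse clicks
```

Each boundary crossing toggles the inside/outside state, so the test works for any simple (non-self-intersecting) polygon the user draws.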
Of course, in addition to creating the first selection box for the first object point cloud data and the second selection box for the second object point cloud data separately as described above, it is also possible to first create, based on the selection-box creation request, a single object selection box for both the first object point cloud data and the second object point cloud data, and then split the object selection box into a first selection box corresponding to the first object point cloud data and a second selection box corresponding to the second object point cloud data.
Specifically, the user may select with the mouse the first object point cloud data and the second object point cloud data to be annotated, and then input corresponding instructions on the interface of the three-dimensional scene displayed by the terminal device, such as the dimension information of the object selection box; alternatively, the size of the selection box may be chosen from a drop-down option of an input box on the interface, or set in a preset size bar. If the object selection box defaults to a three-dimensional rectangular box, corresponding length, width and height information can be input. After the user's click confirmation instruction is obtained, the object selection box is created, and the laser point cloud data within the object selection box then includes the first object point cloud data and the second object point cloud data; the creation of a polygonal object selection box follows the process described above and is not repeated here. If the first object and the second object are different objects, they need to be annotated separately. Since there may be a spatial gap between the laser point clouds of the first object and the second object, a segmentation algorithm can be used to separate the first object point cloud data from the second object point cloud data. The segmentation algorithm may be region-based, such as watershed or region merging and splitting, or edge-based, such as the Laplacian operator. After the first object point cloud data and the second object point cloud data are separated by the segmentation algorithm, the first selection box can be created for the first object point cloud data and the second selection box for the second object point cloud data; the first selection box contains the first object point cloud data, the second selection box contains the second object point cloud data, and the shapes of the first and second selection boxes are not limited.
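The separation step can exploit the spatial gap the text mentions. As a sketch, the following uses simple greedy single-linkage (Euclidean) clustering rather than the watershed or Laplacian methods named above, which are more involved; it only illustrates how a distance gap splits one enclosed point set into per-object groups:

```python
import math

def euclidean_clusters(points, gap):
    """Greedy single-linkage clustering: two points belong to the same object
    if they are connected by a chain of points no more than `gap` apart."""
    clusters = []
    for p in points:
        near = [c for c in clusters
                if any(math.dist(p, q) <= gap for q in c)]
        if not near:
            clusters.append([p])        # p starts a new object
        else:
            merged = near[0]            # p joins (and may bridge) clusters
            for c in near[1:]:
                merged.extend(c)
                clusters.remove(c)
            merged.append(p)
    return clusters

cloud = [(0, 0, 0), (0.3, 0, 0), (5, 5, 0), (5.2, 5.1, 0)]
parts = euclidean_clusters(cloud, gap=1.0)  # two clusters, one per object
```

Each resulting cluster can then receive its own selection box, matching the split of the object selection box into the first and second selection boxes.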
Step S140: annotating a first label for the first object point cloud data using the first selection box, and annotating a second label for the second object point cloud data using the second selection box.
Since a laser point cloud contains a large number of laser points, to avoid missed annotations and to improve the comprehensiveness and completeness of laser point annotation, the embodiment of the present invention builds a camera in the three-dimensional scene after the scene is created. By adjusting the position and direction of the camera in the three-dimensional scene, the laser points in the scene can be inspected, ensuring that annotation personnel can view the laser points from 360 degrees within the three-dimensional scene.
As an implementation, at least one camera may be built in the three-dimensional scene; during annotation of the laser points in the scene, the viewing angle and range of the laser point cloud in the three-dimensional scene are adjusted by adjusting the position and direction of the camera.
In the embodiment of the present invention, the laser point cloud annotation method is applied to a device which, upon receiving laser point cloud data sent by a server, annotates the received laser point cloud using the aforementioned laser point cloud annotation method and feeds the annotation results back to the server.
To further improve the accuracy with which annotation personnel determine the type of a laser point in the three-dimensional scene, in the technical solution of the present invention at least two cameras are built in the three-dimensional scene. At any given moment the positions and directions of the cameras differ, so annotation personnel can view the laser points from different angles, improving the accuracy of judging the type to which a laser point belongs. Preferably, the aforementioned at least two cameras can be switched according to the user's choice: when the user selects one of the cameras, the viewing angle and range of the three-dimensional scene corresponding to the position and direction of the selected camera are presented to the user, while the viewing angles and ranges corresponding to the unselected cameras are presented as thumbnails.
As a result, the first object and the second object can be identified more accurately under a suitable viewing angle so that they can be annotated. During annotation, corresponding annotation information can be input after the first and second selection boxes are created. When the first object and the second object are different objects, the first label and the second label differ; for example, if the first object is a car, the first label may be set to a, and if the second object is a truck, the second label may be set to b. When the first object and the second object are the same kind of object, the first label and the second label are identical; for example, when both the first object and the second object are cars, the first label and the second label are both a. That is, labels corresponding to the various target object types are stored in advance in the terminal device, e.g. car corresponds to a, truck to b and pedestrian to c. After the first object and the second object are annotated, authentic data can be provided for subsequent object recognition.
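The correspondence between target object types and labels stored in advance on the terminal device is essentially a lookup table. A minimal sketch, using the example values above (car → a, truck → b, pedestrian → c):

```python
# Label table stored in advance on the terminal device (values from the example above).
TYPE_LABELS = {"car": "a", "truck": "b", "pedestrian": "c"}

def label_for(object_type):
    """Look up the label for an annotated object's type; unknown types get no label."""
    return TYPE_LABELS.get(object_type)

# Two cars receive the same label; a car and a truck receive different labels.
same = label_for("car") == label_for("car")        # True
different = label_for("car") != label_for("truck")  # True
```

This makes the rule in the text mechanical: same type, same label; different types, different labels.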
The identification of a target object can be judged by the annotation personnel based on experience. For example, if an observed object is a car, the annotator inputs annotation information for the first label as a; if the second object is a truck, the annotator inputs annotation information for the second label as b, thereby completing the annotation of the target objects. Of course, target objects may also be identified by the terminal device: the terminal device stores contour figures of various target objects in advance, and the point cloud data of a target object forms the contour and size of that object in the three-dimensional scene. For example, when the first object is a car, the first object point cloud data forms the general contour and size of a car; the terminal device can then compare the first object point cloud data with the stored contour figures and sizes of target objects, and when the similarity exceeds a certain threshold, such as 90%, the two are judged to be the same target, and the first object can be annotated automatically, e.g. with the first label a, thereby reducing the workload of the annotation personnel. Of course, if the annotation personnel judge from experience that the terminal device has identified the object incorrectly, they can modify the annotation information themselves.
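The automatic-annotation idea, compare the object's point cloud against stored templates and accept a label only above a similarity threshold, can be sketched as follows. The similarity measure here (ratio of bounding-box dimensions) and the template values are illustrative assumptions; the patent leaves the contour-comparison method open:

```python
def bbox_size(points):
    """Overall dimensions of an object's point cloud — a crude stand-in
    for the stored contour/size comparison described above."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def similarity(size_a, size_b):
    """Ratio-based similarity of two size triples, in [0, 1]."""
    ratios = [min(a, b) / max(a, b) for a, b in zip(size_a, size_b)]
    return sum(ratios) / len(ratios)

# Hypothetical stored templates: type -> typical (length, width, height) in metres.
TEMPLATES = {"car": (4.5, 1.8, 1.5), "truck": (10.0, 2.5, 3.5)}

def auto_annotate(points, threshold=0.9):
    """Return the best-matching template type if its similarity exceeds the
    threshold (90% in the text), else None so the annotator labels manually."""
    size = bbox_size(points)
    best = max(TEMPLATES, key=lambda t: similarity(size, TEMPLATES[t]))
    return best if similarity(size, TEMPLATES[best]) > threshold else None
```

Returning `None` below the threshold mirrors the text's fallback: the annotator labels the object manually, and can likewise override a wrong automatic label.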
The first label includes a first number, or includes the first number and a first type of the first object; the second label includes a second number, or includes the second number and a second type of the second object. For example, for a first object that is a car, the information contained in the first label a may be the first number 1, or the first number 1 together with the first type (car) of the first object; for a second object that is a truck, the information contained in the second label b may be the second number 2, or the second number 2 together with the second type (truck) of the second object.
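A label that carries a number, optionally together with a type, is naturally a small record. A sketch of one possible representation (field names are my own, not from the patent):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Label:
    """An annotation label: an instance number, optionally with the
    object's type, matching the examples in the text above."""
    name: str                       # e.g. "a" or "b"
    number: int                     # e.g. 1 for the first object, 2 for the second
    obj_type: Optional[str] = None  # e.g. "car" or "truck"; may be omitted

first = Label(name="a", number=1, obj_type="car")
second = Label(name="b", number=2, obj_type="truck")
```

Keeping the number and type as separate fields lets later stages (e.g. training-sample generation) consume each piece of annotation information directly.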
In addition, depending on the types of the first object and the second object, different annotation styles may be selected. For example, the colors of the created first and second selection boxes may be set to differ, or the color of the first object point cloud data in the first selection box and that of the second object point cloud data in the second selection box may be set to differ. For instance, if the first object is a car, its point cloud may be annotated in blue once selected, and if the second object is a truck, its point cloud may be annotated in red once selected. That is, different colors can be applied during annotation according to the different types, so that different objects can be distinguished by color.
Of course, in order to obtain more information about the target objects so that they can be recognized accurately, the labels applied when annotating the first object and the second object may also include further annotation information, such as the size, position and angle of the object.
In addition, as an implementation, after multiple target objects have been annotated by the above method, the annotated target objects can serve as laser point cloud sample data for recognizing target objects. When a target object is to be recognized, a machine learning model is first trained with the sample data and then used to recognize the target object; the machine learning model is a model that recognizes the target object based on the laser point cloud sample data. For example, the annotation information of a target object (such as its type, size, position and angle) is converted into input vectors for the machine learning model, so that the model can be trained with the training samples. The machine learning model may be a model, such as a deep learning model, that takes the laser point cloud of a target object as input and recognizes the size, position and angle of the target object.
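The conversion of annotation information (type, size, position, angle) into a numeric vector can be sketched as follows. The one-hot type encoding and the fixed type vocabulary are illustrative assumptions; the patent does not fix the vector layout:

```python
TYPES = ["car", "truck", "pedestrian"]  # assumed fixed type vocabulary

def to_training_target(obj_type, size, position, angle):
    """Flatten one object's annotation information (type, size, position,
    angle) into a numeric target vector for a machine learning model."""
    one_hot = [1.0 if t == obj_type else 0.0 for t in TYPES]
    return one_hot + list(size) + list(position) + [angle]

vec = to_training_target("car", size=(4.5, 1.8, 1.5),
                         position=(12.0, -3.2, 0.0), angle=0.35)
# 3 type slots + 3 size + 3 position + 1 angle = 10 values
```

Each training sample then pairs the object's laser point cloud (the model input) with such a vector (the supervised target).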
In the present embodiment, based on the laser point cloud of a target object, annotation information such as the type, size, position and angle of different types of target objects is obtained, so that the type, size, position and angle of different types of target objects can be accurately determined. Based on the annotation information of the target object, training samples can then be generated for a machine learning model that takes the laser point cloud of the target object as input and recognizes its type, size, position and angle. The training samples can be used to train this machine learning model, thereby continuously improving its recognition accuracy.
In some optional implementations of the present embodiment, generating training samples for the machine learning model based on the annotation information includes: sending the annotation information to a server, so that training samples for training the machine learning model set on the server can be generated on the server based on the annotation information. In the present embodiment, the machine learning model that takes the laser point cloud of the target object as input and recognizes its type, size, position and angle may be arranged on the server. The annotation information can be sent to the server so that training samples are generated on the server based on the annotation information of the target object; the training samples are then used to train the machine learning model, thereby continuously improving its accuracy in recognizing target objects.
Referring to Fig. 6, Fig. 6 is a structural block diagram of a laser point cloud annotation device 200 provided by an embodiment of the present invention; the device includes:
a point cloud data acquisition module 210 for obtaining at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, wherein the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object;
a display module 220 for overlay-displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object;
a selection-box creation module 230 for creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data;
an annotation module 240 for annotating a first label for the first object point cloud data using the first selection box, and annotating a second label for the second object point cloud data using the second selection box.
As one implementation, the selection-box creation module 230 includes:
a request acquisition unit for obtaining a selection-box creation request input by a user;
a creating unit for creating, based on the selection-box creation request, the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
As one implementation, the creating unit includes:
an object selection-box creating unit for creating, based on the selection-box creation request, an object selection box for the first object point cloud data and the second object point cloud data;
a selection-box splitting unit for splitting the object selection box into a first selection box corresponding to the first object point cloud data and a second selection box corresponding to the second object point cloud data.
As one implementation, the display module 220 includes:
a three-dimensional scene establishing unit for building the three-dimensional scene based on the first object point cloud data and the second object point cloud data, and establishing a three-dimensional coordinate system corresponding to the three-dimensional scene;
a coordinate transformation unit for converting the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system;
a display unit for placing each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
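The coordinate transformation performed by the coordinate transformation unit can be sketched as a rigid transform of each laser point from the sensor frame into the scene's coordinate system. The rotation-about-vertical-axis-plus-translation form is an assumption for illustration; the patent only requires that the points be converted into the established three-dimensional coordinate system:

```python
import math

def to_scene_coords(point, yaw, origin):
    """Rigidly transform a laser point (sensor frame) into the scene's
    coordinate system: rotate about the vertical axis by `yaw`, then
    translate by the scene `origin`."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    ox, oy, oz = origin
    return (c * x - s * y + ox, s * x + c * y + oy, z + oz)

# Convert every point of a frame before placing it into the scene for display.
scene_pts = [to_scene_coords(p, yaw=math.pi / 2, origin=(10.0, 0.0, 0.0))
             for p in [(1.0, 0.0, 0.0), (0.0, 2.0, 0.5)]]
```

Applying the per-frame transform to both frames' points is what allows the two frames to be overlay-displayed consistently in one scene.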
An embodiment of the present invention also provides a readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above laser point cloud annotation method is implemented.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the device described above may refer to the corresponding process in the foregoing method, which is not repeated here.
In conclusion, the embodiments of the present invention provide a laser point cloud annotation method, device and readable storage medium. The method first obtains at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, wherein the first frame of laser point cloud data includes at least first object point cloud data of a first object and the second frame of laser point cloud data includes at least second object point cloud data of a second object; then overlay-displays, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object; then creates a first selection box for the first object point cloud data and a second selection box for the second object point cloud data; and finally annotates a first label for the first object point cloud data using the first selection box and a second label for the second object point cloud data using the second selection box. This method enables multiple frames of laser point cloud data to be annotated simultaneously, without annotating all the laser point cloud data one by one, thereby improving the speed and efficiency of laser point annotation.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of devices, methods and computer program products according to multiple embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, program segment or part of code that contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings; for example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the invention; for those skilled in the art, the invention may be modified and varied in various ways. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention. It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.
The above description is merely a specific embodiment, but the protection scope of the present invention is not limited thereto; any person familiar with the art can easily conceive of changes or replacements within the technical scope disclosed by the present invention, and these shall all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further restriction, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
Claims (10)
1. A laser point cloud annotation method, characterized in that the method includes:
obtaining at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, wherein the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object;
overlay-displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object;
creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data;
annotating a first label for the first object point cloud data using the first selection box, and annotating a second label for the second object point cloud data using the second selection box.
2. The method according to claim 1, characterized in that when the first object and the second object are different objects, the first label and the second label differ; when the first object and the second object are the same object, the first label and the second label are identical.
3. The method according to claim 2, characterized in that the first label includes a first number, or includes the first number and a first type of the first object; and the second label includes a second number, or includes the second number and a second type of the second object.
4. The method according to any one of claims 1-3, characterized in that creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data includes:
obtaining a selection-box creation request input by a user;
creating, based on the selection-box creation request, the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
5. The method according to claim 4, characterized in that creating, based on the selection-box creation request, the first selection box for the first object point cloud data and the second selection box for the second object point cloud data includes:
creating, based on the selection-box creation request, an object selection box for the first object point cloud data and the second object point cloud data;
splitting the object selection box into a first selection box corresponding to the first object point cloud data and a second selection box corresponding to the second object point cloud data.
6. The method according to any one of claims 1-3, characterized in that overlay-displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object includes:
building the three-dimensional scene based on the first object point cloud data and the second object point cloud data, and establishing a three-dimensional coordinate system corresponding to the three-dimensional scene;
converting the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system;
placing each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
7. A laser point cloud annotation device, characterized in that the device includes:
a point cloud data acquisition module for obtaining at least two frames of laser point cloud data including a first frame of laser point cloud data and a second frame of laser point cloud data, wherein the first frame of laser point cloud data includes at least first object point cloud data of a first object, and the second frame of laser point cloud data includes at least second object point cloud data of a second object;
a display module for overlay-displaying, in a three-dimensional scene, the first object point cloud data of the first object and the second object point cloud data of the second object;
a selection-box creation module for creating a first selection box for the first object point cloud data and a second selection box for the second object point cloud data;
an annotation module for annotating a first label for the first object point cloud data using the first selection box, and annotating a second label for the second object point cloud data using the second selection box.
8. The device according to claim 7, characterized in that the selection-box creation module includes:
a request acquisition unit for obtaining a selection-box creation request input by a user;
a creating unit for creating, based on the selection-box creation request, the first selection box for the first object point cloud data and the second selection box for the second object point cloud data.
9. The device according to any one of claims 7-8, characterized in that the display module includes:
a three-dimensional scene establishing unit for building the three-dimensional scene based on the first object point cloud data and the second object point cloud data, and establishing a three-dimensional coordinate system corresponding to the three-dimensional scene;
a coordinate transformation unit for converting the coordinates of each laser point in the first object point cloud data and the second object point cloud data into three-dimensional coordinates in the three-dimensional coordinate system;
a display unit for placing each laser point into the three-dimensional scene for display according to its three-dimensional coordinates.
10. A readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810075279.8A CN108280886A (en) | 2018-01-25 | 2018-01-25 | Laser point cloud mask method, device and readable storage medium storing program for executing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810075279.8A CN108280886A (en) | 2018-01-25 | 2018-01-25 | Laser point cloud mask method, device and readable storage medium storing program for executing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108280886A true CN108280886A (en) | 2018-07-13 |
Family
ID=62805216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810075279.8A Pending CN108280886A (en) | 2018-01-25 | 2018-01-25 | Laser point cloud mask method, device and readable storage medium storing program for executing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108280886A (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
2018-01-25: Application CN201810075279.8A filed in China; published as CN108280886A (status: Pending)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130051658A1 (en) * | 2011-08-22 | 2013-02-28 | Samsung Electronics Co., Ltd. | Method of separating object in three dimension point cloud |
CN102445186A (en) * | 2011-09-28 | 2012-05-09 | CCCC Second Highway Consultants Co., Ltd. | Method for generating road design surface information by laser radar scanning |
CN103955966A (en) * | 2014-05-12 | 2014-07-30 | Wuhan Haida Shuyun Technology Co., Ltd. | Three-dimensional laser point cloud rendering method based on ArcGIS |
CN106973569A (en) * | 2014-05-13 | 2017-07-21 | PCP Virtual Reality Inc. | Methods, systems and devices for generating and playing back virtual reality multimedia |
CN105180890A (en) * | 2015-07-28 | 2015-12-23 | Nanjing Tech University | Rock structural surface occurrence measurement method integrating laser point cloud and digital imaging |
CN105701478A (en) * | 2016-02-24 | 2016-06-22 | Tencent Technology (Shenzhen) Co., Ltd. | Method and device for extracting rod-shaped ground objects |
CN105957145A (en) * | 2016-04-29 | 2016-09-21 | Baidu Online Network Technology (Beijing) Co., Ltd. | Road obstacle recognition method and device |
US20170374342A1 (en) * | 2016-06-24 | 2017-12-28 | Isee, Inc. | Laser-enhanced visual simultaneous localization and mapping (slam) for mobile devices |
CN106599915A (en) * | 2016-12-08 | 2017-04-26 | Leador Spatial Information Technology Co., Ltd. | Vehicle-mounted laser point cloud classification method |
CN107093210A (en) * | 2017-04-20 | 2017-08-25 | Beijing TuSimple Future Technology Co., Ltd. | Laser point cloud labeling method and device |
Non-Patent Citations (2)
Title |
---|
YASHAR BALAZADEGAN SARVROOD et al.: "Visual-LiDAR Odometry Aided by Reduced IMU", ISPRS International Journal of Geo-Information * |
ZHANG Qin et al.: "Calibration method for laser-camera systems based on dual-viewpoint feature matching", Chinese Journal of Scientific Instrument * |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112912755A (en) * | 2018-09-17 | 2021-06-04 | Pony.ai, Inc. | Cover for generating circulating airflow in a housing |
CN111009040B (en) * | 2018-10-08 | 2023-04-18 | Alibaba Group Holding Ltd. | Point cloud entity labeling system, method and device, and electronic equipment |
CN111009040A (en) * | 2018-10-08 | 2020-04-14 | Alibaba Group Holding Ltd. | Point cloud entity labeling system, method and device, and electronic equipment |
CN109727312A (en) * | 2018-12-10 | 2019-05-07 | Guangzhou Jingqi Technology Co., Ltd. | Point cloud labeling method, device, computer equipment and storage medium |
CN109727312B (en) * | 2018-12-10 | 2023-07-04 | Guangzhou Jingqi Technology Co., Ltd. | Point cloud labeling method, point cloud labeling device, computer equipment and storage medium |
CN109726647A (en) * | 2018-12-14 | 2019-05-07 | Guangzhou WeRide Technology Co., Ltd. | Point cloud labeling method, device, computer equipment and storage medium |
CN109726647B (en) * | 2018-12-14 | 2020-10-16 | Guangzhou WeRide Technology Co., Ltd. | Point cloud labeling method and device, computer equipment and storage medium |
CN109740487A (en) * | 2018-12-27 | 2019-05-10 | Guangzhou WeRide Technology Co., Ltd. | Point cloud labeling method, device, computer equipment and storage medium |
CN110135453A (en) * | 2019-03-29 | 2019-08-16 | Momenta (Suzhou) Technology Co., Ltd. | Laser point cloud data labeling method and device |
CN110136273A (en) * | 2019-03-29 | 2019-08-16 | Momenta (Suzhou) Technology Co., Ltd. | Sample data labeling method and device for machine learning |
CN110136273B (en) * | 2019-03-29 | 2022-06-10 | Momenta (Suzhou) Technology Co., Ltd. | Sample data labeling method and device used in machine learning |
CN110132233B (en) * | 2019-04-16 | 2021-11-12 | Xi'an Changqing Technology Engineering Co., Ltd. | Terrain map drawing method based on point cloud data in a CASS environment |
CN110132233A (en) * | 2019-04-16 | 2019-08-16 | Xi'an Changqing Technology Engineering Co., Ltd. | Terrain map drawing method based on point cloud data in a CASS environment |
CN110084895B (en) * | 2019-04-30 | 2023-08-22 | Shanghai Hesai Technology Co., Ltd. | Method and equipment for labeling point cloud data |
CN110084895A (en) * | 2019-04-30 | 2019-08-02 | Shanghai Hesai Photonics Technology Co., Ltd. | Method and equipment for labeling point cloud data |
CN110263652A (en) * | 2019-05-23 | 2019-09-20 | Hangzhou Fabu Technology Co., Ltd. | Laser point cloud data recognition method and device |
CN110751090A (en) * | 2019-10-18 | 2020-02-04 | Ningbo Boden Intelligent Technology Co., Ltd. | Three-dimensional point cloud labeling method and device, and electronic equipment |
CN110751090B (en) * | 2019-10-18 | 2022-09-20 | Ningbo Boden Intelligent Technology Co., Ltd. | Three-dimensional point cloud labeling method and device, and electronic equipment |
CN111062255A (en) * | 2019-11-18 | 2020-04-24 | Suzhou Zhijia Technology Co., Ltd. | Three-dimensional point cloud labeling method, device, equipment and storage medium |
CN111401321A (en) * | 2020-04-17 | 2020-07-10 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Object recognition model training method and device, electronic equipment and readable storage medium |
CN113592897B (en) * | 2020-04-30 | 2024-03-29 | Momenta (Suzhou) Technology Co., Ltd. | Point cloud data labeling method and device |
CN113592897A (en) * | 2020-04-30 | 2021-11-02 | Momenta (Suzhou) Technology Co., Ltd. | Point cloud data labeling method and device |
CN112036441A (en) * | 2020-07-31 | 2020-12-04 | Shanghai TuSimple Future AI Technology Co., Ltd. | Feedback labeling method, device and storage medium for machine learning object detection results |
CN112419233B (en) * | 2020-10-20 | 2022-02-22 | Tencent Technology (Shenzhen) Co., Ltd. | Data annotation method, device, equipment and computer-readable storage medium |
CN112419233A (en) * | 2020-10-20 | 2021-02-26 | Tencent Technology (Shenzhen) Co., Ltd. | Data annotation method, device, equipment and computer-readable storage medium |
CN112070830A (en) * | 2020-11-13 | 2020-12-11 | Beijing Yunce Information Technology Co., Ltd. | Point cloud image labeling method, device, equipment and storage medium |
CN112669373A (en) * | 2020-12-24 | 2021-04-16 | Beijing Liangdao Intelligent Vehicle Technology Co., Ltd. | Automatic labeling method and device, electronic equipment and storage medium |
CN112669373B (en) * | 2020-12-24 | 2023-12-05 | Beijing Liangdao Intelligent Vehicle Technology Co., Ltd. | Automatic labeling method and device, electronic equipment and storage medium |
CN112801200A (en) * | 2021-02-07 | 2021-05-14 | Wenyuan Exing (Hubei) Mobility Technology Co., Ltd. | Data packet screening method, device, equipment and storage medium |
CN112801200B (en) * | 2021-02-07 | 2024-02-20 | Wenyuan Exing (Hubei) Mobility Technology Co., Ltd. | Data packet screening method, device, equipment and storage medium |
CN114475665A (en) * | 2022-03-17 | 2022-05-13 | Beijing Xiaoma Ruixing Technology Co., Ltd. | Control method and control device for an autonomous vehicle, and autonomous driving system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108280886A (en) | Laser point cloud labeling method and device, and readable storage medium | |
CN108154560A (en) | Laser point cloud labeling method and device, and readable storage medium | |
US10670416B2 (en) | Traffic sign feature creation for high definition maps used for navigating autonomous vehicles | |
CN107093210B (en) | Laser point cloud labeling method and device | |
CN110796714B (en) | Map construction method, device, terminal and computer readable storage medium | |
CN102867057B (en) | Virtual guide creation method based on visual positioning | |
CN109993780A (en) | Three-dimensional high-precision map generation method and device | |
CN108694882A (en) | Method, apparatus and equipment for labeling a map | |
CN106463056A (en) | Solution for highly customized interactive mobile maps | |
CN109341702A (en) | Route planning method, device, equipment and storage medium in operating area | |
CN111708858A (en) | Map data processing method, device, equipment and storage medium | |
WO2018165279A1 (en) | Segmentation of images | |
CN112541049B (en) | High-precision map processing method, apparatus, device, storage medium, and program product | |
US11454502B2 (en) | Map feature identification using motion data and surfel data | |
TW201928388A (en) | Method and apparatus for establishing coordinate system and data structure product | |
CN113899384B (en) | Method, device, apparatus, medium, and program for displaying intersection surface of lane-level road | |
Delikostidis et al. | Increasing the usability of pedestrian navigation interfaces by means of landmark visibility analysis | |
CN103954970A (en) | Terrain detail acquisition method | |
KR20200136723A (en) | Method and apparatus for generating learning data for object recognition using virtual city model | |
CN111400423B (en) | Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry | |
CN114140592A (en) | High-precision map generation method, device, equipment, medium and automatic driving vehicle | |
CN114186007A (en) | High-precision map generation method and device, electronic equipment and storage medium | |
Jiang et al. | Low–high orthoimage pairs-based 3D reconstruction for elevation determination using drone | |
US11488332B1 (en) | Intensity data visualization | |
CN114565908A (en) | Lane line detection method and device, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2021-03-03. Address after: Room 1701, 16/F and 17/F, Building 1, Zone 1, 81 Beiqing Road, Haidian District, Beijing 100095. Applicant after: BEIJING XIAOMA HUIXING TECHNOLOGY Co., Ltd. Address before: Room 01, 1/F, Building 2, Yard 68, Beiqing Road, Haidian District, Beijing. Applicant before: BEIJING PONY.AI SCIENCE AND TECHNOLOGY Co., Ltd. |
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2018-07-13 |