CN106503653A - Area marking method, device and electronic equipment - Google Patents


Info

Publication number
CN106503653A
CN106503653A (application number CN201610921206.7A)
Authority
CN
China
Prior art keywords
barrier
road
image information
pavement
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610921206.7A
Other languages
Chinese (zh)
Other versions
CN106503653B (en)
Inventor
梁继 (Liang Ji)
余轶南 (Yu Yinan)
黄畅 (Huang Chang)
余凯 (Yu Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201610921206.7A
Publication of CN106503653A
Application granted
Publication of CN106503653B
Legal status: Active
Anticipated expiration


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing Arrangements Based on Specific Computational Models
    • G06N 20/00: Machine learning
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

An area marking method, an area marking apparatus, and an electronic device are disclosed. The method includes: during generation of training samples for training a machine learning model, acquiring image information of a driving environment captured by an imaging device; acquiring depth information of the driving environment that is synchronized in time with the image information; and marking an obstacle region of the driving environment in the image information according to the depth information. The obstacle regions in the driving environment can thus be marked automatically, improving the efficiency of area marking.

Description

Area marking method, device and electronic equipment
Technical field
The present application relates to the field of driver assistance and, more particularly, to an area marking method, apparatus, electronic device, computer program product, and computer-readable storage medium.
Background art
With the rapid development of the vehicle industry in recent years, traffic accidents have become a global problem: worldwide, traffic accidents are estimated to kill or injure more than 500,000 people every year. Driver assistance technologies, which integrate automatic control, artificial intelligence, pattern recognition, and related techniques, have therefore emerged. A driver assistance system can provide the user with necessary information and/or warnings while driving, so as to avoid dangerous situations such as collisions or lane departure. In some cases, driver assistance technology can even control the travel of the vehicle automatically.
Drivable-region detection has always been one of the key components of driver assistance technology. The most commonly used approach today is detection based on a machine learning model. To ensure the accuracy of the machine learning model, a large amount of image information of driving environments must be used in advance as training samples for offline training of the model. Because a driving environment usually contains various obstacles such as vehicles and pedestrians, these obstacle regions must be marked out in the training samples before offline training, so that the drivable region available to the vehicle remains. At present, marking the obstacle regions in the training samples depends on manual work by the user; that is, the user must find each obstacle in a large amount of image information by hand and annotate the size, position, etc. of every instance. Since a training sample library typically needs to reach a scale of hundreds of thousands of images, this manual annotation approach is extremely time-consuming, very costly in labor, and does not scale.
Existing area marking techniques are therefore inefficient.
Summary of the invention
The present application is proposed in order to solve the above technical problem. Embodiments of the application provide an area marking method, apparatus, electronic device, computer program product, and computer-readable storage medium that can automatically mark the obstacle regions in a driving environment.
According to one aspect of the application, an area marking method is provided, including: during generation of training samples for training a machine learning model, acquiring image information of a driving environment captured by an imaging device; acquiring depth information of the driving environment synchronized in time with the image information; and marking an obstacle region of the driving environment in the image information according to the depth information.
According to another aspect of the application, an area marking apparatus is provided, including: an image acquisition unit configured to acquire, during generation of training samples for training a machine learning model, image information of a driving environment captured by an imaging device; a depth acquisition unit configured to acquire depth information of the driving environment synchronized in time with the image information; and an obstacle marking unit configured to mark an obstacle region of the driving environment in the image information according to the depth information.
According to another aspect of the application, an electronic device is provided, including: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, cause the processor to perform the above area marking method.
According to another aspect of the application, a computer program product is provided, including computer program instructions which, when executed by a processor, cause the processor to perform the above area marking method.
According to another aspect of the application, a computer-readable storage medium is provided, on which computer program instructions are stored which, when executed by a processor, cause the processor to perform the above area marking method.
Compared with the prior art, with the area marking method, apparatus, electronic device, computer program product, and computer-readable storage medium according to the embodiments of the present application, it is possible, during generation of the training samples for training a machine learning model, to acquire image information of a driving environment captured by an imaging device, to acquire depth information of the driving environment synchronized in time with the image information, and to mark an obstacle region of the driving environment in the image information according to the depth information. Therefore, compared with the manual marking of obstacle regions in the prior art, the obstacle regions in the driving environment can be marked automatically, improving the efficiency of area marking.
Brief description of the drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of embodiments of the application given with reference to the accompanying drawings. The drawings are provided for further understanding of the embodiments of the application and constitute a part of the specification; together with the embodiments, they serve to explain the application and do not limit it. In the drawings, the same reference number generally denotes the same component or step.
Fig. 1 is a schematic diagram of image information of a driving environment captured by an imaging device according to an embodiment of the present application.
Fig. 2 is a flowchart of the area marking method according to the first embodiment of the application.
Fig. 3 is a flowchart of the depth-information acquisition step according to an embodiment of the application.
Fig. 4 is a flowchart of the obstacle marking step according to an embodiment of the application.
Fig. 5 is a flowchart of the area marking method according to the second embodiment of the application.
Fig. 6 is a flowchart of the drivable-region marking step according to an embodiment of the application.
Fig. 7A is a schematic diagram of the image information of Fig. 1 combined with depth information and user input according to an embodiment of the application, and Fig. 7B is a schematic diagram of the image information of Fig. 1 with the obstacle regions and the drivable region marked according to an embodiment of the application.
Fig. 8 is a block diagram of the area marking apparatus according to an embodiment of the application.
Fig. 9 is a block diagram of the electronic device according to an embodiment of the application.
Detailed description of embodiments
Hereinafter, example embodiments of the present application are described in detail with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the application rather than all of them, and it should be understood that the application is not limited by the example embodiments described herein.
Overview of the application
As described above, in the prior art the marking of obstacle regions in training samples depends on manual work by the user, which makes the operation cumbersome and inefficient.
In view of this technical problem, the basic idea of the application is to propose a new area marking method, apparatus, electronic device, computer program product, and computer-readable storage medium which, during annotation, automatically mark the obstacle regions in the image information captured by an imaging device by incorporating the depth information from a depth sensor, without requiring manual operation by the user, thereby reducing annotation cost and increasing annotation speed.
Embodiments of the application can be applied to various scenarios. For example, they can be used to mark the obstacle regions in the driving environment of a vehicle. The vehicle may be of different types, such as a road vehicle, an aircraft, a spacecraft, or a watercraft. For convenience of description, a road vehicle is taken as an example below.
For example, in order for a vehicle to identify the various obstacles on the road surface of its driving environment during actual travel and thereby realize driver assistance, the machine learning model in the vehicle must first be trained offline using a large amount of image information of driving environments as training samples. For this purpose, one or more imaging devices can be installed in advance on a test vehicle to capture a large amount of image information about different driving environments. Of course, the application is not limited thereto; the image information may also come from a surveillance camera installed at a fixed position, directly from the Internet, and so on.
Fig. 1 is a schematic diagram of image information of a driving environment captured by an imaging device according to an embodiment of the present application.
As shown in Fig. 1, the image information captured by the test vehicle equipped with the imaging device indicates that the vehicle is traveling on a road surface, a typical driving environment for it. On the road surface there are three obstacles (obstacle 1, obstacle 2, and obstacle 3, which are other vehicles at different distances), four lane lines (from left to right, lane line 1, lane line 2, lane line 3, and lane line 4), and one boundary line (boundary line 1, between the road and the grass).
Existing obstacle-region marking methods generally require the user to find each obstacle instance in the image information by visual recognition and to annotate the size, position, etc. of each instance by circling it with a mouse. In ordinary situations this method is simple and effective. However, the sample library used for offline training of a machine learning model often contains a huge amount of image information; if every image must be recognized by the human eye and identified manually, the process is laborious, and since manual operation is prone to missed or incorrect annotations, the existing obstacle-region marking may not be accurate enough. This can introduce errors into the subsequent machine learning results, which in turn may cause the vehicle to misjudge the actual road conditions when used online, creating traffic safety hazards.
To this end, in the embodiments of the application, during generation of the training samples for training a machine learning model, image information of the driving environment captured by an imaging device is acquired, depth information of the driving environment synchronized in time with the image information is acquired, and the obstacle region of the driving environment is marked in the image information according to the depth information. Embodiments based on this idea can therefore mark the obstacle regions in the driving environment automatically, improving the efficiency of area marking.
Of course, although the embodiments of the application have been illustrated above taking a vehicle as an example, the application is not limited thereto. The embodiments can also be applied to marking the obstacle regions in the environment of various online electronic devices, such as mobile robots and fixed surveillance cameras.
Hereinafter, the embodiments of the application are described with reference to the drawings in combination with the application scenario of Fig. 1.
Exemplary method
Fig. 2 is a flowchart of the area marking method according to the first embodiment of the application.
As shown in Fig. 2, the area marking method according to the first embodiment of the application may include:
In step S110, during generation of the training samples for training a machine learning model, image information of the driving environment captured by an imaging device is acquired.
To train the machine learning model offline, the image information of the driving environment used as training samples must be annotated in advance to find the obstacle regions therein. For example, a large amount of image information of the driving environment can be captured by one or more imaging devices. In the application scenario where the imaging device is mounted on a test vehicle (also referred to as the current vehicle), the imaging device can acquire image information of the road surface in the traveling direction of the current vehicle, e.g., as shown in Fig. 1.
For example, the imaging device may be an image sensor for capturing image information, such as a camera or a camera array. The image information captured by the image sensor may be a continuous image frame sequence (i.e., a video stream), a discrete image frame sequence (i.e., a set of image data sampled at predetermined sampling time points), etc. The camera may be, for example, a monocular camera, a binocular camera, or a multi-view camera, and it may capture grayscale images or color images carrying color information. Of course, any other type of camera known in the art or developed in the future can be applied to the application; the application places no particular limitation on the way the camera captures images, as long as grayscale or color information of the input image can be obtained. To reduce the amount of computation in subsequent operations, in one embodiment the color image may be converted to grayscale before analysis and processing.
In step S120, depth information of the driving environment synchronized in time with the image information is acquired.
Before, after, or in parallel with step S110, the depth information of the road surface acquired at the same time as the image information can additionally be obtained.
For example, the depth sensor may be any suitable sensor, such as a binocular camera that measures depth from a binocular disparity map, or an infrared depth sensor (or laser depth sensor) that measures depth by infrared illumination. The depth sensor can generate depth information such as a depth map or a laser point cloud, which is used to measure the position of an obstacle relative to the current vehicle. The depth sensor can collect any suitable depth information related to the distance of an obstacle from the current vehicle. For example, it can collect information about how far ahead of the current vehicle an obstacle is. Further, in addition to distance information, the depth sensor can also collect direction information, such as whether the obstacle is to the right or to the left of the current vehicle. It can also collect the distance of the obstacle from the current vehicle at different time points, to determine whether the obstacle is moving toward or away from the current vehicle. In the following, the description continues taking a laser depth sensor as an example.
Fig. 3 is a flowchart of the depth-information acquisition step according to an embodiment of the application.
As shown in Fig. 3, step S120 may include:
In sub-step S121, the acquisition time at which the imaging device captured the image information is determined.
For example, the image information may carry various attribute information such as the acquisition time, from which the acquisition time of the image information can be determined.
In sub-step S122, the depth information of the road surface in the traveling direction, captured by the depth sensor of the current vehicle at the acquisition time, is acquired.
Similarly, the depth information may also carry various attribute information such as the acquisition time. Given the acquisition time of the image information, the depth information captured at the same time point can be determined.
It should be noted that the application is not limited thereto. For example, during the capture phase of the imaging device and the depth sensor, the image information and depth information collected at the same time point can also be stored together as a pair of associated records, to be retrieved later.
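The synchronization described in sub-steps S121 and S122 can be sketched as a nearest-timestamp match between image frames and depth scans. The record layout and the 50 ms tolerance below are illustrative assumptions, not details taken from the patent.

```python
def pair_by_timestamp(image_frames, depth_scans, max_skew=0.05):
    """Pair each image frame with the depth scan whose acquisition
    time is closest, rejecting pairs whose skew exceeds max_skew (seconds).
    Each record is assumed to be a dict with a float timestamp "t"."""
    pairs = []
    for img in image_frames:
        best = min(depth_scans, key=lambda d: abs(d["t"] - img["t"]))
        if abs(best["t"] - img["t"]) <= max_skew:
            pairs.append((img, best))
    return pairs
```

A frame with no depth scan within the tolerance is simply dropped, which matches the requirement that only time-synchronized image/depth pairs be used for annotation.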
Referring back to Fig. 2, in step S130, the obstacle region of the driving environment is marked in the image information according to the depth information.
After the corresponding image information and depth information have been acquired, the two can be combined by various methods to detect the obstacles and their regions in the driving environment.
Fig. 4 is a flowchart of the obstacle marking step according to an embodiment of the application.
As shown in Fig. 4, step S130 may include:
In sub-step S131, whether an obstacle exists on the road surface is judged according to the depth information.
For example, the obstacle may be at least one of the following: a pedestrian, an animal, a fallen object, a warning sign, an isolation pier, and another vehicle.
Since a laser depth sensor emits extremely short light pulses and measures the time from emission until a pulse is reflected back by an object, computing the distance to the object from the measured time interval, it can be judged from the positions and return times of the laser point cloud detected by the sensor whether an obstacle exists on the road surface, as well as the positional relationship between that obstacle and the current vehicle.
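The time-of-flight relation the sensor relies on is simple enough to show directly: the pulse travels to the object and back, so the range is half the round-trip time multiplied by the speed of light. This is a generic illustration of the principle, not code from the patent.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Range to the reflecting object from a pulse's round-trip time:
    the pulse covers the distance twice, so divide by two."""
    return C * round_trip_s / 2.0
```

For example, a round trip of 200 ns corresponds to an object roughly 30 m ahead, a typical range for the obstacles in Fig. 1.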
In sub-step S132, in response to an obstacle existing, a projection region of the obstacle on the road surface is determined in the image information according to the depth information of the obstacle.
Once it has been judged from the laser point cloud that obstacles exist on the road surface, the laser point cloud can, for example, be clustered to roughly identify the number of possible obstacles, and the depth information of each obstacle can then be mapped into the image information to determine the projection region of that obstacle on the road surface. It should be noted that although multiple obstacles may overlap in the image information, the rules of vehicle travel require each obstacle to keep a safe distance from the others, so the result of clustering by depth information is often more accurate than the result of clustering by image information.
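The per-obstacle grouping of the point cloud can be sketched with a simple single-link rule on ground-plane distance. The patent does not prescribe a particular clustering algorithm, and the 1.5 m gap threshold is an assumed parameter for illustration.

```python
def cluster_points(points, gap=1.5):
    """Greedy single-link clustering of 2-D (x, y) laser points: a point
    joins a cluster if it lies within `gap` metres of any member;
    otherwise it starts a new cluster. Clusters bridged by a point merge."""
    clusters = []
    for p in points:
        home = None
        for c in clusters:
            if any(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= gap
                   for q in c):
                if home is None:
                    home = c
                    c.append(p)
                else:            # p bridges two clusters: merge them
                    home.extend(c)
                    c.clear()
        clusters = [c for c in clusters if c]
        if home is None:
            clusters.append([p])
    return clusters
```

Each resulting cluster then stands for one candidate obstacle whose points are projected into the image in the next sub-step.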
Specifically, for example, the three-dimensional coordinates of the obstacle relative to the current vehicle can first be determined according to the depth information of the obstacle and the calibration parameters of the depth sensor.
Due to manufacturing tolerances, after a depth sensor is installed on a vehicle, each vehicle must undergo individual end-of-line sensor calibration or aftermarket sensor adjustment to determine calibration parameters such as the pitch angle of the depth sensor on that vehicle, for eventual use in driver assistance and other purposes. For example, the calibration parameters may refer to the extrinsic matrix of the depth sensor, which may include one or more of the pitch angle, roll angle, etc. of the depth sensor relative to the traveling direction of the current vehicle. According to the calibrated pitch angle and a preset algorithm, the three-dimensional coordinates of each laser point related to the obstacle, e.g., coordinates (x, y, z), can be calculated from the depth information of the obstacle. The three-dimensional coordinates may be absolute coordinates of the obstacle in the world coordinate system, or relative coordinates with respect to a reference position of the current vehicle.
Then, the height coordinate z in the three-dimensional coordinates of the obstacle can be set to zero, to generate the three-dimensional coordinates after projection onto the road surface. That is, the three-dimensional coordinates of each laser point related to the obstacle can be converted into (x, y, 0).
Finally, the projection region of the obstacle on the road surface can be determined in the image information according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
As with the depth sensor, due to manufacturing tolerances, after an imaging device is installed on a vehicle, calibration parameters such as the pitch angle of the imaging device on that vehicle must also first be determined. Therefore, according to the pitch angle of the imaging device relative to the traveling direction of the current vehicle and a preset algorithm, the projected three-dimensional coordinates of each laser point related to the obstacle can be converted into image coordinates in the image information, and the outermost region of these image coordinates (i.e., the largest contour region) can be determined as the projection region of the obstacle on the road surface.
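The projection chain above (flatten each laser point to z = 0, convert to image coordinates via the camera calibration, take the outermost region) can be sketched for the simplified case of a forward-facing pinhole camera with zero pitch and roll. The focal lengths, principal point, and mounting height are assumed values, and the "outermost region" is approximated here by an axis-aligned bounding box rather than a true largest contour.

```python
def project_ground_points(points_xy, fx=800.0, fy=800.0,
                          cx=640.0, cy=360.0, cam_height=1.5):
    """Project ground-plane points (x forward, y left, in metres; z = 0)
    into pixel coordinates of a forward-facing pinhole camera mounted
    cam_height metres above the road, assuming zero pitch and roll.
    Camera axes: x right, y down, z forward."""
    pixels = []
    for x, y in points_xy:
        Xc, Yc, Zc = -y, cam_height, x
        pixels.append((fx * Xc / Zc + cx, fy * Yc / Zc + cy))
    return pixels

def bounding_box(pixels):
    """Axis-aligned outermost rectangle of the projected points,
    standing in for the largest contour region of the obstacle."""
    us = [u for u, _ in pixels]
    vs = [v for _, v in pixels]
    return min(us), min(vs), max(us), max(vs)
```

In a real system the fixed axis swap would be replaced by the calibrated extrinsic matrix (including the pitch angle determined at end-of-line calibration), but the structure of the computation is the same.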
In sub-step S133, the projection region is marked as the obstacle region on the road surface.
The projection region of the obstacle on the road surface, determined from the outermost region of the image coordinates, can be marked as the obstacle region on the road surface by automatic circle selection or the like.
It can thus be seen that with the area marking method according to the first embodiment of the application, during generation of the training samples for training a machine learning model, the image information of the driving environment captured by an imaging device can be acquired, the depth information of the driving environment synchronized in time with the image information can be acquired, and the obstacle region of the driving environment can be marked in the image information according to the depth information. Therefore, compared with the manual marking of obstacle regions in the prior art, the obstacle regions in the driving environment can be marked automatically, improving the efficiency of area marking.
In the first embodiment above, the obstacle regions can be marked automatically in the image information captured by the imaging device during annotation by incorporating the depth information from the depth sensor. However, for purposes such as driver assistance, it is desirable not only to mark out the obstacle regions but also to mark out the entire drivable region of the driving environment, and to generate the training samples for the machine learning model based on these annotation results.
On the basis of the first embodiment of the application, the second embodiment of the application is proposed to solve the above problem.
Fig. 5 is a flowchart of the area marking method according to the second embodiment of the application.
As shown in Fig. 5, the area marking method according to the second embodiment of the application may include the steps below.
In Fig. 5, the same reference numbers denote the same steps as in Fig. 2. Steps S110-S130 in Fig. 5 are therefore identical to steps S110-S130 in Fig. 2, and reference may be made to the description given above in conjunction with Figs. 2 to 4. Fig. 5 differs from Fig. 2 in that step S140 and a further optional step S150 are added.
In step S140, the drivable region of the driving environment is marked in the image information according to user input and the obstacle region.
Before, after, or in parallel with marking the obstacle region on the road surface in the image information, the drivable region on the road surface can also be detected in the image information by various methods.
Fig. 6 is a flowchart of the drivable-region marking step according to an embodiment of the application.
As shown in Fig. 6, step S140 may include:
In sub-step S141, user input is received.
The user input may be boundary position information of the road surface found by the user through visual recognition, and may include coordinate input on the image, circle-selection input, or the like.
In sub-step S142, the road surface boundary of the road is determined according to the user input.
For example, the road surface boundary of the road can be marked in the image information according to the boundary position information input by the user. The road surface boundary may be at least one of the following: a curb, an isolation strip, a green belt, a guardrail, a lane line, and the edge of another vehicle.
In sub-step S143, the drivable region on the road surface is marked according to the road surface boundary and the obstacle region.
For example, the road surface region of the road can be determined according to the road surface boundary, and the obstacle region can be removed from the road surface region to obtain the drivable region.
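Sub-step S143 amounts to a per-pixel set difference between the road surface region and the obstacle regions. A minimal sketch using boolean masks follows; the mask representation is an assumption, since the patent does not specify one.

```python
def drivable_region(road_mask, obstacle_mask):
    """Per-pixel drivable mask: a pixel is drivable if it lies inside
    the road-surface boundary and is not covered by any obstacle region.
    Both inputs are equally sized 2-D lists of booleans."""
    return [
        [road and not obst for road, obst in zip(road_row, obst_row)]
        for road_row, obst_row in zip(road_mask, obstacle_mask)
    ]
```

The same operation could equally be expressed on polygons; the boolean-mask form is shown because it maps directly onto per-pixel training labels.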
Below, the effect of the embodiments of the application is illustrated by a specific experiment.
Fig. 7A is a schematic diagram of the image information of Fig. 1 combined with depth information and user input according to an embodiment of the application, and Fig. 7B is a schematic diagram of the image information of Fig. 1 with the obstacle regions and the drivable region marked according to an embodiment of the application.
With reference to Fig. 7A, time-synchronized image information and laser sensor information can be obtained during annotation. In the image information, the user can identify by eye the four lane lines (lane lines 1 to 4) and the one boundary line (boundary line 1) present on the road surface shown in Fig. 1, as candidate markers of the road surface boundary. Here, the drivable road surface range of the current vehicle can be determined depending on the driver assistance strategy. For example, when lane line 3 and lane line 4 are solid lines, they can normally be determined as the road surface boundary delimiting the road surface range; but in an emergency (e.g., when a warning of a possible collision ahead or behind is issued), the maximum physically drivable range, i.e., road boundary 1 and road boundary 5 taken as the road surface boundary, can also be used to determine the road surface range. In addition, as shown in Fig. 7A, three laser point clusters (laser point clusters 1 to 3) can be detected on the road surface based on depth information such as the laser point cloud. Next, by transforming the spatial coordinates of laser point clusters 1 to 3 and projecting them onto the ground plane of the road surface in the image information, the intersections of obstacles 1 to 3 with the ground plane can be obtained. Finally, the largest contour region above each intersection can be marked as an obstacle region, i.e., a no-driving region, and the remaining region can be marked as the drivable region, as shown in Fig. 7B, which is illustrated taking road boundary 1 and road boundary 5 as the road surface boundary.
Referring back to Fig. 5, next, optionally, in step S150, the training sample is generated on the basis of the image information in which the drivable region is labeled.
For example, the image information and the associated annotation information may be packaged together to generate a training sample for subsequent use in training a machine learning model.
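A minimal sketch of such packaging is given below; the `.npz` container and the field names are illustrative choices, since the application only states that the image and the annotation information are packaged together:

```python
import io
import json
import numpy as np

def make_training_sample(image, drivable_mask, meta):
    """Bundle an image and its drivable-region annotation into one record
    (a compressed .npz byte string, an illustrative container choice)."""
    assert image.shape[:2] == drivable_mask.shape
    buf = io.BytesIO()
    np.savez_compressed(buf,
                        image=image,
                        drivable_mask=drivable_mask.astype(np.uint8),
                        meta=json.dumps(meta))    # annotation metadata as JSON
    return buf.getvalue()

sample = make_training_sample(
    np.zeros((720, 1280, 3), np.uint8),           # placeholder image
    np.ones((720, 1280), bool),                   # placeholder drivable mask
    {"source": "front_camera", "timestamp": 1476.123})
restored = np.load(io.BytesIO(sample), allow_pickle=False)
print(restored["drivable_mask"].shape)            # (720, 1280)
```

A training pipeline could then read such records back and feed image/mask pairs to the model.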
It can thus be seen that, with the region labeling method according to the second embodiment of the present application, the obstacle region in the driving environment can be labeled in the image information captured by the imaging device according to the depth information captured by the depth sensor; the environment boundary in the driving environment can be labeled in the image information according to user input; the drivable region in the driving environment can be determined according to the environment boundary and the obstacle region; and the training sample can be generated on the basis of the image information in which the drivable region is labeled. The drivable region in the driving environment can therefore be detected reliably and efficiently, and training samples for use by a machine learning model can be generated.
Exemplary Apparatus
Below, a region labeling apparatus according to an embodiment of the present application is described with reference to Fig. 8.
Fig. 8 illustrates a block diagram of the region labeling apparatus according to an embodiment of the present application.
As shown in Fig. 8, the region labeling apparatus 100 may include: an image acquisition unit 110 for obtaining, during generation of a training sample for training a machine learning model, the image information of the driving environment captured by an imaging device; a depth acquisition unit 120 for obtaining the depth information of the driving environment that is synchronized in time with the image information; and an obstacle labeling unit 130 for labeling, in the image information, the obstacle region in the driving environment according to the depth information.
In one example, the image acquisition unit 110 may obtain the image information of the road surface in the driving direction of the current vehicle.
In one example, the depth acquisition unit 120 may include: a time determining module for determining the acquisition time at which the imaging device captured the image information; and a depth acquisition module for obtaining the depth information of the road surface in the driving direction captured at the acquisition time by the depth sensor of the current vehicle.
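The time synchronization performed by these two modules can be sketched as a nearest-timestamp lookup; the 50 ms tolerance and the `(timestamp, frame)` list format are illustrative assumptions, not details from the application:

```python
import bisect

def synchronized_depth_frame(image_ts, depth_frames, tol=0.05):
    """Return the depth frame whose timestamp is nearest to the image
    acquisition time, or None if nothing lies within `tol` seconds.
    `depth_frames` is a list of (timestamp, frame) sorted by timestamp."""
    stamps = [t for t, _ in depth_frames]
    i = bisect.bisect_left(stamps, image_ts)
    # The nearest stamp is either just below or just above the insertion point.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(depth_frames)]
    best = min(candidates, key=lambda j: abs(stamps[j] - image_ts), default=None)
    if best is None or abs(stamps[best] - image_ts) > tol:
        return None
    return depth_frames[best][1]

frames = [(0.00, "d0"), (0.10, "d1"), (0.20, "d2")]
print(synchronized_depth_frame(0.11, frames))   # d1
print(synchronized_depth_frame(0.40, frames))   # None: no frame within tolerance
```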
In one example, the obstacle labeling unit 130 may include: an obstacle judging module for judging, according to the depth information, whether an obstacle is present on the road surface; a projection determining module for determining, in response to the presence of an obstacle and according to the depth information of the obstacle, the projection region of the obstacle on the road surface in the image information; and an obstacle labeling module for labeling the projection region as the obstacle region on the road surface.
In one example, the projection determining module may determine the three-dimensional coordinates of the obstacle relative to the current vehicle according to the depth information of the obstacle and the calibration parameters of the depth sensor; set the height coordinate in the three-dimensional coordinates of the obstacle to zero, to generate the three-dimensional coordinates after projection onto the road surface; and determine, in the image information, the projection region of the obstacle on the road surface according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
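The first two of the three operations performed by the projection determining module — transforming depth measurements into vehicle coordinates using the sensor's calibration parameters, then zeroing the height coordinate — might look as follows; the extrinsics `R` and `t` are hypothetical values standing in for the unspecified calibration parameters:

```python
import numpy as np

# Hypothetical depth-sensor extrinsics in the vehicle frame (x forward,
# y left, z up); the application only refers to "calibration parameters".
R = np.eye(3)                      # sensor axes assumed aligned with vehicle axes
t = np.array([2.0, 0.0, 1.2])      # sensor 2.0 m forward of the origin, 1.2 m up

def obstacle_in_vehicle_frame(points_sensor, R=R, t=t):
    """Step 1: convert a cluster of depth measurements from sensor
    coordinates into the vehicle frame. Step 2: zero the height
    coordinate so the cluster can be projected onto the road plane."""
    pts = np.asarray(points_sensor, float) @ R.T + t
    grounded = pts.copy()
    grounded[:, 2] = 0.0           # height -> 0: projection onto the road surface
    return pts, grounded

pts, grounded = obstacle_in_vehicle_frame([[8.0, -0.3, 0.4]])
print(pts[0])       # obstacle point 10 m ahead, 0.3 m to the right, 1.6 m up
print(grounded[0])  # same point with its height dropped to the road plane
```

The grounded coordinates would then be handed to the imaging device's projection (the third operation) to obtain the projection region in the image.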
In one example, the obstacle may be at least one of the following: a pedestrian, an animal, a fallen object, a warning sign, a hard shoulder, and another vehicle.
In one example, the region labeling apparatus 100 may further include: a drivable-region labeling unit (not shown) for labeling, in the image information, the drivable region in the driving environment according to user input and the obstacle region.
In one example, the drivable-region labeling unit may include: an input receiving module for receiving user input; a boundary determining module for determining the road boundaries of the road surface according to the user input; and a drivable-region labeling module for labeling the drivable region on the road surface according to the road boundaries and the obstacle region.
In one example, the drivable-region labeling module may determine the road surface region of the road surface according to the road boundaries, and remove the obstacle region from the road surface region to obtain the drivable region.
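This two-step computation of the drivable region amounts to a Boolean mask subtraction over the image pixels, for example:

```python
import numpy as np

def drivable_mask(road_mask, obstacle_mask):
    """Remove the labeled obstacle region from the road surface region
    to obtain the drivable region, all as boolean pixel masks."""
    return road_mask & ~obstacle_mask

# Toy 4x6 image: lower half is road, a 2x2 obstacle footprint sits on it.
road = np.zeros((4, 6), bool); road[2:, :] = True
obst = np.zeros((4, 6), bool); obst[2:, 1:3] = True
out = drivable_mask(road, obst)
print(int(out.sum()))   # 12 road pixels minus 4 obstacle pixels -> 8
```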
In one example, the region labeling apparatus 100 may further include: a sample generation unit (not shown) for generating the training sample on the basis of the image information in which the drivable region is labeled.
The specific functions and operations of the units and modules in the region labeling apparatus 100 described above have been discussed in detail in the description of the region labeling method given with reference to Figs. 1 to 7B, and their repeated description is therefore omitted.
As described above, the embodiments of the present application can be applied to labeling the obstacle region in the driving environment of various online electronic devices equipped with an imaging device, such as a vehicle, a mobile robot, or a fixed surveillance camera. Moreover, the region labeling method and region labeling apparatus according to the embodiments of the present application may be implemented directly on such an online electronic device. However, since the processing capability of an online electronic device is often limited, in order to obtain better performance the embodiments of the present application may also be implemented in various offline electronic devices that communicate with the online electronic device in order to transmit the trained machine learning model to it. For example, the offline electronic device may include a terminal device, a server, or the like.
Accordingly, the region labeling apparatus 100 according to the embodiments of the present application may be integrated into the offline electronic device as a software module and/or a hardware module; in other words, the electronic device may include the region labeling apparatus 100. For example, the region labeling apparatus 100 may be a software module in the operating system of the electronic device, or may be an application program developed for the electronic device; of course, the region labeling apparatus 100 may equally be one of the many hardware modules of the electronic device.
Alternatively, in another example, the region labeling apparatus 100 may be a device discrete from the offline electronic device, in which case the region labeling apparatus 100 may be connected to the electronic device through a wired and/or wireless network and transmit interactive information according to an agreed data format.
Exemplary Electronic Device
Below, an electronic device according to an embodiment of the present application is described with reference to Fig. 9. The electronic device may be an online electronic device equipped with an imaging device, such as a vehicle or a mobile robot, or may be an offline electronic device that communicates with such an online electronic device in order to transmit the trained machine learning model to it.
Fig. 9 illustrates a block diagram of the electronic device according to an embodiment of the present application.
As shown in Fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 10 to perform desired functions.
The memory 12 may include one or more computer program products, which may include computer-readable storage media in various forms, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may run the program instructions to implement the region labeling methods of the embodiments of the present application described above and/or other desired functions. Various contents, such as image information, depth information, obstacle regions, drivable regions, and annotation information, may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include an input device 13 and an output device 14, these components being interconnected by a bus system and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 10 shown in Fig. 9 are exemplary rather than limiting, and the electronic device 10 may also have other components and structures as needed.
For example, the input device 13 may be an imaging device for capturing image information, and the captured image information may be stored in the memory 12 for use by other components. Of course, another integrated or discrete imaging device may also be used to capture the image frame sequence and send it to the electronic device 10. As another example, the input device 13 may be a depth sensor for capturing depth information, and the captured depth information may likewise be stored in the memory 12. In addition, the input device 13 may also include, for example, a keyboard, a mouse, and a communication network and the remote input devices connected thereto.
Output device 14 can export various information to outside (for example, user or machine learning model), including determining The barrier region of running environment, wheeled region, training sample etc..The output equipment 14 can include such as display, Loudspeaker, printer and communication network and its remote output devices that connected etc..
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in Fig. 9, and components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other appropriate components depending on the specific application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, an embodiment of the present application may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to perform the steps of the region labeling methods according to the various embodiments of the present application described in the "Exemplary Methods" section of this specification.
The computer program product may be written in any combination of one or more programming languages to produce program code for performing the operations of the embodiments of the present application; the programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
In addition, an embodiment of the present application may also be a computer-readable storage medium on which computer program instructions are stored; the computer program instructions, when run by a processor, cause the processor to perform the steps of the region labeling methods according to the various embodiments of the present application described in the "Exemplary Methods" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The general principles of the present application have been described above in conjunction with specific embodiments. However, it should be pointed out that the merits, advantages, effects, and so on mentioned in the present application are merely examples rather than limitations, and these merits, advantages, and effects cannot be considered prerequisites of every embodiment of the present application. In addition, the specific details disclosed above are provided merely for the purpose of illustration and ease of understanding rather than limitation, and do not restrict the present application to being implemented using those specific details.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the present application are merely illustrative examples and are not intended to require or imply that they must be connected, arranged, or configured in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including", "comprising", and "having" are open-ended terms that mean "including but not limited to" and may be used interchangeably therewith. The words "or" and "and" as used here refer to the word "and/or" and may be used interchangeably therewith, unless the context clearly indicates otherwise. The word "such as" as used here refers to the phrase "such as but not limited to" and may be used interchangeably therewith.
It should also be pointed out that in the apparatuses, devices, and methods of the present application, each component or each step may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalents of the present application.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined here may be applied to other aspects without departing from the scope of the present application. Therefore, the present application is not intended to be limited to the aspects shown here, but is to be accorded the widest scope consistent with the principles and novel features disclosed here.
The above description has been presented for the purposes of illustration and description. Furthermore, this description is not intended to restrict the embodiments of the present application to the forms disclosed here. Although a number of exemplary aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (12)

1. A region labeling method, comprising:
during generation of a training sample for training a machine learning model, obtaining image information of a driving environment captured by an imaging device;
obtaining depth information of the driving environment that is synchronized in time with the image information; and
labeling, in the image information, an obstacle region in the driving environment according to the depth information.
2. The method of claim 1, wherein obtaining the image information of the driving environment captured by the imaging device comprises:
obtaining image information of a road surface in a driving direction of a current vehicle.
3. The method of claim 2, wherein obtaining the depth information of the driving environment that is synchronized in time with the image information comprises:
determining an acquisition time at which the imaging device captured the image information; and
obtaining depth information of the road surface in the driving direction captured at the acquisition time by a depth sensor of the current vehicle.
4. The method of claim 3, wherein labeling, in the image information, the obstacle region in the driving environment according to the depth information comprises:
judging, according to the depth information, whether an obstacle is present on the road surface;
in response to the presence of an obstacle, determining, in the image information and according to depth information of the obstacle, a projection region of the obstacle on the road surface; and
labeling the projection region as the obstacle region on the road surface.
5. The method of claim 4, wherein determining, in the image information and according to the depth information of the obstacle, the projection region of the obstacle on the road surface comprises:
determining three-dimensional coordinates of the obstacle relative to the current vehicle according to the depth information of the obstacle and calibration parameters of the depth sensor;
setting a height coordinate in the three-dimensional coordinates of the obstacle to zero, to generate three-dimensional coordinates after projection onto the road surface; and
determining, in the image information, the projection region of the obstacle on the road surface according to the projected three-dimensional coordinates and calibration parameters of the imaging device.
6. The method of claim 4 or 5, wherein the obstacle is at least one of the following: a pedestrian, an animal, a fallen object, a warning sign, a hard shoulder, and another vehicle.
7. The method of claim 5, further comprising:
receiving user input;
determining road boundaries of the road surface according to the user input; and
labeling a drivable region on the road surface according to the road boundaries and the obstacle region.
8. The method of claim 7, wherein labeling the drivable region on the road surface according to the road boundaries and the obstacle region comprises:
determining a road surface region of the road surface according to the road boundaries; and
removing the obstacle region from the road surface region, to obtain the drivable region.
9. The method of claim 7 or 8, further comprising:
generating the training sample on the basis of the image information in which the drivable region is labeled.
10. A region labeling apparatus, comprising:
an image acquisition unit for obtaining, during generation of a training sample for training a machine learning model, image information of a driving environment captured by an imaging device;
a depth acquisition unit for obtaining depth information of the driving environment that is synchronized in time with the image information; and
an obstacle labeling unit for labeling, in the image information, an obstacle region in the driving environment according to the depth information.
11. An electronic device, comprising:
a processor;
a memory; and
computer program instructions stored in the memory, which, when run by the processor, cause the processor to perform the method of any one of claims 1 to 9.
12. A computer program product, comprising computer program instructions which, when run by a processor, cause the processor to perform the method of any one of claims 1 to 9.
CN201610921206.7A 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment Active CN106503653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610921206.7A CN106503653B (en) 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN106503653A true CN106503653A (en) 2017-03-15
CN106503653B CN106503653B (en) 2020-10-13

Family

ID=58318354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610921206.7A Active CN106503653B (en) 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN106503653B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060013438A1 (en) * 2004-07-13 2006-01-19 Susumu Kubota Obstacle detection apparatus and a method therefor
KR101428403B1 (en) * 2013-07-17 2014-08-07 현대자동차주식회사 Apparatus and method for detecting obstacle in front
CN105007449A (en) * 2014-04-25 2015-10-28 日立建机株式会社 Vehicle peripheral obstacle notification system
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
CN105319991A (en) * 2015-11-25 2016-02-10 哈尔滨工业大学 Kinect visual information-based robot environment identification and operation control method
CN105957145A (en) * 2016-04-29 2016-09-21 百度在线网络技术(北京)有限公司 Road barrier identification method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
GENQIANG DENG et al.: "SLAM: Depth Image Information for Mapping and Inertial Navigation System for Localization", 2016 Asia-Pacific Conference on Intelligent Robot Systems *
ZHU Tao et al.: "Online fast obstacle detection algorithm based on Kinect depth technology", Electronic Design Engineering *
WANG Xinzhu et al.: "Obstacle detection method for autonomous vehicles based on 3D lidar and depth images", Journal of Jilin University (Engineering and Technology Edition) *
ZHAO Richeng: "Research on road surface obstacle detection technology in driver assistance", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018103363A1 (en) * 2016-12-07 2018-06-14 北京三快在线科技有限公司 Road determination method and device
WO2018177159A1 (en) * 2017-04-01 2018-10-04 上海蔚来汽车有限公司 Method and system for determining position of moving object
CN107437268A (en) * 2017-07-31 2017-12-05 广东欧珀移动通信有限公司 Photographic method, device, mobile terminal and computer-readable storage medium
CN107907886A (en) * 2017-11-07 2018-04-13 广东欧珀移动通信有限公司 Travel conditions recognition methods, device, storage medium and terminal device
CN108256413B (en) * 2017-11-27 2022-02-25 科大讯飞股份有限公司 Passable area detection method and device, storage medium and electronic equipment
CN108256413A (en) * 2017-11-27 2018-07-06 科大讯飞股份有限公司 It can traffic areas detection method and device, storage medium, electronic equipment
CN108563742A (en) * 2018-04-12 2018-09-21 王海军 The method for automatically creating artificial intelligence image recognition training material and marking file
CN110377024A (en) * 2018-04-13 2019-10-25 百度(美国)有限责任公司 Automaticdata for automatic driving vehicle marks
CN108827309A (en) * 2018-06-29 2018-11-16 炬大科技有限公司 A kind of robot path planning method and the dust catcher with it
US11393219B2 (en) 2018-09-27 2022-07-19 Apollo Intelligent Driving Technology (Beijing) Co., Ltd. Method and apparatus for detecting obstacle, electronic device, vehicle and storage medium
CN109271944A (en) * 2018-09-27 2019-01-25 百度在线网络技术(北京)有限公司 Obstacle detection method, device, electronic equipment, vehicle and storage medium
US11774261B2 (en) 2018-10-29 2023-10-03 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN109376664A (en) * 2018-10-29 2019-02-22 百度在线网络技术(北京)有限公司 Machine learning training method, device, server and medium
US11340080B2 (en) 2018-10-29 2022-05-24 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN109376664B (en) * 2018-10-29 2021-03-09 百度在线网络技术(北京)有限公司 Machine learning training method, device, server and medium
CN111104849A (en) * 2018-10-29 2020-05-05 安波福技术有限公司 Automatic annotation of environmental features in a map during navigation of a vehicle
CN111323026A (en) * 2018-12-17 2020-06-23 兰州大学 Ground filtering method based on high-precision point cloud map
CN109683613B (en) * 2018-12-24 2022-04-29 驭势(上海)汽车科技有限公司 Method and device for determining auxiliary control information of vehicle
CN109683613A (en) * 2018-12-24 2019-04-26 驭势(上海)汽车科技有限公司 It is a kind of for determining the method and apparatus of the ancillary control information of vehicle
CN109765634A (en) * 2019-01-18 2019-05-17 广州市盛光微电子有限公司 A kind of deep annotation device
CN109765634B (en) * 2019-01-18 2021-09-17 广州市盛光微电子有限公司 Depth marking device
WO2020173188A1 (en) * 2019-02-26 2020-09-03 文远知行有限公司 Method and device for locating obstacle in semantic map, computer apparatus, and storage medium
CN111696144A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Depth information determination method, depth information determination device and electronic equipment
CN110096059B (en) * 2019-04-25 2022-03-01 杭州飞步科技有限公司 Automatic driving method, device, equipment and storage medium
CN110096059A (en) * 2019-04-25 2019-08-06 杭州飞步科技有限公司 Automatic Pilot method, apparatus, equipment and storage medium
CN110197148A (en) * 2019-05-23 2019-09-03 北京三快在线科技有限公司 Mask method, device, electronic equipment and the storage medium of target object
CN111027381A (en) * 2019-11-06 2020-04-17 杭州飞步科技有限公司 Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN110866504B (en) * 2019-11-20 2023-10-17 北京百度网讯科技有限公司 Method, device and equipment for acquiring annotation data
CN110866504A (en) * 2019-11-20 2020-03-06 北京百度网讯科技有限公司 Method, device and equipment for acquiring marked data
CN111125442B (en) * 2019-12-11 2022-11-15 苏州智加科技有限公司 Data labeling method and device
WO2021114608A1 (en) * 2019-12-11 2021-06-17 Suzhou Zhijia Science & Technologies Co., Ltd. Data labeling method and apparatus
CN111125442A (en) * 2019-12-11 2020-05-08 苏州智加科技有限公司 Data labeling method and device
CN111368794B (en) * 2020-03-19 2023-09-19 北京百度网讯科技有限公司 Obstacle detection method, device, equipment and medium
CN111368794A (en) * 2020-03-19 2020-07-03 北京百度网讯科技有限公司 Obstacle detection method, apparatus, device, and medium
WO2021189420A1 (en) * 2020-03-27 2021-09-30 华为技术有限公司 Data processing method and device
CN111552289A (en) * 2020-04-28 2020-08-18 苏州高之仙自动化科技有限公司 Detection method, virtual radar device, electronic apparatus, and storage medium
CN112200049B (en) * 2020-09-30 2023-03-31 华人运通(上海)云计算科技有限公司 Method, device and equipment for marking road surface topography data and storage medium
CN112200049A (en) * 2020-09-30 2021-01-08 华人运通(上海)云计算科技有限公司 Method, device and equipment for marking road surface topography data and storage medium
CN112714266B (en) * 2020-12-18 2023-03-31 北京百度网讯科技有限公司 Method and device for displaying labeling information, electronic equipment and storage medium
US11694405B2 (en) 2020-12-18 2023-07-04 Beijing Baidu Netcom Science Technology Co., Ltd. Method for displaying annotation information, electronic device and storage medium
CN112714266A (en) * 2020-12-18 2021-04-27 北京百度网讯科技有限公司 Method and device for displaying annotation information, electronic equipment and storage medium
CN115164910A (en) * 2022-06-22 2022-10-11 小米汽车科技有限公司 Travel route generation method and device, vehicle, storage medium, and chip
CN115164910B (en) * 2022-06-22 2023-02-21 小米汽车科技有限公司 Travel route generation method and device, vehicle, storage medium, and chip

Also Published As

Publication number Publication date
CN106503653B (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN106503653A (en) Area marking method, device and electronic equipment
US11885910B2 (en) Hybrid-view LIDAR-based object detection
EP4145393B1 (en) Vehicle localization
US11797407B2 (en) Systems and methods for generating synthetic sensor data via machine learning
US10949684B2 (en) Vehicle image verification
US11593950B2 (en) System and method for movement detection
US11488392B2 (en) Vehicle system and method for detecting objects and object distance
US11126891B2 (en) Systems and methods for simulating sensor data using a generative model
US10740658B2 (en) Object recognition and classification using multiple sensor modalities
US11693409B2 (en) Systems and methods for a scenario tagger for autonomous vehicles
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
US9767368B2 (en) Method and system for adaptive ray based scene analysis of semantic traffic spaces and vehicle equipped with such system
CN113366486B (en) Object classification using out-of-region context
CN110268413A (en) Low-level sensor fusion
CN106485233A (en) Drivable region detection method, device and electronic equipment
US20200151512A1 (en) Method and system for converting point cloud data for use with 2d convolutional neural networks
CN109863500A (en) Event-driven region-of-interest management
US11280630B2 (en) Updating map data
US20170083794A1 (en) Virtual, road-surface-perception test bed
CN103770704A (en) System and method for recognizing parking space line markings for vehicle
US20210237737A1 (en) Method for Determining a Lane Change Indication of a Vehicle
CN116529784A (en) Method and system for adding lidar data
US20220284623A1 (en) Framework For 3D Object Detection And Depth Prediction From 2D Images
US20240062386A1 (en) High throughput point cloud processing
CN117795566A (en) Perception of three-dimensional objects in sensor data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant