CN106503653B - Region labeling method and device and electronic equipment - Google Patents


Publication number
CN106503653B
Authority
CN
China
Prior art keywords
obstacle
road surface
image information
information
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610921206.7A
Other languages
Chinese (zh)
Other versions
CN106503653A (en
Inventor
梁继
余轶南
黄畅
余凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Horizon Robotics Science and Technology Co Ltd
Original Assignee
Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Horizon Robotics Science and Technology Co Ltd filed Critical Shenzhen Horizon Robotics Science and Technology Co Ltd
Priority to CN201610921206.7A priority Critical patent/CN106503653B/en
Publication of CN106503653A publication Critical patent/CN106503653A/en
Application granted granted Critical
Publication of CN106503653B publication Critical patent/CN106503653B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Abstract

A region labeling method and device and electronic equipment are disclosed. The method comprises the following steps: acquiring image information of a driving environment acquired by an imaging device in the process of generating a training sample for training a machine learning model; acquiring depth information of the driving environment that is temporally synchronized with the image information; and labeling an obstacle region in the driving environment in the image information according to the depth information. In this way, obstacle regions in the driving environment can be labeled automatically, improving the efficiency of region labeling.

Description

Region labeling method and device and electronic equipment
Technical Field
The present application relates to the field of driving assistance, and more particularly, to a region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium.
Background
In recent years, with the rapid development of the transportation industry (e.g., the automobile industry), traffic accidents have become a global problem: the number of traffic fatalities worldwide is estimated to exceed 500,000 every year. Assisted driving technology, which integrates techniques such as automatic control, artificial intelligence, and pattern recognition, has therefore been developed. Assisted driving can provide necessary information and/or warnings to a user while driving a vehicle, so as to avoid dangerous situations such as collisions and lane departures. In some cases, vehicle travel may even be controlled automatically by assisted driving techniques.
Travelable (drivable) region detection is one of the key components of driver assistance technology, and detection based on a machine learning model is currently the most common approach. To ensure the accuracy of the machine learning model, a large amount of image information of the driving environment is used in advance as training samples for offline training of the model. Since various obstacles such as vehicles and pedestrians are often present in a driving environment, the obstacle regions need to be labeled in the training samples before offline training, so that what remains can be treated as the drivable region for the vehicle. At present, labeling the obstacle regions in training samples depends mainly on manual work: a user must manually find the individual obstacles in a large amount of image information and label the size, position, and so on of each one. Because a training sample library generally needs to reach a scale of hundreds of thousands of images, manual labeling is time-consuming, costly in labor, and does not scale.
Therefore, the existing region labeling technique is inefficient.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. Embodiments of the present application provide a region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium capable of automatically labeling an obstacle region in a driving environment.
According to an aspect of the present application, there is provided a region labeling method, including: acquiring image information of a driving environment acquired by an imaging device in the process of generating a training sample for training a machine learning model; acquiring depth information of the driving environment synchronized in time with the image information; and marking an obstacle region in the driving environment in the image information according to the depth information.
According to another aspect of the present application, there is provided a region labeling apparatus including: the image acquisition unit is used for acquiring the image information of the driving environment acquired by the imaging device in the process of generating a training sample for training the machine learning model; a depth acquisition unit configured to acquire depth information of the running environment temporally synchronized with the image information; and an obstacle labeling unit for labeling an obstacle region in the driving environment in the image information according to the depth information.
According to another aspect of the present application, there is provided an electronic device including: a processor; a memory; and computer program instructions stored in the memory, which when executed by the processor, cause the processor to perform the above-described region labeling method.
According to another aspect of the present application, there is provided a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the above-described region labeling method.
According to another aspect of the present application, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the above-described region labeling method.
Compared with the prior art, with the region labeling method, the region labeling device, the electronic equipment, the computer program product and the computer readable storage medium according to the embodiments of the application, in the process of generating the training sample for training the machine learning model, the image information of the driving environment acquired by the imaging device is acquired, the depth information of the driving environment synchronized with the image information in time is acquired, and the obstacle region in the driving environment is labeled in the image information according to the depth information. Therefore, compared with the case of manually labeling the obstacle region as in the prior art, the obstacle region in the driving environment can be automatically labeled, and the efficiency of region labeling is improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a schematic view of image information of a running environment acquired by an imaging device according to an embodiment of the present application.
Fig. 2 illustrates a flowchart of a region labeling method according to a first embodiment of the present application.
Fig. 3 illustrates a flow chart of the step of obtaining depth information according to an embodiment of the present application.
FIG. 4 illustrates a flow chart of the step of marking an obstacle according to an embodiment of the present application.
Fig. 5 illustrates a flowchart of a region labeling method according to a second embodiment of the present application.
FIG. 6 is a flowchart illustrating a step of labeling a drivable area according to an embodiment of the present application.
Fig. 7A illustrates a schematic diagram of combining depth information and user input in the image information shown in fig. 1 according to an embodiment of the present application, and fig. 7B illustrates a schematic diagram of labeling an obstacle region and a travelable region in the image information shown in fig. 1 according to an embodiment of the present application.
FIG. 8 illustrates a block diagram of a region labeling apparatus according to an embodiment of the present application.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, in the prior art, the labeling of the obstacle region in the training sample mainly depends on the manual work of the user, and thus there are problems of complicated operation and low efficiency.
In view of this technical problem, the basic concept of the present application is to provide a new region labeling method, apparatus, electronic device, computer program product, and computer-readable storage medium, which can automatically label obstacle regions in the image information acquired by an imaging device by combining it, during the labeling process, with depth information from a depth sensor, without manual operation by the user, thereby reducing labeling cost and increasing labeling speed.
Embodiments of the present application may be applied to various scenarios. For example, embodiments of the present application may be used to label obstacle areas in the driving environment in which a vehicle is located. The vehicle may be of various types, such as a motor vehicle, an aircraft, a spacecraft, or a watercraft. For convenience of explanation, the description will continue with a motor vehicle (referred to simply as a vehicle) as the example.
For example, in order to enable a vehicle to identify various obstacles on a road surface as its driving environment during actual driving and to assist driving, it is necessary to perform offline training of a machine learning model in the vehicle using a large amount of image information of the driving environment as a training sample in advance. For this purpose, one or more imaging devices can be provided on the test vehicle in advance for acquiring a large amount of image information about different driving environments. Of course, the present application is not limited thereto. For example, the image information may also come from a surveillance camera set in a fixed location, or directly from the internet, or the like.
Fig. 1 illustrates a schematic view of image information of a running environment acquired by an imaging device according to an embodiment of the present application.
As shown in fig. 1, the image information acquired by a test vehicle equipped with an imaging device shows the vehicle traveling on a road surface, a typical driving environment. On the road surface there are, among other objects, 3 obstacles (obstacle 1, obstacle 2, and obstacle 3, which are other vehicles located at different distances), 4 lane lines (lane line 1 to lane line 4, from left to right), and 1 boundary line (boundary line 1, between the road and a lawn).
The existing method for labeling obstacle regions generally requires a user to find the individual obstacles in the image information by eye and to label the size, position, and so on of each one, for example by mouse selection. In general, such obstacle labeling is simple and effective for a single image. However, the sample library used for offline training of the machine learning model often contains a very large amount of image information; if every image has to be inspected and labeled manually, the process is time-consuming and labor-intensive. Moreover, because missed or incorrect labels can occur in manual operation, the existing obstacle region labeling may not be accurate enough, which can lead to errors in the subsequent machine learning results and, in turn, cause the vehicle to misjudge actual road conditions during online use, creating traffic safety hazards.
To this end, in an embodiment of the present application, in generating a training sample for training a machine learning model, image information of a travel environment acquired by an imaging device is acquired, depth information of the travel environment temporally synchronized with the image information is acquired, and an obstacle region in the travel environment is marked in the image information according to the depth information. Therefore, the embodiment of the application based on the basic concept can automatically mark the obstacle area in the driving environment, and the area marking efficiency is improved.
Of course, although the embodiments of the present application have been described above by taking a vehicle as an example, the present application is not limited thereto. The embodiment of the application can be applied to marking the obstacle area in the driving environment where various online electronic devices such as a mobile robot and a fixed monitoring camera are located.
In the following, various embodiments according to the present application will be described with reference to the drawings in connection with the application scenario of fig. 1.
Exemplary method
Fig. 2 illustrates a flowchart of a region labeling method according to a first embodiment of the present application.
As shown in fig. 2, a region labeling method according to a first embodiment of the present application may include:
in step S110, in the process of generating a training sample for training the machine learning model, image information of the running environment acquired by the imaging device is acquired.
In order to perform offline training of the machine learning model, it is necessary to label image information of a driving environment as a training sample in advance and search for an obstacle region therein. For example, a large amount of image information of the driving environment may be acquired by one or more imaging devices. For example, in an application scenario in which an imaging device is equipped on a test vehicle (or referred to as a current vehicle), image information of a road surface in a traveling direction of the current vehicle may be acquired by the imaging device, for example, as shown in fig. 1.
For example, the imaging device may be an image sensor for capturing image information, which may be a camera or an array of cameras. For example, the image information acquired by the image sensor may be a continuous image frame sequence (i.e., a video stream) or a discrete image frame sequence (i.e., an image data set sampled at predetermined sampling time points), etc. For example, the camera may be a monocular camera, a binocular camera, a multi-view camera, and so on; it may capture grayscale images or color images carrying color information. Of course, any other type of camera known in the art or developed in the future may be applied in the present application; the present application places no particular limitation on how the image is captured, as long as grayscale or color information of the input image can be obtained. To reduce the amount of computation in subsequent operations, in one embodiment the color image may be converted to grayscale before analysis and processing.
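For illustration only, this optional graying step might be sketched as follows in Python using OpenCV; the function name and the assumption that frames arrive as BGR color images are not part of the present application:

```python
# A minimal sketch of the optional graying step, assuming OpenCV is available
# and the captured frame is a BGR color image.
import cv2

def to_grayscale(frame_bgr):
    """Convert a color frame to grayscale to reduce later computation."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```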
In step S120, depth information of the running environment temporally synchronized with the image information is acquired.
Before, after, or simultaneously with step S110, depth information of the road surface acquired simultaneously with the image information may be additionally acquired.
For example, the depth sensor may be any suitable sensor, such as a binocular camera that measures depth based on a binocular disparity map or an infrared depth sensor (or laser depth sensor) that measures depth based on infrared illumination. For example, the depth sensor may generate depth information, such as a depth map or a laser point cloud, for measuring the position of an obstacle relative to the current vehicle. The depth sensor may collect any suitable depth information related to the distance of the obstacle from the current vehicle. For example, a depth sensor may gather information about how far in front of the current vehicle an obstacle is. Still further, the depth sensor may collect direction information such as information on whether an obstacle is on the right or left of the current vehicle, in addition to the distance information. The depth sensor may also collect information about the distance of an obstacle from the current vehicle at different points in time to determine whether the obstacle is moving towards or away from the current vehicle. Next, the description will be continued by taking the laser depth sensor as an example.
Fig. 3 illustrates a flow chart of the step of obtaining depth information according to an embodiment of the present application.
As shown in fig. 3, step S120 may include:
in sub-step S121, an acquisition time at which the imaging device acquires the image information is determined.
For example, various attribute information including the acquisition time and the like may be included in the image information. The acquisition time of the image information can be determined through the attribute information.
In sub-step S122, the depth information of the road surface in the driving direction acquired by the depth sensor of the current vehicle at the acquisition time is acquired.
Similarly, various attribute information including the acquisition time and the like may be included in the depth information. Through the acquisition time of the image information, the depth information acquired at the same time point can be determined.
Note that the present application is not limited to this. For example, the image information and depth information acquired at the same point in time may also be stored as a pair of related information at the acquisition stage of the imaging device and the depth sensor, for later retrieval.
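For illustration only, such time-based pairing might be sketched as follows; the data structures, the timestamp attribute, and the tolerance value are assumptions of this sketch rather than requirements of the present application:

```python
# Illustrative sketch: pair each image frame with the temporally closest depth
# scan, assuming both objects expose an acquisition timestamp in seconds.
def pair_by_timestamp(image_frames, depth_scans, tolerance=0.05):
    """Return (image, depth) pairs whose acquisition times differ by at most `tolerance`."""
    pairs = []
    for image in image_frames:
        # Depth scan whose acquisition time is closest to this frame's time.
        closest = min(depth_scans, key=lambda d: abs(d.timestamp - image.timestamp))
        if abs(closest.timestamp - image.timestamp) <= tolerance:
            pairs.append((image, closest))
    return pairs
```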
Referring back to fig. 2, in step S130, an obstacle region in the driving environment is marked in the image information according to the depth information.
After obtaining the corresponding image information and depth information, the two may be combined by various methods to detect obstacles and their areas in the driving environment.
FIG. 4 illustrates a flow chart of the step of marking an obstacle according to an embodiment of the present application.
As shown in fig. 4, step S130 may include:
in sub-step S131, it is determined whether an obstacle is present on the road surface based on the depth information.
For example, the obstacle may be at least one of: pedestrians, animals, spills, warning signs, piers, and other vehicles.
A laser depth sensor calculates the distance between itself and an object by emitting a very short light pulse and measuring the time interval between emitting the pulse and receiving its reflection from the obstacle. Therefore, whether an obstacle exists on the road surface, and its positional relationship to the current vehicle, can be judged from the positions and return times of the laser point cloud detected by the sensor.
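For illustration only, the underlying time-of-flight relation (the distance is half the round-trip time multiplied by the speed of light) can be sketched as follows:

```python
# Time-of-flight distance: half the round-trip time of the light pulse times the
# speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    """Distance to the reflecting object, given the pulse's round-trip time in seconds."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```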
In sub-step S132, in response to the existence of an obstacle, a projection area of the obstacle on the road surface is determined in the image information according to the depth information of the obstacle.
Once it is determined from the laser point cloud that obstacles are present on the road surface, the laser points may, for example, be clustered to estimate the number of possible obstacles, and each obstacle may then be mapped into the image information according to its depth information to determine its projection area on the road surface. It should be noted that although multiple obstacles may overlap one another in the image information, vehicles keep a safety distance from each other as required by driving rules, so a clustering result based on the depth information is more accurate than one based on the image information alone.
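For illustration only, such clustering of the laser point cloud might be sketched as follows using DBSCAN from scikit-learn; the library choice and parameter values are assumptions of this sketch rather than requirements of the present application:

```python
# Group laser points into per-obstacle clusters with DBSCAN; label -1 marks noise.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_laser_points(points_xyz, eps=0.5, min_samples=10):
    """Return a list of (M, 3) arrays, one per detected obstacle cluster."""
    points_xyz = np.asarray(points_xyz)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points_xyz).labels_
    return [points_xyz[labels == k] for k in np.unique(labels) if k != -1]
```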
Specifically, for example, first, the three-dimensional coordinates of the obstacle with respect to the current vehicle may be determined from the depth information of the obstacle and the calibration parameters of the depth sensor.
Due to manufacturing tolerances, after the depth sensor is mounted on a vehicle, each vehicle must undergo an independent end-of-line sensor calibration or aftermarket sensor adjustment in order to determine calibration parameters, such as the pitch angle of the depth sensor on the vehicle, before the sensor can ultimately be used for driving assistance purposes. For example, the calibration parameters may be the extrinsic parameter matrix of the depth sensor, which may include one or more of the pitch angle, roll angle, and so on of the depth sensor with respect to the forward orientation of the current vehicle. The three-dimensional coordinates of each laser point related to the obstacle, for example coordinates (x, y, z), may be calculated from the depth information of the obstacle according to the calibrated pitch angle and the like and a preset algorithm. The three-dimensional coordinates may be absolute coordinates of the obstacle in a world coordinate system or relative coordinates with respect to a reference position of the current vehicle.
Then, the height coordinate z in the three-dimensional coordinates of the obstacle may be set to zero to generate three-dimensional coordinates after projection onto the road surface. That is, the three-dimensional coordinates of each laser point associated with the obstacle may be modified to (x, y, 0).
Finally, the projection area of the obstacle on the road surface can be determined in the image information according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
As with the depth sensor, calibration parameters such as the pitch angle of the imaging device on the vehicle also need to be determined after the imaging device is mounted, due to manufacturing tolerances. The projected three-dimensional coordinates of each laser point related to the obstacle can therefore be converted into corresponding image coordinates in the image information according to the pitch angle of the imaging device with respect to the traveling direction of the current vehicle (among other calibration parameters) and a preset algorithm, and the outermost peripheral region of these image coordinates (i.e., the maximum outline region) can be determined as the projection area of the obstacle on the road surface.
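For illustration only, the sequence described above (transforming the obstacle's laser points with the depth-sensor calibration, zeroing the height coordinate, re-projecting into the image with the imaging-device calibration, and taking the outermost periphery) might be sketched as follows; the matrix names and shapes, and the use of a convex hull as the outermost peripheral region, are assumptions of this sketch:

```python
# Sketch: obstacle laser points -> vehicle frame -> road plane (z = 0) -> image
# plane -> outermost peripheral region (convex hull of the projected pixels).
import numpy as np
import cv2

def obstacle_projection_area(points_xyz, T_lidar_to_vehicle, T_vehicle_to_cam, K):
    """Return the convex-hull polygon (pixel coordinates) of an obstacle's road projection."""
    pts = np.asarray(points_xyz, dtype=np.float64)            # (N, 3) laser points
    pts_h = np.c_[pts, np.ones(len(pts))]                     # homogeneous (N, 4)
    in_vehicle = (T_lidar_to_vehicle @ pts_h.T).T[:, :3]      # calibrated vehicle-frame coords
    in_vehicle[:, 2] = 0.0                                    # height set to zero (road plane)
    in_vehicle_h = np.c_[in_vehicle, np.ones(len(in_vehicle))]
    in_cam = (T_vehicle_to_cam @ in_vehicle_h.T).T[:, :3]     # camera-frame coords
    in_cam = in_cam[in_cam[:, 2] > 1e-6]                      # keep points in front of the camera
    uv = (K @ in_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                               # pixel coordinates
    return cv2.convexHull(uv.astype(np.int32))                # outermost peripheral region
```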
In sub-step S133, the projected area is labeled as an obstacle area on the road surface.
The projection area of the obstacle on the road surface, determined as the outermost peripheral region of the image coordinates, may then be automatically labeled as an obstacle area on the road surface, for example by drawing an outline around it.
It can be seen that with the region labeling method according to the first embodiment of the present application, in the process of generating a training sample for training a machine learning model, image information of a driving environment acquired by an imaging device is acquired, depth information of the driving environment temporally synchronized with the image information is acquired, and an obstacle region in the driving environment is labeled in the image information according to the depth information. Therefore, compared with the case of manually labeling the obstacle region as in the prior art, the obstacle region in the driving environment can be automatically labeled, and the efficiency of region labeling is improved.
In the first embodiment described above, obstacle regions can be automatically labeled in the image information acquired by the imaging device by combining it, during the labeling process, with depth information from the depth sensor. However, for driving assistance and similar purposes, it is desirable to label not only the obstacle regions but also the travelable region in the whole driving environment, and to generate training samples for the machine learning model based on the labeling result.
In order to solve the above-described problems, a second embodiment of the present application is proposed on the basis of the first embodiment of the present application.
Fig. 5 illustrates a flowchart of a region labeling method according to a second embodiment of the present application.
As shown in fig. 5, a region labeling method according to a second embodiment of the present application may include:
in fig. 5, the same reference numerals are used to indicate the same steps as in fig. 2. Thus, steps S110-S130 in FIG. 5 are the same as steps S110-S130 of FIG. 2, and reference may be made to the description above in connection with FIGS. 2 through 4. Fig. 5 differs from fig. 2 in the addition of step S140 and a further optional step S150.
In step S140, a travelable region in the travel environment is marked in the image information according to a user input and the obstacle region.
Before, after, or at the same time as the obstacle region is labeled on the road surface, a travelable region on the road surface can also be labeled in the image information by various methods.
FIG. 6 is a flowchart illustrating a step of labeling a drivable area according to an embodiment of the present application.
As shown in fig. 6, step S140 may include:
in sub-step S141, a user input is received.
The user input may be boundary position information of a road surface found by the user based on human eye recognition, which may include coordinate input or circle selection input on the image, or the like.
In sub-step S142, a road surface boundary of the road surface is determined according to the user input.
For example, the road surface boundary of the road surface may be marked in the image information according to the boundary position information input by the user. For example, the road surface boundary may be at least one of: curbs, median strips, green belts, guardrails, lane lines, and other road edges.
In substep S143, a drivable region on the road surface is marked on the basis of the road surface boundary and the obstacle region.
For example, a road surface area on the road surface may be determined from the road surface boundary, and the obstacle area may be removed from the road surface area to obtain the travelable area.
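For illustration only, this removal of the obstacle area from the road surface area might be sketched as a mask operation; the polygon representation and image size are assumptions of this sketch:

```python
# Rasterize the road-surface boundary as a mask, then clear the obstacle areas;
# the remaining 1-valued pixels form the travelable area.
import numpy as np
import cv2

def drivable_mask(image_shape, road_boundary_polygon, obstacle_polygons):
    """Return a binary mask in which 1 marks the travelable road surface."""
    h, w = image_shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(road_boundary_polygon, dtype=np.int32)], 1)
    for poly in obstacle_polygons:
        cv2.fillPoly(mask, [np.asarray(poly, dtype=np.int32)], 0)  # remove obstacle area
    return mask
```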
Next, the effects of the embodiments of the present application will be described by a specific experiment.
Fig. 7A illustrates a schematic diagram of combining depth information and user input in the image information shown in fig. 1 according to an embodiment of the present application, and fig. 7B illustrates a schematic diagram of labeling an obstacle region and a travelable region in the image information shown in fig. 1 according to an embodiment of the present application.
Referring to fig. 7A, time-synchronized image information and laser sensor information may be acquired during the annotation process. In the image information, the user can identify by eye that the road surface shown in fig. 1 contains 4 lane lines (lane lines 1 to 4) and 1 boundary line (boundary line 1) as candidate road surface boundaries. Here, the road surface range on which the vehicle can currently travel may be determined according to different driving-assist strategies. For example, when lane lines 3 and 4 are solid lines, they may be used as the road surface boundaries in the normal case; in an emergency (for example, when a collision warning is raised ahead or behind), the maximum physically travelable range may instead be determined by using road boundary 1 and road boundary 5 as the road surface boundaries. In addition, as shown in fig. 7A, 3 laser spot clusters (laser spot clusters 1 to 3) can be detected on the road surface from the depth information, such as a laser point cloud. Next, the intersections of obstacles 1 to 3 with the ground plane can be obtained by converting the spatial coordinates of laser spot clusters 1 to 3 and projecting them onto the ground plane of the road surface in the image information. Finally, the maximum outline area above each intersection may be labeled as an obstacle area, i.e., an undrivable area, and the remaining area may be labeled as the drivable area, as shown in fig. 7B, in which road boundary 1 and road boundary 5 are illustrated as the road surface boundaries.
Referring back to fig. 5, next, optionally, in step S150, the training sample is generated based on the image information in which the travelable region is marked.
For example, the image information and associated annotation information can be packaged together to generate training samples for use in subsequent training of the machine learning model.
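For illustration only, such packaging might be sketched as follows; the file format and field names are assumptions of this sketch rather than a required format:

```python
# Store the image together with its labeled regions as one compressed record for
# later offline training of the machine learning model.
import numpy as np

def save_training_sample(path, image, obstacle_mask, drivable_mask):
    """Package an image and its annotation masks into a single .npz training sample."""
    np.savez_compressed(path, image=image,
                        obstacle_mask=obstacle_mask,
                        drivable_mask=drivable_mask)
```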
It can be seen that with the region labeling method according to the second embodiment of the present application, it is possible to label an obstacle region in a driving environment in image information collected by an imaging device according to depth information collected by a depth sensor, and it is also possible to label an environment boundary in a driving environment in the image information according to a user input, determine a drivable region in the driving environment according to the environment boundary and the obstacle region, and generate the training sample based on the image information in which the drivable region is labeled. Accordingly, it is possible to reliably and efficiently detect a travelable region in a travel environment and generate a training sample for use by a machine learning model.
Exemplary devices
Next, a region labeling apparatus according to an embodiment of the present application is described with reference to fig. 8.
FIG. 8 illustrates a block diagram of a region labeling apparatus according to an embodiment of the present application.
As shown in fig. 8, the region labeling apparatus 100 may include: an image acquisition unit 110 configured to acquire image information of a driving environment acquired by an imaging device in a process of generating a training sample for training a machine learning model; a depth acquisition unit 120 for acquiring depth information of the running environment temporally synchronized with the image information; and an obstacle labeling unit 130 configured to label an obstacle region in the driving environment in the image information according to the depth information.
In one example, the image acquisition unit 110 may acquire image information of a road surface in a traveling direction of a current vehicle.
In one example, the depth obtaining unit 120 may include: a time determining module for determining the acquisition time at which the imaging device acquired the image information; and a depth acquisition module for acquiring the depth information of the road surface in the driving direction acquired by the depth sensor of the current vehicle at the acquisition time.
In one example, the obstacle labeling unit 130 may include: an obstacle judging module for judging whether an obstacle exists on the road surface according to the depth information; a projection determination module for determining, in response to the existence of an obstacle, a projection area of the obstacle on the road surface in the image information according to the depth information of the obstacle; and an obstacle labeling module for labeling the projection area as an obstacle area on the road surface.
In one example, the projection determination module may determine three-dimensional coordinates of the obstacle relative to the current vehicle based on depth information of the obstacle and calibration parameters of the depth sensor; setting a height coordinate in the three-dimensional coordinates of the obstacle to zero to generate three-dimensional coordinates after projection onto the road surface; and determining a projection area of the obstacle on the road surface in the image information according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
In one example, the obstacle may be at least one of: pedestrians, animals, spills, warning signs, piers, and other vehicles.
In one example, the region labeling apparatus 100 may further include: a travelable labeling unit (not shown) for labeling a travelable region in the travel environment in the image information according to a user input and the obstacle region.
In one example, the navigable labeling unit may include: the input receiving module is used for receiving user input; a boundary determination module to determine a road surface boundary of the road surface from the user input; and the drivable marking module is used for marking the drivable area on the road surface according to the road surface boundary and the obstacle area.
In one example, the drivable labeling module may determine a road surface region on the road surface from the road surface boundary; and removing the obstacle area from the road surface area to obtain the travelable area.
In one example, the region labeling apparatus 100 may further include: a sample generation unit (not shown) for generating the training sample based on the image information in which the travelable region is labeled.
The specific functions and operations of the respective units and modules in the above-described region labeling apparatus 100 have been described in detail in the region labeling method described above with reference to fig. 1 to 7B, and therefore, repeated descriptions thereof will be omitted.
As described above, the embodiments of the present application can be applied to labeling an obstacle region in a traveling environment in which various online electronic devices such as a vehicle, a mobile robot, a fixed monitoring camera, and the like equipped with an imaging device thereon are located. In addition, the area labeling method and the area labeling device according to the embodiments of the present application can be directly implemented on the online electronic device. However, given that online electronic devices tend to have limited processing capabilities, to achieve better performance, embodiments of the present application may also be implemented in various offline electronic devices that are capable of communicating with the online electronic device to communicate the trained machine learning model thereto. For example, the offline electronic device may include devices such as a terminal device, a server, and the like.
Accordingly, the area labeling apparatus 100 according to the embodiment of the present application may be integrated into the off-line electronic device as a software module and/or a hardware module, in other words, the electronic device may include the area labeling apparatus 100. For example, the region labeling apparatus 100 may be a software module in an operating system of the electronic device, or may be an application program developed for the electronic device; of course, the region labeling apparatus 100 can also be one of many hardware modules of the electronic device.
Alternatively, in another example, the area labeling apparatus 100 and the off-line electronic device may be separate devices, and the area labeling apparatus 100 may be connected to the electronic device through a wired and/or wireless network and transmit the interactive information according to an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 9. The electronic device may be an online electronic device such as a vehicle, a mobile robot, or the like, equipped with an imaging device thereon, or an offline electronic device capable of communicating with the online electronic device to transfer the trained machine learning model thereto.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the region labeling methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as image information, depth information, obstacle regions, travelable regions, annotation information, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of the electronic device 10 shown in FIG. 9 are exemplary only, and not limiting, and that the electronic device 10 may have other components and configurations as desired.
For example, the input device 13 may be an imaging device for acquiring image information, which may be stored in the memory 12 for use by other components. Of course, other integrated or discrete imaging devices may be utilized to acquire the sequence of image frames and transmit it to the electronic device 10. As another example, the input device 13 may also be a depth sensor for collecting depth information, which may also be stored in the memory 12. The input device 13 may also include, for example, a keyboard, a mouse, and a communication network and a remote input device connected thereto.
The output device 14 may output various information including an obstacle region, a travelable region, a training sample, and the like of the determined travel environment to the outside (e.g., a user or a machine learning model). The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the region labeling method according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the region labeling method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (12)

1. A region labeling method comprises the following steps:
in the process of generating a training sample for training a machine learning model, acquiring image information of a driving environment acquired by an imaging device, wherein the image information comprises gray scale or color information of the driving environment;
acquiring depth information of the driving environment, which is acquired by a depth sensor synchronized with the image information in time, wherein the depth information comprises information indicating distances of obstacles in the driving environment from a current vehicle at different time points; and
labeling in the image information an obstacle area in the driving environment according to the depth information, the labeling including:
detecting laser point clusters existing on a road surface based on the depth information;
converting the space coordinates of the laser spot cluster and projecting the space coordinates onto a ground plane of a road surface in the image information to obtain an intersection point of the obstacle and the ground plane; and
marking the maximum outline area above the intersection point as an obstacle area,
wherein the depth sensor is a laser depth sensor.
2. The method of claim 1, wherein acquiring image information of the driving environment acquired by the imaging device comprises:
image information of a road surface in a traveling direction of a current vehicle is acquired.
3. The method of claim 2, wherein obtaining depth information of the driving environment acquired by a depth sensor temporally synchronized with the image information comprises:
determining the acquisition time for the imaging device to acquire the image information; and
acquiring the depth information of the road surface in the driving direction acquired by the depth sensor of the current vehicle at the acquisition time.
4. The method of claim 3, wherein the marking of the obstacle area in the driving environment in the image information in accordance with the depth information comprises:
judging whether an obstacle exists on the road surface according to the depth information;
in response to the existence of an obstacle, determining a projection area of the obstacle on the road surface in the image information according to the depth information of the obstacle; and
labeling the projected area as an obstacle area on the road surface.
5. The method of claim 4, wherein determining a projected area of the obstacle on the road surface in the image information from the depth information of the obstacle comprises:
determining the three-dimensional coordinates of the obstacle relative to the current vehicle according to the depth information of the obstacle and the calibration parameters of the depth sensor;
setting a height coordinate in the three-dimensional coordinates of the obstacle to zero to generate three-dimensional coordinates after projection onto the road surface; and
determining a projection area of the obstacle on the road surface in the image information according to the projected three-dimensional coordinates and the calibration parameters of the imaging device.
6. The method of claim 4 or 5, wherein the obstacle is at least one of: pedestrians, animals, spills, warning signs, piers, and other vehicles.
7. The method of claim 5, further comprising:
receiving a user input;
determining a road surface boundary of the road surface from the user input; and
marking a drivable area on the road surface according to the road surface boundary and the obstacle area.
8. The method of claim 7, wherein labeling the drivable area on the road surface as a function of the road surface boundary and the obstacle region comprises:
determining a road surface area on the road surface according to the road surface boundary; and
removing the obstacle area from the road surface area to obtain the travelable area.
9. The method of claim 7 or 8, further comprising:
generating the training sample based on image information in which the travelable region is labeled.
10. A region labeling apparatus comprising:
the device comprises an image acquisition unit, a processing unit and a processing unit, wherein the image acquisition unit is used for acquiring image information of a driving environment acquired by an imaging device in the process of generating a training sample for training a machine learning model, and the image information comprises gray scale or color information of the driving environment;
a depth acquisition unit configured to acquire depth information of the driving environment acquired by a depth sensor that is time-synchronized with the image information, the depth information including information indicating distances of obstacles in the driving environment from a current vehicle at different time points; and
an obstacle labeling unit configured to label an obstacle region in the driving environment in the image information according to the depth information, the labeling including:
detecting laser point clusters existing on a road surface based on the depth information;
converting the space coordinates of the laser spot cluster and projecting the space coordinates onto a ground plane of a road surface in the image information to obtain an intersection point of the obstacle and the ground plane; and
marking the maximum outline area above the intersection point as an obstacle area.
11. An electronic device, comprising:
a processor;
a memory; and
computer program instructions stored in the memory, which, when executed by the processor, cause the processor to perform the method of any of claims 1-9.
12. A computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-9.
CN201610921206.7A 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment Active CN106503653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610921206.7A CN106503653B (en) 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610921206.7A CN106503653B (en) 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN106503653A CN106503653A (en) 2017-03-15
CN106503653B true CN106503653B (en) 2020-10-13

Family

ID=58318354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610921206.7A Active CN106503653B (en) 2016-10-21 2016-10-21 Region labeling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN106503653B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108168566B (en) * 2016-12-07 2020-09-04 北京三快在线科技有限公司 Road determination method and device and electronic equipment
CN106952308B (en) * 2017-04-01 2020-02-28 上海蔚来汽车有限公司 Method and system for determining position of moving object
CN107437268A (en) * 2017-07-31 2017-12-05 广东欧珀移动通信有限公司 Photographic method, device, mobile terminal and computer-readable storage medium
CN107907886A (en) * 2017-11-07 2018-04-13 广东欧珀移动通信有限公司 Travel conditions recognition methods, device, storage medium and terminal device
CN108256413B (en) * 2017-11-27 2022-02-25 科大讯飞股份有限公司 Passable area detection method and device, storage medium and electronic equipment
CN108563742B (en) * 2018-04-12 2022-02-01 王海军 Method for automatically creating artificial intelligence image recognition training material and labeled file
US10816984B2 (en) * 2018-04-13 2020-10-27 Baidu Usa Llc Automatic data labelling for autonomous driving vehicles
CN108827309B (en) * 2018-06-29 2021-08-17 炬大科技有限公司 Robot path planning method and dust collector with same
CN109271944B (en) 2018-09-27 2021-03-12 百度在线网络技术(北京)有限公司 Obstacle detection method, obstacle detection device, electronic apparatus, vehicle, and storage medium
CN109376664B (en) * 2018-10-29 2021-03-09 百度在线网络技术(北京)有限公司 Machine learning training method, device, server and medium
DK180774B1 (en) * 2018-10-29 2022-03-04 Motional Ad Llc Automatic annotation of environmental features in a map during navigation of a vehicle
CN111323026B (en) * 2018-12-17 2023-07-07 兰州大学 Ground filtering method based on high-precision point cloud map
CN109683613B (en) * 2018-12-24 2022-04-29 驭势(上海)汽车科技有限公司 Method and device for determining auxiliary control information of vehicle
CN109765634B (en) * 2019-01-18 2021-09-17 广州市盛光微电子有限公司 Depth marking device
CN110032181B (en) * 2019-02-26 2022-05-17 文远知行有限公司 Method and device for positioning barrier in semantic map, computer equipment and storage medium
CN111696144A (en) * 2019-03-11 2020-09-22 北京地平线机器人技术研发有限公司 Depth information determination method, depth information determination device and electronic equipment
CN110096059B (en) * 2019-04-25 2022-03-01 杭州飞步科技有限公司 Automatic driving method, device, equipment and storage medium
CN110197148B (en) * 2019-05-23 2020-12-01 北京三快在线科技有限公司 Target object labeling method and device, electronic equipment and storage medium
CN111027381A (en) * 2019-11-06 2020-04-17 杭州飞步科技有限公司 Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN110866504B (en) * 2019-11-20 2023-10-17 北京百度网讯科技有限公司 Method, device and equipment for acquiring annotation data
CN111125442B (en) * 2019-12-11 2022-11-15 苏州智加科技有限公司 Data labeling method and device
CN111368794B (en) * 2020-03-19 2023-09-19 北京百度网讯科技有限公司 Obstacle detection method, device, equipment and medium
CN112639822B (en) * 2020-03-27 2021-11-30 华为技术有限公司 Data processing method and device
CN111552289B (en) * 2020-04-28 2021-07-06 苏州高之仙自动化科技有限公司 Detection method, virtual radar device, electronic apparatus, and storage medium
CN112200049B (en) * 2020-09-30 2023-03-31 华人运通(上海)云计算科技有限公司 Method, device and equipment for marking road surface topography data and storage medium
CN112714266B (en) * 2020-12-18 2023-03-31 北京百度网讯科技有限公司 Method and device for displaying labeling information, electronic equipment and storage medium
CN115164910B (en) * 2022-06-22 2023-02-21 小米汽车科技有限公司 Travel route generation method, travel route generation device, vehicle, storage medium, and chip

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101428403B1 (en) * 2013-07-17 2014-08-07 현대자동차주식회사 Apparatus and method for detecting obstacle in front

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4406381B2 (en) * 2004-07-13 2010-01-27 株式会社東芝 Obstacle detection apparatus and method
JP6262068B2 (en) * 2014-04-25 2018-01-17 日立建機株式会社 Near-body obstacle notification system
CN108594851A (en) * 2015-10-22 2018-09-28 飞智控(天津)科技有限公司 A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane
CN105319991B (en) * 2015-11-25 2018-08-28 哈尔滨工业大学 A kind of robot environment's identification and job control method based on Kinect visual informations
CN105957145A (en) * 2016-04-29 2016-09-21 百度在线网络技术(北京)有限公司 Road barrier identification method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101428403B1 (en) * 2013-07-17 2014-08-07 현대자동차주식회사 Apparatus and method for detecting obstacle in front

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Road Obstacle Detection Technology in Assisted Driving; Zhao Richeng; China Master's Theses Full-text Database, Information Science and Technology Series; 2016-03-15; Vol. 2016, No. 03; pp. I138-6885 *

Also Published As

Publication number Publication date
CN106503653A (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN106503653B (en) Region labeling method and device and electronic equipment
CN106485233B (en) Method and device for detecting travelable area and electronic equipment
CN106650705B (en) Region labeling method and device and electronic equipment
US10430968B2 (en) Vehicle localization using cameras
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
US9083856B2 (en) Vehicle speed measurement method and system utilizing a single image capturing unit
US20200250837A1 (en) Systems and Methods for Detecting an Object Velocity
CN110765894B (en) Target detection method, device, equipment and computer readable storage medium
US9542609B2 (en) Automatic training of a parked vehicle detector for large deployment
JP2023523243A (en) Obstacle detection method and apparatus, computer device, and computer program
JP2020064046A (en) Vehicle position determining method and vehicle position determining device
US10369993B2 (en) Method and device for monitoring a setpoint trajectory to be traveled by a vehicle for being collision free
CN108692719B (en) Object detection device
CN111179300A (en) Method, apparatus, system, device and storage medium for obstacle detection
WO2019198076A1 (en) Real-time raw data- and sensor fusion
EP2960858A1 (en) Sensor system for determining distance information based on stereoscopic images
CN112802092B (en) Obstacle sensing method and device and electronic equipment
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
WO2023231991A1 (en) Traffic signal lamp sensing method and apparatus, and device and storage medium
CN114359865A (en) Obstacle detection method and related device
CN113988197A (en) Multi-camera and multi-laser radar based combined calibration and target fusion detection method
KR20190134303A (en) Apparatus and method for image recognition
CN116630931A (en) Obstacle detection method, obstacle detection system, agricultural machine, electronic device, and storage medium
KR102385907B1 (en) Method And Apparatus for Autonomous Vehicle Navigation System
CN115402347A (en) Method for identifying a drivable region of a vehicle and driving assistance method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant