CN113688259A - Navigation target labeling method and device, electronic equipment and computer readable medium - Google Patents

Navigation target labeling method and device, electronic equipment and computer readable medium

Info

Publication number
CN113688259A
Authority
CN
China
Prior art keywords
labeling
annotation
point
image
labeled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010433124.4A
Other languages
Chinese (zh)
Inventor
李冰
周志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202010433124.4A (CN113688259A)
Priority to KR1020210032233A (KR102608167B1)
Priority to JP2021040979A (JP7383659B2)
Publication of CN113688259A
Legal status: Pending

Classifications

    • G - PHYSICS
        • G01 - MEASURING; TESTING
            • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
                • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
                    • G01C 21/26 - specially adapted for navigation in a road network
                        • G01C 21/34 - Route searching; Route guidance
                            • G01C 21/36 - Input/output arrangements for on-board computers
                                • G01C 21/3667 - Display of a road map
                                    • G01C 21/367 - Details, e.g. road map scale, orientation, zooming, illumination, level of detail, scrolling of road map or positioning of current position marker
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
                    • G06F 16/20 - of structured data, e.g. relational data
                        • G06F 16/29 - Geographical information databases
                    • G06F 16/40 - of multimedia data, e.g. slideshows comprising image and additional audio data
                        • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
            • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 3/00 - Geometric image transformation in the plane of the image
                    • G06T 3/40 - Scaling the whole image or part thereof
                • G06T 19/00 - Manipulating 3D models or images for computer graphics
                    • G06T 19/006 - Mixed reality

Abstract

The present disclosure provides a method for labeling a navigation target, which includes: acquiring an annotation image; reducing the annotation area in the annotation image based on the annotation object; and establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object, the annotation processing model being used for reducing the annotation area in the annotation image. The labeling method reduces the labeling area in the labeling image, shrinking the range that must be labeled and improving labeling efficiency. The disclosure also provides a navigation target labeling device, an electronic device and a computer readable medium.

Description

Navigation target labeling method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of augmented reality navigation, and in particular relates to a navigation target labeling method and device, electronic equipment and a computer readable medium.
Background
Augmented Reality (AR) is a technology developed on the basis of virtual reality. It enhances a user's perception of the real world with information provided by a computer system: virtual information is applied to the real world, and computer-generated virtual objects, scenes or system prompts are superimposed onto the real scene, thereby augmenting reality.
In augmented reality navigation, AR navigation data must be detected and labeled, for example to label the vehicles, pedestrians, road signs and other information ahead. Manual labeling is the usual approach, but it is inefficient and consumes a large amount of labor. Moreover, as the amount of data increases, the probability of error rises, and the work has a considerable impact on annotators' health, such as their eyesight.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
The embodiment of the disclosure provides a navigation target labeling method and device, electronic equipment and a computer readable medium.
In a first aspect, an embodiment of the present disclosure provides a method for labeling a navigation target, including:
acquiring an annotation image;
reducing an annotation area in the annotation image based on the annotation object;
establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object; the annotation processing model is used for reducing an annotation area in the annotation image.
In some embodiments, the reducing the annotation region in the annotation image based on the annotation object comprises:
determining vanishing points in the annotated image;
determining a lowest marking point and a highest marking point based on the vanishing point and the marking object;
and determining a closed area enclosed by the lowest marking point and the highest marking point as the marking area.
In some embodiments, the determining a lowest labeled point and a highest labeled point based on the vanishing point and the labeled object includes:
when the labeling object is a vehicle, the lowest labeling point comprises a first intersection point where the bottom edge and the left side edge of the labeling image intersect and a second intersection point where the bottom edge and the right side edge intersect; the highest labeling point comprises a third intersection point where the line connecting the vanishing point and the first intersection point crosses a far-end line, and a fourth intersection point where the line connecting the vanishing point and the second intersection point crosses the far-end line; the far-end line is parallel to the bottom edge of the labeling image and is a preset distance away from the bottom edge of the labeling image.
In some embodiments, the determining a lowest labeled point and a highest labeled point based on the vanishing point and the labeled object includes:
when the labeling object is a pedestrian, the lowest labeling point comprises a first intersection point where the bottom edge and the left side edge of the labeling image intersect and a second intersection point where the bottom edge and the right side edge intersect; the highest labeling point comprises a third intersection point where a height line intersects the left side edge of the labeling image and a fourth intersection point where the height line intersects the right side edge of the labeling image, wherein the height line is a line that passes through the vanishing point and is at a preset height from the ground.
In some embodiments, the determining a lowest labeled point and a highest labeled point based on the vanishing point and the labeled object includes:
when the labeling object is a traffic sign, the lowest labeling point comprises a first intersection point where a first height line intersects the left side of the labeling image and a second intersection point where the first height line intersects the right side of the labeling image; the highest labeling point comprises a third intersection point where a second height line intersects the left side of the labeling image, and a fourth intersection point where the second height line intersects the right side of the labeling image; the first height line is a line that passes through the vanishing point and is at a first preset height from the ground, the second height line is a line that passes through the vanishing point and is at a second preset height from the ground, and the first preset height is smaller than the second preset height.
In some embodiments, after the establishing a label processing model matching the label object according to the label object and the label area corresponding to the label object, the method includes:
obtaining a thermodynamic diagram (heat map) of the labeling area based on the labeling processing model;
and labeling the target of the labeling area based on the thermodynamic diagram.
In some embodiments, before the reducing the annotation region in the annotation image based on the annotation object, the method further includes:
and excluding the approximate image in the annotation image.
In some embodiments, before the reducing the annotation region in the annotation image based on the annotation object, the method further includes:
acquiring longitude and latitude information of the position of the marked image;
and determining the labeling object based on the longitude and latitude information.
In a second aspect, an embodiment of the present disclosure provides a labeling apparatus for a navigation target, including:
the acquisition module is used for acquiring an annotation image;
the labeling area reducing module is used for reducing a labeling area in the labeling image based on the labeling object;
the label processing model obtaining module is used for establishing a label processing model matched with the label object according to the label object and the label area corresponding to the label object; the annotation processing model is used for reducing an annotation area in the annotation image.
In some embodiments, the label region narrowing module includes:
a vanishing point determining unit, configured to determine vanishing points in the annotation image;
a marking point determining unit, configured to determine a lowest marking point and a highest marking point based on the vanishing point and the marking object;
and the marking area reducing unit is used for determining a closed area enclosed by the lowest marking point and the highest marking point as the marking area.
In some embodiments, the apparatus further comprises:
and the approximate image exclusion module is used for excluding the approximate image in the annotation image.
In some embodiments, the apparatus further comprises:
the longitude and latitude determining module is used for obtaining longitude and latitude information of the position of the marked image;
and the marked object determining module is used for determining the marked object based on the longitude and latitude information.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
a memory having one or more programs stored thereon that, when executed by the one or more processors, cause the one or more processors to perform any of the above methods of tagging navigation targets;
one or more I/O interfaces connected between the processor and the memory and configured to enable information interaction between the processor and the memory.
In a fourth aspect, the present disclosure provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements any one of the above-mentioned navigation target labeling methods.
The navigation target labeling method provided by the embodiment of the disclosure comprises the steps of obtaining a labeling image; reducing an annotation area in the annotation image based on the annotation object; establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object; the annotation processing model is used for reducing an annotation area in the annotation image. The labeling method can reduce the labeling area in the labeled image, reduce the labeling range and improve the labeling efficiency; moreover, the labor intensity can be reduced, the influence on the health of the labeling personnel is reduced, and the error rate of labeling is reduced.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The above and other features and advantages will become more apparent to those skilled in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
fig. 1 is a flowchart of a method for labeling a navigation target according to a first embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a step 102 in a method for labeling a navigation target according to a first embodiment of the present disclosure;
FIG. 3 is a labeling diagram illustrating a vehicle as a labeling object in the embodiment of the present disclosure;
FIG. 4 is a labeling diagram illustrating a pedestrian as a labeling object in the embodiment of the present disclosure;
FIG. 5 is a labeling diagram illustrating a labeling object as a traffic sign in an embodiment of the present disclosure;
FIG. 6 is a flowchart illustrating a method for labeling a navigation target according to a second embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a method for labeling a navigation target according to a third embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of an annotation device for navigation targets according to a fourth embodiment of the disclosure;
FIG. 9 is a schematic block diagram of a label area reduction module in a labeling apparatus for a navigation target according to a fourth embodiment of the present disclosure;
FIG. 10 is a schematic block diagram of a labeling apparatus for navigating a target according to a fifth embodiment of the present disclosure;
FIG. 11 is a schematic block diagram of an annotation device for navigation targets according to a sixth embodiment of the disclosure;
fig. 12 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present disclosure, the method and apparatus for labeling navigation targets, the electronic device, and the computer readable medium provided in the present disclosure are described in detail below with reference to the accompanying drawings.
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but which may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The navigation target labeling method and device provided by the embodiment of the disclosure preprocess navigation data to assist manual labeling, improve labeling efficiency, and reduce labor cost and influence on health.
In a first aspect, an embodiment of the present disclosure provides a method for labeling a navigation target. The marking method can assist manual marking, reduce the range of manual marking, reduce the labor intensity of manual marking, improve the marking efficiency and reduce the error rate of marking.
Fig. 1 is a flowchart of a navigation target labeling method according to a first embodiment of the present disclosure. Referring to fig. 1, the labeling method of the navigation target includes:
step 101, obtaining an annotation image.
In the AR navigation process, live-action images of the current driving environment, i.e., images around the vehicle, may be acquired in real time by at least one camera mounted on the vehicle, and each live-action image may be used as an annotation image. A currently acquired image may contain multiple annotation targets of different types, such as vehicles, pedestrians, traffic signs, roads and buildings. Because different types of annotation targets have different characteristics, different annotation regions need to be determined, and different processing modes adopted, for each annotation type before the annotation image is labeled.
And 102, reducing the labeling area in the labeling image based on the labeling object.
In the embodiment of the present disclosure, different annotation objects appear with different probabilities in different regions of an annotation image, and the regions of the image that people pay attention to differ accordingly. Therefore, the embodiment of the present disclosure reduces the annotation region in the annotation image based on the annotation object.
For example, when the annotation object is a vehicle, the probability that the vehicle appears in the center region of the annotation image is greater than that in the edge region. When the labeling object is a traffic sign, the probability of the traffic sign appearing in the edge region of the labeling image is greater than that in the central region.
And 103, establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object.
In step 103, due to the difference in the probability of the appearance of the annotation object in the annotation image and the attention of people to different areas in the annotation image, there is a corresponding relationship between the annotation object and the annotation area. Therefore, a label processing model matched with the label object is established according to the label object and the label area corresponding to the label object. The annotation processing model therein will be described below.
In the embodiment of the disclosure, the annotation processing model for different annotation objects is established based on the annotation object and the annotation region, so that the annotation region in the annotation image can be reduced, thereby reducing the workload of subsequent further manual annotation, reducing the labor intensity and improving the annotation efficiency.
Fig. 2 is a flowchart of step 102 in a method for labeling a navigation target according to a first embodiment of the present disclosure. As shown in fig. 2, the method for reducing the annotation region in the annotation image based on the annotation object includes:
step 201, determining vanishing points in the annotation image.
In reality, parallel lines appear to converge at a vanishing point, just as the two rails of a railway seem to merge on the horizon. Thus, a vanishing point in an embodiment of the disclosure is the point at which two or more image lines representing parallel lines converge as they are extended toward the horizon line.
In step 201, the vanishing point is determined in the annotation image; it can be detected and determined by a Hough transform or a RANSAC-style method. For example: two straight lines are randomly selected to create a hypothetical vanishing point, and the number of straight lines passing through that hypothetical vanishing point is then counted. After a certain number of iterations, the returned value is the vanishing point through which the largest number of lines pass.
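As an illustration of the voting procedure described above, the following is a minimal sketch in Python using OpenCV; the function name, edge/Hough thresholds and iteration count are illustrative assumptions, not values prescribed by the embodiment.

```python
import random

import cv2
import numpy as np


def detect_vanishing_point(image_bgr, iterations=500, tol=5.0):
    """Estimate a vanishing point by voting: intersect two random line
    segments and keep the intersection that the most other lines pass near."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=40, maxLineGap=10)
    if segments is None or len(segments) < 2:
        return None

    # Represent each segment as a homogeneous line a*x + b*y + c = 0.
    lines = [np.cross([x1, y1, 1.0], [x2, y2, 1.0])
             for x1, y1, x2, y2 in segments[:, 0]]

    def point_line_distance(pt, line):
        a, b, c = line
        return abs(a * pt[0] + b * pt[1] + c) / (np.hypot(a, b) + 1e-9)

    best_point, best_votes = None, -1
    for _ in range(iterations):
        l1, l2 = random.sample(lines, 2)
        p = np.cross(l1, l2)          # intersection in homogeneous coordinates
        if abs(p[2]) < 1e-9:          # the two lines are parallel in the image
            continue
        p = p[:2] / p[2]
        votes = sum(point_line_distance(p, l) < tol for l in lines)
        if votes > best_votes:
            best_point, best_votes = p, votes
    return best_point  # (x, y) of the hypothesised vanishing point
```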
And step 202, determining a lowest marking point and a highest marking point based on the vanishing point and the marking object.
The lowest annotation point and the highest annotation point are located differently due to the different annotation objects. Therefore, in step 202, the labeling area is determined by the lowest labeling point and the highest labeling point.
In some embodiments, because the vanishing point positions in the annotation image are different due to different angles of the captured annotation image, in step 202, the lowest annotation point and the highest annotation point are determined by the vanishing point and the annotation object, and then the annotation region is determined by the lowest annotation point and the highest annotation point.
Step 203, determining a closed area enclosed by the lowest labeling point and the highest labeling point as a labeling area.
In step 203, a closed region enclosed by the lowest labeling point and the highest labeling point is determined as a labeling region.
The following describes the determination manner of the lowest annotation point and the highest annotation point in the annotation image in detail for different annotation objects.
Fig. 3 is a labeling schematic diagram in which the labeling object is a vehicle, according to an embodiment of the present disclosure. In fig. 3, the viewpoint of the observer (or the camera) is used as the reference: the left side of the observer is the left side of the annotation image, the right side of the observer is the right side of the annotation image, the region above the observer's line of sight (the top of the page) is the top side of the annotation image, and the region below the line of sight (the bottom of the page) is the bottom side of the annotation image.
As shown in fig. 3, when the annotation object is a vehicle, the lowest annotation points include a first intersection point A where the bottom edge of the annotation image intersects the left side edge, and a second intersection point B where the bottom edge intersects the right side edge. The highest annotation points include a third intersection point G where the line OA connecting the vanishing point O and the first intersection point A intersects the far-end line GH, and a fourth intersection point H where the line OB connecting the vanishing point O and the second intersection point B intersects the far-end line GH.
The far-end line GH is parallel to the bottom edge of the annotation image and is a line which is away from the bottom edge of the annotation image by a preset distance S. In some embodiments, the preset distance S may be set according to practical situations, such as the preset distance S being 20 meters, 30 meters or 50 meters. The actual size of the preset distance S in the annotation image can be determined by the existing method, which is not limited in the embodiment of the present disclosure.
It is understood that, when the annotation object is a vehicle, the lowest annotation points include the first intersection A and the second intersection B, and the highest annotation points include the third intersection G and the fourth intersection H. Therefore, the labeling area is the trapezoidal ABGH region enclosed by the first intersection A, the second intersection B, the third intersection G, and the fourth intersection H. When the labeled image is further labeled, the regions other than the labeled region ABGH are masked out and only the labeled region ABGH is labeled, so the area to be labeled is greatly reduced.
Fig. 4 is a labeling schematic diagram in which the labeling object is a pedestrian, according to an embodiment of the present disclosure. In fig. 4, the left, right, top and bottom sides of the annotation image are defined as in fig. 3 and are not described herein again.
As shown in fig. 4, when the annotation object is a pedestrian, the lowest annotation point includes a first intersection point a where the bottom side and the left side of the annotation image intersect, and a second intersection point B where the bottom side and the right side intersect. The highest annotation point includes a third intersection point C where the height line OC intersects the left side of the annotation image, and a fourth intersection point D where the height line OD intersects the right side of the annotation image.
Wherein, the height lines OC and OD are lines which pass through the vanishing point O and are at a preset height from the ground. In some embodiments, the preset height may be set according to actual conditions, such as the preset height is 1.8 meters, 2 meters or 2.3 meters. In the annotation image, the pedestrian of interest is substantially below the height lines OC, OD. The actual size of the preset height in the annotation image can be determined by the existing method, and is not limited in the embodiment of the present disclosure.
It is understood that, when the labeling object is a pedestrian, the lowest labeling points include the first intersection A and the second intersection B, and the highest labeling points include the third intersection C and the fourth intersection D. Therefore, the ABCD region enclosed by the first intersection A, the second intersection B, the third intersection C, and the fourth intersection D serves as the labeling area. When the labeled image is further labeled, the regions other than the labeled region ABCD are masked out and only the labeled region ABCD is labeled, so the area to be labeled is greatly reduced.
Fig. 5 is a labeling schematic diagram of a labeling object as a traffic sign in the embodiment of the present disclosure. In fig. 5, the left, right, top and bottom sides of the annotation image are defined as in fig. 3 and are not described herein again.
As shown in fig. 5, when the labeled object is a traffic sign, the lowest labeled points include a first intersection point E where the first height line OE intersects the left side of the labeled image, and a second intersection point F where the first height line OF intersects the right side of the labeled image; the highest labeled points include a third intersection point G where the second height line OG intersects the left side of the labeled image, and a fourth intersection point H where the second height line OH intersects the right side of the labeled image.
The first height lines OE and OF are lines passing through the vanishing point O and having a first preset height H1 from the ground, the second height lines OG and OH are lines passing through the vanishing point O and having a second preset height H2 from the ground, and the first preset height H1 is smaller than the second preset height H2. In some embodiments, the first preset height H1 and the second preset height H2 may be set according to practical situations, such as the first preset height H1 is 1.5 meters, and the second preset height H2 is 2 meters. In the annotation image, the traffic sign of interest is substantially between 1.5 and 2 meters. The actual sizes of the first preset height H1 and the second preset height H2 in the annotation image can be determined by the existing method, and are not limited in the embodiment of the disclosure.
It is understood that, when the labeled object is a traffic sign, the lowest labeled points include the first intersection E and the second intersection F, and the highest labeled points include the third intersection G and the fourth intersection H. Therefore, the labeling area is the EFGH region enclosed by the first intersection E, the second intersection F, the third intersection G, and the fourth intersection H. When the labeled image is further labeled, the areas other than the labeled area EFGH are masked out and only the labeled area EFGH is labeled, so the area to be labeled is greatly reduced.
It should be noted that the third intersection point G may lie on the left side of the annotation image or on its top side, and the fourth intersection point H may lie on the right side of the annotation image or on its top side. When the third intersection point G and the fourth intersection point H lie on the top side of the labeled image, the EFGH region enclosed by the first intersection E, the second intersection F, the third intersection G and the fourth intersection H may also be extended to the CDEF region, which is then taken as the labeled region.
It should be noted that the above three annotation-region selections can be applied in batch across a data set collected in the same acquisition environment; that is, the same region selection suits all annotation images acquired by the same camera, so the annotation regions can be reduced for the whole set at once.
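The three region constructions above share the same ingredients: the vanishing point, the image borders, and one or two horizontal reference lines. The sketch below shows the vehicle case (the trapezoidal region ABGH of fig. 3) and a helper that masks out everything outside the chosen region; the pedestrian and traffic-sign regions are built the same way from the corresponding height lines. It assumes that the far-end line and the height lines have already been converted into pixel rows by an existing calibration step, and all function and parameter names are illustrative.

```python
import cv2
import numpy as np


def x_at_row(p0, p1, y):
    """x-coordinate where the line through p0 and p1 crosses image row y."""
    (x0, y0), (x1, y1) = p0, p1
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)


def vehicle_region(width, height, vanishing_pt, y_far):
    """Trapezoid ABGH for the vehicle case: A, B are the bottom image
    corners; G, H lie on the far-end line (row y_far, assumed below the
    vanishing point) on the rays from O through A and B respectively."""
    a = (0.0, height - 1.0)
    b = (width - 1.0, height - 1.0)
    g = (x_at_row(vanishing_pt, a, y_far), float(y_far))
    h = (x_at_row(vanishing_pt, b, y_far), float(y_far))
    # Vertex order A, B, H, G gives a simple (non-crossing) quadrilateral.
    return np.array([a, b, h, g], dtype=np.int32)


def mask_outside_region(image_bgr, polygon):
    """Black out everything outside the annotation region so the annotator
    only has to examine the reduced region."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [polygon.reshape(-1, 1, 2)], 255)
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
```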
In some embodiments, a labeling processing model is established according to the labeling object and the labeling area corresponding to the labeling object, a thermodynamic diagram (heat map) of the labeling area is obtained based on the labeling processing model, and finally the targets in the labeled area of the image are labeled based on the thermodynamic diagram.
When the labeling object is a vehicle, an attention model is established based on the labeling object and the corresponding labeling area. An attention thermodynamic diagram is then drawn using the attention model, and the targets in the labeled area are labeled based on the thermodynamic diagram.
For example, drawing an attention thermodynamic diagram from the attention model makes the labeled image easier to annotate: in the attention thermodynamic diagram, the closer a location is to the red region, the more likely a vehicle is to appear there, which reduces the labor intensity of labeling.
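The embodiment does not fix a particular form for the attention thermodynamic diagram. As one plausible illustration only, the sketch below weights pixels inside the vehicle annotation region by their horizontal distance from the vanishing point and renders the result with a jet colormap, so the hottest (red) band sits where a leading vehicle is most likely; the weighting function and its sigma parameter are assumptions for the sketch.

```python
import cv2
import numpy as np


def attention_heatmap(image_shape, region_polygon, vanishing_pt, sigma=0.35):
    """Toy attention map: highest near the vertical line through the
    vanishing point, falling off to the sides, zero outside the region."""
    h, w = image_shape[:2]
    _, xs = np.mgrid[0:h, 0:w]
    # Horizontal distance from the vanishing point, normalised by width.
    dist = np.abs(xs - vanishing_pt[0]) / float(w)
    heat = np.exp(-(dist ** 2) / (2 * sigma ** 2)).astype(np.float32)

    region = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(region, [region_polygon.reshape(-1, 1, 2)], 1)
    heat *= region

    # Colourise for overlay: red where the attention weight is highest.
    colour = cv2.applyColorMap((heat * 255).astype(np.uint8),
                               cv2.COLORMAP_JET)
    return heat, colour
```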
When the marking object is a pedestrian, a pedestrian marking model matched with pedestrians is established according to the pedestrian class and the pedestrian marking area.
Pedestrians are pre-labeled by combining Histogram of Oriented Gradients (HOG) features with a Support Vector Machine (SVM) model. Specifically, 1000 positive samples and 1000 negative samples are used as training samples, HOG features of the samples are extracted and calculated, and the HOG features are then used as SVM input for model training to obtain a pedestrian labeling model.
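A minimal training sketch of the HOG + SVM pre-labeling described above, assuming the positive and negative crops have already been collected and resized to a common size (e.g. 64x128 grayscale). It uses scikit-image's HOG and scikit-learn's LinearSVC, which is one common realisation; the embodiment does not prescribe a particular library or these parameter values.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC


def extract_hog(crops):
    """crops: iterable of grayscale images, all resized to the same size."""
    return np.array([
        hog(c, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm='L2-Hys')
        for c in crops
    ])


def train_pedestrian_model(pos_crops, neg_crops):
    """HOG features fed into a linear SVM, e.g. 1000 positive and
    1000 negative samples as in the text above."""
    x = np.vstack([extract_hog(pos_crops), extract_hog(neg_crops)])
    y = np.concatenate([np.ones(len(pos_crops)), np.zeros(len(neg_crops))])
    clf = LinearSVC(C=0.01, max_iter=10000)
    clf.fit(x, y)
    return clf


def prelabel_windows(clf, windows):
    """Score candidate windows inside the reduced annotation region; the
    annotator then only confirms or corrects high-scoring windows."""
    return clf.decision_function(extract_hog(windows))
```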
When the labeling object is a traffic sign, a traffic labeling model is generated according to a color model; that is, the type of traffic sign is identified from its color and shape, for example a red circle is a prohibition-class traffic sign, a yellow triangle is a warning-class traffic sign, and a blue rectangle is an indication-class traffic sign. Specifically, red, yellow and blue regions are selected with the HSV color model (Hue, Saturation, Value), regions that are not one of these three colors are masked out (set to black or white), the shape of each remaining region is then computed, and the traffic sign is labeled based on its shape and color. For example, if the shape is circular and the color is red, the region is coarsely labeled as a prohibition-class traffic sign; if the shape is triangular and the color is yellow, it is coarsely labeled as a warning-class traffic sign; if the shape is rectangular and the color is blue, it is coarsely labeled as an indication-class traffic sign.
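A minimal sketch of this colour-plus-shape coarse labeling with OpenCV; the HSV thresholds, minimum area and vertex-count heuristics below are illustrative assumptions rather than values given in the embodiment (OpenCV 4's two-value findContours return is also assumed).

```python
import cv2
import numpy as np

# Illustrative HSV ranges (OpenCV hue is 0-179); red wraps around 0.
COLOUR_RANGES = {
    'red':    [((0, 80, 80), (10, 255, 255)), ((170, 80, 80), (179, 255, 255))],
    'yellow': [((20, 80, 80), (35, 255, 255))],
    'blue':   [((100, 80, 80), (130, 255, 255))],
}


def coarse_label_signs(image_bgr, min_area=200):
    """Keep only red/yellow/blue pixels, then classify each region by shape:
    red circle -> prohibition, yellow triangle -> warning,
    blue rectangle -> indication."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    labels = []
    for colour, ranges in COLOUR_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
            if colour == 'red' and len(approx) > 6:        # roughly circular
                labels.append(('prohibition', cv2.boundingRect(c)))
            elif colour == 'yellow' and len(approx) == 3:  # triangle
                labels.append(('warning', cv2.boundingRect(c)))
            elif colour == 'blue' and len(approx) == 4:    # rectangle
                labels.append(('indication', cv2.boundingRect(c)))
    return labels
```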
Fig. 6 is a flowchart of a navigation target labeling method according to a second embodiment of the present disclosure. Referring to fig. 6, the method for labeling the navigation target includes:
step 601, obtaining an annotation image.
The step of obtaining the annotation image is the same as the step 101 in the first embodiment, and is not described herein again.
Step 602, eliminating the approximate image in the annotation image.
In practical applications, many of the annotation images captured by the camera are highly similar to one another. Before labeling, annotation images whose similarity falls within a similarity threshold can be excluded, reducing the number of images to be labeled and improving labeling efficiency.
In step 602, the Hamming distance between all annotation images is calculated using a difference hashing (dHash) algorithm; the similarity threshold is expressed as a Hamming distance, and images whose Hamming distance falls within the interval [S1, S2] are treated as approximate images.
For example, with the similarity threshold set to S1 = 0 and S2 = 10 for the Hamming distance, the Hamming distances between all annotation images are calculated, and annotation images whose Hamming distance falls within [0, 10] are regarded as approximate images. The approximate images are then eliminated, and only one of them is kept as the annotation image.
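A minimal sketch of this de-duplication step, using the conventional 8x8 difference hash and the [S1, S2] Hamming window from the example above; the frame-by-frame greedy keep-or-drop policy is an illustrative choice, not mandated by the embodiment.

```python
import cv2
import numpy as np


def dhash(image_bgr, hash_size=8):
    """Difference hash: resize to (hash_size+1) x hash_size, compare each
    pixel with its right neighbour, and keep the booleans as the 64-bit hash."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size))
    diff = small[:, 1:] > small[:, :-1]
    return diff.flatten()


def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))


def drop_near_duplicates(images, s1=0, s2=10):
    """Keep one representative of every group of near-duplicate frames:
    a frame whose dHash Hamming distance to an already-kept frame falls
    inside [s1, s2] is discarded."""
    kept, kept_hashes = [], []
    for img in images:
        h = dhash(img)
        if any(s1 <= hamming(h, kh) <= s2 for kh in kept_hashes):
            continue
        kept.append(img)
        kept_hashes.append(h)
    return kept
```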
Step 603, reducing the labeling area in the labeling image based on the labeling object.
The step of reducing the labeled area in the labeled image based on the labeled object is the same as the step 102 in the first embodiment, and is not described herein again.
Step 604, establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object; the annotation processing model is used for reducing an annotation area in the annotation image.
The step of establishing the label processing model is the same as step 103 in the first embodiment, and is not described herein again.
Fig. 7 is a flowchart of a method for labeling a navigation target according to a third embodiment of the present disclosure. Referring to fig. 7, the method for labeling the navigation target includes:
and step 701, acquiring an annotation image.
In step 701, the step of acquiring the annotation image is the same as step 101 in the first embodiment, and is not described herein again.
Step 702, eliminating the approximate image in the annotation image.
In step 702, the step of excluding the approximate image is the same as step 602 in the second embodiment, and is not described herein again.
Step 703, obtaining longitude and latitude information of the position of the annotation image.
In step 703, a camera and a positioning system are mounted on the vehicle, the camera is used for obtaining the annotation image, and the positioning system can obtain longitude and latitude information of the position of the annotation image. The positioning system may be a general positioning system, which is not limited in this respect.
And step 704, determining the labeling object based on the latitude and longitude information.
In step 704, the scene of the annotated image may be determined from the latitude and longitude information, and the annotation objects may then be determined from the scene, narrowing down the set of annotation objects and thereby improving labeling efficiency.
For example, if the latitude and longitude information indicates that the scene of the annotation image is a suburban or more remote area, the annotation objects may be traffic signs and vehicles. If the scene is determined to be an urban area, the annotation objects may be pedestrians, vehicles and traffic signs.
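A minimal sketch of this scene-based selection of annotation objects; `scene_from_latlon` is a hypothetical reverse-geocoding helper (not part of the embodiment) that maps a coordinate pair to a coarse scene type, and the mapping table simply mirrors the example above.

```python
from typing import Callable, List

# Which object types are worth pre-labelling in which scene; this mirrors
# the example in the text (suburban: signs + vehicles; urban: + pedestrians).
SCENE_TO_OBJECTS = {
    'suburban': ['traffic_sign', 'vehicle'],
    'urban':    ['pedestrian', 'vehicle', 'traffic_sign'],
}


def annotation_objects(lat: float, lon: float,
                       scene_from_latlon: Callable[[float, float], str]) -> List[str]:
    """Return the annotation object types to consider for an image taken at
    (lat, lon); unknown scenes fall back to labelling everything."""
    scene = scene_from_latlon(lat, lon)
    return SCENE_TO_OBJECTS.get(scene,
                                ['pedestrian', 'vehicle', 'traffic_sign'])
```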
Step 705, reducing the annotation region in the annotation image based on the annotation object.
In step 705, the step of reducing the labeled area in the labeled image based on the labeled object is the same as step 102 in the first embodiment, and is not repeated herein.
Step 706, establishing a labeling processing model matched with the labeling object according to the labeling object and the corresponding labeling area; the annotation processing model is used for reducing an annotation area in the annotation image.
In step 706, the step of establishing the annotation processing model is the same as step 103 in the first embodiment, and is not described herein again.
It should be noted that, in the embodiment of the present disclosure, excluding the approximate images from the annotation images is described as step 702 and obtaining the longitude and latitude information of the position of the annotation image as step 703, but the embodiment is not limited to this order; exchanging steps 702 and 703 also falls within the protection scope of the present disclosure.
The navigation target labeling method provided by the embodiment of the disclosure comprises the steps of obtaining a labeling image; reducing an annotation area in the annotation image based on the annotation object; establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object; the annotation processing model is used for reducing an annotation area in the annotation image. The labeling method can reduce the labeling area in the labeled image, reduce the labeling range and improve the labeling efficiency; moreover, the labor intensity can be reduced, the influence on the health of the labeling personnel is reduced, and the error rate of labeling is reduced.
In a second aspect, a fourth embodiment of the present disclosure provides a labeling device for a navigation target, which can assist manual labeling, reduce the range of manual labeling, reduce the labor intensity of manual labeling, improve the labeling efficiency, and reduce the error rate of labeling.
Fig. 8 is a schematic block diagram of an annotation device for navigation targets according to a fourth embodiment of the disclosure. Referring to fig. 8, the labeling apparatus for navigating a target includes:
an obtaining module 801, configured to obtain an annotation image.
The manner of acquiring the annotation image by the acquiring module 801 is the same as that in step 101 of the first embodiment, and is not described herein again.
And an annotated region reduction module 802, configured to reduce an annotated region in the annotated image based on the annotated object.
Because different labeled objects appear with different probabilities in different areas of the labeled image, the areas of the image that attract attention also differ. Accordingly, the annotation region reduction module 802 reduces the annotation region in the annotation image based on the annotation object.
For example, when the annotation object is a vehicle, the probability that the vehicle appears in the center region of the annotation image is greater than that in the edge region. When the labeling object is a traffic sign, the probability of the traffic sign appearing in the edge region of the labeling image is greater than that in the central region.
A label processing model obtaining module 803, configured to establish a label processing model matching the label object according to the label object and the label area corresponding to the label object; the annotation processing model is used for reducing an annotation area in the annotation image.
Fig. 9 is a schematic block diagram of a labeling area reducing module in a labeling apparatus for a navigation target according to a fourth embodiment of the disclosure. As shown in fig. 9, the label region reduction module includes:
a vanishing point determining unit 901 for determining vanishing points in the annotation image.
Vanishing points in the annotation image can be detected and determined by a Hough transform or RANSAC method. For example: two straight lines are randomly selected to create a hypothetical vanishing point, and then the number of straight lines passing through the hypothetical vanishing point is calculated. After a certain number of iterations, the return value is a vanishing point that maximizes the number of intersecting lines.
And an annotation point determining unit 902, configured to determine a lowest annotation point and a highest annotation point based on the vanishing point and the annotation object.
The annotation point determining unit 902 includes a lowest annotation point sub-unit and a highest annotation point sub-unit, and the annotation modes of the lowest annotation point sub-unit and the highest annotation point sub-unit are related to the annotation object.
In some embodiments, the lowest annotation point subunit is configured to, when the annotation object is a vehicle, take as the lowest annotation points a first intersection point A where the bottom edge of the annotation image crosses the left side edge and a second intersection point B where the bottom edge crosses the right side edge.
The highest annotation point subunit is configured to, when the annotation object is a vehicle, take as the highest annotation points a third intersection point G where the line OA connecting the vanishing point O and the first intersection point A crosses the far-end line GH, and a fourth intersection point H where the line OB connecting the vanishing point O and the second intersection point B crosses the far-end line GH.
The far-end line GH is parallel to the bottom edge of the annotation image and is a line which is away from the bottom edge of the annotation image by a preset distance S.
In some embodiments, the lowest annotation point subunit is configured to, when the annotation object is a pedestrian, take as the lowest annotation points a first intersection point A where the bottom edge of the annotation image intersects the left side edge, and a second intersection point B where the bottom edge intersects the right side edge;
and the highest labeling point subunit is configured to, when the labeling object is a pedestrian, take as the highest labeling points a third intersection point C where the height line OC crosses the left side of the labeling image and a fourth intersection point D where the height line OD crosses the right side of the labeling image.
Wherein, the height lines OC and OD are lines which pass through the vanishing point and have a preset height with the ground.
In some embodiments, the lowest labeling point subunit is configured to, when the labeling object is a traffic sign, take as the lowest labeling points a first intersection point E where the first height line OE intersects the left side of the labeling image, and a second intersection point F where the first height line OF intersects the right side of the labeling image;
and the highest labeling point subunit is configured to take as the highest labeling points a third intersection point G where the second height line OG intersects the left side of the labeling image, and a fourth intersection point H where the second height line OH intersects the right side of the labeling image.
The first height lines OE and OF are lines passing through the vanishing point O and having a first preset height H1 from the ground, the second height lines OG and OH are lines passing through the vanishing point O and having a second preset height H2 from the ground, and the first preset height H1 is smaller than the second preset height H2.
A labeling area reducing unit 903, configured to determine a closed area surrounded by the lowest labeling point and the highest labeling point as a labeling area.
In some embodiments, when the labeling object is a vehicle, the labeling area reduction unit 903 takes the first intersection A and the second intersection B as the lowest labeling points and the third intersection G and the fourth intersection H as the highest labeling points. Therefore, the labeling area is the trapezoidal ABGH region enclosed by the first intersection A, the second intersection B, the third intersection G, and the fourth intersection H.
When the labeling object is a pedestrian, the labeling area reduction unit 903 takes the first intersection A and the second intersection B as the lowest labeling points, and the third intersection C and the fourth intersection D as the highest labeling points. Therefore, the ABCD region enclosed by the first intersection A, the second intersection B, the third intersection C, and the fourth intersection D serves as the labeling area.
When the labeling object is a traffic sign, the labeling-area reducing unit 903 labels the first intersection E and the second intersection F as the lowest labeling points, and labels the third intersection G and the fourth intersection H as the highest labeling points. Therefore, the labeling area is an EFGH area surrounded by the first intersection E, the second intersection F, the third intersection G, and the fourth intersection H.
In some embodiments, the annotation processing model obtaining module 803 is further configured to, when the annotation object is a vehicle, establish an attention model based on the annotation object and the annotation area corresponding to the annotation object.
In some embodiments, the labeling apparatus for a navigation target further includes a thermodynamic diagram generation module for labeling the labeled region with the attention model, and obtaining an attention thermodynamic diagram to facilitate labeling of the labeled image. In the attention thermodynamic diagram, the closer to the red region, the more likely the vehicle is to appear, thereby reducing the labor intensity of labeling.
In some embodiments, the labeling processing model obtaining module 803 is further configured to, when the labeled object is a pedestrian, establish a pedestrian labeling model matching the pedestrian according to the pedestrian and the pedestrian labeling area.
In some embodiments, the labeling processing model obtaining module 803 is further configured to generate a traffic labeling model according to the color model when the labeling object is a traffic sign, that is, the type of the traffic sign is identified according to the color and shape, such as a red circle being a traffic sign of a prohibited class, a yellow triangle being a traffic sign of a warning class, and a blue rectangle being a traffic sign of an indication class.
Fig. 10 is a schematic block diagram of a labeling apparatus for navigating a target according to a fifth embodiment of the present disclosure. Referring to fig. 10, the navigation target labeling apparatus includes an obtaining module 1001, an approximate image eliminating module 1002, a labeling area reducing module 1003, and a labeling processing model obtaining module 1004, where the obtaining module 1001, the labeling area reducing module 1003, and the labeling processing model obtaining module 1004 are equal to the obtaining module 801, the labeling area reducing module 802, and the labeling processing model obtaining module 803 in the fourth embodiment in terms of functions and actions, and are not described herein again. Only the different parts will be described in detail below.
An approximate image exclusion module 1002 is configured to exclude the approximate images from the annotation images. The approximate image exclusion module 1002 calculates the Hamming distances between all annotation images using a difference hashing (dHash) algorithm; the similarity threshold is expressed as a Hamming distance, and images whose Hamming distance falls within the [S1, S2] interval are set as approximate images.
For example, the similarity threshold is set to 0 for the hamming distance S1 and 10 for S2, the hamming distances of all the annotation images are calculated, and the annotation images whose hamming distances are within [0, 10] are regarded as approximate images. And eliminating the approximate images, and only keeping one of the approximate images as the annotation image.
Fig. 11 is a schematic block diagram of a labeling apparatus for navigating a target according to a sixth embodiment of the present disclosure. Referring to fig. 11, the navigation target labeling apparatus includes an obtaining module 1101, an approximate image eliminating module 1102, a latitude and longitude determining module 1103, a labeling object determining module 1104, a labeling area reducing module 1105 and a labeling processing model obtaining module 1106, where the obtaining module 1101, the labeling area reducing module 1105 and the labeling processing model obtaining module 1106 are corresponding to the obtaining module 801, the labeling area reducing module 802 and the labeling processing model obtaining module 803 in the fourth embodiment, and the approximate image eliminating module 1102 is corresponding to the approximate image eliminating module 1002 in the fifth embodiment, and are not described herein again. Only the different parts will be described in detail below.
In the embodiment of the present disclosure, the latitude and longitude determining module 1103 is configured to obtain latitude and longitude information of a location where the labeled image is located.
The latitude and longitude information may be determined by a positioning system mounted on the vehicle, which is not limited in the present invention.
And an annotated object determination module 1104, configured to determine the annotation object based on the longitude and latitude information.
In some embodiments, the annotated object determination module 1104 may determine the scene of the annotated image according to the latitude and longitude information, and determine the annotation objects according to the scene, so as to narrow down the annotation objects and thereby improve annotation efficiency.
For example, if it is determined from the latitude and longitude information that the scene of the annotation image is a suburban area or a more remote place, the annotation object may be a traffic sign and a vehicle. If the scene of the marked image is determined to be the urban area according to the longitude and latitude information, the marked object can be a pedestrian, a vehicle and a traffic mark.
The navigation target labeling device provided by the embodiment of the disclosure acquires a labeled image by using an acquisition module; the labeling area reducing module reduces a labeling area in the labeling image based on the labeling object; the annotation processing model obtaining module establishes an annotation processing model matched with the annotation object according to the annotation object and the annotation area corresponding to the annotation object; the annotation processing model is used for reducing an annotation area in the annotation image. The marking device can reduce the marking area in the marked image, reduce the marking range and improve the marking efficiency; moreover, the labor intensity can be reduced, the influence on the health of the labeling personnel is reduced, and the error rate of labeling is reduced.
In a third aspect, referring to fig. 12, an embodiment of the present disclosure provides an electronic device, including:
one or more processors 1201;
memory 1202 having one or more programs stored thereon that, when executed by one or more processors, cause the one or more processors to implement any one of the above navigation target labeling methods;
one or more I/O interfaces 1203 coupled between the processor and the memory and configured to enable information interaction between the processor and the memory.
The processor 1201 is a device with data processing capability, including but not limited to a Central Processing Unit (CPU), etc.; memory 1202 is a device having data storage capabilities including, but not limited to, random access memory (RAM, more specifically SDRAM, DDR, etc.), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), FLASH memory (FLASH); an I/O interface (read/write interface) 1203 coupled between the processor 1201 and the memory 1202 may enable information interaction between the processor 1201 and the memory 1202, including but not limited to a data Bus (Bus) or the like.
In some embodiments, the processor 1201, memory 1202, and I/O interface 1203 are connected to each other, and in turn, other components of the computing device, by a bus.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, where the computer program, when executed by a processor, implements any one of the navigation target labeling methods described above.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (14)

1. A method of annotating a navigation target, comprising:
acquiring an annotation image;
reducing an annotation area in the annotation image based on the annotation object;
establishing a labeling processing model matched with the labeling object according to the labeling object and the labeling area corresponding to the labeling object; the annotation processing model is used for reducing an annotation area in the annotation image.
2. The method of claim 1, wherein the reducing the annotation area in the annotation image based on the annotation object comprises:
determining a vanishing point in the annotation image;
determining a lowest labeling point and a highest labeling point based on the vanishing point and the annotation object;
and determining a closed area enclosed by the lowest labeling point and the highest labeling point as the annotation area.
3. The method of claim 2, wherein the determining a lowest labeling point and a highest labeling point based on the vanishing point and the annotation object comprises:
when the annotation object is a vehicle, the lowest labeling point comprises a first intersection point where the bottom edge and the left side edge of the annotation image intersect and a second intersection point where the bottom edge and the right side edge intersect; the highest labeling point comprises a third intersection point where a connecting line of the vanishing point and the first intersection point intersects a far-end line, and a fourth intersection point where a connecting line of the vanishing point and the second intersection point intersects the far-end line; and the far-end line is parallel to the bottom edge of the annotation image and is at a preset distance from the bottom edge of the annotation image.
4. The method of claim 2, wherein the determining a lowest labeling point and a highest labeling point based on the vanishing point and the annotation object comprises:
when the annotation object is a pedestrian, the lowest labeling point comprises a first intersection point where the bottom edge and the left side edge of the annotation image intersect and a second intersection point where the bottom edge and the right side edge intersect; the highest labeling point comprises a third intersection point where a height line intersects the left side edge of the annotation image and a fourth intersection point where the height line intersects the right side edge of the annotation image; and the height line is a line which passes through the vanishing point and is at a preset height from the ground.
5. The method of claim 2, wherein the determining a lowest labeling point and a highest labeling point based on the vanishing point and the annotation object comprises:
when the annotation object is a traffic sign, the lowest labeling point comprises a first intersection point where a first height line intersects the left side edge of the annotation image and a second intersection point where the first height line intersects the right side edge of the annotation image; the highest labeling point comprises a third intersection point where a second height line intersects the left side edge of the annotation image, and a fourth intersection point where the second height line intersects the right side edge of the annotation image; the first height line is a line which passes through the vanishing point and is at a first preset height from the ground, the second height line is a line which passes through the vanishing point and is at a second preset height from the ground, and the first preset height is smaller than the second preset height.
6. The method of any one of claims 1 to 5, wherein after establishing the annotation processing model matched with the annotation object according to the annotation object and the annotation area corresponding to the annotation object, the method further comprises:
obtaining a heat map of the annotation area based on the annotation processing model;
and labeling the target in the annotation area based on the heat map.
7. The method according to any one of claims 1 to 5, wherein before the reducing the annotation area in the annotation image based on the annotation object, the method further comprises:
and excluding similar images from the annotation images.
8. The method according to any one of claims 1 to 5, wherein before the reducing the annotation area in the annotation image based on the annotation object, the method further comprises:
acquiring longitude and latitude information of the position of the annotation image;
and determining the annotation object based on the longitude and latitude information.
9. An annotation device for a navigation target, comprising:
an acquisition module configured to acquire an annotation image;
an annotation area reduction module configured to reduce an annotation area in the annotation image based on an annotation object;
and an annotation processing model establishing module configured to establish an annotation processing model matched with the annotation object according to the annotation object and the annotation area corresponding to the annotation object, wherein the annotation processing model is used for reducing the annotation area in the annotation image.
10. The apparatus of claim 9, wherein the annotation area reduction module comprises:
a vanishing point determination unit configured to determine a vanishing point in the annotation image;
a labeling point determination unit configured to determine a lowest labeling point and a highest labeling point based on the vanishing point and the annotation object;
and an annotation area reduction unit configured to determine a closed area enclosed by the lowest labeling point and the highest labeling point as the annotation area.
11. The apparatus of claim 9, wherein the apparatus further comprises:
and a similar image exclusion module configured to exclude similar images from the annotation images.
12. The apparatus of claim 9, wherein the apparatus further comprises:
a longitude and latitude determination module configured to acquire longitude and latitude information of the position of the annotation image;
and an annotation object determination module configured to determine the annotation object based on the longitude and latitude information.
13. An electronic device, comprising:
one or more processors;
storage means having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8;
and one or more I/O interfaces coupled between the processors and the storage means and configured to enable information interaction between the processors and the storage means.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
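
For illustration only, the following is a minimal, non-limiting sketch of how the annotation-area reduction summarized in claims 2 to 5 could be realized in image coordinates. The geometric construction (bottom-edge corners, a far-end line for vehicles, height lines through the vanishing point for pedestrians and traffic signs) follows the claim language, but all function names, parameter names, and numeric defaults (line_intersection, far_line_offset, the 1920x1080 example frame) are assumptions introduced here and do not appear in the patent.

```python
# Illustrative sketch of the annotation-area reduction in claims 2-5.
# All names and default values below are assumptions, not patent text.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line p1-p2 with the infinite line p3-p4."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

def reduce_annotation_area(width, height, vanishing_point, obj_type,
                           far_line_offset=200, low_y=None, high_y=None):
    """Return the corner points (lowest and highest labeling points) of the
    reduced annotation area for one object type.

    Image coordinates: origin at top-left, y grows downward, so the image's
    bottom edge is the row y = height."""
    bottom_left, bottom_right = (0, height), (width, height)

    if obj_type == "vehicle":
        # Lowest labeling points: corners of the bottom edge.
        # Highest labeling points: where the lines from the vanishing point to
        # those corners cross a far-end line parallel to the bottom edge.
        far_y = height - far_line_offset
        p3 = line_intersection(vanishing_point, bottom_left, (0, far_y), (width, far_y))
        p4 = line_intersection(vanishing_point, bottom_right, (0, far_y), (width, far_y))
        return [bottom_left, bottom_right, p4, p3]

    if obj_type == "pedestrian":
        # Lowest points: bottom corners; highest points: the row where a height
        # line (a preset height above the ground, related to the vanishing
        # point) meets the left and right edges. The caller supplies that row.
        y = high_y if high_y is not None else vanishing_point[1]
        return [bottom_left, bottom_right, (width, y), (0, y)]

    if obj_type == "traffic_sign":
        # Two horizontal lines bound the area: a first, lower preset height and
        # a second, greater preset height above the ground.
        y_low = low_y if low_y is not None else vanishing_point[1]
        y_high = high_y if high_y is not None else vanishing_point[1] - 100
        return [(0, y_low), (width, y_low), (width, y_high), (0, y_high)]

    raise ValueError(f"unknown annotation object type: {obj_type}")

# Example: a 1920x1080 frame with the vanishing point near the image centre.
corners = reduce_annotation_area(1920, 1080, (960, 520), "vehicle")
```

The point of such a reduction, as the claims suggest, is that the region in which a vehicle, pedestrian, or traffic sign can plausibly appear is much smaller than the full frame, so labeling (manual or model-assisted) only has to consider the enclosed area.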
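Similarly, a minimal sketch of the heat-map-based labeling step of claim 6, assuming the annotation processing model outputs a per-pixel score map over the image and that targets correspond to strong local maxima inside the reduced annotation area. The use of numpy, the 3x3 peak test, and the threshold value are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch of claim 6: label targets from a heat map restricted
# to the reduced annotation area. Threshold and peak test are assumptions.
import numpy as np

def label_targets_from_heatmap(heatmap, area_mask, threshold=0.5):
    """Return (row, col) coordinates of candidate targets: local maxima of
    the heat map, restricted to the reduced annotation area."""
    masked = np.where(area_mask, heatmap, 0.0)       # ignore pixels outside the area
    candidates = np.argwhere(masked >= threshold)    # keep only strong responses
    peaks = []
    for r, c in candidates:
        # Keep a candidate only if it is the maximum of its 3x3 neighbourhood.
        window = masked[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        if masked[r, c] >= window.max():
            peaks.append((int(r), int(c)))
    return peaks

# Usage: in practice the heat map would come from the annotation processing
# model and the mask from the reduced annotation area computed above.
hm = np.random.rand(1080, 1920).astype(np.float32)
mask = np.zeros_like(hm, dtype=bool)
mask[600:1080, :] = True
print(label_targets_from_heatmap(hm, mask, threshold=0.99)[:5])
```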
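Finally, a sketch of the step in claim 8 of determining the annotation object from the longitude and latitude of the position of the annotation image, assuming a simple lookup from geofenced road-type regions to the object types worth annotating there. The bounding boxes and the road-type-to-object mapping are invented placeholders.

```python
# Illustrative sketch of claim 8: pick annotation object types from the
# capture position. Regions and mappings below are invented placeholders.

ROAD_TYPE_REGIONS = [
    # (min_lon, min_lat, max_lon, max_lat, road_type)
    (116.20, 39.80, 116.60, 40.10, "urban"),
    (116.60, 39.70, 117.00, 39.90, "highway"),
]

OBJECTS_BY_ROAD_TYPE = {
    "urban":   ["pedestrian", "vehicle", "traffic_sign"],
    "highway": ["vehicle", "traffic_sign"],  # pedestrians are not expected here
}

def annotation_objects_for(lon, lat):
    """Return the annotation object types to use for an image taken at (lon, lat)."""
    for min_lon, min_lat, max_lon, max_lat, road_type in ROAD_TYPE_REGIONS:
        if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
            return OBJECTS_BY_ROAD_TYPE[road_type]
    return OBJECTS_BY_ROAD_TYPE["urban"]     # fall back to the broadest set

print(annotation_objects_for(116.40, 39.91))  # -> ['pedestrian', 'vehicle', 'traffic_sign']
```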
CN202010433124.4A 2020-05-19 2020-05-19 Navigation target labeling method and device, electronic equipment and computer readable medium Pending CN113688259A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010433124.4A CN113688259A (en) 2020-05-19 2020-05-19 Navigation target labeling method and device, electronic equipment and computer readable medium
KR1020210032233A KR102608167B1 (en) 2020-05-19 2021-03-11 Method and device for marking navigation target, electronic equipment, and computer readable medium
JP2021040979A JP7383659B2 (en) 2020-05-19 2021-03-15 Navigation target marking methods and devices, electronic equipment, computer readable media

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010433124.4A CN113688259A (en) 2020-05-19 2020-05-19 Navigation target labeling method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN113688259A true CN113688259A (en) 2021-11-23

Family

ID=75238106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010433124.4A Pending CN113688259A (en) 2020-05-19 2020-05-19 Navigation target labeling method and device, electronic equipment and computer readable medium

Country Status (3)

Country Link
JP (1) JP7383659B2 (en)
KR (1) KR102608167B1 (en)
CN (1) CN113688259A (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002362302A (en) 2001-06-01 2002-12-18 Sogo Jidosha Anzen Kogai Gijutsu Kenkyu Kumiai Pedestrian detecting device
JP5760696B2 (en) * 2011-05-27 2015-08-12 株式会社デンソー Image recognition device
JP2014146267A (en) 2013-01-30 2014-08-14 Toyota Motor Corp Pedestrian detection device and driving support device
JP6331811B2 (en) * 2014-07-18 2018-05-30 日産自動車株式会社 Signal detection device and signal detection method
US9747508B2 (en) * 2015-07-24 2017-08-29 Honda Motor Co., Ltd. Surrounding environment recognition device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007265292A (en) * 2006-03-29 2007-10-11 D-E Tech Corp Road sign database construction device
CN101876535A (en) * 2009-12-02 2010-11-03 北京中星微电子有限公司 Method, device and monitoring system for height measurement
KR20140033868A (en) * 2012-09-11 2014-03-19 한국과학기술원 Method and apparatus for environment modeling for ar
US20170132224A1 (en) * 2015-11-05 2017-05-11 Acer Incorporated Method, electronic device, and computer readable medium for photo organization
US20180283892A1 (en) * 2017-04-03 2018-10-04 Robert Bosch Gmbh Automated image labeling for vehicles based on maps
CN109034214A (en) * 2018-07-09 2018-12-18 百度在线网络技术(北京)有限公司 Method and apparatus for generating label
CN110175975A (en) * 2018-12-14 2019-08-27 腾讯科技(深圳)有限公司 Method for checking object, device, computer readable storage medium and computer equipment
CN110458226A (en) * 2019-08-08 2019-11-15 上海商汤智能科技有限公司 Image labeling method and device, electronic equipment and storage medium
CN110929729A (en) * 2020-02-18 2020-03-27 北京海天瑞声科技股份有限公司 Image annotation method, image annotation device and computer storage medium

Also Published As

Publication number Publication date
KR102608167B1 (en) 2023-12-01
JP7383659B2 (en) 2023-11-20
JP2021108137A (en) 2021-07-29
KR20210035112A (en) 2021-03-31

Similar Documents

Publication Publication Date Title
US20210012527A1 (en) Image processing method and apparatus, and related device
US11688183B2 (en) System and method of determining a curve
CN108694882B (en) Method, device and equipment for labeling map
EP3407294B1 (en) Information processing method, device, and terminal
CN110136182B (en) Registration method, device, equipment and medium for laser point cloud and 2D image
Wang et al. Multitask attention network for lane detection and fitting
Ruyi et al. Lane detection and tracking using a new lane model and distance transform
US11625851B2 (en) Geographic object detection apparatus and geographic object detection method
CN113989450B (en) Image processing method, device, electronic equipment and medium
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
CN109741241B (en) Fisheye image processing method, device, equipment and storage medium
CN112735253B (en) Traffic light automatic labeling method and computer equipment
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN110667474B (en) General obstacle detection method and device and automatic driving system
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN112950725A (en) Monitoring camera parameter calibration method and device
CN114387199A (en) Image annotation method and device
CN114969221A (en) Method for updating map and related equipment
CN113221659B (en) Double-light vehicle detection method and device based on uncertain sensing network
CN113126120B (en) Data labeling method, device, equipment, storage medium and computer program product
CN111316324A (en) Automatic driving simulation system, method, equipment and storage medium
Ji et al. Manhole cover detection using vehicle-based multi-sensor data
CN110827340B (en) Map updating method, device and storage medium
CN113688259A (en) Navigation target labeling method and device, electronic equipment and computer readable medium
CN115618602A (en) Lane-level scene simulation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination