CN109284665B - Method and apparatus for reducing number of detection candidates for object recognition method - Google Patents


Info

Publication number
CN109284665B
Authority
CN
China
Prior art keywords
detection
candidates
list
candidate
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810789967.0A
Other languages
Chinese (zh)
Other versions
CN109284665A (en)
Inventor
T. Wenzel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Publication of CN109284665A publication Critical patent/CN109284665A/en
Application granted granted Critical
Publication of CN109284665B publication Critical patent/CN109284665B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nonlinear Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for reducing the number of detection candidates (118) of an object recognition method, wherein the method comprises the following steps: determining a list of threshold values (126) from a list (122) of evaluated detection candidates (118) by using at least the respective score (120) assigned to each detection candidate (118); and deleting (504) detection candidates (118), wherein detection candidates (118) having a score (120) smaller than a threshold value in the threshold list (126) are deleted from the list (122), in particular provided that they do not exceed a maximum distance from the detection candidate (118) used to calculate that threshold value.

Description

Method and apparatus for reducing number of detection candidates for object recognition method
Technical Field
The present invention relates to a method and apparatus for reducing the number of detection candidates for an object recognition method. The invention also includes a computer program.
Background
In order to find cornered objects in an image, corner regions may be searched for in the image. The corner regions of an object are arranged in a known manner relative to one another. If multiple corner regions satisfy particular geometric boundary conditions, a candidate for the object can be identified.
Disclosure of Invention
Against this background, the solution presented here introduces a method for reducing the number of detection candidates of an object recognition method, furthermore an apparatus that uses this method, an image processing system, and finally a corresponding computer program.
When searching for corners, a small search window may be guided over the image in a search pattern. If a possible corner region is found in the search window, the position of the search window may be marked and stored as a corner region candidate. In addition, an evaluation can be made of how reliably the features of the corner were identified.
This search technique is very fast, but results in a large number of corner region candidates, many of which are erroneous.
In the approach presented here, candidates that have a low rating relative to candidates with a good rating in their local environment are discarded. Because fewer candidates remain to be considered, the computational effort of the subsequent search for mutually matching corner region candidates is reduced.
A method for reducing the number of detection candidates of an object recognition method is proposed, wherein the method comprises the following steps:
determining a threshold list from the list of evaluated detection candidates by using the scores respectively assigned to the detection candidates;
deleting detection candidates, wherein detection candidates having a score smaller than a threshold value from the threshold list are deleted from the list, in particular provided that they do not exceed a maximum distance from the detection candidate used to calculate that threshold value.
A detection candidate may be understood as an image region of limited size in which the searched-for feature combination is mapped. An image region is understood to mean, in particular, a region smaller than the image captured by the camera. A detection candidate may in particular be a corner region candidate, in which image content with the features of two edges merging into a corner is mapped. The list may include all detection candidates of an image or of a local region of the image, and it may include different types of detection candidates. An evaluation is understood as the degree of agreement with a set of test object templates, for example with their gray values or variables derived therefrom. The score may represent the quality of the recognition or a confidence of the recognition. The detection candidates in the list may be sorted by score. The threshold may be a percentage of the score. A threshold value may be determined for each score, in particular for each candidate to which a score is assigned, resulting in a threshold list. Different types of detection candidates may be handled separately.
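The definitions above can be sketched as a minimal data structure. This is an illustrative sketch only; the class and function names, coordinates, and the factor 0.25 are assumptions, not taken from the patent text.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One detection candidate: a corner position plus its evaluation score."""
    x: float           # corner position in the image (pixels)
    y: float
    score: float       # evaluation score; higher means a more reliable detection
    corner_type: str   # e.g. "upper_left" or "lower_right"; types are handled separately

def threshold_list(candidates, c=0.25):
    """One threshold per scored candidate: a fixed percentage c of its score."""
    return [c * cand.score for cand in candidates]

cands = [Candidate(310, 185, 500.0, "lower_left"),
         Candidate(296, 190, 200.0, "lower_left")]
print(threshold_list(cands))  # [125.0, 50.0]
```

A list of candidates thus induces its threshold list automatically, one threshold per scored candidate.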
Detection candidates may be deleted from the list if they lie within a selection area that is local, that is anchored at the detection candidate defining the threshold, and/or that is determined by the position and/or score of that detection candidate. The selection area may define a maximum distance from the detection candidate; in other words, the selection area limits the area of influence of a single detection candidate.
The selection area may be defined by a radius around the detection candidate and/or by a predetermined distance from the detection candidate. A circular selection area can be set up quickly and simply. Other shapes of selection area may be defined by using other distance measures.
The method may comprise a filtering step in which overlapping detection candidates are filtered out of the list. Overlapping detection candidates are likely to refer to the same corner. The detection candidate with the highest score may be retained in the filtering.
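A minimal sketch of such an overlap-based filtering step might look as follows. The overlap measure (intersection relative to the smaller box) and all names are assumptions for illustration; the text does not prescribe a particular overlap criterion.

```python
def overlap(a, b):
    """Fraction of the smaller box covered by the intersection of two
    axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return ix * iy / min(area(a), area(b))

def filter_overlapping(candidates, min_overlap=0.5):
    """Keep only the highest-scoring candidate among overlapping ones.
    Each candidate is a (box, score) pair."""
    kept = []
    for box, score in sorted(candidates, key=lambda cand: cand[1], reverse=True):
        if all(overlap(box, kept_box) < min_overlap for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```

Because the candidates are visited in descending score order, the highest-scoring member of each overlapping group is always the one that survives.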
The steps of determining and deleting may be performed separately for different object types. For example, detection candidates of different corner types, such as upper left, upper right, lower right and/or lower left, may be processed separately from each other. Separating the object types prevents corner region candidates of smaller objects, which are identified less well, from being deleted erroneously.
A threshold may first be determined for the detection candidate in the list with the highest score; in other words, processing may start with the highest-scoring detection candidate. The steps of the method may then be repeated. Proceeding from the highest score to lower scores saves a great deal of computing power, since the threshold is also high for high scores, so that many detection candidates are removed from the list at once.
The method may be implemented in software or hardware or a mixture of software and hardware, for example, in a controller.
The proposed method is particularly suitable for use in automatic traffic sign recognition in driver information systems or driver assistance systems of motor vehicles. For this purpose, image signals or video signals can be recorded by means of a camera arranged in the vehicle and directed in the direction of travel, and the recorded image signals or video signals can be checked for traffic signs contained in them. Recognized traffic signs can then be displayed, for example, to inform the driver. In principle, it is also conceivable to control a driver assistance function as a function of the recognized traffic sign, for example to limit the vehicle speed to an indicated maximum permissible speed.
For this purpose, a method is proposed in which an image signal or a video signal of the surroundings of the motor vehicle is recorded by means of a camera arranged in or at the motor vehicle, in which it is checked, using any of the methods described above, whether a traffic sign is contained in the image signal or video signal, and in which, when a traffic sign is contained, a signal representing the recognized traffic sign is output. In addition, the solution presented here proposes a device which is designed to carry out, actuate or implement the steps of variants of the method presented here in corresponding units. This embodiment variant of the invention in the form of a device also allows the object on which the invention is based to be achieved quickly and efficiently.
To this end, the device may have: at least one arithmetic unit for processing signals or data; at least one memory unit for storing signals or data; at least one interface to a sensor or an actuator for reading a sensor signal from the sensor or for outputting a data signal or a control signal to the actuator; and/or at least one communication interface for reading or outputting data embedded in a communication protocol. The arithmetic unit may be, for example, a signal processor or a microcontroller, and the memory unit may be a flash memory, an EEPROM or a magnetic memory unit. The communication interface can be designed for wireless and/or wired reading or output of data; a communication interface that reads or outputs wired data can, for example, read this data electrically or optically from a corresponding data transmission line or output it into such a line.
An apparatus is understood here to be an electrical device which processes sensor signals and outputs control signals and/or data signals as a function thereof. The device may have an interface that may be constructed based on hardware and/or software. In a hardware-based configuration, the interface may be, for example, a part of a so-called ASIC system that includes various functions of the device. However, it is also possible that the interface may be an integrated circuit of its own or at least partly composed of discrete components. In a software-based configuration, the interface may be, for example, a software module that is present on the microcontroller in combination with other software modules.
Furthermore, an image processing system is proposed, which has means for reduction according to the solution proposed here.
The device and the image processing system can be used in particular in the field of automatic road sign recognition or automatic traffic sign recognition in driver information systems or driver assistance systems. The image or video signal to be evaluated is recorded by a camera, preferably oriented in the direction of travel, mounted in or at the vehicle and is fed to the signal processing of the device. The signal processing section is adapted to recognize a traffic sign in the recorded image signal or video signal by using the above-described method and output a signal representing the recognized traffic sign. For the identification of traffic signs, reference can be made, for example, to a database of images with traffic sign patterns stored therein. The signal representing the identified traffic sign may be used to control the display of, for example, a traffic sign symbol on the heads-up display or on another display element in the vehicle. Furthermore, the signal representing the recognized traffic sign can also be used to control driver assistance functions, such as automatic speed limitation when a maximum permissible speed is recognized.
It is also advantageous to have a computer program product or a computer program with a program code, which can be stored on a machine-readable carrier or storage medium (for example, semiconductor memory, hard disk memory or optical memory) and is used to carry out, implement and/or control the steps of a method according to one of the embodiments described above, in particular when the program product or program is run on a computer or an apparatus.
Drawings
Embodiments of the solution described herein are illustrated in the accompanying drawings and explained in detail in the following description. Wherein:
FIG. 1 shows a block diagram of an apparatus for reduction according to one embodiment;
FIG. 2 shows a schematic diagram of an image with corner candidates and objects according to one embodiment;
FIG. 3 shows a schematic diagram of an image with detection candidates according to one embodiment;
FIG. 4 shows a schematic diagram of an image with a detection candidate score according to one embodiment; and
FIG. 5 shows a flow diagram of a method according to an embodiment.
Detailed Description
In the following description of advantageous embodiments of the invention, the same or similar reference numerals are used for elements shown in different figures that function similarly, wherein repeated descriptions of these elements are omitted.
Fig. 1 shows a block diagram of an apparatus 100 for reduction according to an embodiment. The apparatus 100 is part of an image processing system 102 of a vehicle 104. The image processing system 102 here also comprises at least one camera 106 and a means for identifying 108. The camera 106 detects a part of the environment of the vehicle 104 within its detection range and maps it in an image 110. The camera 106 may also provide a video signal 110 from a series of images. The means for identifying 108 identifies a cornered object 112 mapped in the image 110 by its corners.
For this purpose, image regions 116 of the image 110 having corner features are searched for in the search device 114 of the apparatus 108 by using a corner search algorithm. An image region 116 in which a corner feature is mapped is selected as a detection candidate 118 and the position of the corner is determined. Each detection candidate 118 is assigned a score 120 representing the quality of the corner detection or position detection. A detection candidate 118 with a higher detection quality is assigned a higher score 120, and a detection candidate 118 with a lower detection quality a lower score.
A list 122 of detection candidates 118 and their scores 120 is transmitted to the means for reducing 100. In the determination device 124 of the apparatus 100, the detection candidate 118 is assigned a threshold value 126, which depends on the score 120 of the detection candidate 118. Here, the higher the score 120, the higher the threshold 126. In the deletion device 128, detection candidates 118 having scores 120 smaller than the threshold in the list of thresholds 126 are deleted from the list 122, in particular as long as they do not exceed the maximum distance from the detection candidate considered for calculating the threshold. Thereby, the list 122 becomes shorter.
When the positions of the detection candidates 118 on the shortened list 122 satisfy geometric criteria, the detection candidates 118 are assigned to an object region 132 in the assignment device 130 of the means for identifying 108. Each detection candidate in the list is assigned a threshold value, whereby the threshold list is generated automatically. The deletion unit deletes detection candidates based, for example, not only on the threshold but also on the distance between candidates: it deletes, for example, every detection candidate whose score is smaller than a threshold in the list and which at the same time lies closer than the maximum distance to the candidate defining that threshold. This can be implemented efficiently, for example, by sorting the score list in descending order, performing the deletions for each threshold and its corresponding candidate, updating the list, and repeating the entire process.
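The deletion procedure (sort by score in descending order, then let each surviving candidate prune weaker candidates within its selection area) can be sketched as follows. The tuple layout, the factor 0.25 and the radius of 100 pixels are illustrative assumptions.

```python
import math

def prune(candidates, c=0.25, radius=100.0):
    """Greedy descending-score pruning of (x, y, score) candidates.
    A candidate is deleted if its score is below c times the score of a
    stronger candidate that lies within the given radius around it."""
    survivors = sorted(candidates, key=lambda cand: cand[2], reverse=True)
    i = 0
    while i < len(survivors):
        x0, y0, s0 = survivors[i]
        threshold = c * s0
        # Keep later (weaker) candidates only if they reach the threshold
        # or lie outside the selection area of the current candidate.
        survivors = survivors[:i + 1] + [
            (x, y, s) for (x, y, s) in survivors[i + 1:]
            if s >= threshold or math.hypot(x - x0, y - y0) > radius
        ]
        i += 1
    return survivors
```

Because the list shrinks while it is being walked, the strongest candidates prune large parts of the list before the weaker ones are even visited.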
In one embodiment, the selection area is distributed around the location of the detection candidate 118 in the deletion device 128. Other detection candidates 118 within the selection area are removed from the list 122 if they have a score 120 that is below the threshold 126 of the detection candidate 118.
In one embodiment, overlapping detection candidates 118 are merged into a single detection candidate 118 prior to determining the threshold 126. Here, the resulting detection candidate 118 has the position of the detection candidate 118 with the highest score 120.
FIG. 2 shows a schematic diagram of an image 110 with corner candidates 118 and an object 112 according to one embodiment. The depicted object 112 is a rectangular traffic sign 112 as shown in Fig. 1. Here, the traffic sign 112 is a supplementary guideboard 112 below a circular traffic sign 200. The rectangular traffic sign 112 has four differently oriented corners. For a horizontally oriented traffic sign 112, these corners may be referred to as upper right, upper left, lower left and lower right. The differently oriented corners are identified by different detection candidates 118. Owing to the geometric framework conditions, in a fully imaged rectangular object 112 the corresponding right-hand corner must be found close to the right of the left-hand corner. Likewise, the lower corners must be found below the upper corners. Therefore, when searching for the object region 132, two detection candidates 118 for, say, the lower right corner can only belong to two different objects 112, or one of the two detection candidates 118 is a misrecognition.
The number of false detections is reduced by the approach presented herein, since the less reliable detection candidates 118 have been deleted before searching for the object region 132 due to their lower scores.
In the method for detecting an n-cornered geometric object 112 in an image 110, a small window is slid over the entire image 110 or over a known image section, wherein it is determined in each case by known methods whether an object corner region is contained. This process is repeated for each of the n object corners. The color and geometry of the searched-for object corner region are used for the window rating that finally yields the evaluation score; these features are learned with the aid of training data. The result of these n detection passes is, for each object corner, a list of candidate corner positions. Object candidates 132 are then generated from these positions, the corresponding scores, and the geometric boundary conditions applicable to the target object 112 to be identified. This method has proven very reliable for the problem of supplementary traffic sign recognition. Here, the target object 112 is a rectangle, and the method may be used exclusively for the image portion below the primary traffic sign.
In general, the method may also be used to identify guideboards 112 in the complete image 110. Rectangles with variable aspect ratios are then identified similarly to the supplementary signs. In this case, the method produces a very large number of false corner region candidates 118, owing to the many structures imaged in the image 110 that occur particularly in cities.
FIG. 3 illustrates a schematic diagram of an image 110 with detection candidates 118 according to one embodiment. For example, as shown in FIG. 1, an image 110 is recorded by a camera of the vehicle. The image 110 is captured through a front windshield of the vehicle and displays a portion of the vehicle's surroundings in front of the vehicle. The vehicle is here driven on a highway with two directional lanes. Visibility is limited due to rain.
Different types of corner detection candidates 118 are marked in the image. Many of the detection candidates 118 are erroneous and do not mark edges in the image 110. For example, a number of detection candidates 118 are labeled in the illustrated rain cloud region. However, these detection candidates 118 have a low score, which is represented by the thickness of the edge of each marker box.
A rectangular guideboard 112 can be seen on the right-hand side. The detection candidate 118 with the highest score in the lower right corner marks the lower right corner of the guideboard 112. By applying the approach presented herein, many false detection candidates 118 may be eliminated, since there is a high probability that another lower right corner of another guideboard, which is difficult to recognize, does not exist around the reliably recognized lower right corner.
FIG. 4 illustrates a partial schematic view of an image 110 having a score 120 of a detected candidate 118, according to one embodiment. The image 110 is recorded by, for example, a vehicle camera as shown in fig. 1. As shown in fig. 3, an image 110 is acquired through the front windshield of the vehicle. The partial view shows a rectangular guideboard 112 at a gantry 400 on a highway.
As shown in fig. 3, the detection candidates 118 of the angular region are shown. Only the lower left detection candidate 118 is shown here. In addition to the location of the detection candidate 118, the score 120 of the detection candidate 118 is also shown as a numerical value.
At the guideboard 112, three detection candidates 118 for the lower left corner are shown. The detection candidate 118 at the actual lower left corner of the guideboard 112 is rated with the highest score 120 of 500 points. The detection candidate 118 at the lower left corner of the highway symbol is assigned a lower score 120 of 200, and the detection candidate 118 at the lower left corner of the letter D of the place name "Dortmund" an even lower score 120 of 50.
The highest score 120 of 500 points sets the height of the threshold: the score 120 is multiplied by an exemplarily selected factor of 0.25, yielding a threshold of 125.
To delete detection candidates 118, a circular selection area 402 is placed around the detection candidate 118 of the actual corner. The detection candidate 118 of the letter D lies within the selection area 402 and has a score 120 below the threshold; it is therefore removed from the list.
The detection candidates 118 for highway symbols are also within the selection area 402, but have scores 120 above the threshold and therefore are not removed from the list.
Other detection candidates not shown outside the selection area 402 are not considered in the removal.
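The worked example from this figure can be reproduced directly (the function name `survives` is an illustrative assumption; the scores 500, 200, 50 and the factor 0.25 are taken from the example above):

```python
def survives(score, anchor_score, c=0.25):
    """A neighbor inside the selection area survives only if its score
    reaches the relative threshold c * anchor_score."""
    return score >= c * anchor_score

threshold = 0.25 * 500       # the highest local score sets the threshold
print(threshold)             # 125.0
print(survives(200, 500))    # True:  the highway-symbol corner is kept
print(survives(50, 500))     # False: the corner of the letter D is deleted
```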
A variable suppression of false positive object detection candidates 118 within a partial image region 402 is presented.
The method introduced here (referred to below as rNMS, radius-based non-maximum suppression) is used to suppress false positive candidates 118 produced by a standard object recognition method applied to the image 110. It is an algorithm whose effectiveness has been verified empirically in successful tests. In principle, it can be used with any object recognition method that outputs an evaluation in the form of a score 120, so that the detection candidates 118 can be ranked. Owing to its special characteristics, rNMS combined with a corner recognition method provides significantly better results than with methods that recognize objects as a whole.
The rNMS approach described here reduces the number of false positives 118. To this end, in the neighborhood of each identified object corner region O 118, all other object corner candidates 118 whose evaluation score 120 lies below a threshold chosen relative to the score 120 of O are deleted. This is motivated by the following idea: the highest-scoring candidate 118 in an image region 402 determines the maximum score 120 achievable in that image region 402 on the basis of the image features, so that all other candidates 118 should reach at least a certain fraction of that score 120.
The advantages of this algorithmic idea are its high speed and the relative choice of locally applied score thresholds. The number of false positive corner region candidates 118 can thereby be reduced by half, for example. The method may also be used with an object detector that detects the object 112 as a whole; fewer false positives can be excluded there, however, because their number is typically much lower.
A standard problem in object recognition is to eliminate all "non-maximum" values from the large number of detection candidates 118. This problem is solved by non-maximum suppression (NMS). The non-maxima are always defined with reference to the evaluation scores of the object recognition method that have been determined for each detection candidate box 118.
An object recognition method that recognizes the target object as a whole in one step may generate many overlapping candidates.
In a score-based global NMS, the scores of all candidates 118 may be compared to a fixed, preselected threshold, and all candidates 118 with scores below the threshold are eliminated. In contrast, the scheme presented here uses a threshold selected relative to the maximum occurring score.
In an overlap-based NMS, a candidate 118 with a lower score may be deleted whenever two candidates 118 overlap by a certain percentage.
The method may be performed "greedily", i.e. by always comparing only two candidates 118 at a time and deleting immediately, or globally, i.e. by comparing every candidate 118 with every other candidate and performing the deletions as a final step.
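The difference between the two variants can be sketched as follows. The deletion criterion is simplified here to a pure relative score comparison (no radius), and all names and the factor 0.25 are assumptions for illustration.

```python
def nms_greedy(scores, c=0.25):
    """Greedy variant: walk through the scores in descending order and
    delete immediately; already-deleted candidates cannot suppress others."""
    kept = []
    for s in sorted(scores, reverse=True):
        if all(s >= c * k for k in kept):
            kept.append(s)
    return kept

def nms_global(scores, c=0.25):
    """Global variant: first compare every candidate with every other
    candidate, then perform all deletions in one final step."""
    marked = {i for i, s in enumerate(scores) for t in scores if s < c * t}
    return [s for i, s in enumerate(scores) if i not in marked]
```

In the greedy variant a deleted candidate no longer suppresses anyone; in the global variant every candidate, deleted or not, participates in the comparisons.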
A mean-shift algorithm may be used to average the centers of the detection boxes 118 within a radius to be parameterized, thereby merging all candidates 118 within that radius into one output candidate.
In iterative approaches, information about already determined detection candidates 118 may be incorporated into the detection process itself during object detection, which can lead to very complex methods.
In contrast, in the method described here, the entire image 110 is first examined using at least one detector, and an image region 402 is then defined for each detection 118, within which the already determined further detection candidates 118 are compared with a local relative threshold. The relative nature of the threshold and the use of a detection 118 to define the image region 402 distinguish this approach significantly from conventional approaches.
The rNMS described here may be used as a supplement to an overlap-based NMS or a mean-shift NMS. In this case, non-overlapping candidates 118 can also be eliminated.
In the approach described herein, the contents of the candidate image region 118 are not considered again. The method can thus be carried out rapidly.
The new method presented here constitutes an intermediate algorithmic step in the detection of objects 112 and is particularly suitable for detecting n-cornered geometric objects 112 in an image 110. It is applied after the detection candidates 118 in the image 110 have been identified and all mutually overlapping candidates 118 have been suppressed by an overlap-based NMS. For each candidate 118 of the object 112, an iteration over all detection candidates 118 is performed; in the case of a corner region detector, the method is thus executed n times per image 110. Based on the score P(i, j) 120 of the i-th candidate for the j-th corner, a threshold S(j) = c(j) · P(i, j) is calculated and compared with the scores 120 of all candidates 118 for the j-th corner within a radius r(j) 402 around the candidate 118. If the score 120 of an observed candidate 118 is less than S(j), that candidate 118 is deleted (greedy variant). Alternatively, the deletions may be performed as a final step (global variant). If the method is applied to whole objects 112, j is always equal to 1.
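Under the assumption that the candidates are grouped by corner index j, the greedy variant of this per-corner iteration might be sketched as follows. The data layout (dicts keyed by j) and all names are illustrative, not prescribed by the text.

```python
import math

def rnms(candidates_by_corner, c, r):
    """Greedy radius-based non-maximum suppression, run once per corner type j.

    candidates_by_corner: dict mapping corner index j -> list of (x, y, score)
    c: dict mapping j -> relative threshold factor c(j)
    r: dict mapping j -> suppression radius r(j) in pixels
    """
    result = {}
    for j, cands in candidates_by_corner.items():
        survivors = sorted(cands, key=lambda cand: cand[2], reverse=True)
        i = 0
        while i < len(survivors):
            x0, y0, s0 = survivors[i]
            threshold = c[j] * s0                      # S(j) = c(j) * P(i, j)
            survivors = survivors[:i + 1] + [
                (x, y, s) for (x, y, s) in survivors[i + 1:]
                if s >= threshold or math.hypot(x - x0, y - y0) > r[j]
            ]
            i += 1
        result[j] = survivors
    return result
```

For a detector that recognizes whole objects, `candidates_by_corner` would contain a single entry with j = 1, matching the remark that j is then always equal to 1.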
The radius r(j) 402 is a parameter to be selected; it may be, for example, one fifth of the image width. If r(j) 402 is chosen so large that the radius 402 covers the entire image 110 from any position, the presented method amounts to a global suppression of the corner candidates 118 with a relative threshold.
The parameter c(j) can be fixed manually or determined from the results on a training data set.
This method is very fast, since only very few arithmetic operations are performed, and it can greatly reduce the number of false corner-region candidates 118. It can be used for non-maximum suppression as a supplement to other methods.
The NMS method described here can also be used with any object-detection method, for example in the field of pedestrian recognition and/or traffic-sign recognition. In the method for identifying the corner regions of an object 112, however, the corner-region detection 118 is only an intermediate step, which can produce more false positives 118 than a method that detects the target object directly.
Fig. 5 illustrates a flow diagram of a method 500 for reducing the number of detection candidates of an object recognition method, according to one embodiment. The method 500 can be performed, for example, on an apparatus for reducing the number of detection candidates as shown in fig. 1. The method 500 comprises a determining step 502 and a deleting step 504. In the determining step 502, a list of threshold values is determined from a list of evaluated detection candidates by using the scores respectively assigned to the detection candidates. In the deleting step 504, detection candidates whose score is less than a threshold value in the threshold list are deleted from the list, in particular as long as they do not exceed a maximum distance from the detection candidate used to calculate that threshold value.
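The two steps 502 and 504 correspond to the global variant: the threshold list is built first, and deletion happens in a separate pass. The following sketch illustrates this under the same assumptions as before (candidates as (x, y, score) tuples, Euclidean distance); it is an interpretation of the described steps, not the claimed implementation.

```python
import math

def rnms_global(candidates, c, r):
    """Sketch of the two-step (global) variant.

    Step 1 (determining): derive one threshold per score, giving a
    threshold list with one entry per detection candidate.
    Step 2 (deleting): remove every candidate whose score falls below
    the threshold of another candidate within the maximum distance r.
    """
    # Determining step: the threshold is a percentage c of each score.
    thresholds = [c * score for (_, _, score) in candidates]

    # Deleting step: performed only after all thresholds are known, so the
    # outcome does not depend on the iteration order.
    kept = []
    for i, (x, y, score) in enumerate(candidates):
        doomed = any(
            i != k
            and math.hypot(x - candidates[k][0], y - candidates[k][1]) <= r
            and score < thresholds[k]
            for k in range(len(candidates))
        )
        if not doomed:
            kept.append((x, y, score))
    return kept
```

Compared with the greedy variant, deferring all deletions to the end avoids any dependence on the order in which the candidates are visited.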
If an embodiment includes an "and/or" conjunction between a first feature and a second feature, this is to be read as meaning that the embodiment according to one specific embodiment has both the first feature and the second feature, and according to a further embodiment has either only the first feature or only the second feature.

Claims (10)

1. A method (500) for reducing a number of detection candidates (118) of an object recognition method, wherein the method (500) comprises the steps of:
determining a list of threshold values (126) from a list (122) of evaluated detection candidates (118) by using scores (120) respectively assigned to the detection candidates (118), wherein the scores represent an evaluation of the recognition quality, wherein the detection candidates in the list are sorted by their scores and each threshold value is a percentage of a score, such that a threshold value is determined for each score, i.e. for each detection candidate to which a score is assigned, resulting in the list of threshold values;
deleting detection candidates (118), wherein detection candidates (118) having a score (120) less than a threshold value in the list of threshold values (126) are deleted from the list (122) as long as they do not exceed a maximum distance from the detection candidate (118) used to calculate the threshold value.
2. The method (500) of claim 1, wherein in the step of deleting, the following detection candidates (118) are deleted from the list (122): the detection candidate (118) is located within a selection area (402) determined by a location of the detection candidate (118) on an image (110) showing the detection candidate (118), and the detection candidate has a score less than the threshold.
3. The method (500) according to claim 2, wherein in the step of deleting, the selection area (402) is defined by a radius around the detection candidate (118) and/or a predetermined distance from the detection candidate (118).
4. The method (500) according to any of claims 1 to 3, comprising a filtering step, wherein overlapping detection candidates (118) are filtered out of the list (122).
5. The method (500) of any of claims 1-3, wherein the determining step and the deleting step are performed separately for different object types.
6. The method (500) according to any of claims 1 to 3, wherein in the determining step, a threshold value is first determined from the list of threshold values (126) for the detection candidate (118) in the list (122) having the largest score (120).
7. The method according to any of claims 1 to 3, wherein an image signal or a video signal from the surroundings of a motor vehicle is recorded with a camera arranged in or on the motor vehicle, wherein it is checked whether a traffic sign is contained in the image signal or video signal, and wherein, when a traffic sign is contained, a signal representing the identified traffic sign is output.
8. An electronic device (100) adapted to perform the steps of the method (500) according to any of claims 1 to 7 in a respective unit.
9. An image processing system (102) with an electronic device (100) according to claim 8.
10. A machine-readable storage medium, on which a computer program is stored, the computer program being adapted to perform the method (500) according to any one of claims 1 to 7.
CN201810789967.0A 2017-07-20 2018-07-18 Method and apparatus for reducing number of detection candidates for object recognition method Active CN109284665B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017212426.1 2017-07-20
DE102017212426.1A DE102017212426A1 (en) 2017-07-20 2017-07-20 Method and apparatus for reducing a number of detection candidates of an object recognition method

Publications (2)

Publication Number Publication Date
CN109284665A (en) 2019-01-29
CN109284665B (en) 2022-02-08

Family

ID=64951944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810789967.0A Active CN109284665B (en) 2017-07-20 2018-07-18 Method and apparatus for reducing number of detection candidates for object recognition method

Country Status (3)

Country Link
CN (1) CN109284665B (en)
DE (1) DE102017212426A1 (en)
FR (1) FR3069362B1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761527A (en) * 2012-09-17 2014-04-30 汤姆逊许可公司 Device and method for detecting the presence of a logo in a picture

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8542950B2 (en) * 2009-06-02 2013-09-24 Yahoo! Inc. Finding iconic images
DE112011103687B4 (en) * 2010-11-05 2017-01-05 Cytognomix, Inc. Centromere detector and method for determining radiation exposure by chromosomal abnormalities
US8655071B2 (en) * 2011-02-24 2014-02-18 Sharp Laboratories Of America, Inc. Methods and systems for determining a document region-of-interest in an image
CN103034844B (en) * 2012-12-10 2016-04-27 广东图图搜网络科技有限公司 Image-recognizing method and device
CN103310469B (en) * 2013-06-28 2016-05-11 中国科学院自动化研究所 A kind of vehicle checking method based on vision-mix template
CN106815604B (en) * 2017-01-16 2019-09-27 大连理工大学 Method for viewing points detecting based on fusion of multi-layer information
CN106845458B (en) * 2017-03-05 2020-11-27 北京工业大学 Rapid traffic sign detection method based on nuclear overrun learning machine


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Yaping, Zhang Jinfang, Xu Fanjiang, Sun Xv. "The Recognition and Enhancement of Traffic Sign for the Computer-Generated Image". 2012 Fourth International Conference on Digital Home, 2012. *

Also Published As

Publication number Publication date
FR3069362A1 (en) 2019-01-25
CN109284665A (en) 2019-01-29
FR3069362B1 (en) 2023-01-06
DE102017212426A1 (en) 2019-01-24

Similar Documents

Publication Publication Date Title
CN108073928B (en) License plate recognition method and device
Panahi et al. Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications
KR101596299B1 (en) Apparatus and Method for recognizing traffic sign board
US9053361B2 (en) Identifying regions of text to merge in a natural image or video frame
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
Huang et al. Vehicle detection and inter-vehicle distance estimation using single-lens video camera on urban/suburb roads
Tae-Hyun et al. Detection of traffic lights for vision-based car navigation system
CN110879950A (en) Multi-stage target classification and traffic sign detection method and device, equipment and medium
CN111149131B (en) Dividing line recognition device
JP2000357233A (en) Body recognition device
CN108229466B (en) License plate recognition method and device
WO2008020544A1 (en) Vehicle detection device, vehicle detection method, and vehicle detection program
Maldonado-Bascon et al. Traffic sign recognition system for inventory purposes
CN114387591A (en) License plate recognition method, system, equipment and storage medium
CN108573244B (en) Vehicle detection method, device and system
CN113569812A (en) Unknown obstacle identification method and device and electronic equipment
JP5327241B2 (en) Object identification device
US9858493B2 (en) Method and apparatus for performing registration plate detection with aid of edge-based sliding concentric windows
Nienhüser et al. Fast and reliable recognition of supplementary traffic signs
CN109284665B (en) Method and apparatus for reducing number of detection candidates for object recognition method
CN116721396A (en) Lane line detection method, device and storage medium
Ho et al. A macao license plate recognition system based on edge and projection analysis
JPH08190690A (en) Method for determining number plate
CN111639640B (en) License plate recognition method, device and equipment based on artificial intelligence
KR101936108B1 (en) Method and apparatus for detecting traffic sign

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant