CN112528970A - Guideboard detection method, device, equipment and computer readable medium

Guideboard detection method, device, equipment and computer readable medium

Info

Publication number
CN112528970A
Authority
CN
China
Prior art keywords: detection result, guideboard, target, image, result set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110174410.8A
Other languages
Chinese (zh)
Inventor
唐汇
张雷
倪凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HoloMatic Technology Beijing Co Ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HoloMatic Technology Beijing Co Ltd filed Critical HoloMatic Technology Beijing Co Ltd
Priority to CN202110174410.8A
Publication of CN112528970A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The embodiments of the disclosure disclose a guideboard detection method, device, equipment and computer readable medium. One embodiment of the method comprises: acquiring a guideboard image to be detected; normalizing the guideboard image to be detected to obtain a first target guideboard image; generating a second target guideboard image based on the guideboard image to be detected; generating a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model; and performing data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set. This embodiment realizes guideboard detection, improves guideboard detection quality, and facilitates the automatic driving of vehicles.

Description

Guideboard detection method, device, equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a guideboard detection method, a guideboard detection device, guideboard detection equipment and a computer readable medium.
Background
Guideboard detection is a basic technology in the field of automatic driving. At present, a commonly used guideboard detection method is as follows: the guideboard image to be detected is directly input into a trained model to obtain a guideboard detection result.
However, when this method is used for guideboard detection, the following technical problems often arise:
First, the detection accuracy for small guideboards is low, so that small guideboards cannot be detected accurately, which in turn has adverse effects when the vehicle drives automatically with reference to the guideboard detection result.
Secondly, the detected guideboards cannot be screened, so that the guideboard detection result is inaccurate, and further, a certain danger is easy to occur when the vehicle automatically drives according to the guideboard detection result.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a guideboard detection method, apparatus, electronic device, and medium to address one or more of the technical problems noted in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method of guideboard detection, the method including: acquiring a to-be-detected guideboard image; normalizing the to-be-detected guideboard image to obtain a first target guideboard image; generating a second target guideboard image based on the guideboard image to be detected; generating a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, wherein a first detection result in the first detection result set is used for representing guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used for representing guideboard position information in the second target guideboard image; and performing data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
In a second aspect, some embodiments of the present disclosure provide a guideboard detection apparatus, the apparatus comprising: an acquisition unit configured to acquire a guideboard image to be detected; the normalization processing unit is configured to perform normalization processing on the guideboard image to be detected to obtain a first target guideboard image; a first generating unit configured to generate a second target guideboard image based on the guideboard image to be detected; a second generating unit configured to generate a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, wherein a first detection result in the first detection result set is used for representing guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used for representing guideboard position information in the second target guideboard image; and the data fusion processing unit is configured to perform data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the guideboard detection result set obtained by the guideboard detection method of some embodiments of the disclosure improves the detection precision for small guideboards, so that small guideboards can be detected more accurately. Specifically, the reason small guideboards cannot be detected accurately is that when the guideboard image to be detected is input directly into a trained model, the model's detection precision for small guideboards is low. Based on this, the guideboard detection method of some embodiments of the present disclosure first acquires a guideboard image to be detected. The guideboard image to be detected is then normalized to obtain a first target guideboard image, through which the guideboard position information in the guideboard image to be detected can be detected. Next, a second target guideboard image is generated based on the guideboard image to be detected, so that small guideboards missed in the first target guideboard image can be supplemented through the second target guideboard image. Then, a first detection result set and a second detection result set are generated based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, where a first detection result in the first detection result set represents guideboard position information in the first target guideboard image, and a second detection result in the second detection result set represents guideboard position information in the second target guideboard image. The guideboard detection results of the first and second target guideboard images are thus stored in their respective detection result sets. Finally, data fusion processing is performed on the first detection result set and the second detection result set to obtain a guideboard detection result set, which represents the guideboard position information in the guideboard image to be detected. Because the guideboard image to be detected is divided into a first target guideboard image and a second target guideboard image that are processed separately, the first detection result set can be supplemented by the second detection result set, so that small guideboards in the guideboard image to be detected are detected and the detection precision for small guideboards improves. Small guideboards can therefore be detected more accurately, and adverse effects are avoided when the vehicle drives automatically with reference to the guideboard detection result.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of an application scenario of a guideboard detection method according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a guideboard detection method according to the present disclosure;
FIG. 3 is a schematic structural diagram of some embodiments of a guideboard detection apparatus according to the present disclosure;
FIG. 4 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the guideboard detection method of some embodiments of the present disclosure.
As shown in fig. 1, first, the computing device 101 may acquire a guideboard image 102 to be detected. Then, the computing device 101 may perform normalization processing on the guideboard image 102 to be detected, so as to obtain a first target guideboard image 103. Thereafter, the computing device 101 may generate a second target guideboard image 104 based on the above-described guideboard image 102 to be detected. Next, the computing device 101 may generate a first detection result set 106 and a second detection result set 107 based on the first target guideboard image 103, the second target guideboard image 104, and the pre-trained neural network model 105, where a first detection result in the first detection result set 106 is used to represent guideboard position information in the first target guideboard image, and a second detection result in the second detection result set 107 is used to represent guideboard position information in the second target guideboard image. Finally, the computing device 101 may perform data fusion processing on the first detection result set 106 and the second detection result set 107 to obtain a guideboard detection result set 108.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above. It may be implemented, for example, as multiple software or software modules to provide distributed services, or as a single software or software module. And is not particularly limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a guideboard detection method according to the present disclosure is shown. The method for detecting the guideboard comprises the following steps:
Step 201, acquiring a guideboard image to be detected.
In some embodiments, the execution subject of the guideboard detection method (e.g., the computing device 101 shown in fig. 1) may acquire the guideboard image to be detected from the onboard camera of the target vehicle through a wired connection or a wireless connection. The guideboard image to be detected may be an environment image including a guideboard around the target vehicle, which is acquired by the vehicle-mounted camera.
Step 202, normalizing the guideboard image to be detected to obtain a first target guideboard image.
In some embodiments, the executing body may perform normalization processing on the guideboard image to be detected to obtain a first target guideboard image. The normalization processing may be a series of standard processing transformations performed on the guideboard image to be detected, so that the image is transformed into a fixed standard form. The normalization processing may be maximum-minimum normalization. The fixed standard form may be a form whose pixel size is set in advance. The preset pixel size may be 20 × 28 pixels.
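As an illustration, the following is a minimal sketch of the maximum-minimum normalization described above, assuming an OpenCV/NumPy pipeline; the function name, the (width, height) ordering, and the epsilon guard are assumptions rather than the patent's implementation.

```python
import cv2
import numpy as np

def min_max_normalize(image: np.ndarray, size=(20, 28)) -> np.ndarray:
    """Resize to the preset pixel size and rescale intensities to [0, 1]."""
    resized = cv2.resize(image, size).astype(np.float32)  # size is (width, height)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-8)  # epsilon guards a constant image
```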
In some optional implementations of some embodiments, the executing body may perform normalization processing on the guideboard image to be detected to obtain the first target guideboard image through the following steps:
Firstly, performing image mean removal processing on the guideboard image to be detected to obtain a mean-removed guideboard image.
Specifically, the executing body may perform image mean removal processing on the guideboard image to be detected, removing noise in the image by the image mean removal method, to obtain the mean-removed guideboard image.
And secondly, generating the first target guideboard image based on the mean-removed guideboard image.
Specifically, the executing body may perform image rectification processing on the mean-removed guideboard image through Hough line transformation, converting it into a standard square guideboard image. The executing body may then convert the rectified guideboard image into a trainable guideboard image (for example, a Torch-format guideboard image) by introducing an image processing function (for example, a Convert function), thereby obtaining the first target guideboard image.
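A sketch of this preprocessing chain under stated assumptions: the patent leaves the rectification details open, so the dominant Hough-line angle is used here to rotate the image upright, and the conversion to a trainable image is shown as a plain Torch tensor conversion rather than a specific Convert function. A grayscale uint8 input is assumed.

```python
import cv2
import numpy as np
import torch

def rectify_and_convert(img: np.ndarray) -> torch.Tensor:
    demeaned = img.astype(np.float32) - img.mean()      # image mean removal
    edges = cv2.Canny(img, 50, 150)                     # edge map for the Hough transform
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 100)  # (rho, theta) line candidates
    angle = 0.0
    if lines is not None:
        theta = lines[0][0][1]                          # strongest detected line
        angle = np.degrees(theta) - 90.0                # deviation from horizontal
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rectified = cv2.warpAffine(demeaned, M, (w, h))     # straighten the guideboard
    return torch.from_numpy(rectified)                  # Torch-format image
```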
Step 203, generating a second target guideboard image based on the guideboard image to be detected.
In some embodiments, the executing body may cut the guideboard image to be detected into a guideboard image of a preset size according to the position of the camera. The executing body may then remove noise in the clipped image by the image mean removal method, rectify the denoised guideboard image through Hough line transformation to convert it into a standard square guideboard image, and convert the rectified guideboard image into a trainable guideboard image (for example, a Torch-format guideboard image) by introducing an image processing function (for example, a Convert function), thereby obtaining the second target guideboard image. The preset size may be 1024 × 256.
In some optional implementations of some embodiments, the executing body may generate the second target guideboard image based on the guideboard image to be detected through the following steps:
firstly, performing image segmentation processing on the to-be-detected guideboard image to obtain a segmented guideboard image.
Specifically, the executing body may segment the guideboard image to be detected into guideboard images of a preset size according to the position of the camera by introducing a programming language (e.g., the Python programming language), obtaining the segmented guideboard image. The preset size may be 1024 × 256.
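A minimal cropping sketch; the window offsets, which the text derives from the camera position, are left as hypothetical parameters here.

```python
import numpy as np

def crop_to_preset(image: np.ndarray, x0: int, y0: int,
                   width: int = 1024, height: int = 256) -> np.ndarray:
    """Cut a preset-size (1024 x 256) window out of the guideboard image."""
    return image[y0:y0 + height, x0:x0 + width]
```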
And a second step of generating the second target guideboard image based on the segmented guideboard image.
Specifically, the executing body may remove noise in the segmented guideboard image by the image mean removal method, rectify the denoised guideboard image through Hough line transformation to convert it into a standard square guideboard image, and then convert the rectified guideboard image into a trainable guideboard image (for example, a Torch-format guideboard image) by introducing an image processing function (for example, a Convert function), thereby obtaining the second target guideboard image.
Step 204, generating a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model.
In some embodiments, the executing agent may input the first target guideboard image to a pre-trained neural network model to obtain the first detection result set. The executing agent may input the second target guideboard image to a pre-trained neural network model to obtain the second detection result set. And the first detection result in the first detection result set is used for representing the guideboard position information in the first target guideboard image. And the second detection result in the second detection result set is used for representing the guideboard position information in the second target guideboard image.
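A sketch of this inference step, assuming a PyTorch model whose forward pass returns the raw detections; the helper name and the output format are assumptions.

```python
import torch

def detect(model: torch.nn.Module, image: torch.Tensor):
    """Run the pre-trained model on one target image of shape (C, H, W)."""
    model.eval()
    with torch.no_grad():
        return model(image.unsqueeze(0))  # add a batch dimension

# first_detection_results = detect(model, first_target_image)
# second_detection_results = detect(model, second_target_image)
```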
In some optional implementations of some embodiments, the pre-trained neural network model includes a first feature extraction sub-network, a second feature extraction sub-network, a regression prediction sub-network, a classification prediction sub-network and a decoding sub-network, and the executing body may generate the first detection result set and the second detection result set based on the first target guideboard image, the second target guideboard image and the pre-trained neural network model through the following steps:
in the first step, the execution agent may input the first target guideboard image and the second target guideboard image into the first feature extraction sub-network, and obtain a first feature information set of the first target guideboard image and a first feature information set of the second target guideboard image.
Specifically, the first feature extraction sub-network may be a trunk feature extraction network, and may be configured to extract feature information of the first target guideboard image and the second target guideboard image at a first preset number of downsampling times. The first feature information set of the first target guideboard image may be used to represent feature information of the first target guideboard image extracted by the first feature extraction sub-network under a first preset number of downsampling times. The first feature information set of the second target guideboard image may be used to represent feature information of the second target guideboard image extracted by the first feature extraction sub-network under a first preset number of downsampling times. The first preset number of downsampling times may be 1 or 2.
In the second step, the executing agent may input the first feature information set of the first target guideboard image and the first feature information set of the second target guideboard image into the second feature extraction sub-network, obtaining a second feature information set of the first target guideboard image and a second feature information set of the second target guideboard image.
Specifically, the second feature extraction sub-network may be an enhanced feature extraction network, and may be configured to extract feature information of the first feature information set of the first target guideboard image and the feature information of the first feature information set of the second target guideboard image at a second preset number of times of down-sampling. The second feature information set of the first target guideboard image may be used to represent feature information of the first feature information set of the first target guideboard image extracted by the second feature extraction sub-network at a second predetermined number of times of down-sampling. The second feature information set of the second target guideboard image may be used to represent feature information of the first feature information set of the second target guideboard image extracted by the second feature extraction sub-network at a second predetermined number of times of down-sampling. The second predetermined number of downsampling times may be 3 times, 4 times, 5 times, 6 times, or 7 times.
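The two sub-networks can be pictured as a trunk that downsamples once or twice, followed by an enhanced stage that keeps downsampling and emits multi-scale features. The toy PyTorch sketch below illustrates that shape; the layer widths and depths are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractorSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # First sub-network (trunk): downsampling levels 1-2.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Second sub-network (enhanced): further downsampling levels 3-5.
        self.enhanced = nn.ModuleList([
            nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU())
            for cin, cout in [(64, 128), (128, 256), (256, 512)]
        ])

    def forward(self, x):
        first = self.trunk(x)           # first feature information
        second, f = [], first
        for stage in self.enhanced:
            f = stage(f)
            second.append(f)            # multi-scale second feature information
        return first, second
```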
Third, the executing agent may input the second feature information set of the first target guideboard image and the second feature information set of the second target guideboard image to the regression prediction subnetwork, respectively, to obtain a first regression prediction result set and a second regression prediction result set.
Specifically, the regression prediction subnetwork may be configured to output the guideboard position information in the guideboard image related to the input feature information, based on the input feature information. The guideboard position information may include a minimum abscissa value, a minimum ordinate value, a maximum abscissa value, and a maximum ordinate value. The first regression prediction result set may be a set including the guideboard position information in the first target guideboard image. The second regression prediction result set may be a set including the guideboard position information in the second target guideboard image.
As an example, the first set of regression prediction results may be { [ minimum abscissa value: 72, minimum ordinate value: 163, maximum abscissa value: 302, maximum ordinate value: 228], [ minimum abscissa value: 185, minimum ordinate value: 194, maximum abscissa value: 500, maximum ordinate value: 316], [ minimum abscissa value: 314, minimum ordinate value: 8, maximum abscissa value: 344, maximum ordinate value: 65]}. The second regression prediction result set may be { [ minimum abscissa value: 184, minimum ordinate value: 190, maximum abscissa value: 483, maximum ordinate value: 315], [ minimum abscissa value: 416, minimum ordinate value: 180, maximum abscissa value: 500, maximum ordinate value: 222]}.
In the fourth step, the executing agent may input the second feature information set of the first target guideboard image and the second feature information set of the second target guideboard image to the classification prediction subnetwork, respectively, to obtain a first classification prediction result set and a second classification prediction result set.
Specifically, the classification prediction subnetwork may be configured to output the guideboard type information in the guideboard image related to the input feature information, based on the input feature information. The above-mentioned guideboard type information may be information describing a guideboard type in the guideboard image. The guideboard type information may include a number of the guideboard type and the guideboard type. The first classification prediction result set may be a set including the guideboard type information in the first target guideboard image. The second classification prediction result set may be a set including the guideboard type information in the second target guideboard image.
As an example, the first classification prediction result set may be { [1, guideboard ], [1, guideboard ] }. The second classification prediction result set may be { [1, guideboard ], [1, guideboard ] }.
In the fifth step, the execution agent may input the first regression prediction result set and the first classification prediction result set to the decoding subnetwork, so as to obtain the first detection result set.
Specifically, the decoding subnetwork may be configured to decode the input first regression prediction result set and first classification prediction result set, obtaining a first decoding result set. The first decoding result set is then subjected to non-maximum suppression processing, and the first classification prediction results in the first classification prediction result set are matched one-to-one with the first decoding results in the suppressed first decoding result set, obtaining the first detection result set. The first detection result set is used for representing the guideboard position information, the guideboard type and the confidence of the guideboard position information in the first target guideboard image. The non-maximum suppression processing may be processing that traverses each element in the set and suppresses the elements that are not maxima. A first detection result in the first detection result set may include a first decoding result and the first classification prediction result corresponding to it.
As an example, the first detection result set may be { [ first decoding result: 72, 163, 302, 228, 0.95, first classification prediction result: 1, guideboard ], [ first decoding result: 185, 194, 500, 316, 0.83, first classification prediction result: 1, guideboard ], [ first decoding result: 314, 8, 344, 65, 0.89, first classification prediction result: 1, guideboard ] }.
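For concreteness, a sketch of the greedy IoU-based form of the non-maximum suppression step described above; the 0.5 threshold and the [x_min, y_min, x_max, y_max] box layout are assumptions.

```python
import numpy as np

def non_max_suppression(boxes: np.ndarray, scores: np.ndarray, iou_thresh=0.5):
    """Keep the highest-confidence box among heavily overlapping boxes."""
    order = scores.argsort()[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop suppressed boxes
    return keep
```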
In the sixth step, the execution subject may input the second regression prediction result set and the second classification prediction result set to the decoding subnetwork to obtain the second detection result set.
Specifically, the decoding subnetwork may be configured to decode the input second regression prediction result set and second classification prediction result set, obtaining a second decoding result set. The second decoding result set is then subjected to non-maximum suppression processing, and the second classification prediction results in the second classification prediction result set are matched one-to-one with the second decoding results in the suppressed second decoding result set, obtaining the second detection result set. The second detection result set is used for representing the guideboard position information, the guideboard type and the confidence of the guideboard position information in the second target guideboard image. A second detection result in the second detection result set may include a second decoding result and the second classification prediction result corresponding to it.
As an example, the second detection result set may be { [ second decoding result: 184, 190, 483, 315, 0.87, second classification prediction result: 1, guideboard ], [ second decoding result: 416, 180, 500, 222, 0.86, second classification prediction result: 1, guideboard ] }.
Optionally, the neural network model may be generated by training through the following steps:
firstly, a training data information set is obtained. The execution main body can acquire the training data information set in a wired connection mode or a wireless connection mode. The training data information in the training data information set includes label information of the training data and a guideboard image corresponding to the label information.
And secondly, performing first data processing on the label information set included in the training data information set to obtain an index data set.
Specifically, the executing body may generate the index data set by introducing a programming language (e.g., the Python programming language). The index data set may be a set characterizing the names of the guideboard images included in the guideboard image information set. The guideboard image information may include a guideboard image name and a guideboard image format.
As an example, the set of guideboard image information may be { [ image name: 0001, image format: JPEG (Joint Photographic Experts Group) ], [ image name: 0002, image format: JPEG ], [ image name: 0003, image format: JPEG ] }. The index data set may be [0001, 0002, 0003].
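A sketch of how such an index data set might be derived, assuming one label file per guideboard image in a hypothetical annotations directory:

```python
import os

# Hypothetical layout: one label file per guideboard image in `annotations/`.
label_dir = "annotations"
index_data = sorted(
    os.path.splitext(name)[0]   # "0001.xml" -> "0001"
    for name in os.listdir(label_dir)
)
# e.g. index_data == ["0001", "0002", "0003"], matching the example above
```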
And thirdly, performing second data processing on the training data information set based on the index data set and a preset training category to obtain a processed training data information set.
Specifically, the preset training category may represent the category of guideboard images that need to be trained. The executing body may read the absolute path of the guideboard image, the guideboard position information in the image, and the guideboard type information from the training data information set through the index data set and the preset training category by introducing a programming language (e.g., the Python programming language).
As an example, the preset training category may be [1, guideboard]. The processed training data information set may be { [ absolute path of guideboard image: D:\tunian\0001.jpg, guideboard position information: 72, 163, 302, 228, guideboard type information: 1, guideboard ], [ absolute path of guideboard image: D:\tunian\0002.jpg, guideboard position information: 185, 194, 500, 316, guideboard type information: 1, guideboard ] }.
And fourthly, inputting the processed training data information set to the built neural network model based on the preset iteration number so as to generate the pre-trained neural network model.
Specifically, the executing body may input the processed training data information set to a built neural network model for iterative training according to a preset number of iterations, so as to obtain a trained neural network model. The constructed neural network model can be a neural network (for example, an EfficientDet neural network) model capable of performing regression prediction and classification prediction. The built neural network model can comprise a convolutional layer, a pooling layer, an activation function and a full connection layer. The preset number of iterations may be 1000.
As an example, the pre-trained neural network model may be a neural network model obtained by training the built neural network model with an initial learning rate of 0.0001, an iteration number of 1000, and a weight of 0.00001.
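A training-loop sketch matching the quoted hyper-parameters (initial learning rate 0.0001, 1000 iterations, and 0.00001 read here as a weight-decay coefficient, which is an assumption); compute_loss stands in for the regression-plus-classification loss and is hypothetical.

```python
import torch

def compute_loss(outputs, targets) -> torch.Tensor:
    """Hypothetical stand-in for the regression + classification loss."""
    raise NotImplementedError("depends on the detection head's output format")

def train(model, loader, iterations: int = 1000, lr: float = 1e-4,
          weight_decay: float = 1e-5, device: str = "cpu"):
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    step = 0
    while step < iterations:                 # re-iterate the loader until done
        for images, targets in loader:
            loss = compute_loss(model(images.to(device)), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= iterations:
                return model
    return model
```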
Step 205, performing data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
In some embodiments, the executing body may traverse the first detection result set and the second detection result set and calculate the degree of overlap between the guideboard position information included in a first detection result and the guideboard position information included in the corresponding second detection result. Whether a guideboard has been repeatedly detected is judged according to the degree of overlap. For repeatedly detected guideboards, the confidences of the repeated pieces of guideboard position information are compared and the one with the higher confidence is kept; guideboard position information that is not repeatedly detected is kept directly. The guideboard detection result set is thus obtained.
In some optional implementations of some embodiments, the executing body may fuse the first detection result set and the second detection result set to obtain the guideboard detection result set through the following steps:
in the first step, the execution body may perform image stacking processing on the first detection result set and the second detection result set to obtain a third detection result set.
Specifically, the image stacking process of the first detection information set and the second detection information set may be performed such that the execution subject sequentially adds the first detection information set and the second detection information set to an empty set by obstacle type information and groups the first detection information set and the second detection information set.
In the second step, the execution subject may determine a third detection result group satisfying a preset condition in the third detection result group set as a target detection result group, so as to obtain a target detection result group set.
Specifically, the preset condition may be that the third detection result group contains two third detection results.
And thirdly, the executing body can perform rectangular frame repetitive detection on each target detection result group in the target detection result group set to generate a repetitive detection result and obtain a repetitive detection result set.
Specifically, the executing body may input the target detection result set to a preset mathematical formula to obtain a repetitive detection result. The preset mathematical formula may be an exponential function formula.
Optionally, the target detection result in the target detection result group set may include a first coordinate and a second coordinate; and the executing body may perform rectangular frame repetitive detection on each target detection result group in the target detection result group set to generate a repetitive detection result, and may include the following steps:
inputting the target detection result group into the following formula to obtain a repeatability detection result:
$$R = \frac{I}{U} - \frac{C - U}{C}$$

where

$$I = \max\big(0,\ \min(x_{12}, x_{22}) - \max(x_{11}, x_{21})\big) \times \max\big(0,\ \min(y_{12}, y_{22}) - \max(y_{11}, y_{21})\big)$$

$$U = (x_{12} - x_{11})(y_{12} - y_{11}) + (x_{22} - x_{21})(y_{22} - y_{21}) - I$$

$$C = \big(\max(x_{12}, x_{22}) - \min(x_{11}, x_{21})\big) \times \big(\max(y_{12}, y_{22}) - \min(y_{11}, y_{21})\big)$$

wherein $R$ represents the repetitive detection result; $\max$ indicates that the maximum value is sought and $\min$ that the minimum value is sought; $i$ indicates the serial number of a target detection result in the target detection result group ($i = 1, 2$); $x$ indicates an abscissa value and $y$ an ordinate value; $x_{i1}$ and $y_{i1}$ represent the abscissa value and ordinate value included in the first coordinate of the $i$-th target detection result in the group; $x_{i2}$ and $y_{i2}$ represent the abscissa value and ordinate value included in the second coordinate of the $i$-th target detection result in the group; $I$ represents the overlap area between the two target detection results, $U$ their union area, and $C$ the area of their minimum closure region.
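A minimal Python sketch of the formula above (a GIoU-style measure); boxes are (x_min, y_min, x_max, y_max) tuples, i.e. the first and second coordinates of a target detection result, and the function name is an assumption.

```python
def repetition_score(box_a, box_b) -> float:
    """IoU of two boxes minus (closure area - union area) / closure area."""
    # Overlap area I.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area U.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Minimum closure area C (smallest rectangle enclosing both boxes).
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    closure = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (closure - union) / closure
```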
And fourthly, the executing body may update the stacked detection result set based on the repetitive detection result set to obtain the guideboard detection result set.
Specifically, in response to a repetitive detection result in the repetitive detection result set being greater than a preset threshold, the executing body may treat the corresponding stacked detection results as a repeatedly detected guideboard. For repeatedly detected guideboards, the confidences of the repeated pieces of guideboard position information are compared and the one with the higher confidence is kept in the stacked detection result set; guideboard position information that is not repeatedly detected is kept directly. The updating of the stacked detection result set is thereby completed, yielding the guideboard detection result set. The preset threshold may be 0.8.
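Putting the last two steps together, a fusion sketch using the repetition_score helper above; the triple layout (box, confidence, type) and the function name are assumptions.

```python
def fuse(stacked, threshold: float = 0.8):
    """For each stacked group of two detections, keep the higher-confidence
    one if they are judged to be the same repeatedly detected guideboard,
    otherwise keep both."""
    results = []
    for (box_a, conf_a, type_a), (box_b, conf_b, type_b) in stacked:
        if repetition_score(box_a, box_b) > threshold:   # repeated detection
            results.append((box_a, conf_a, type_a) if conf_a >= conf_b
                           else (box_b, conf_b, type_b))
        else:                                            # distinct guideboards
            results.extend([(box_a, conf_a, type_a), (box_b, conf_b, type_b)])
    return results
```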
The above formula and its related content serve as an invention point of the embodiments of the present disclosure, and solve the technical problem mentioned in the background art that "the detected guideboards cannot be screened, so that the guideboard detection result is inaccurate, and further, a certain danger is easy to occur when the vehicle automatically drives according to the guideboard detection result". The factor that makes such danger likely is the following: the guideboards detected by the neural network may be detected repeatedly, and if the repeatedly detected guideboards cannot be screened out, a non-existent guideboard is reported in the guideboard image, so that the guideboard detection result is inaccurate. If this factor is addressed, the repeatedly detected guideboards are screened out and the accuracy of the guideboard detection result is improved, which in turn reduces the risk of driving the vehicle automatically with reference to the guideboard detection result. To achieve this effect, the formula introduced here considers, for each target detection result group, not only the overlap region between the target detection results but also the non-overlapping region and the minimum closure region between them. First, the degree of overlap of the overlap region between the target detection results is calculated. Then, the ratio of the non-overlapping region to the minimum closure region is calculated. Finally, this ratio is subtracted from the degree of overlap to obtain the repetitive detection result. Because the non-overlapping region is taken into account along with the overlap region, the repetitiveness between target detection results is reflected better, and the reliability of the repetitive detection result is improved. The repeatedly detected guideboards can thus be screened out, the accuracy of the guideboard detection result is improved, and the danger that may arise when the vehicle drives automatically with reference to the guideboard detection result is reduced.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a guideboard detection apparatus. These apparatus embodiments correspond to the method embodiments described above with reference to fig. 2, and the apparatus may be applied in various electronic devices.
As shown in fig. 3, the guideboard detection apparatus 300 of some embodiments includes: an acquisition unit 301, a normalization processing unit 302, a first generation unit 303, a second generation unit 304, and a data fusion processing unit 305. The acquisition unit 301 is configured to acquire a guideboard image to be detected; a normalization processing unit 302 configured to perform normalization processing on the guideboard image to be detected to obtain a first target guideboard image; a first generating unit 303 configured to generate a second target guideboard image based on the guideboard image to be detected; a second generating unit 304, configured to generate a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, where a first detection result in the first detection result set is used to represent guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used to represent guideboard position information in the second target guideboard image; and a data fusion processing unit 305 configured to perform data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 400 suitable for use in implementing some embodiments of the present disclosure is shown. The server shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a to-be-detected guideboard image; normalizing the to-be-detected guideboard image to obtain a first target guideboard image; generating a second target guideboard image based on the guideboard image to be detected; generating a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, wherein a first detection result in the first detection result set is used for representing guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used for representing guideboard position information in the second target guideboard image; and performing data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquisition unit, a normalization processing unit, a first generation unit, a second generation unit, and a data fusion processing unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as a "unit that acquires a to-be-detected guideboard image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept described above, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A method of guideboard detection, comprising:
acquiring a to-be-detected guideboard image;
normalizing the to-be-detected guideboard image to obtain a first target guideboard image;
generating a second target guideboard image based on the to-be-detected guideboard image;
generating a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, wherein a first detection result in the first detection result set is used for representing guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used for representing guideboard position information in the second target guideboard image;
and carrying out data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
2. The method according to claim 1, wherein the normalizing the to-be-detected guideboard image to obtain a first target guideboard image comprises:
performing image mean removal processing on the to-be-detected guideboard image to obtain a mean-removed guideboard image;
and generating the first target guideboard image based on the mean-removed guideboard image.
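A minimal Python sketch of the mean-removal step, assuming "image mean value removing" denotes subtracting a per-channel mean so that each channel becomes zero-centered; the claim itself does not fix the exact scheme:

import numpy as np

def remove_mean(image: np.ndarray) -> np.ndarray:
    # Compute one mean per channel over all pixels, then subtract it,
    # leaving a zero-centered, float-valued guideboard image.
    image = image.astype(np.float32)
    return image - image.mean(axis=(0, 1), keepdims=True)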
3. The method of claim 2, wherein the generating a second target guideboard image based on the to-be-detected guideboard image comprises:
carrying out image segmentation processing on the to-be-detected guideboard image to obtain a segmented guideboard image;
and generating the second target guideboard image based on the segmented guideboard image.
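The claim leaves the segmentation method open. One common reading is a fixed grid split, so that small, distant guideboards occupy proportionally more of each sub-image; the tile grid below is an assumption for illustration:

import numpy as np

def split_into_tiles(image: np.ndarray, rows: int = 2, cols: int = 2):
    # Cut the image into a rows x cols grid; each tile can be fed to the
    # detector at full input resolution, magnifying small guideboards.
    h, w = image.shape[0] // rows, image.shape[1] // cols
    return [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(rows) for c in range(cols)]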
4. The method of claim 3, wherein the pre-trained neural network model is generated by the following training steps:
acquiring a training data information set, wherein training data information in the training data information set comprises label information of training data and a guideboard image corresponding to the label information;
performing first data processing on a label information set included in the training data information set to obtain an index data set;
performing second data processing on the training data information set based on the index data set and a preset training category to obtain a processed training data information set;
and inputting the processed training data information set into the constructed neural network model for a preset number of iterations, so as to generate the pre-trained neural network model.
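Read procedurally: labels are mapped to integer indices (the first data processing), samples are filtered and re-encoded against the preset training categories (the second data processing), and the network is trained for a preset number of iterations. A hedged Python sketch; the record layout (the "label" and "image" keys) and the training_step interface are assumptions:

def build_label_index(training_data_infos):
    # First data processing: assign each distinct label a stable integer index.
    labels = sorted({info["label"] for info in training_data_infos})
    return {label: idx for idx, label in enumerate(labels)}

def preprocess(training_data_infos, label_index, training_categories):
    # Second data processing: keep only preset categories, attach class indices.
    return [{"image": info["image"], "class_id": label_index[info["label"]]}
            for info in training_data_infos
            if info["label"] in training_categories]

def train(model, processed_infos, num_iterations):
    # Train for a preset iteration count (optimizer details omitted).
    for step in range(num_iterations):
        sample = processed_infos[step % len(processed_infos)]
        model.training_step(sample)  # assumed model interface
    return model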
5. The method of claim 4, wherein the pre-trained neural network model comprises a first feature extraction sub-network, a second feature extraction sub-network, a regression prediction sub-network, a classification prediction sub-network, and a decoding sub-network; and
generating a first set of detection results and a second set of detection results based on the first target guideboard image, the second target guideboard image, and a pre-trained neural network model, including:
inputting the first target guideboard image and the second target guideboard image into the first feature extraction sub-network respectively to obtain a first feature information set of the first target guideboard image and a first feature information set of the second target guideboard image;
inputting the first feature information set of the first target guideboard image and the first feature information set of the second target guideboard image into the second feature extraction sub-network respectively to obtain a second feature information set of the first target guideboard image and a second feature information set of the second target guideboard image;
inputting the second feature information set of the first target guideboard image and the second feature information set of the second target guideboard image into the regression prediction sub-network respectively to obtain a first regression prediction result set and a second regression prediction result set;
inputting the second feature information set of the first target guideboard image and the second feature information set of the second target guideboard image into the classification prediction sub-network respectively to obtain a first classification prediction result set and a second classification prediction result set;
inputting the first regression prediction result set and the first classification prediction result set into the decoding sub-network to obtain the first detection result set;
and inputting the second regression prediction result set and the second classification prediction result set into the decoding sub-network to obtain the second detection result set.
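This decomposition matches a familiar one-stage detector layout: a shared backbone (first feature extraction), a neck (second feature extraction), parallel regression and classification heads, and a decoding step applied once per target image. A skeletal PyTorch sketch under those assumptions; every layer size is a placeholder, not the disclosed architecture:

import torch
import torch.nn as nn

class GuideboardDetector(nn.Module):
    def __init__(self, num_classes: int = 1):
        super().__init__()
        # First feature extraction sub-network (backbone stand-in).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Second feature extraction sub-network (neck stand-in).
        self.neck = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Regression and classification prediction sub-networks.
        self.regression_head = nn.Conv2d(64, 4, 1)                 # box offsets
        self.classification_head = nn.Conv2d(64, num_classes, 1)   # class scores

    def forward(self, image: torch.Tensor):
        features = self.neck(self.backbone(image))
        return self.regression_head(features), self.classification_head(features)

def decode(regression, classification, score_threshold: float = 0.5):
    # Decoding sub-network stand-in: keep grid cells whose score clears
    # the threshold; real decoding would also map offsets to image boxes.
    scores = classification.sigmoid()
    keep = scores > score_threshold
    return regression, scores, keep

Running both target guideboard images through one such instance and decoding each head pair separately mirrors how the claim derives the first and second detection result sets from a single pre-trained model.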
6. The method according to claim 5, wherein the performing data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set comprises:
carrying out image stacking processing on the first detection result set and the second detection result set to obtain a third detection result group set;
determining each third detection result group meeting preset conditions in the third detection result group set as a target detection result group, so as to obtain a target detection result group set;
performing rectangular frame repeatability detection on each target detection result group in the target detection result group set to generate a repeatability detection result, so as to obtain a repeatability detection result set;
and updating the stacked third detection result group set based on the repeatability detection result set to obtain the guideboard detection result set.
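Functionally, this fusion resembles duplicate suppression over the stacked detections. The sketch below pools the two sets, applies an assumed confidence floor as the "preset conditions", and drops boxes whose rectangular-frame overlap with an already-kept box is too high; the thresholds and the dictionary layout are illustrative assumptions:

def overlap_ratio(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) rectangles.
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_detections(first_set, second_set, min_score=0.3, overlap_threshold=0.5):
    # Image stacking stand-in: pool both detection result sets.
    stacked = list(first_set) + list(second_set)
    # Preset-condition stand-in: a confidence floor, best scores first.
    candidates = sorted((d for d in stacked if d["score"] >= min_score),
                        key=lambda d: d["score"], reverse=True)
    kept = []
    for det in candidates:
        # Rectangular frame repeatability detection against kept boxes.
        if all(overlap_ratio(det["box"], k["box"]) < overlap_threshold
               for k in kept):
            kept.append(det)
    return kept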
7. The method of claim 6, wherein each target detection result in the target detection result group set comprises a first coordinate and a second coordinate; and
the performing rectangular frame repeatability detection on each target detection result group in the target detection result group set to generate a repeatability detection result comprises:
inputting the target detection result group into the following formula to obtain a repeatability detection result:
$$r=\frac{\Big(\min\big(x_i^{(2)},\,x_j^{(2)}\big)-\max\big(x_i^{(1)},\,x_j^{(1)}\big)\Big)\times\Big(\min\big(y_i^{(2)},\,y_j^{(2)}\big)-\max\big(y_i^{(1)},\,y_j^{(1)}\big)\Big)}{\big(x_k^{(2)}-x_k^{(1)}\big)\times\big(y_k^{(2)}-y_k^{(1)}\big)}$$
wherein $r$ represents the repeatability detection result; $\times$ represents the product; $\max$ represents taking a maximum value; $\min$ represents taking a minimum value; $i$, $j$ and $k$ represent sequence numbers of target detection results in the target detection result group; $x$ represents an abscissa value; $y$ represents an ordinate value; $x_i^{(1)}$ and $x_i^{(2)}$ represent the abscissa values included in the first and second coordinates of the $i$-th target detection result in the target detection result group; $x_j^{(1)}$ and $x_j^{(2)}$ represent the abscissa values included in the first and second coordinates of the $j$-th target detection result in the target detection result group; $x_k^{(1)}$ and $x_k^{(2)}$ represent the abscissa values included in the first and second coordinates of the $k$-th target detection result in the target detection result group; $y_i^{(1)}$ and $y_i^{(2)}$ represent the ordinate values included in the first and second coordinates of the $i$-th target detection result in the target detection result group; $y_j^{(1)}$ and $y_j^{(2)}$ represent the ordinate values included in the first and second coordinates of the $j$-th target detection result in the target detection result group; and $y_k^{(1)}$ and $y_k^{(2)}$ represent the ordinate values included in the first and second coordinates of the $k$-th target detection result in the target detection result group.
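Numerically, the formula reads as the intersection area of the rectangles indexed by $i$ and $j$, normalized by the area of the rectangle indexed by $k$, where each rectangle is given by its (first coordinate, second coordinate) corner pair. A small Python sketch under that reading; the choice of the $k$-th rectangle as the normalizer follows the reconstruction above and is an interpretive assumption:

def repeatability(box_i, box_j, box_k):
    # Each box is (x1, y1, x2, y2): the abscissa and ordinate values of a
    # target detection result's first and second coordinates.
    inter_w = min(box_i[2], box_j[2]) - max(box_i[0], box_j[0])
    inter_h = min(box_i[3], box_j[3]) - max(box_i[1], box_j[1])
    if inter_w <= 0 or inter_h <= 0:
        return 0.0  # disjoint rectangles cannot be repeated detections
    ref_area = (box_k[2] - box_k[0]) * (box_k[3] - box_k[1])
    return (inter_w * inter_h) / ref_area

# Two near-identical detections of one guideboard, normalized by the
# first rectangle's area, yield a ratio close to 1 (a likely repeat).
print(repeatability((10, 10, 50, 40), (12, 11, 52, 42), (10, 10, 50, 40)))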
8. A guideboard detection device, comprising:
an acquisition unit configured to acquire a guideboard image to be detected;
the normalization processing unit is configured to perform normalization processing on the guideboard image to be detected to obtain a first target guideboard image;
a first generating unit configured to generate a second target guideboard image based on the guideboard image to be detected;
a second generating unit, configured to generate a first detection result set and a second detection result set based on the first target guideboard image, the second target guideboard image and a pre-trained neural network model, where a first detection result in the first detection result set is used to represent guideboard position information in the first target guideboard image, and a second detection result in the second detection result set is used to represent guideboard position information in the second target guideboard image;
and the data fusion processing unit is configured to perform data fusion processing on the first detection result set and the second detection result set to obtain a guideboard detection result set.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202110174410.8A 2021-02-07 2021-02-07 Guideboard detection method, device, equipment and computer readable medium Pending CN112528970A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110174410.8A CN112528970A (en) 2021-02-07 2021-02-07 Guideboard detection method, device, equipment and computer readable medium


Publications (1)

Publication Number Publication Date
CN112528970A 2021-03-19

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463105A (en) * 2014-11-19 2015-03-25 深圳市腾讯计算机系统有限公司 Guide board recognizing method and device
US9041798B1 (en) * 2008-07-07 2015-05-26 Lockheed Martin Corporation Automated pointing and control of high resolution cameras using video analytics
CN109886359A (en) * 2019-03-25 2019-06-14 西安电子科技大学 Small target detecting method and detection model based on convolutional neural networks
CN111401410A (en) * 2020-02-27 2020-07-10 江苏大学 Traffic sign detection method based on improved cascade neural network
CN111507327A (en) * 2020-04-07 2020-08-07 浙江大华技术股份有限公司 Target detection method and device
CN112232231A (en) * 2020-10-20 2021-01-15 城云科技(中国)有限公司 Pedestrian attribute identification method, system, computer device and storage medium
CN112232257A (en) * 2020-10-26 2021-01-15 青岛海信网络科技股份有限公司 Traffic abnormity determining method, device, equipment and medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733820A (en) * 2021-03-31 2021-04-30 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium
CN112733820B (en) * 2021-03-31 2021-07-27 禾多科技(北京)有限公司 Obstacle information generation method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210319