Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention. The following detailed description is made in conjunction with the accompanying drawings.
In the prior art, in the field of unmanned aerial vehicle (UAV) aerial photography, road detection is generally performed with a deep learning method, and the shooting angle of the UAV is adjusted according to the road detection result; however, the real-time performance of such learning-based methods is poor. In view of this technical problem, some exemplary embodiments of the present invention provide a solution, which will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a road center detection method according to an exemplary embodiment of the present invention, as shown in fig. 1, the method includes:
Step S101, performing binarization processing on the acquired scene image of the road to be detected according to the known road hue feature and road saturation feature to obtain a binarized scene image.
Step S102, removing the non-road area from the binarized scene image to obtain a binarized scene image containing the road area.
Step S103, performing image blocking on the binarized scene image containing the road area to obtain a plurality of image sub-blocks.
Step S104, performing line fitting according to the centroid features of the road sub-regions contained in the image sub-blocks to obtain a center line of the road to be detected.
In this embodiment, the process of performing binarization processing on the acquired scene image of the road to be detected may be implemented by combining an HSV (Hue, Saturation, Value) color model.
The road hue feature refers to the color feature represented by the H component in the HSV color model corresponding to a road image; it represents the color information of the road image. A road image is an image obtained by photographing a road. Typically, road hue features are expressed as angular quantities. The road saturation feature refers to the color feature expressed by the S component in the HSV color model corresponding to a road image; it represents the degree to which the color of the road image approaches a spectral color. The closer the color is to a spectral color, the higher its saturation.
Generally, the scene image of the road to be detected is an image in the RGB (Red, Green, Blue) color mode. In this embodiment, the binarization processing is not performed directly on the R, G, B color features of the scene image of the road to be detected, but on the hue and saturation features, which is advantageous in that: compared with RGB features, the hue and saturation features are visually intuitive, so that during binarization the road area and the non-road area can be divided accurately based on the road hue and saturation features, and the division result is closer to the real situation.
In this embodiment, the road hue feature and the road saturation feature may be obtained by analyzing a large number of road images in advance. In some cases, these features are affected by the lighting conditions under which the road image is captured. Accordingly, a model of the influence of illumination conditions on the road hue and saturation features can be established in advance by analyzing road images shot under different illumination conditions. When a scene image of the road to be detected is obtained, the road hue and saturation features can then be adjusted in real time according to the illumination conditions under which the scene image was shot, in combination with the pre-established influence model, so as to optimize the binarization of the scene image; this is not detailed further here.
After the binarized scene image is obtained, the non-road area can be removed from the binarized scene image to obtain a binarized scene image including a road area, and the binarized scene image including the road area is subjected to image blocking. When the image is partitioned, the binarized scene image including the road area may be divided into a plurality of image sub-blocks with equal size, or divided into a plurality of image sub-blocks with unequal size according to actual requirements, which is not limited in this embodiment.
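The image blocking described above can be illustrated with a minimal NumPy sketch (the function name and grid sizes are hypothetical, not taken from the embodiment); it divides the binarized scene image into an equal-sized grid, with edge blocks absorbing any remainder pixels:

```python
import numpy as np

def split_into_subblocks(binary_img, n_rows, n_cols):
    """Divide a binarized scene image into an n_rows x n_cols grid of
    roughly equal-sized sub-blocks."""
    h, w = binary_img.shape
    row_edges = np.linspace(0, h, n_rows + 1, dtype=int)
    col_edges = np.linspace(0, w, n_cols + 1, dtype=int)
    blocks = []
    for i in range(n_rows):
        for j in range(n_cols):
            blocks.append(binary_img[row_edges[i]:row_edges[i + 1],
                                     col_edges[j]:col_edges[j + 1]])
    return blocks

# hypothetical 120x90 binarized image split into a 4x3 grid
img = np.zeros((120, 90), dtype=np.uint8)
blocks = split_into_subblocks(img, 4, 3)
assert len(blocks) == 12
```

Unequal-sized blocking, as the embodiment also permits, would simply pass custom row and column edges instead of the uniform `linspace` grid.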
It should be understood that after the binarized scene image containing the road area is divided into a plurality of image sub-blocks, the road area will be distributed in different image sub-blocks, and for convenience of description, the portion of the road area falling in the image sub-blocks is referred to as a road sub-area. Then, the centroid characteristics of the road sub-area included in the plurality of image sub-blocks can be acquired. The centroid characteristics may include, but are not limited to, a position characteristic of the centroid, a pixel gray value characteristic at the centroid, and the like. And then, line fitting can be carried out based on the centroid characteristics of the road subregions contained in the image sub-blocks, and the line obtained by fitting is used as the central line of the road to be detected. The method for blocking the binarized scene image and fitting the lines according to the centroid characteristics of each image sub-block can enable the lines obtained by fitting to be closer to the real central lines of the road to be detected, reduce the calculated amount and improve the calculating speed under the condition of ensuring the accuracy of the detection result.
In the embodiment, according to the known road hue characteristic and road saturation characteristic, the scene image of the road to be detected can be subjected to binarization processing to obtain a binarized scene image; then, removing non-road areas from the binarized scene image to obtain a binarized scene image containing road areas, and performing image blocking on the binarized scene image containing road areas to obtain a plurality of image sub-blocks; and based on the centroid characteristics of the road subregions contained in the image subblocks, line fitting can be carried out to obtain the central line of the road to be detected. Based on the embodiment, after the scene image of the road to be detected is acquired, the central line of the road to be detected is extracted from the scene image of the road to be detected, and the real-time performance is high.
In the above and following embodiments, the scene image of the road to be detected may be captured by an onboard device that includes a camera. For example, the onboard device may be implemented as an aerial photography device mounted on an unmanned aerial vehicle, the aerial photography device comprising a high-speed camera for shooting the road to be detected.
For convenience of description, the scene image of the road to be detected is hereinafter referred to simply as the scene image; subsequent references to the scene image should be understood as the scene image of the road to be detected.
It should be noted that, in the above or following embodiments of the present invention, after the binarized scene image is obtained, denoising processing may be further performed on the binarized scene image, so as to improve the accuracy of road center detection.
For example, in some scenes, a gaussian smoothing process may be performed on the scene image to reduce gaussian noise of the scene image.
For example, in some scenarios, morphological erosion algorithms and dilation algorithms may be used to remove subtle noise, such as subtle noise points and short line segments, from the binarized scene image.
For another example, in some scenes, the area of each connected region on the binarized scene image may be calculated, and connected regions having an area smaller than a set area threshold may be removed. The removal of the connected region with the area smaller than the set area threshold means that the pixel grayscale value of the connected region with the area smaller than the set area threshold is set as the pixel grayscale value of the non-road region.
As another example, in some scenarios, hole filling may be performed on the binarized scene image. Optionally, when filling the hole, a flooding filling method may be adopted, a seed point is selected in the binarized scene image, and then the pixel gray value of the region connected with the seed point is set as the pixel gray value of the pixel point at the periphery of the hole, which is not described again.
Optionally, in this embodiment, each of the above denoising operations may be performed on the binarized scene image individually, or all of them may be performed in combination, so as to further improve the accuracy of road center detection.
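The denoising operations above can be sketched as follows. This is only a minimal illustration using SciPy's morphology utilities; the structuring element, area threshold, and function names are hypothetical choices, not specified by the embodiment:

```python
import numpy as np
from scipy import ndimage

def denoise_binary(img, min_area=20):
    """Sketch of the denoising pipeline: morphological opening (erosion
    then dilation) removes fine noise points and short segments, connected
    regions smaller than min_area are set to the non-road gray value,
    and holes inside road regions are filled."""
    mask = img.astype(bool)
    # erosion + dilation removes subtle noise
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    # clear connected regions whose area is below the set area threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    for lbl, area in enumerate(areas, start=1):
        if area < min_area:
            mask[labels == lbl] = False
    # flood-fill style hole filling inside the remaining regions
    mask = ndimage.binary_fill_holes(mask)
    return mask.astype(np.uint8)
```

In practice each stage could also be applied on its own, matching the embodiment's note that the operations may be performed individually or in combination.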
Optionally, in some embodiments, when performing binarization processing on the acquired scene image of the road to be detected according to the known road hue feature and road saturation feature, the processing may be performed directly on the scene image in the RGB color mode, without first converting the entire image, as described in detail below.
For any pixel point in the scene image, the RGB characteristics of the pixel point can be extracted, and the RGB characteristics of the pixel point are mapped into hue (H) characteristics and saturation (S) characteristics in the HSV color model. And then, determining to divide the pixel into a road region or a non-road region by combining the mapped hue (H) feature and saturation (S) feature and the known road hue feature and road saturation feature. After the operation is executed for each pixel point in the scene image, the operation of carrying out binarization processing on the scene image is completed.
Optionally, in other embodiments, before the scene image is subjected to binarization processing, the scene image in the RGB color mode may be converted into a scene image in the HSV color mode, and the scene image is subjected to binarization processing based on hue (H) characteristics and saturation (S) characteristics of each pixel point in the scene image in the HSV color mode, and known road hue characteristics and road saturation characteristics.
The two binarization processing embodiments are optional embodiments of the present invention, and the road center detection method provided by the present invention will be specifically described below with reference to fig. 2a by taking the example of converting the scene image in the RGB color mode into the scene image in the HSV color mode in advance.
Fig. 2a is a schematic flowchart of a road center detection method according to an exemplary embodiment of the present invention, and as shown in fig. 2a, the method includes:
step S201, a scene image of a road to be detected is obtained, and a hue component image and a saturation component image are extracted from the scene image.
Step S202, performing binarization processing on the pixel gray scale of the hue component image according to the known road hue feature to obtain a binarized hue image, and performing binarization processing on the pixel gray scale of the saturation component image according to the known road saturation feature to obtain a binarized saturation image.
Step S203, performing a bitwise logical AND operation on the binarized hue image and the binarized saturation image to obtain a binarized scene image.
Step S204, drawing a surrounding frame of at least one connected region in the binarized scene image, and screening out a connected region which does not accord with the road characteristics from the at least one connected region according to the shape characteristics and the area characteristics of the surrounding frame of the at least one connected region.
Step S205, the pixel gray value of the connected region which does not accord with the road characteristics is set as the pixel gray value corresponding to the non-road region.
Step S206, performing image blocking on the binarized scene image containing the road area to obtain a plurality of image sub-blocks.
Step S207, respectively calculating the centroids of the road subregions contained in the image subblocks according to the pixel distribution of the road subregions contained in the image subblocks, and screening out the centroids with the pixel gray level of the centroids larger than a set gray threshold value from the centroids of the road subregions contained in the image subblocks as effective centroids.
Step S208, when the number of the effective centroids is greater than a set number threshold, performing line fitting according to the effective centroids to obtain a center line of the road to be detected.
In step S201, optionally, in one case, the onboard device may perform frame-by-frame shooting on the road to be detected, so as to obtain multiple frames of scene images of the road to be detected, so as to detect the center of the road. In another case, the airborne equipment can perform video shooting on the road to be detected to obtain video data, and reads a plurality of frames of scene images of the road to be detected from the video data so as to be used for detecting the center of the road.
Optionally, in some embodiments, before performing the road center detection, the scene image may be compressed, and the data amount of the scene image may be reduced, so as to increase the speed of the subsequent road center detection.
In this embodiment, the scene image in the RGB color mode may be converted into a scene image in the HSV color mode in advance. Then, a hue (H) component image and a saturation (S) component image are extracted from the scene image in the HSV color mode.
After acquiring the hue (H) component image and the saturation (S) component image, in step S202, the two component images may be processed respectively.
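The conversion and component extraction can be illustrated with a small NumPy sketch of the standard RGB-to-HSV mapping (the function name is hypothetical); hue is returned in degrees, matching the angular representation mentioned earlier, and saturation in [0, 1]:

```python
import numpy as np

def extract_hue_saturation(rgb):
    """Map an RGB image (floats in [0, 1]) to hue and saturation
    component images via the standard RGB->HSV conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    delta = cmax - cmin
    safe_delta = np.where(delta > 0, delta, 1)  # avoid divide-by-zero
    hue = np.zeros_like(cmax)
    nz = delta > 0
    # piecewise hue definition depending on which channel is the maximum
    rmax = nz & (cmax == r)
    gmax = nz & (cmax == g) & ~rmax
    bmax = nz & (cmax == b) & ~rmax & ~gmax
    hue[rmax] = ((60 * (g - b) / safe_delta) % 360)[rmax]
    hue[gmax] = (60 * (b - r) / safe_delta + 120)[gmax]
    hue[bmax] = (60 * (r - g) / safe_delta + 240)[bmax]
    # saturation: distance of the color from gray, relative to brightness
    sat = np.where(cmax > 0, delta / np.where(cmax > 0, cmax, 1), 0)
    return hue, sat
```

In a real pipeline a library conversion (e.g. an OpenCV-style `cvtColor`) would typically be used instead; the sketch only makes the H and S definitions explicit.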
Optionally, for the hue component image, it may be judged whether the hue value of each pixel point matches the known road hue feature, and the pixel points are divided into the road region or the non-road region according to the matching result. Optionally, the road hue feature may be expressed as the hue values of the pixel points of the road area being greater than a set hue threshold.
Based on the above, it can be judged whether the hue value of each pixel point on the hue component image is greater than the set hue threshold; if so, the pixel point is divided into the road region, and if the hue value is less than or equal to the threshold, the pixel point is divided into the non-road region. In some embodiments, the pixel gray value for the road region may be set to 1 and the pixel gray value for the non-road region may be set to 0. That is, in the hue component image, the pixel gray value of a pixel whose hue value is greater than the set hue threshold is set to 1, and that of a pixel whose hue value is less than or equal to the threshold is set to 0, as shown in formula 1:
dH(x, y) = 1, if H(x, y) > H0; dH(x, y) = 0, if H(x, y) ≤ H0    (formula 1)
wherein H(x, y) represents the hue value of the pixel point with coordinates (x, y), dH(x, y) represents the pixel gray value of that pixel point after binarization, and H0 represents the hue threshold.
Optionally, for the saturation component image, it may be judged whether the saturation of each pixel point matches the known road saturation feature, and the pixel points are divided into the road region or the non-road region according to the matching result. Optionally, the road saturation feature may be expressed as the saturation of the pixel points of the road area being greater than a set saturation threshold.
Based on this, it can be judged whether the saturation of each pixel point on the saturation component image is greater than the set saturation threshold; if so, the pixel point is divided into the road region, and if the saturation is less than or equal to the threshold, the pixel point is divided into the non-road region. In some embodiments, the pixel gray value for the road region may be set to 1 and that for the non-road region to 0. Accordingly, the gray value of a pixel whose saturation is greater than the set saturation threshold is set to 1, and that of a pixel whose saturation is less than or equal to the threshold is set to 0, as shown in formula 2:
dS(x, y) = 1, if S(x, y) > S0; dS(x, y) = 0, if S(x, y) ≤ S0    (formula 2)
wherein S(x, y) represents the saturation value of the pixel point with coordinates (x, y), dS(x, y) represents the pixel gray value of that pixel point after binarization, and S0 represents the saturation threshold.
Based on the above processing, a binarized tone image and a binarized saturation image can be obtained, and then, step S203 can be executed to perform bitwise logical and operation on the binarized tone image and the binarized saturation image to obtain a binarized scene image. The bitwise logical and operation can be expressed as formula 3:
dSH(x, y) = dS(x, y) & dH(x, y)    (formula 3)
wherein & represents the bitwise logical AND operation, and dSH(x, y) represents the pixel gray value of the pixel point with coordinates (x, y) after the AND operation. Through the bitwise AND operation, the scene image is binarized by combining two different image features, which is beneficial to making the binarization result closer to the real situation.
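Formulas 1 through 3 can be sketched together as follows; the thresholds H0 and S0 are assumed inputs (empirical values in the embodiment), and the function name is hypothetical:

```python
import numpy as np

def binarize_scene(hue, sat, h0, s0):
    """Threshold the hue component image against H0 (formula 1) and the
    saturation component image against S0 (formula 2), then combine the
    two binary images with a bitwise logical AND (formula 3)."""
    d_h = (hue > h0).astype(np.uint8)  # formula 1: dH(x, y)
    d_s = (sat > s0).astype(np.uint8)  # formula 2: dS(x, y)
    return d_s & d_h                   # formula 3: dSH(x, y)
```

A pixel is kept as road (gray value 1) only when both its hue and its saturation match the known road features, which is exactly what the AND combination expresses.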
Next, in step S204, a bounding box of at least one connected region in the binarized scene image may be rendered. The bounding box refers to a frame surrounding the connected region, and generally, the bounding box can embody the outline characteristics of the connected region. Optionally, in this embodiment, a minimum rectangular bounding box or a minimum circular bounding box of the connected region may be drawn as the bounding box of the connected region.
Then, the shape characteristic and the area characteristic of the surrounding frame of the at least one connected region are obtained, and the connected region which does not accord with the road characteristic is screened out from the at least one connected region based on the shape characteristic and the area characteristic of the surrounding frame of the at least one connected region.
In some scenarios, such as a scene of tracking and shooting vehicles on a road, the onboard device usually shoots the road to be detected along the extending direction of the road. Then, in the captured scene image, the connected region conforming to the road characteristics should show the shape characteristics of the road, and the road has the characteristics of extensibility, continuity and regularity. Based on this, whether each connected region meets the road characteristics is judged one by one, and which connected regions are positioned in the road region can be determined from at least one connected region.
Optionally, in this embodiment, when determining whether the connected region satisfies the road characteristic for any one of the at least one connected region, at least one of the following determination operations may be performed:
The first: calculating the distance H1 between the top of the bounding box of the connected region and the top of the binarized scene image, and the distance H2 between the bottom of the bounding box and the bottom of the binarized scene image, as shown in figs. 2b and 2c; then judging whether both H1 and H2 are greater than a set size threshold Ht.
When the onboard apparatus photographs along the road extending direction, in one case, the road does not turn. In this case, since the roads have a certain continuity, in the scene image captured by the onboard device, the top of the road area is generally flush with the top of the scene image, and the bottom of the road area is generally flush with the bottom of the scene image. Therefore, for the connected region conforming to the road feature, the distance between the top of the surrounding frame and the top of the binarized scene image is less than or equal to the set size threshold, and the distance between the bottom of the surrounding frame and the bottom of the binarized scene image is also less than or equal to the set size threshold. Optionally, the distance described in this embodiment refers to a minimum distance.
A typical non-turning road area is shown in fig. 2b: the top of the bounding box of the road area intersects the top of the binarized scene image, i.e., the minimum distance between them is 0; likewise, the bottom of the bounding box intersects the bottom of the binarized scene image, i.e., the minimum distance between them is 0.
In another case, when the road turns, the following situations may occur in the scene image captured by the onboard device: the top of the road area is flush with the top of the scene image, but the bottom of the road area is a greater distance away from the bottom of the scene image; alternatively, the top of the road area is a large distance from the top of the scene image, but the bottom of the road area is level with the bottom of the scene image. In this case, for the connected region conforming to the road feature, the distance between the top of the bounding box and the top of the binarized scene image may be smaller than or equal to the set size threshold, or the distance between the bottom of the bounding box and the bottom of the binarized scene image may be smaller than or equal to the set size threshold.
A typical turning road area is shown in fig. 2c: the top of the bounding box of the road area is at a certain distance from the top of the binarized scene image, while the bottom of the bounding box intersects the bottom of the binarized scene image, i.e., the minimum distance between them is 0.
Based on the above, when the distance H1 between the top of the bounding box of the connected region and the top of the binarized scene image and the distance H2 between the bottom of the bounding box of the connected region and the bottom of the binarized scene image are both greater than the set size threshold Ht, it is determined that the connected region does not satisfy the requirement for the continuity of the road.
The second: judging whether the ratio of the area of the bounding box of the connected region to the area of the connected region falls outside the set area ratio range.
It should be appreciated that, in general, roads have a relatively regular shape, so the area of the bounding box drawn for a road region should be close to the area of the road region, as shown in figs. 2b and 2c. Based on this, the area S1 of the connected region can be calculated, and after the bounding box of the connected region is drawn, its area S2 is calculated. Then, the ratio of S2 to S1 is computed, and it is judged whether this ratio falls outside the set area ratio range, as shown in formula 4:
S2/S1 < Vmin or S2/S1 > Vmax    (formula 4)
wherein S2 can be calculated from the length and width of the bounding box, and Vmin and Vmax are the lower and upper limits of the set area ratio range. In this embodiment, the length refers to the size of the long side of the bounding box and the width to that of the short side, which will not be repeated below.
If the ratio of S2 and S1 is not within the set area ratio range, it may be determined that the connected component does not conform to the shape feature of the road and the regularity feature of the road.
The third: judging whether the aspect ratio of the bounding box of the connected region is smaller than a set aspect ratio threshold.
Generally, a road has extensibility and continuity. When the onboard device shoots the road along its extending direction, because the shooting position is high, the length of the road area in the captured scene image should be greater than its width, as shown in figs. 2b and 2c. Based on this, after the bounding box of the connected region is drawn, its aspect ratio can be calculated, and it can be judged whether the aspect ratio is smaller than the set aspect ratio threshold; if so, the connected region can be considered to lack extensibility and continuity.
Of course, the above three judgment operations are only exemplary; in practice, whether a connected region conforms to the road features may also be judged by other features, which is not described again. In an embodiment of the present invention, the connected region may be determined not to conform to the road features when at least one of the above judgments is yes. Preferably, in some embodiments, the connected region is determined not to conform to the road features only when all three judgment results are yes, so as to avoid a large deviation between the finally fitted line and the real road center line.
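The three judgment operations can be combined as in the following sketch. All threshold values here are hypothetical empirical choices, the bounding box is assumed to be given as (x, y, width, height) in image coordinates, and, per the preferred embodiment, a region is discarded only when all three checks fail:

```python
def fails_road_checks(bbox, region_area, img_h,
                      ht=10, v_min=1.0, v_max=2.5, aspect_min=1.5):
    """Return True when the connected region should be screened out as
    non-road. bbox = (x, y, w, h); region_area = S1; img_h = image height.
    Thresholds ht, v_min, v_max, aspect_min are assumed empirical values."""
    x, y, w, h = bbox
    # check 1 (continuity): both H1 and H2 exceed the size threshold Ht
    h1 = y                      # top of bounding box to top of image
    h2 = img_h - (y + h)        # bottom of bounding box to bottom of image
    continuity_fail = h1 > ht and h2 > ht
    # check 2 (regularity): S2/S1 outside [Vmin, Vmax], per formula 4
    s2 = w * h
    ratio = s2 / region_area
    regularity_fail = not (v_min <= ratio <= v_max)
    # check 3 (extensibility): aspect ratio (long side / short side) too small
    length, width = max(w, h), min(w, h)
    extensibility_fail = (length / width) < aspect_min
    # preferred embodiment: discard only when all three checks fail
    return continuity_fail and regularity_fail and extensibility_fail
```

Using `or` instead of the final `and` would give the stricter variant in which any single failed check screens the region out.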
After the connected regions not conforming to the road characteristics are screened out, step S205 may be executed to set the pixel grayscale value of the connected region not conforming to the road characteristics as the pixel grayscale value corresponding to the non-road region.
Next, step S206 is executed to perform image blocking on the binarized scene image containing the road region to obtain a plurality of image sub-blocks, as shown in fig. 2d.
In this embodiment, the number of rows Ncol and the number of columns Nrow of image sub-blocks may be selected based on empirical values. The number of rows Ncol may be selected such that the fitted line is smooth while less time cost is consumed. The number of columns Nrow may be selected such that the fitted line is closer to the center line of the road to be detected while less time cost is consumed.
After obtaining the plurality of image sub-blocks, step S207 may be executed to calculate the centroids of the road sub-regions included in the plurality of image sub-blocks according to the pixel distribution of the road sub-regions included in the plurality of image sub-blocks, and select the centroid, in which the pixel gray level of the centroid is greater than the set gray level threshold, as the effective centroid from the centroids of the road sub-regions included in the plurality of image sub-blocks.
In this embodiment, when calculating the centroid of the road sub-region included in the image sub-block for any one of the image sub-blocks, a pixel gray value matrix of the road sub-region may be obtained in advance to obtain pixel distribution of the road sub-region in the image sub-block, and then the centroid of the road sub-region is calculated based on the pixel gray value matrix.
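The centroid computation from the pixel gray value matrix can be sketched as an intensity-weighted mean of the pixel coordinates (the function name is hypothetical):

```python
import numpy as np

def subblock_centroid(block):
    """Centroid of the road sub-region in one image sub-block, computed
    from its pixel gray value matrix as the gray-weighted mean of pixel
    coordinates. Returns None when the block contains no road pixels."""
    total = block.sum()
    if total == 0:
        return None
    ys, xs = np.indices(block.shape)
    cy = (ys * block).sum() / total
    cx = (xs * block).sum() / total
    return cx, cy
```

For a binarized block (gray values 0/1) this reduces to the plain average of the road pixels' coordinates.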
Optionally, one way of determining whether the centroid of the sub-road region included in a certain image sub-block is an effective centroid is as follows: and judging whether the pixel gray scale of the mass center of the road subregion is larger than a set gray scale threshold value or not, and if so, determining the mass center of the road subregion as an effective mass center. The set gray level threshold is an empirical value, and this embodiment is not limited.
In one case, as shown in fig. 2e, some of the image sub-blocks may contain a plurality of road sub-regions. In this case, optionally, the centroids of the plurality of road sub-regions may be calculated respectively according to their pixel distributions; then the average of these centroids is taken as the centroid of the image sub-block. As shown in fig. 2e, when the image sub-block A0 contains two road sub-regions A01 and A02, their centroids A01(x1, y1) and A02(x2, y2) may be calculated respectively, and [(x1+x2)/2, (y1+y2)/2] is taken as the centroid of the image sub-block A0.
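The averaging of several sub-region centroids into one sub-block centroid is a simple mean, as in this small sketch (function name hypothetical):

```python
def merge_centroids(centroids):
    """Average the centroids of multiple road sub-regions within one image
    sub-block, e.g. A01(x1, y1) and A02(x2, y2) -> ((x1+x2)/2, (y1+y2)/2)."""
    xs = [c[0] for c in centroids]
    ys = [c[1] for c in centroids]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```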
Optionally, in step S208, after the effective centroids are determined, it may be determined whether the number of the effective centroids is greater than a set number threshold, and if so, line fitting may be performed according to the effective centroids to obtain a central line of the road to be detected.
The set number threshold is an empirical value, and may be set according to the number of image sub-blocks. Alternatively, in some embodiments, the number threshold may be set to [Ncol*Nrow/3], wherein Ncol*Nrow is the number of image sub-blocks and [·] indicates rounding down to an integer. For example, when the number of image sub-blocks is 10, the number threshold may be set to 3; when the number of image sub-blocks is 20, the number threshold may be set to 6. Of course, in practice, the number threshold may also be set to [Ncol*Nrow/4] or [Ncol*Nrow/5], which is not limited in this embodiment.
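The number-threshold rule can be written directly, with the divisor (3, 4, or 5 per the embodiment) as a parameter:

```python
def valid_centroid_threshold(n_col, n_row, divisor=3):
    """Set number threshold [Ncol*Nrow/divisor], where [.] rounds down."""
    return (n_col * n_row) // divisor
```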
Optionally, when performing line fitting according to the effective centroids, the effective centroids whose ordinates fall within the same range may be identified according to their ordinates. Effective centroids whose ordinates fall within the same range are those whose corresponding image sub-blocks are in the same row. As shown in fig. 2f, the effective centroids A01, B01 and C01 have ordinates in the same range, as do the effective centroids A11, B11 and C11, and the effective centroids A21, B21, C21 and D21. Then, the average position of the effective centroids in each ordinate range is calculated, yielding a plurality of average positions in different ordinate ranges. Continuing the example, this step obtains the average position P0 of the effective centroids A01, B01 and C01, the average position P1 of A11, B11 and C11, and the average position P2 of A21, B21, C21 and D21, as shown in fig. 2g. Line fitting can then be performed based on the average positions P0, P1 and P2 to obtain the center line of the road to be detected.
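The row-wise averaging and final line fitting can be sketched with NumPy. The grouping of ordinates by sub-block row height and the parameterization of the line as x in terms of y (natural when the road runs roughly vertically in the image) are assumptions of this sketch, not mandated by the embodiment:

```python
import numpy as np

def fit_center_line(valid_centroids, row_height):
    """Group effective centroids whose ordinates fall in the same sub-block
    row, average each group into one position (P0, P1, ...), then
    least-squares fit a straight line x = a*y + b through the averages."""
    groups = {}
    for x, y in valid_centroids:
        groups.setdefault(int(y // row_height), []).append((x, y))
    averages = [(np.mean([p[0] for p in g]), np.mean([p[1] for p in g]))
                for g in groups.values()]
    xs = np.array([p[0] for p in averages])
    ys = np.array([p[1] for p in averages])
    a, b = np.polyfit(ys, xs, 1)  # fit x as a linear function of y
    return a, b
```

For curved roads a higher polyfit degree could replace the straight-line fit; the averaging step is what keeps the fit cheap, one point per sub-block row.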
In this embodiment, according to the known road hue feature and road saturation feature, the scene image of the road to be detected can be subjected to binarization processing to obtain a binarized scene image; then, non-road areas are removed from the binarized scene image to obtain a binarized scene image containing the road area, and the binarized scene image containing the road area is partitioned into a plurality of image sub-blocks; and based on the centroid features of the road sub-regions contained in the image sub-blocks, line fitting can be carried out to obtain the central line of the road to be detected. Based on this embodiment, after the scene image of the road to be detected is acquired, the central line of the road to be detected can be extracted from the scene image with high real-time performance. In addition, because line fitting is performed based on the centroid features of the road sub-regions contained in the image sub-blocks, the calculation amount is reduced and the accuracy of road center detection is effectively improved.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of step 201 to step 204 may be device a; for another example, the execution subject of steps 201 and 202 may be device a, and the execution subject of step 203 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 201, 202, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel.
The above embodiment describes an optional implementation of the road center detection method provided by the present invention, and the method may be implemented by an onboard device shown in fig. 3, and optionally, the onboard device includes: a memory 301 and a processor 302.
The memory 301 is used to store one or more computer instructions and may be configured to store various other data to support operations on the on-board device. Examples of such data include instructions for any application or method operating on the on-board device.
The memory 301 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
In some embodiments, memory 301 optionally includes memory located remotely from processor 302, which may be connected to an onboard device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
A processor 302, coupled to the memory 301, to execute the one or more computer instructions to: according to the known road hue characteristic and the road saturation characteristic, carrying out binarization processing on the acquired scene image of the road to be detected to obtain a binarized scene image; removing non-road areas from the binarized scene image to obtain a binarized scene image containing road areas; carrying out image blocking on the binarization scene image containing the road area to obtain a plurality of image sub-blocks; and performing line fitting according to the mass center characteristics of the road subregions contained in the image subblocks to obtain the central line of the road to be detected.
Further optionally, when performing binarization processing on the scene image according to the known road hue feature and road saturation feature to obtain a binarized scene image, the processor 302 is specifically configured to: extract a hue component image and a saturation component image from the scene image; perform binarization processing on the pixel gray scales of the hue component image according to the road hue feature to obtain a binarized hue image; perform binarization processing on the pixel gray scales of the saturation component image according to the road saturation feature to obtain a binarized saturation image; and perform a bitwise logical AND operation on the binarized hue image and the binarized saturation image to obtain the binarized scene image.
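The hue/saturation binarization and bitwise AND can be sketched as below; this is a minimal pure-Python illustration over nested lists, in which the inclusive-range thresholding and all names are assumptions (a practical implementation would use array operations on full-size images):

```python
def binarize(component, lo, hi):
    """Threshold one component image: 1 where the pixel value lies in the
    known road range [lo, hi], 0 elsewhere."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in component]

def binarized_scene(hue, sat, hue_range, sat_range):
    """Bitwise AND of the binarized hue image and binarized saturation image:
    a pixel survives only if both its hue and saturation match the road."""
    bh = binarize(hue, *hue_range)
    bs = binarize(sat, *sat_range)
    return [[a & b for a, b in zip(r1, r2)] for r1, r2 in zip(bh, bs)]
```

The AND step is what distinguishes this from single-channel thresholding: a pixel with road-like hue but non-road saturation (e.g. a gray shadow) is rejected.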
Further optionally, when removing the non-road region from the binarized scene image, the processor 302 is specifically configured to: draw a bounding box for at least one connected region in the binarized scene image; screen out, from the at least one connected region, connected regions that do not conform to road features according to the shape features and area features of the bounding boxes of the at least one connected region; and set the pixel gray values of the connected regions that do not conform to road features to the pixel gray value corresponding to non-road regions.
Further optionally, when screening out connected regions that do not conform to road features from the at least one connected region according to the shape features and area features of the bounding boxes of the at least one connected region, the processor 302 is specifically configured to perform, for any one of the at least one connected region, at least one of the following judgment operations: judging whether the distance between the top of the bounding box of the connected region and the top of the binarized scene image and the distance between the bottom of the bounding box and the bottom of the binarized scene image are both larger than a set size threshold;
judging whether the ratio of the area of the bounding box of the connected region to the area of the connected region falls outside the set area ratio range; and judging whether the aspect ratio of the bounding box of the connected region is smaller than a set aspect ratio threshold. If the result of at least one of the judgment operations is yes, the connected region is determined to be a connected region that does not conform to road features.
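The three judgment operations can be sketched as one predicate; this is a hedged illustration in which the threshold values, the `(x, y, w, h)` box convention, and the interpretation of aspect ratio as height/width (a road seen from above tends to be elongated) are assumptions supplied for the example:

```python
def fails_road_checks(box, region_area, img_h, size_thr, area_ratio_range, aspect_thr):
    """Return True if the connected region fails at least one judgment and
    should therefore be treated as a non-road region."""
    x, y, w, h = box
    # judgment 1: region reaches neither the top nor the bottom of the image
    detached = y > size_thr and (img_h - (y + h)) > size_thr
    # judgment 2: bounding-box-to-region area ratio outside the allowed range
    ratio = (w * h) / region_area
    bad_fill = not (area_ratio_range[0] <= ratio <= area_ratio_range[1])
    # judgment 3: bounding box too squat to be a road
    bad_aspect = (h / w) < aspect_thr
    return detached or bad_fill or bad_aspect
```

A region that spans the image vertically, fills its bounding box reasonably well, and is elongated passes all three checks; a small detached blob fails and its pixels are then reset to the non-road gray value.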
Further optionally, the processor 302 is further configured to denoise the binarized scene image by removing connected regions whose area is smaller than a set area threshold in the binarized scene image, and/or filling holes in the binarized scene image.
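The small-region removal step can be sketched with a flood fill; this is a minimal pure-Python illustration with 4-connectivity assumed (the specification does not fix the connectivity, and a practical implementation would use a library connected-components routine):

```python
def remove_small_regions(img, min_area):
    """Zero out 4-connected foreground regions whose pixel count is below
    min_area, returning a denoised copy of the binary image."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if out[i][j] and not seen[i][j]:
                # flood-fill to collect one connected region
                stack, region = [(i, j)], []
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    region.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and out[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                if len(region) < min_area:
                    for r, c in region:
                        out[r][c] = 0
    return out
```

Hole filling is the dual operation (flood-fill the background from the border and invert what remains unreached) and is omitted here for brevity.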
Further optionally, when performing line fitting according to the centroid features of the road sub-regions contained in the plurality of image sub-blocks to obtain the central line of the road to be detected, the processor 302 is specifically configured to: respectively calculate the centroids of the road sub-regions contained in the plurality of image sub-blocks according to the pixel distribution of the road sub-regions contained in the plurality of image sub-blocks; screen out, from the centroids of the road sub-regions contained in the plurality of image sub-blocks, centroids whose pixel gray level is greater than a set gray level threshold as effective centroids; and, if the number of effective centroids is greater than a set number threshold, perform line fitting according to the effective centroids to obtain the central line of the road to be detected.
Further optionally, when the processor 302 calculates the centroids of the road subregions included in the plurality of image sub-blocks according to the pixel distribution of the road subregions included in the plurality of image sub-blocks, it is specifically configured to: for any image subblock in the image subblocks, if the image subblock comprises a plurality of road subregions, respectively calculating the mass centers of the road subregions according to the pixel distribution of the road subregions; and calculating the average value of the mass centers of the plurality of road subregions as the mass center of the image subblock.
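The per-sub-block centroid computation can be sketched as follows; this is an illustrative pure-Python version in which sub-regions are given as lists of pixel coordinates and all names are assumptions:

```python
def subblock_centroid(subregions):
    """Compute the centroid of each road sub-region as the mean of its pixel
    coordinates, then the sub-block centroid as the mean of those centroids."""
    cents = []
    for pixels in subregions:
        cx = sum(p[0] for p in pixels) / len(pixels)
        cy = sum(p[1] for p in pixels) / len(pixels)
        cents.append((cx, cy))
    # a sub-block containing several road sub-regions contributes one centroid:
    # the average of its sub-region centroids
    mx = sum(c[0] for c in cents) / len(cents)
    my = sum(c[1] for c in cents) / len(cents)
    return (mx, my), cents
```

Averaging the sub-region centroids keeps one representative point per image sub-block even when, say, a lane marking splits the road area inside that block into two sub-regions.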
Further optionally, when performing line fitting according to the effective centroids to obtain the central line of the road to be detected, the processor 302 is specifically configured to: identify, according to the ordinates of the effective centroids, effective centroids whose ordinates lie in the same range; and perform line fitting according to the average positions of the effective centroids whose ordinates lie in the same range, so as to obtain the central line of the road to be detected.
Further optionally, as shown in fig. 3, the onboard apparatus further includes: an input device 303 and an output device 304. The input device 303 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the onboard equipment, for example, the input device 303 may include a camera for capturing an image of a road to be detected. The output means 304 may comprise a display device such as a display screen.
Further, as shown in fig. 3, the onboard apparatus further includes: the power supply component 305. The power supply component 305 provides power to the various components of the device in which the power supply component is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
As shown in fig. 3, the memory 301, the processor 302, the input device 303, the output device 304, and the power supply component 305 may be connected by a bus or other means, and the bus connection is taken as an example in the figure. In other connection manners not shown, the memory 301 may be directly coupled to the processor 302, and the input device 303 and the output device 304 may be directly or indirectly connected to the processor 302 through data lines and a data interface. Of course, the above connection manner is only used for exemplary illustration, and does not limit the protection scope of the embodiment of the present invention at all.
The onboard equipment can execute the road center detection method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided in the embodiment of the present application, and details are not described again.
The invention also provides a computer-readable storage medium storing a computer program which, when executed, enables the implementation of the steps of the method which can be performed by the onboard apparatus.
The above-described device embodiments are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.