CN110765875A - Method, device, and apparatus for detecting the boundary of a traffic target


Info

Publication number
CN110765875A
Authority
CN
China
Prior art keywords
boundary
detection
target
spectrum
probability spectrum
Legal status
Granted
Application number
CN201910893262.8A
Other languages
Chinese (zh)
Other versions
CN110765875B (en)
Inventor
薛佳乐
敦婧瑜
张湾湾
李轶锟
王耀农
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201910893262.8A
Publication of CN110765875A
Application granted
Publication of CN110765875B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The application discloses a method, a device, and an apparatus for detecting the boundary of a traffic target. The detection method includes: acquiring an image to be detected that contains a traffic target; performing boundary detection on the traffic target in the image to be detected to obtain a pre-detection area; calculating boundary point probability spectra of the pre-detection area along different directions on the grayscale map of the pre-detection area; performing fusion processing on the boundary point probability spectra of at least two mutually intersecting directions to obtain a fusion probability spectrum; and performing further boundary detection on the fusion probability spectrum to distinguish the shadow region from the target region within the pre-detection area, thereby obtaining the boundary coordinates of the target region. In this way, the detection accuracy of traffic targets in images can be improved.

Description

Method, device, and apparatus for detecting the boundary of a traffic target
Technical Field
The present application relates to the field of image detection technologies, and in particular, to a method, a device, and an apparatus for detecting a boundary of a traffic target.
Background
Image detection is an important part of image recognition technology and is widely applied in the traffic, domestic, and industrial fields. In the traffic field, traffic areas need to be monitored and traffic targets extracted, so boundary detection of traffic targets is very important; in other words, image detection technology has broad application and market value in traffic target detection. However, current image detection techniques are not accurate enough and are difficult to apply to traffic target detection.
Disclosure of Invention
The present application mainly solves the technical problem of providing a method, a device, and an apparatus for detecting the boundary of a traffic target, which can improve the detection accuracy of traffic targets in images.
In order to solve the technical problem, the application adopts a technical scheme that: provided is a method for detecting a boundary of a traffic target, including:
acquiring an image to be detected containing a traffic target;
carrying out boundary detection on a traffic target in an image to be detected so as to obtain a pre-detection area;
calculating boundary point probability spectra of the pre-detection area along different directions on the grayscale map of the pre-detection area;
performing fusion processing on the boundary point probability spectra of at least two mutually intersecting directions to obtain a fusion probability spectrum;
performing further boundary detection on the fusion probability spectrum to distinguish the shadow region from the target region within the pre-detection area, thereby obtaining the boundary coordinates of the target region.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a boundary detection device for a traffic target, including a processor and a communication circuit coupled to the processor.
The processor is configured to acquire, through the communication circuit, an image to be detected that contains a traffic target; to perform boundary detection on the traffic target in the image to be detected to obtain a pre-detection area; to calculate boundary point probability spectra of the pre-detection area along different directions on the grayscale map of the pre-detection area; to perform fusion processing on the boundary point probability spectra of at least two mutually intersecting directions to obtain a fusion probability spectrum; and to perform further boundary detection on the fusion probability spectrum to distinguish the shadow region from the target region within the pre-detection area, thereby obtaining each boundary coordinate of the target region.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided an apparatus with a storage function, which stores program data that can be executed to implement the traffic target boundary detection method provided by the present application.
Compared with the prior art, the beneficial effects of the present application are as follows. Boundary detection is performed on the traffic target in the image to be detected to obtain a pre-detection area. After this preliminary boundary detection, boundary point probability spectra are calculated along different directions on the grayscale map of the pre-detection area, yielding the point probability distribution of each boundary of the pre-detection area. The boundary point probability spectra of at least two mutually intersecting directions are then fused to obtain a fusion probability spectrum, which improves the accuracy of the boundary point probability distribution and makes the boundaries clearer. Further boundary detection can therefore be performed on the basis of the fusion probability spectrum, so that the shadow region and the target region are distinguished more accurately and each boundary of the target region is obtained. By integrating the preliminary boundary detection, the calculation of the boundary point probability spectra, and the fusion of those spectra, the inaccurate boundary detection of the prior art is effectively improved: the target region and the shadow region are effectively separated, the interference of shadows with traffic target detection is reduced, and the accuracy of boundary detection of the target region is effectively increased. At the same time, the algorithm is simple and fast, requires no time-domain information, and is not limited by the amount of data.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of a traffic target boundary detection method according to the present application;
FIG. 2 is a schematic flow chart illustrating a second embodiment of a traffic target boundary detection method according to the present application;
FIG. 3 is a schematic diagram of a first process of a second embodiment of the method for detecting a boundary of a traffic target according to the present application;
FIG. 4 is a schematic coordinate diagram of a pre-detection area and a target area of a second embodiment of the method for detecting a boundary of a traffic target according to the present application;
FIG. 5 is a schematic sliding diagram of a sliding window in a second embodiment of the traffic target boundary detection method according to the present application;
FIG. 6 is a schematic diagram illustrating a second process of the second embodiment of the method for detecting a boundary of a traffic target according to the present application;
FIG. 7 is a schematic block circuit diagram of an embodiment of a traffic target boundary detection apparatus of the present application;
FIG. 8 is a schematic block circuit diagram of another embodiment of a traffic target boundary detection apparatus of the present application;
FIG. 9 is a schematic block diagram of the structure of an embodiment of the device with a storage function of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Through long-term research, the inventors of the present application have found that effectively detecting and segmenting traffic targets has always been an industrial difficulty, because the conditions and environment of a traffic target are relatively complex. For example, a vehicle traveling on a road may be affected by occlusion or by the illumination of other objects, causing the traffic target to cast shadows; detecting and segmenting traffic targets therefore involves considerable technical complexity. If a traffic target is detected and segmented by a thresholding method, the boundary of the shadow easily causes the shadow region to be treated as part of the traffic target, giving poor accuracy; and even if a new threshold condition is introduced to remove the shadow region, part of the traffic target is easily removed along with it. If a traffic target is detected and segmented by a deep learning method such as a convolutional neural network, more image information is needed, such as time-domain information across video frames; the amount of computation is large, and overfitting may occur. To improve on these problems and effectively detect and segment traffic targets, the present application provides the following embodiments.
Referring to FIG. 1, a first embodiment of a method for detecting the boundary of a traffic target according to the present application includes:
S101: Acquire an image to be detected that contains a traffic target.
The traffic target may be a vehicle, a pedestrian, or an object located within a traffic zone. Traffic objects are, for example, vehicles, people, etc. walking on the road, or other movable or immovable objects on the road.
The image to be detected contains a traffic target; that is, the traffic target appears in the image. For example, the image to be detected may be captured by a camera or monitoring device installed in the corresponding traffic area, or by a mobile terminal such as a mobile phone, a law-enforcement recorder, or a digital camera. Of course, the image to be detected may also be acquired from another device, such as a USB disk, a hard disk, another terminal, or a mobile terminal, or from the cloud. This embodiment limits neither the way or manner of acquiring the image to be detected nor the format, size, and so on of the acquired image.
S102: and carrying out boundary detection on the traffic target in the image to be detected so as to obtain a pre-detection area.
And carrying out boundary detection on the traffic target in the image to be detected, so that the boundary of the traffic target can be preliminarily detected, and a pre-detection area can be obtained. For the complex environment of the traffic target, the accuracy of extracting the boundary is low for the preliminary detection of the boundary of the traffic target, and the pre-detection area may include the traffic target and other areas.
For example, a shadow area exists around a traffic target, the contrast of the shadow area in an image to be detected is high due to the existence of the shadow area, when the boundary of the traffic target is detected, the boundary of the shadow area is also obvious, and the traffic target and the shadow area may exist in a pre-detection area. Therefore, as also in the first paragraph, it is difficult for the preliminary detection to accurately detect the boundary of the traffic target.
The embodiment can perform boundary detection on the traffic target in the image to be detected by using a threshold segmentation method. For example, a threshold segmentation method may be used to select different thresholds to perform boundary detection on the traffic target in the image to be detected for multiple times. Certainly, a threshold may also be set by the threshold segmentation method to perform boundary detection on the traffic target in the image to be detected, so as to obtain an area, another threshold is set on the area by the threshold segmentation method to perform another boundary detection, and of course, a third threshold segmentation, a fourth threshold segmentation, and the like may also be superimposed to obtain a pre-detection area. Alternatively, the threshold segmentation method may be used to perform boundary detection on the image to be detected only once, so as to obtain the pre-detection region.
Of course, the embodiment may also perform boundary detection on the traffic target in the image to be detected by using a deep learning method based on a convolutional neural network. Other algorithms or methods capable of performing target boundary/edge detection may also be used in the embodiment to perform boundary detection on the traffic target in the image to be detected.
S103: and calculating boundary point probability spectrums of the pre-detection areas along different directions on the gray-scale images of the pre-detection areas.
The shaded area is generally located around the traffic target. On the gray scale map of the pre-detection area, the traffic target and the shadow area have difference in gray scale, that is, the probability of the gray scale point of the boundary of the pre-detection area can be calculated by using the discontinuity of the edge gray scale between the traffic target and the shadow area. The boundary point probability spectrum can reflect the regions of the target region where the boundary points are likely to occur.
The present embodiment can calculate the boundary point probability spectrum of the pre-detection region from different directions. If the boundary point probability spectrum of the pre-detection area is calculated from only one direction, it is difficult to accurately reflect the point probability distribution of each boundary of the target area, and the shadow area and the dark area of the traffic target have large influence on the point probability distribution of the boundary in the boundary point probability spectrum, which also results in that the point probability distribution of each boundary cannot be accurately obtained. By calculating the boundary point probability spectrums of the pre-detection area from different directions, the boundary point probability spectrums from different directions can be comprehensively utilized, the accuracy of the boundary point probability is improved, and the subsequent corresponding processing can be conveniently carried out on the boundary point probability spectrums.
S104: and performing fusion processing on the boundary point probability spectrums in at least two directions which are intersected with each other to obtain a fusion probability spectrum.
The single boundary point probability spectrum may not accurately define the boundary of the target area where the traffic target is located, but each boundary point probability spectrum may be calculated based on the gray scale difference between the traffic target and the shadow area, so that the boundary point probability spectra in the directions intersecting with each other have respective point probability distributions at least on the same boundary, and thus the boundary point probability spectra in at least two directions intersecting with each other may be subjected to fusion processing, such as superposition processing, summation processing, averaging processing, and the like, to improve the accuracy of the point probability distributions of the same boundary between the boundary point probability spectra, thereby obtaining a fusion probability spectrum.
S105: boundary detection is further performed on the fusion probability spectrum to distinguish the shadow region from the target region in the pre-detection region, thereby obtaining boundary coordinates of the target region.
By obtaining the fusion probability spectrum in step S104, the accuracy of at least a part of the boundary of the target region can be improved. Therefore, boundary detection can be further carried out on the fusion probability spectrum, the shadow region and the target region can be distinguished in the pre-detection region, each boundary of the target region can be determined after the target region is detected, and each boundary coordinate of the target region can be calculated.
The embodiment carries out boundary detection on a traffic target in an image to be detected to obtain a pre-detection area, calculates boundary point probability spectrums from different directions on a gray scale map of the pre-detection area after primary boundary detection is carried out on the traffic target to calculate the point probability distribution of each boundary of the pre-detection area, and carries out fusion processing on the boundary point probability spectrums in at least two directions which are crossed with each other to obtain a fusion probability spectrum, thereby improving the accuracy of the boundary point probability distribution and enabling the boundary to be clearer, further carrying out boundary detection on the basis of the fusion probability spectrums, being capable of distinguishing a shadow area from a target area more accurately to obtain each boundary of the target area, thus integrating the primary boundary detection, the calculation of the boundary point probability spectrums and the fusion of the point probability spectrums, and being capable of effectively improving the problem of inaccurate boundary detection in the prior art, the target area and the shadow area are effectively distinguished, interference of shadow on traffic target detection is improved, accuracy of boundary detection of the target area is effectively improved, meanwhile, preliminary boundary detection, calculation of a boundary point probability spectrum and the like are integrated, an algorithm is simple, operation speed is high, time domain information is not needed, and limitation of data volume is avoided.
The second embodiment of the traffic target boundary detection method builds on the first embodiment and further illustrates calculating the boundary point probability spectra of the pre-detection area along different directions on the grayscale map, fusing the spectra, calculating the boundary coordinates of the target region, and so on. Referring to FIG. 2 to FIG. 6, this embodiment may include the following steps:
S201: Acquire an image to be detected that contains a traffic target.
S202: Perform boundary detection on the traffic target in the image to be detected to obtain a pre-detection area.
As shown in FIG. 3, performing boundary detection on the traffic target in the image to be detected by threshold segmentation to obtain the pre-detection area may include the following steps:
Step 1: Convert the original color image (A in FIG. 3) into a grayscale binary image and extract the image edge features using the Sobel operator, as shown at B in FIG. 3.
Step 2: Invert the image, as shown at C in FIG. 3. Use morphological operations such as dilation and erosion to remove the background and leave the target region, as shown at D in FIG. 3.
Step 3: Count the foreground pixels in each row and compute their proportion; rows whose proportion is too high or too low are likely noise and are removed. The same operation is then performed for each column, and the row and column passes may be repeated. In this embodiment, the reasonable size proportion of the target may be set to 0.05-0.35, and the result is shown at E in FIG. 3.
Step 4: Obtain the pre-detection area from the boundaries calculated in Step 3, giving the pre-detection frame; the rectangular frame shown at F in FIG. 3 is the pre-detection result.
Step 5: Because rows and columns with abnormal pixel proportions were removed, part of the information inside the target region itself is also destroyed. The preliminary detection result (G in FIG. 3) is therefore cropped from the original detection result (D in FIG. 3) using the detection frame, and the preliminary target segmentation result of the original image can be obtained from it, as shown at H in FIG. 3. A rough implementation sketch follows.
FIG. 4 shows the schematic coordinate relationship of a pre-detection area obtained from an image to be detected. In this embodiment, the boundary coordinates of the pre-detection area may be written as [x1, y1, x2, y2].
S203a: Set a plurality of sliding windows along one side boundary of the pre-detection area.
Alternatively, the pre-detection region may include an upper side boundary and a lower side boundary spaced apart from each other in a column direction of the image to be detected, and a left side boundary and a right side boundary spaced apart from each other in a row direction of the image to be detected.
A plurality of sliding windows arranged on one side boundary of the pre-detection area can respectively cover a plurality of pixel points on the side boundary. The plurality of pixel points may be spaced from each other or may be continuously adjacent to each other. Of course, each pixel on one side boundary of the pre-detection region may be correspondingly provided with a sliding window, so that the sliding window may slide along the corresponding direction from the one side boundary to calculate the corresponding boundary point probability spectrum. Referring to fig. 4, fig. 4 shows that each pixel point on the left side boundary of the pre-detection region is covered with a sliding window, and the sliding window slides along the row direction.
Alternatively, the boundary point probability spectrum may include a left boundary point probability spectrum calculated in a direction from the left boundary to the right boundary, a right boundary point probability spectrum calculated in a direction from the right boundary to the left boundary, and a lower boundary point probability spectrum calculated in a direction from the lower boundary to the upper boundary.
The shaded areas of the traffic object generally appear on the left, right, and lower sides of the traffic object as viewed from the photographing direction. For the left boundary point probability spectrum, the sliding window slides from the left boundary to the right boundary. For the right boundary point probability spectrum, the sliding window is slid from the right boundary to the left boundary. For the lower boundary point probability spectrum, a sliding window slides from the lower boundary to the upper boundary.
Viewed from the shooting direction, the shadow on the upper side of the traffic target tends to be occluded by the traffic target itself, so the upper boundary point probability spectrum may be left uncomputed; the upper boundary of the pre-detection area may then serve as the upper boundary of the target region. Of course, in other embodiments the upper boundary point probability spectrum may also be calculated.
S203b: Move the sliding window along the given direction, and compare the gray value of the pixel currently covered by the sliding window with the gray value of the pixel previously covered by it.
In this embodiment, the gray value may be the gray level, that is, the brightness value of a pixel from darkest to brightest: the range is 0-255, where 0 represents black and 255 represents white. Alternatively, a normalized value K = gray level / 255 may be used, where K ranges over 0-1, 0 represents black, and 1 represents white.
In the example shown in FIG. 4, the size of the sliding window is 1: the window covers one pixel at a time, and the sliding step may be one pixel. The gray value of the pixel currently covered is compared with the gray value of the pixel previously covered along the same direction.
As another example, the size of the sliding window may be 2, so that the window covers two pixels at a time, and the sliding step may be one pixel. The gray values of the two pixels currently covered are compared; after sliding by one pixel, the two pixels now inside the window are compared again. For example, given the sequence [0.1 0.2] 0.4 0.5, the gray values 0.1 and 0.2 are compared; the window shifts by one pixel to 0.1 [0.2 0.4] 0.5 and 0.2 and 0.4 are compared; it shifts again to 0.1 0.2 [0.4 0.5] and 0.4 and 0.5 are compared. Of course, with a window of size 2 the sliding step may also be two pixels: [0.1 0.2] 0.4 0.5 becomes 0.1 0.2 [0.4 0.5] after one slide, and the average gray value of the pixels currently covered may be compared with the average gray value of the pixels previously covered.
S203c: If the gray value of the currently covered pixel is smaller than the gray value of the previously covered pixel, replace it with the gray value of the previously covered pixel.
S203d: If the gray value of the currently covered pixel is larger than the gray value of the previously covered pixel, keep the gray value of the currently covered pixel.
In other words, the larger the gray value, the closer the pixel is to pure white; the smaller, the closer to pure black (pure white being 1 and pure black 0 on the normalized scale). Because a gray-level difference exists between the target region and the shadow region, the gray level is discontinuous at the boundary position along the calculation direction, and the gray values are updated through the sliding window. In a typical case, the target region has larger gray values than the shadow region.
For example, if the sliding window slides to a certain pixel, the gray value of the current pixel is smaller than the gray value of the pixel covered last time by the sliding window, which indicates that the current pixel may not be the boundary point, or the probability of being the boundary point is small, so that the current pixel is replaced with the gray value of the pixel covered last time.
For example, if the sliding window slides to a certain pixel, the gray value of the current pixel is greater than the gray value covered by the sliding window last time, which indicates that the current pixel may be a boundary point or located in the target area.
An exemplary way of calculating the lower, left, and right boundary point probability spectra is as follows:
In FIG. 6, a is the grayscale map of the pre-detection area cropped by the pre-detection frame shown at F in FIG. 3. As shown at c in FIG. 6, a plurality of sliding windows of window size 1 slide from the lower boundary of the pre-detection area to its upper boundary along the column direction; the gray value of the current pixel is compared with the gray value of the pixel previously covered in the same column, and the gray value of the current pixel is updated to the larger of the two. The lower boundary point probability spectrum may thus be calculated as:
BCM(i,j) = max(p(i,j), BCM(i,j+1)), with BCM(i,y2) = p(i,y2)
As shown at b in FIG. 6, a plurality of sliding windows of window size 1 slide from the left boundary of the pre-detection area to its right boundary along the row direction; the gray value of the current pixel is compared with the gray value of the pixel previously covered in the same row, and the gray value of the current pixel is updated to the larger of the two. The left boundary point probability spectrum may thus be calculated as:
LCM(i,j) = max(p(i,j), LCM(i-1,j)), with LCM(x1,j) = p(x1,j)
As shown at d in FIG. 6, a plurality of sliding windows of window size 1 slide from the right boundary of the pre-detection area to its left boundary along the row direction; the gray value of the current pixel is compared with the gray value of the pixel previously covered in the same row, and the gray value of the current pixel is updated to the larger of the two. The right boundary point probability spectrum may thus be calculated as:
RCM(i,j) = max(p(i,j), RCM(i+1,j)), with RCM(x2,j) = p(x2,j)
where i and j are the row-direction and column-direction coordinates of each pixel in the grayscale map, p(i,j) is the gray value of the current pixel in the pre-detection area, x1 and x2 are the left and right boundary coordinates of the pre-detection area, and y2 is the lower boundary coordinate of the pre-detection area.
For a sliding window of size larger than 2 whose sliding step equals the window size, the average gray value of the pixels currently covered by the window and the average gray value of the pixels previously covered are computed first, the two averages are compared, and the gray values of the currently covered pixels are replaced by the larger of the two averages.
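Since a size-1 window with a one-pixel step reduces to a running maximum along the scan direction, the three spectra of S203a-S203d can be computed in vectorized form. The sketch below is one possible reading; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def boundary_point_spectra(gray):
    """Compute the lower (BCM), left (LCM) and right (RCM) boundary point
    probability spectra of the pre-detection area as running maxima."""
    g = gray.astype(np.float32)
    # BCM: slide from the lower boundary upward, i.e. a cumulative maximum
    # taken bottom-to-top along each column.
    bcm = np.flipud(np.maximum.accumulate(np.flipud(g), axis=0))
    # LCM: cumulative maximum left-to-right along each row.
    lcm = np.maximum.accumulate(g, axis=1)
    # RCM: cumulative maximum right-to-left along each row.
    rcm = np.fliplr(np.maximum.accumulate(np.fliplr(g), axis=1))
    return bcm, lcm, rcm
```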
The above steps S203a-S203d further exemplify step S103 in the first embodiment of the traffic target boundary detection method of the present application; that is, S103 may include steps S203a-S203d.
S204a: Perform weighted summation on the lower boundary point probability spectrum and the left boundary point probability spectrum to obtain a first fusion probability spectrum.
The lower boundary point probability spectrum is calculated by updating gray values while sliding from the lower boundary of the pre-detection area to its upper boundary. As shown at c in FIG. 6, the lower boundary point probability spectrum BCM(i,j) may contain the lower, left, and right boundaries of the target region.
The left boundary point probability spectrum is calculated by updating gray values while sliding from the left boundary of the pre-detection area to its right boundary. As shown at b in FIG. 6, the left boundary point probability spectrum LCM(i,j) may contain the left and lower boundaries of the target region.
Therefore, the lower and left boundary point probability spectra are combined by weighted summation to obtain the first fusion probability spectrum. The first fusion probability spectrum contains the left boundary of the target region, from which the left boundary can be calculated.
As shown at e in FIG. 6, the first fusion probability spectrum may be calculated as follows:
BLCM(i,j) = δ × BCM(i,j) + μ × LCM(i,j), where μ > 0, δ > 0, and μ + δ = 1,
where i and j are the row-direction and column-direction coordinates of each pixel in the grayscale map, δ and μ are the weights of the lower and left boundary point probability spectra, respectively, and BLCM(i,j) is the first fusion probability spectrum.
Optionally, the weighted summation of the lower and left boundary point probability spectra may use average weighting, that is, μ = 0.5 and δ = 0.5.
S204b: Perform weighted summation on the lower boundary point probability spectrum and the right boundary point probability spectrum to obtain a second fusion probability spectrum.
The right boundary point probability spectrum is calculated by updating gray values while sliding from the right boundary of the pre-detection area to its left boundary. As shown at d in FIG. 6, the right boundary point probability spectrum RCM(i,j) may contain the right and lower boundaries of the target region.
Therefore, the lower and right boundary point probability spectra are combined by weighted summation to obtain the second fusion probability spectrum. The second fusion probability spectrum contains the right boundary of the target region, from which the right boundary can be calculated. As shown at f in FIG. 6:
BRCM(i,j) = α × BCM(i,j) + β × RCM(i,j), where α > 0, β > 0, and α + β = 1,
where i and j are the row-direction and column-direction coordinates of each pixel in the grayscale map, α and β are the weights of the lower and right boundary point probability spectra, respectively, and BRCM(i,j) is the second fusion probability spectrum.
Optionally, the weighted summation of the lower and right boundary point probability spectra uses average weighting, that is, α = 0.5 and β = 0.5.
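With the average weighting mentioned above (all weights 0.5), the two fusions reduce to simple means. A minimal sketch continuing the arrays of the previous sketch; the default weights are the optional values from the text, not mandated ones:

```python
def fuse_spectra(bcm, lcm, rcm, delta=0.5, mu=0.5, alpha=0.5, beta=0.5):
    """S204a/S204b: weighted sums of the boundary point probability spectra."""
    blcm = delta * bcm + mu * lcm    # first fusion probability spectrum
    brcm = alpha * bcm + beta * rcm  # second fusion probability spectrum
    return blcm, brcm
```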
S204 c: and denoising the first fusion probability spectrum and the second fusion probability spectrum respectively, wherein low probability points are removed through threshold filtering, and high probability points are removed through median filtering.
In order to filter out the interference information, a threshold value is set through threshold value filtering to filter the boundary points with low probability. Taking the first fused probability spectrum as an example, the threshold is θ, and the filtering process may be as follows:
BLCMi,j=0if(BLCMi,j<θ);
for gray scale values of 0-255, 0< θ < 255. Alternatively, θ may be set to 191. For a gray scale value of 0-1, 0< θ <1, the threshold may be set to 0.75. Of course, the threshold of this threshold filtering can be determined by analyzing the brightness of the shadow area.
Further, since the pixels of the target are densely concentrated, some discrete high-probability boundary points also belong to noise points. Thus, discrete noise is removed by at least one median filter.
The denoising process of the first fusion probability spectrum may refer to the denoising process of the first fusion probability spectrum. Alternatively, the threshold values selected by the threshold filtering of the first fusion probability spectrum and the threshold filtering of the second fusion probability wave may be the same or different.
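One possible implementation of S204c, assuming gray values in 0-255 and the example threshold θ = 191; cv2.medianBlur stands in for the "at least one pass of median filtering" above, and the kernel size is an assumption:

```python
import cv2
import numpy as np

def denoise_spectrum(spectrum, theta=191.0, ksize=3):
    """S204c: threshold filtering removes low-probability points; median
    filtering removes discrete high-probability noise points."""
    s = spectrum.astype(np.float32).copy()
    s[s < theta] = 0.0               # threshold filtering
    return cv2.medianBlur(s, ksize)  # median filtering of discrete noise
```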
The above steps S204a-S204b further exemplify step S104 in the first embodiment of the traffic target boundary detection method of the present application; that is, S104 may include steps S204a-S204b.
S205 a: and determining the left boundary coordinate and the first lower boundary coordinate of the target region according to the first fusion probability spectrum.
Fig. 4 shows a schematic coordinate relationship of the target region and the pre-detection region. The boundary coordinates of the target region may be set to [ x'1,y'1,x'2,y'2]。
The first fused probability spectrum contains the left boundary of the target region. The present embodiment may determine the left boundary of the target region in a statistical manner. For example, since the target region and the shadow region can be distinguished at the left boundary of the target region, as shown in e in fig. 6, the gray value at the left boundary is higher than the gray value of each column in the shadow region, statistics may be performed from the left boundary of the pre-detection region along the row direction, the sum of the gray values of the pixels in each column may be counted, and when the sum of the gray values of a column is greater than a preset threshold, the column may be considered as the left boundary of the target region, so that the left boundary coordinate may be determined. As shown by e in fig. 6, the gray scale values of the pixel points in each column may be counted in the row direction from the black part of the left boundary, and the left boundary coordinate x 'shown in fig. 4 may be obtained by the above manner'1
Of course, statistics may be started from the left side boundary of the pre-detection region along the row direction, statistics may be performed on each row of pixels whose gray values are greater than a certain threshold, whether there is at least part of the pixels forming a continuous line segment, and if there is a continuous line segment, the row where the continuous line segment is originally formed may be used as the left side boundary of the target region. In other embodiments, if there are continuous line segments in different columns, one of the continuous line segments with the largest number of continuous pixels may be used as the left boundary. As shown in e in fig. 6, statistics may be performed in the row direction from the black portion of the left boundary, a threshold may be set to exclude the black portion, an edge between the white portion and the black portion may be counted, a continuous line segment formed by pixel points of each column forming the edge may be counted, the leftmost continuous line segment may be selected as the left boundary of the target area, and left boundary coordinates x 'shown in fig. 4 may be obtained'1
S205 b: and determining the right side boundary coordinate and the second lower side boundary coordinate of the target region according to the second fusion probability spectrum.
The second fused probability spectrum contains the right boundary of the target region. The present embodiment may also determine the right boundary of the target region in a statistical manner. For example, due to the presence of a target zoneThe target area and the shadow area can be distinguished at the right boundary of the domain, as shown in f in fig. 6, the gray value at the right boundary is higher than the gray value of each column in the shadow area, so that statistics can be started from the right boundary of the pre-detection area along the row direction, the sum of the gray values of the pixels in each column is counted, when the sum of the gray values of a certain column is greater than a preset threshold, the column can be considered as the left boundary of the target area, and thus the right boundary coordinate can be determined. As shown in f in fig. 6, the gray scale values of the pixel points in each column may be counted in the row direction from the black part of the right boundary, and the right boundary coordinate x 'shown in fig. 4 may be obtained in the above manner'2
Of course, statistics may be started from the right side boundary of the pre-detection region along the row direction, and the statistics may be performed on each row of pixels whose gray values are greater than another preset threshold, to determine whether at least some of the pixels are continuous among the pixels, and if there are continuous pixels forming a continuous line segment, the row where the continuous line segment is originally formed may be used as the right side boundary of the target region. In other embodiments, if there are continuous line segments in different columns, the right boundary may be the one with the largest number of continuous pixels. As shown in f in fig. 6, statistics may be performed in the row direction from the black portion of the right boundary, a threshold may be set to exclude the black portion, an edge which is located between the white portion and the black portion and has discontinuous gray values may be counted, a continuous line segment formed by pixel points of each column forming the edge may be counted, the rightmost continuous line segment may be selected as the left boundary of the target area, and the right boundary coordinate x 'shown in fig. 4 may be obtained'2
S205 c: the first and second lower boundary coordinates are weighted and summed to obtain a third lower boundary coordinate of the target area.
The actual value of the lower boundary coordinate of the target region has a certain correlation with both the first fusion probability spectrum and the second fusion probability spectrum. In order to improve the accuracy of the calculation of the lower boundary coordinates, the embodiment performs weighted summation through the first lower boundary coordinates determined by the first fusion probability spectrum and the second lower boundary coordinates determined by the second fusion probability spectrum, and the calculation method is as follows:
y′2=γ×yBLCMM+λ×yBRCMMwherein γ is>0,λ>0,γ+λ=1
Wherein, y'2BLCMM represents a first fused probability spectrum, y, for the lower boundary coordinates of the target regionBLCMMIs the first lower boundary coordinate of the first fused probability spectrum, and BRCMM is the second fused probability spectrum, yBRCMMAnd gamma and lambda are weights of the first lower boundary coordinate and the second lower boundary coordinate respectively.
Optionally, the weighted sum of the first lower boundary coordinate and the second lower boundary coordinate is a weighted average sum. That is, γ is 0.5 and λ is 0.5.
As set forth in step S203a above, the upper boundary coordinate y 'of the target area'1May be the upper boundary coordinate y of the pre-detection region1The boundary coordinates [ x 'of the target region'1,y'1,x'2,y'2]It can be calculated that the target detection box can be determined as such, as shown by g in fig. 6.
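The column-statistics rule of S205a/S205b and the weighted lower boundary of S205c might be sketched as follows; the thresholds and the exact scan rule are assumptions based on the description above:

```python
import numpy as np

def target_boundaries(blcmm, brcmm, y1, col_thresh, row_thresh,
                      gamma=0.5, lam=0.5):
    """Return (x1', y1', x2', y2') for the target region (crop coordinates)."""
    # S205a: first column from the left whose gray-value sum exceeds the threshold.
    x1p = int(np.argmax(blcmm.sum(axis=0) > col_thresh))
    # S205b: first column from the right, using the second fusion spectrum.
    sums_r = brcmm.sum(axis=0)
    x2p = int(sums_r.size - 1 - np.argmax(sums_r[::-1] > col_thresh))
    # First/second lower boundary coordinates: lowest row with a significant sum.
    y_blcmm = int(np.max(np.nonzero(blcmm.sum(axis=1) > row_thresh)[0]))
    y_brcmm = int(np.max(np.nonzero(brcmm.sum(axis=1) > row_thresh)[0]))
    # S205c: weighted sum of the two lower boundary coordinates.
    y2p = int(round(gamma * y_blcmm + lam * y_brcmm))
    # Per S203a above, the upper boundary of the pre-detection area is reused.
    return x1p, y1, x2p, y2p
```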
The above steps S205a-S205c further exemplify step S105 in the first embodiment of the traffic target boundary detection method of the present application; that is, S105 may include steps S205a-S205c.
S206: and carrying out fusion processing on the first fusion probability spectrum and the second fusion probability spectrum to obtain a third fusion probability spectrum.
In one embodiment, the first fused probability spectrum and the second fused probability spectrum are subjected to a weighted summation process.
In another embodiment, the boundary of the traffic target is not a rule or is a simple straight line, the first fusion probability spectrum retains more accurate left information of the traffic target, the second fusion probability spectrum retains more accurate right information of the traffic target, and the boundary or the contour of the traffic target can be accurately obtained through the fusion of the first fusion probability spectrum and the second fusion probability spectrum, so that the predetermined coordinate can be selected from the left boundary of the target area to the right boundary of the target area. And combining the left information of the first fusion probability spectrum between the preset coordinates and the left boundary of the target area and the right information of the second fusion probability spectrum between the preset coordinates and the right boundary of the target area to fuse to form a third fusion probability spectrum. As shown by h in fig. 6, the calculation may be as follows:
Figure BDA0002209436860000151
wherein, x'midTo predetermined coordinates, TMi,jFor the target mask, BLCMMi,jRepresenting a first fused probability spectrum, BRCMMi,jIs expressed as a second fused probability spectrum, x'1Is the left boundary coordinate, x 'of the target region'2The right boundary coordinates of the target area.
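A direct transcription of the piecewise definition above; choosing x'mid as the midpoint between the two boundaries is an assumption, since the text only requires a predetermined coordinate between them:

```python
import numpy as np

def third_fused_spectrum(blcmm, brcmm, x1p, x2p, x_mid=None):
    """S206: target mask TM built from the left half of the first fusion
    probability spectrum and the right half of the second."""
    if x_mid is None:
        x_mid = (x1p + x2p) // 2  # assumed choice of the predetermined coordinate
    tm = np.zeros_like(blcmm)
    tm[:, x1p:x_mid] = blcmm[:, x1p:x_mid]          # x'1 <= i < x'mid
    tm[:, x_mid:x2p + 1] = brcmm[:, x_mid:x2p + 1]  # x'mid <= i <= x'2
    return tm
```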
In the embodiment, the first fusion probability spectrum and the second fusion probability spectrum are subjected to fusion processing, so that the outline of the traffic target can be accurately detected in the target area to obtain the third fusion probability spectrum, and the traffic target can be conveniently segmented in the third fusion probability spectrum.
S207: and performing target segmentation on the traffic target based on the third fusion probability spectrum.
As shown in i in fig. 6, the traffic target is accurately segmented by determining the traffic target in the third fusion probability spectrum and using the third fusion probability spectrum to calibrate the traffic target or perform target segmentation from the target area.
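Chaining the sketches above, an end-to-end run of the second embodiment might look like this; every function name comes from the illustrative sketches, not from the patent, and the thresholds are placeholders:

```python
import cv2

image = cv2.imread("traffic.jpg")                # image to be detected (S201)
(x1, y1, x2, y2), _ = pre_detect(image)          # S202
gray = cv2.cvtColor(image[y1:y2 + 1, x1:x2 + 1], cv2.COLOR_BGR2GRAY)
bcm, lcm, rcm = boundary_point_spectra(gray)     # S203
blcm, brcm = fuse_spectra(bcm, lcm, rcm)         # S204a/S204b
blcmm, brcmm = denoise_spectrum(blcm), denoise_spectrum(brcm)  # S204c
x1p, y1p, x2p, y2p = target_boundaries(blcmm, brcmm, 0, 50000.0, 50000.0)  # S205
tm = third_fused_spectrum(blcmm, brcmm, x1p, x2p)              # S206
mask = (tm > 0).astype("uint8") * 255
segmented = cv2.bitwise_and(gray, gray, mask=mask)             # S207
```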
In this way, shadow interference in traffic target detection and segmentation is effectively removed, and the traffic target is detected and segmented on the basis of the pre-detection result; the algorithm is simple, the operation speed is high, no time-domain information is needed, and the method is not limited by the amount of data. This embodiment optimizes the boundary point probabilities based on the traffic target, is not easily disturbed by outlier points, and has good robustness; and because the boundary point probabilities are calculated on the pre-detection area of the traffic target, the detection result and the segmentation result can be optimized at the same time.
Referring to FIG. 7, a boundary detection device 100 according to an embodiment of the present application includes a display screen 103, a processor 101, and a communication circuit 102. The communication circuit 102 is coupled to the processor 101, and the display screen 103 is coupled to the processor 101.
The processor 101 controls the operation of the boundary detection device 100 and may also be referred to as a central processing unit (CPU). The processor 101 may be an integrated circuit chip having signal processing capabilities. The processor 101 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or any conventional processor or the like.
The processor 101 is configured to acquire an image to be detected including a traffic object through the communication circuit 102. The processor 101 is configured to perform boundary detection on a traffic target in an image to be detected to obtain a pre-detection area. The processor 101 is configured to calculate boundary point probability spectra of the pre-detection region in different directions on the gray scale map of the pre-detection region. The processor 101 is configured to perform a fusion process on the boundary point probability spectrums in at least two directions crossing each other to obtain a fused probability spectrum. The processor 101 is configured to perform further boundary detection on the fused probability spectrum to distinguish the shadow region from the target region in the pre-detection region, thereby obtaining boundaries of the target region. The display screen 103 can display the image to be detected and the detection process.
Optionally, the processor 101 is configured to set a plurality of sliding windows along a side boundary of the pre-detection region. The processor 101 is configured to move the sliding window in the direction, and compare the gray level of the pixel currently covered by the sliding window with the gray level of the pixel previously covered by the sliding window; if the gray value of the pixel covered currently is smaller than the gray value of the pixel covered last time, replacing the gray value of the pixel covered currently with the gray value of the pixel covered last time; and if the gray value of the pixel covered currently is larger than the gray value of the pixel covered last time, reserving the gray value of the pixel covered currently.
Alternatively, the pre-detection region includes an upper boundary and a lower boundary spaced from each other in a column direction of the image to be detected, and a left boundary and a right boundary spaced from each other in a row direction of the image to be detected, and the boundary point probability spectrum includes a left boundary point probability spectrum calculated in a direction from the left boundary to the right boundary, a right boundary point probability spectrum calculated in a direction from the right boundary to the left boundary, and a lower boundary point probability spectrum calculated in a direction from the lower boundary to the upper boundary.
Optionally, each sliding window covers one pixel, and the sliding step of the sliding window is one pixel.
Optionally, the processor 101 is configured to perform a weighted summation process on the lower side boundary point probability spectrum and the left side boundary point probability spectrum to obtain a first fused probability spectrum. The processor 101 is configured to perform a weighted summation process on the lower side boundary point probability spectrum and the right side boundary point probability spectrum to obtain a second fused probability spectrum.
Optionally, the processor 101 is configured to perform weighted summation processing on the lower side boundary point probability spectrum and the left side boundary point probability spectrum, and perform weighted summation processing on the lower side boundary point probability spectrum and the right side boundary point probability spectrum by average weighting respectively.
Optionally, the processor 101 is configured to perform denoising processing on the first fused probability spectrum and the second fused probability spectrum respectively, where low probability points are removed by threshold filtering, and high probability points are removed by median filtering.
Optionally, the processor 101 is configured to determine left boundary coordinates and first lower boundary coordinates of the target region according to the first fusion probability spectrum. The processor 101 is configured to determine a right side boundary coordinate and a second lower side boundary coordinate of the target region according to the second fusion probability spectrum. The processor 101 is configured to perform a weighted summation of the first lower boundary coordinate and the second lower boundary coordinate to obtain a third lower boundary coordinate of the target area.
Optionally, the processor 101 is configured to perform a weighted summation of the first fused probability spectrum and the second fused probability spectrum to obtain a third fused probability spectrum. The processor 101 is configured to perform object segmentation on the traffic object based on the third fused probability spectrum.
The boundary detection device 100 for a traffic target described in this embodiment is, for example, a terminal or mobile terminal, such as one or a combination of a mobile phone, a computer, a server, a tablet computer, a video camera, and the like.
Referring to FIG. 8, a boundary detection device 200 for a traffic target according to another embodiment of the present application includes an acquiring module 201, a pre-detection module 202, a calculation module 203, a fusion module 204, and a detection and distinguishing module 205.
The acquiring module 201 is used for acquiring an image to be detected that contains a traffic target. The pre-detection module 202 is used for performing boundary detection on the traffic target in the image to be detected to obtain a pre-detection area. The calculation module 203 is used for calculating the boundary point probability spectra of the pre-detection area along different directions on the grayscale map of the pre-detection area. The fusion module 204 is used for performing fusion processing on the boundary point probability spectra of at least two mutually intersecting directions to obtain a fusion probability spectrum. The detection and distinguishing module 205 is used for performing further boundary detection on the fusion probability spectrum to distinguish the shadow region from the target region within the pre-detection area, thereby obtaining the boundary coordinates of the target region.
For the explanation of each functional module in this embodiment, reference may be made to the explanation in the first embodiment and the second embodiment of the traffic target detection method in the present application, and details are not described here again. In the several embodiments provided in the present application, it should be understood that the disclosed boundary detection device and the detection method of the traffic target may be implemented in other manners. For example, the above-described embodiments of the boundary detection apparatus are merely illustrative, and for example, the division of the modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Referring to fig. 9, the integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in the device with storage function 300. Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage device and includes instructions (program data) for causing a computer (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes media such as a USB flash drive, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, as well as electronic devices such as a computer, mobile phone, notebook computer, tablet computer, or camera equipped with such a storage medium.
For a description of the program stored in the device with storage function, reference may be made to the first and second embodiments of the traffic target detection method of the present application; details are not repeated here.
In summary, the traffic target is accurately detected and segmented by fusing the boundary point probabilities calculated within the pre-detection area obtained by preliminary boundary detection, which effectively removes shadow interference during detection and segmentation. Because detection and segmentation are carried out on the basis of the pre-detection result, the algorithm is simple, runs fast, requires no time-domain information, and places no limit on the amount of data. Since the boundary point probabilities are optimized on the basis of the traffic target as a whole, the method is not easily disturbed by outlier points and has good robustness; and because the boundary point probability spectra are calculated within the pre-detection area of the traffic target, the detection result and the segmentation result can be optimized simultaneously.
Of course, the technical solution of the present application can also be applied to the detection and image segmentation of objects other than traffic targets.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit its scope. All equivalent-structure and equivalent-process modifications made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (12)

1. A method for detecting a boundary of a traffic target, comprising:
acquiring an image to be detected that includes the traffic target;
performing boundary detection on the traffic target in the image to be detected to obtain a pre-detection area;
calculating boundary point probability spectra of the pre-detection area along different directions on a gray scale map of the pre-detection area;
performing fusion processing on the boundary point probability spectra in at least two mutually crossing directions to obtain a fusion probability spectrum; and
further performing boundary detection on the fusion probability spectrum to distinguish a shadow region from a target region in the pre-detection area, so as to obtain each boundary coordinate of the target region.
2. The detection method according to claim 1, characterized in that the step of calculating the boundary point probability spectra of the pre-detection area along different directions on the gray scale map of the pre-detection area comprises:
arranging a plurality of sliding windows along the boundary on one side of the pre-detection area;
moving each sliding window along the corresponding direction, and comparing the gray value of the pixel currently covered by the sliding window with the gray value of the pixel previously covered by the sliding window;
if the gray value of the currently covered pixel is smaller than that of the previously covered pixel, replacing the gray value of the currently covered pixel with the gray value of the previously covered pixel; and
if the gray value of the currently covered pixel is larger than that of the previously covered pixel, retaining the gray value of the currently covered pixel.
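As a non-limiting illustration of the sweep in claim 2 (not part of the claims), assuming one single-pixel window per row moving from left to right, the comparison rule amounts to a running maximum along each row:

```python
import numpy as np

def sweep_left_to_right(gray: np.ndarray) -> np.ndarray:
    # One sliding window per row, moved pixel by pixel from the left
    # boundary: keeping the larger of the current and previous gray
    # values is a running maximum along each row.
    spectrum = gray.astype(np.float32)
    rows, cols = spectrum.shape
    for r in range(rows):
        for c in range(1, cols):
            if spectrum[r, c] < spectrum[r, c - 1]:
                spectrum[r, c] = spectrum[r, c - 1]  # replace with previous value
            # otherwise the current gray value is retained
    return spectrum
```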
3. The detection method according to claim 2, characterized in that the pre-detection area includes an upper boundary and a lower boundary spaced from each other in a column direction of the image to be detected, and a left boundary and a right boundary spaced from each other in a row direction of the image to be detected; and the boundary point probability spectra include a left boundary point probability spectrum calculated in a direction from the left boundary to the right boundary, a right boundary point probability spectrum calculated in a direction from the right boundary to the left boundary, and a lower boundary point probability spectrum calculated in a direction from the lower boundary to the upper boundary.
4. The detection method according to claim 3, characterized in that each sliding window covers one pixel, and the sliding step of each sliding window is one pixel.
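Under the one-pixel window and one-pixel step of claim 4, each directional sweep of claim 3 reduces to a cumulative maximum along the scan direction. A vectorized NumPy sketch follows (illustrative only); normalizing gray values to [0, 1] to read them as probabilities is an assumption:

```python
import numpy as np

def directional_spectra(gray: np.ndarray) -> dict:
    # Normalization of gray values to [0, 1] is an assumption.
    g = gray.astype(np.float32) / 255.0
    return {
        "left":  np.maximum.accumulate(g, axis=1),                    # left -> right
        "right": np.maximum.accumulate(g[:, ::-1], axis=1)[:, ::-1],  # right -> left
        "lower": np.maximum.accumulate(g[::-1, :], axis=0)[::-1, :],  # lower -> upper
    }
```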
5. The detection method according to claim 3, characterized in that the step of performing fusion processing on the boundary point probability spectra in the at least two mutually crossing directions comprises:
performing weighted summation on the lower boundary point probability spectrum and the left boundary point probability spectrum to obtain a first fusion probability spectrum; and
performing weighted summation on the lower boundary point probability spectrum and the right boundary point probability spectrum to obtain a second fusion probability spectrum.
6. The detection method according to claim 5, characterized in that the weighted summation performed on the lower boundary point probability spectrum and the left boundary point probability spectrum, and the weighted summation performed on the lower boundary point probability spectrum and the right boundary point probability spectrum, each use average weighting.
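An illustrative reading of the average weighting of claim 6, taking the directional spectra of the previous sketch as inputs:

```python
import numpy as np

def fuse_average(lower: np.ndarray, side: np.ndarray) -> np.ndarray:
    # Average weighting: equal 0.5 weights on the two spectra.
    return 0.5 * lower + 0.5 * side

# e.g., with the dict returned by the directional_spectra sketch above:
#   fused_1 = fuse_average(spectra["lower"], spectra["left"])
#   fused_2 = fuse_average(spectra["lower"], spectra["right"])
```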
7. The detection method according to claim 5, characterized in that the step of performing fusion processing on the boundary point probability spectra in the at least two mutually crossing directions further comprises:
denoising the first fusion probability spectrum and the second fusion probability spectrum respectively, wherein low-probability points are removed by threshold filtering and isolated high-probability noise points are removed by median filtering.
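A possible realization of the denoising of claim 7 (illustrative only), assuming a fixed low-probability threshold and a 3x3 median kernel, neither of which is specified by the claim:

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_spectrum(spectrum: np.ndarray, low_thresh: float = 0.2) -> np.ndarray:
    # Threshold filtering: suppress low-probability points
    # (the threshold value is an assumption).
    out = np.where(spectrum < low_thresh, 0.0, spectrum)
    # Median filtering: suppress isolated high-probability noise points
    # (the 3x3 kernel size is an assumption).
    return median_filter(out, size=3)
```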
8. The detection method according to claim 5, characterized in that the step of further performing boundary detection on the fusion probability spectrum comprises:
determining a left boundary coordinate and a first lower boundary coordinate of the target region according to the first fusion probability spectrum;
determining a right boundary coordinate and a second lower boundary coordinate of the target region according to the second fusion probability spectrum; and
performing weighted summation on the first lower boundary coordinate and the second lower boundary coordinate to obtain a third lower boundary coordinate of the target region.
9. The detection method according to claim 7, characterized in that the method further comprises:
performing weighted summation on the first fusion probability spectrum and the second fusion probability spectrum to obtain a third fusion probability spectrum; and
performing target segmentation on the traffic target based on the third fusion probability spectrum.
10. The detection method according to claim 1, characterized in that the step of performing boundary detection on the traffic target in the image to be detected comprises:
performing boundary detection on the traffic target in the image to be detected by means of threshold segmentation.
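One way the threshold segmentation of claim 10 might be realized (illustrative only), assuming Otsu's method and a largest-contour bounding rectangle, neither of which the claim prescribes:

```python
import cv2
import numpy as np

def pre_detect(gray: np.ndarray) -> tuple:
    # `gray` is assumed to be an 8-bit grayscale image containing at
    # least one foreground region.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV >= 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h) of the pre-detection area
```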
11. A device for detecting a boundary of a traffic target, comprising a processor and a communication circuit coupled to the processor, wherein:
the processor is configured to acquire, through the communication circuit, an image to be detected that includes the traffic target;
the processor is configured to perform boundary detection on the traffic target in the image to be detected to obtain a pre-detection area;
the processor is configured to calculate boundary point probability spectra of the pre-detection area along different directions on a gray scale map of the pre-detection area;
the processor is configured to perform fusion processing on the boundary point probability spectra in at least two mutually crossing directions to obtain a fusion probability spectrum; and
the processor is configured to further perform boundary detection on the fusion probability spectrum to distinguish a shadow region from a target region in the pre-detection area, so as to obtain each boundary coordinate of the target region.
12. A device having a storage function, in which program data are stored, the program data being executable to implement the detection method according to any one of claims 1 to 10.
CN201910893262.8A 2019-09-20 2019-09-20 Method, equipment and device for detecting boundary of traffic target Active CN110765875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910893262.8A CN110765875B (en) 2019-09-20 2019-09-20 Method, equipment and device for detecting boundary of traffic target

Publications (2)

Publication Number Publication Date
CN110765875A (en) 2020-02-07
CN110765875B CN110765875B (en) 2022-04-19

Family

ID=69330779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910893262.8A Active CN110765875B (en) 2019-09-20 2019-09-20 Method, equipment and device for detecting boundary of traffic target

Country Status (1)

Country Link
CN (1) CN110765875B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533514A (en) * 2007-11-14 2009-09-16 索尼株式会社 Object boundary accurate motion detection using hierarchical block splitting and motion segmentation
US20140093177A1 (en) * 2012-09-28 2014-04-03 Pfu Limited Image processing apparatus, image processing system and computer readable medium
CN104463853A (en) * 2014-11-22 2015-03-25 四川大学 Shadow detection and removal algorithm based on image segmentation
CN108205675A (en) * 2016-12-20 2018-06-26 浙江宇视科技有限公司 The processing method and equipment of a kind of license plate image
CN106920245A (en) * 2017-03-13 2017-07-04 深圳怡化电脑股份有限公司 A kind of method and device of border detection
CN107220943A (en) * 2017-04-02 2017-09-29 南京大学 The ship shadow removal method of integration region texture gradient
CN106991411A (en) * 2017-04-17 2017-07-28 中国科学院电子学研究所 Remote Sensing Target based on depth shape priori becomes more meticulous extracting method
CN108038864A (en) * 2017-12-05 2018-05-15 中国农业大学 A kind of extracting method and system of animal target image
CN109977767A (en) * 2019-02-18 2019-07-05 浙江大华技术股份有限公司 Object detection method, device and storage device based on super-pixel segmentation algorithm
CN110097569A (en) * 2019-04-04 2019-08-06 北京航空航天大学 Oil tank object detection method based on color Markov Chain conspicuousness model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG JI et al., "Shadow Removal Based on Gray Correlation Analysis and Sobel Edge Detection Algorithm", International Symposium on Neural Networks *
DING Ailing et al., "Vehicle Shadow Removal Based on Single-Side Shadow Features", CAAI Transactions on Intelligent Systems *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613370A (en) * 2020-12-15 2021-04-06 浙江大华技术股份有限公司 Target defect detection method, device and computer storage medium
CN114367459A (en) * 2022-01-11 2022-04-19 宁波市全盛壳体有限公司 Video identification detection method for automatic painting UV curing equipment
CN114367459B (en) * 2022-01-11 2024-03-08 宁波市全盛壳体有限公司 Video identification and detection method for automatic painting UV curing equipment

Also Published As

Publication number Publication date
CN110765875B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
CN109978890B (en) Target extraction method and device based on image processing and terminal equipment
CN106780392B (en) Image fusion method and device
Wu et al. Lane-mark extraction for automobiles under complex conditions
KR101051459B1 (en) Apparatus and method for extracting edges of an image
EP3254236A1 (en) Method and apparatus for target acquisition
EP3438929B1 (en) Foreground and background detection method
WO2013186662A1 (en) Multi-cue object detection and analysis
CN107220962B (en) Image detection method and device for tunnel cracks
CN109478329B (en) Image processing method and device
CN112947419B (en) Obstacle avoidance method, device and equipment
US20170178341A1 (en) Single Parameter Segmentation of Images
CN111950543A (en) Target detection method and device
CN112364865B (en) Method for detecting small moving target in complex scene
US20150220804A1 (en) Image processor with edge selection functionality
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN117037103A (en) Road detection method and device
CN115049954A (en) Target identification method, device, electronic equipment and medium
KR101690050B1 (en) Intelligent video security system
CN114155278A (en) Target tracking and related model training method, related device, equipment and medium
CN111695373A (en) Zebra crossing positioning method, system, medium and device
CN106778822B (en) Image straight line detection method based on funnel transformation
CN109255797B (en) Image processing device and method, and electronic device
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN115083008A (en) Moving object detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant