CN110163147B - Binarization method, device, equipment and storage medium for stacking five-distance detection - Google Patents

Binarization method, device, equipment and storage medium for stacking five-distance detection

Info

Publication number
CN110163147B
CN110163147B (application number CN201910422405.7A)
Authority
CN
China
Prior art keywords
binary
imaging picture
calculating
gray
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910422405.7A
Other languages
Chinese (zh)
Other versions
CN110163147A (en)
Inventor
刘学君
魏宇晨
晏涌
沙芸
栾海英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Petrochemical Technology
Original Assignee
Beijing Institute of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Petrochemical Technology filed Critical Beijing Institute of Petrochemical Technology
Priority to CN201910422405.7A priority Critical patent/CN110163147B/en
Publication of CN110163147A publication Critical patent/CN110163147A/en
Application granted granted Critical
Publication of CN110163147B publication Critical patent/CN110163147B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention relates to a binarization method, device, equipment and storage medium applied to storage stack five-distance detection. The method comprises the following steps: acquiring an imaging picture of a stack to be processed in a night vision environment; acquiring each pixel point of the imaging picture according to a preset resolution; for each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein a binary group is a two-dimensional gray array; calculating a two-dimensional maximum peak difference for the binary group set; determining a target binary gray threshold according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition; and performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold. The binarization method in the embodiments of the application binarizes imaging pictures taken in a night vision environment, so that the pictures are processed more effectively and the edge information, corner information and the like of an object can be detected reliably.

Description

Binarization method, device, equipment and storage medium for stacking five-distance detection
Technical Field
The invention relates to the technical field of image processing, in particular to a binarization method, a device, equipment and a storage medium applied to storage stacking five-distance detection.
Background
Monitoring the safe distance of hazardous chemical stacks has become a problem that needs to be solved ever more urgently in recent years, and building a cloud-platform-based binocular camera five-distance detection system for intelligent safe-distance early warning can solve it effectively. In this monitoring task, feature extraction is particularly critical: its accuracy directly affects the accuracy of the subsequent distance measurement. Especially in a night vision environment, the gray values of the pixel points are very close to one another, feature point extraction becomes inaccurate, feature point information cannot be acquired correctly, and the accuracy of edge extraction is degraded in turn.
In the prior art, after a night vision image is processed by gray-level histogram equalization, the gray levels concentrate into a narrow range, the extracted edges are unclear, and corner points cannot be identified. Although edge information and corner information of an object can be detected after one-dimensional Otsu threshold processing, the foreground information is not effectively separated in the binary image, so that a large amount of noise information from the original night vision image remains.
Disclosure of Invention
In view of the above, a binarization method, device, equipment and storage medium for warehouse stacking five-distance detection are provided to solve the problem that edge information and corner information of an object cannot be effectively detected due to improper imaging picture processing in a night vision environment in the prior art.
The invention adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides a binarization method applied to storage stack "five-distance" detection, where the method includes:
acquiring an imaging picture of a stack to be processed in a night vision environment;
acquiring each pixel point of the imaging picture according to a preset resolution;
for each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array;
calculating a two-dimensional maximum peak difference for the binary set;
determining a target binary gray threshold according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition;
and performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
In a second aspect, an embodiment of the present application provides a binarization device applied to storage stack "five-distance" detection, the device including:
the image acquisition module is used for acquiring an imaging image of the stack to be processed in a night vision environment;
the pixel point acquisition module is used for acquiring each pixel point of the imaging picture according to a preset resolution;
the binary group calculating module is used for calculating a corresponding binary group for each pixel point in the imaging picture aiming at each imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array;
the maximum peak difference calculation module is used for calculating a two-dimensional maximum peak difference aiming at the binary set;
the gray threshold value determining module is used for determining a target binary gray threshold value according to the binary group corresponding to the maximum peak value difference and a preset binary demarcation point condition;
and the binarization processing module is used for carrying out binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
In a third aspect, an embodiment of the present application provides an apparatus, including:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program, and the computer program is at least used for executing the binarization method applied to warehouse stack five-distance detection in the first aspect of the embodiment of the application;
the processor is used for calling and executing the computer program in the memory.
In a fourth aspect, the present application provides a storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the binarization method applied to the storage stack "five-distance" detection as described in the first aspect.
By adopting the above technical scheme, the imaging picture of the stack to be processed in a night vision environment is first acquired; each pixel point of the imaging picture is obtained according to a preset resolution; for each imaging picture, a corresponding binary group is calculated for each pixel point, and the binary groups corresponding to the pixel points are combined into a binary group set, wherein a binary group is a two-dimensional gray array; a two-dimensional maximum peak difference is calculated for the binary group set; a target binary gray threshold is determined according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition, so that the determined target threshold is more accurate; and binarization processing is performed on the corresponding imaging picture by applying the target binary gray threshold. This processing leaves less noise in the binarized image, so that edge information, corner information and the like of the object can be detected effectively in subsequent steps.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a binocular "five-distance" detection system suitable for use in embodiments of the present invention;
fig. 2 is a flowchart of a binarization method applied to storage stack five-distance detection according to an embodiment of the present invention;
fig. 3 is a flowchart of another binarization method applied to warehouse stack five-distance detection according to an embodiment of the present invention;
FIG. 4 is a set of imaged pictures in a night vision environment suitable for use in embodiments of the present invention;
FIG. 5 is a set of corner point detection views of imaged pictures in a night vision environment suitable for use in embodiments of the present invention;
FIG. 6 is a flow chart of an improved two-dimensional Otsu threshold and transform extraction algorithm suitable for use in embodiments of the present invention;
FIG. 7 is a set of randomly selected partial screenshot pictures from a standard data set suitable for use in embodiments of the present invention;
FIG. 8 shows the binarized images corresponding to the randomly selected partial pictures from the standard data set in embodiments of the present invention;
FIG. 9 shows the three-dimensional point distribution diagrams corresponding to the randomly selected partial pictures from the standard data set in embodiments of the present invention;
FIG. 10 is a set of full-stack, over-stack and under-stack experimental simulation scene graphs applicable to embodiments of the present invention;
FIG. 11 is a binarized view of a set of full, over and under-stacked experimental simulation scene graphs suitable for use in embodiments of the present invention;
FIG. 12 is a three-dimensional point map of a set of full, over and under-stacked experimental simulation scenarios applicable to embodiments of the present invention;
FIG. 13 is a diagram of edge detection and corner detection after a set of gray-level histogram equalization processes applied in the embodiment of the present invention;
FIG. 14 is a set of graphs of edge detection and corner detection after one-dimensional Otsu thresholding in accordance with an embodiment of the present invention;
FIG. 15 is a set of edge detection and corner detection graphs after two-dimensional Otsu thresholding applied in embodiments of the present invention;
fig. 16 is a schematic structural diagram of a binarization device applied to storage stack five-distance detection according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
First, an application scenario of the embodiments of the application is explained. The embodiments provide a binarization method applied to storage stack five-distance detection. A specific scenario is applying this binarization method to the five-distance detection of hazardous chemical stacks in a night vision environment, where the binarization result or precision greatly affects the accuracy of the distance measurement; this is all the more important because the gray values of the pixel points are close to one another under night vision. Specifically, "five-distance" means that when materials are stored in a warehouse, the stored materials should keep prescribed distances from the warehouse wall, from the pillars in the warehouse, from the lamps, from the beams and the roof, and a spacing should also be kept between the stored material stacks themselves.
In addition, in recent years, safety accidents involving stacked hazardous chemicals, such as the Tianjin Port incident and the Zhangjiakou chemical plant incident in Hebei, have repeatedly caused deaths and injuries. Strengthening the supervision of hazardous chemicals therefore deserves more and more attention, and intelligent supervision measures are an important and effective scientific means. At present, related research on the safety monitoring of hazardous chemical stacks is being carried out continuously, and laser scanning systems, UWB positioning systems and the like have been proposed. The embodiments of the application can be applied to a five-distance measuring platform for hazardous chemical stacks built with binocular cameras, using a cloud service as the communication platform for supervision. Binocular five-distance measurement is divided into steps such as camera calibration, corner matching and depth information extraction, and fig. 1 shows a flow chart of a binocular five-distance detection system, which mainly comprises binocular calibration, feature extraction, corner detection, stereo matching, depth reconstruction, parallax ranging and the like.
Examples
Fig. 2 is a flowchart of a binarization method applied to warehouse stack five-distance detection according to an embodiment of the present invention, where the method may be implemented by a binarization device applied to warehouse stack five-distance detection according to an embodiment of the present invention, and the device may be implemented in a software and/or hardware manner. Referring to fig. 2, the method may specifically include the following steps:
s201, obtaining an imaging picture of the stack to be processed in a night vision environment.
A stack means that articles are piled up, or regularly arranged, into a stack shape; monitoring the safe distance of hazardous chemical stacks is of great significance for monitoring their safety state under normal conditions. In the present example, the stack was simulated using square wooden blocks.
Because the gray values of the pixel points of an imaging picture in a night vision environment are relatively close, and because of limitations such as management cost, a 4K high-definition starlight color night vision camera cannot be used to acquire the imaging picture in some storage stacking scenes; an ordinary camera or a high-definition camera is therefore usually used for shooting. In this case the image definition is not high and the gray values of the pixel points are close, so the corner positions cannot be detected subsequently, other detection or processing of the imaging picture cannot be carried out, or the accuracy of the subsequent detection or processing is directly reduced.
For example, night vision pictures can be acquired with a binocular camera of model AX-850-V1.0 at 1920 x 1080 resolution, where the desktop computer is configured with an Intel Core i5-4200 processor, a 2 GB DDR3 discrete graphics card, 8 GB of memory and a 500 GB hard disk.
S202, obtaining each pixel point of the imaging picture according to a preset resolution.
Specifically, resolution includes display resolution and image resolution; the resolution in the embodiments of the application refers to image resolution, that is, the number of pixels contained in a unit inch. The preset resolution is set according to the requirements of different application scenes, and several preset resolutions may exist. A suitable preset resolution, for example 480 x 480, is selected in combination with the application scene corresponding to the current night vision picture, and each pixel point of the imaging picture is obtained according to it; in this specific example the imaging picture has 480 x 480 pixel points. In addition, the preset resolution may also depend on the performance of the photographing apparatus that acquires the imaging picture, and may be set or adjusted according to actual conditions.
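As a hedged illustration of S201 and S202, the following Python sketch loads a night-vision frame as a grayscale image and resamples it to the preset resolution so that its pixel points can be iterated over as a regular grid; the file name, the 480 x 480 resolution and the use of OpenCV are assumptions made for the example, not requirements of the embodiment.

```python
import cv2

PRESET_RESOLUTION = (480, 480)  # preset resolution from the example above (assumed)

# Load the imaging picture of the stack to be processed as a grayscale image.
gray = cv2.imread("night_stack.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
# Resample to the preset resolution so each pixel point can be addressed directly.
gray = cv2.resize(gray, PRESET_RESOLUTION, interpolation=cv2.INTER_AREA)
height, width = gray.shape  # 480 rows and 480 columns of pixel points
```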
S203, aiming at each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array.
Specifically, after each pixel point of an imaging picture is obtained, the gray value of each pixel point is determined; at this time a corresponding binary group can be calculated for each pixel point, and each binary group is a two-dimensional gray value array. After the binary group corresponding to each pixel point is calculated, the binary groups corresponding to the pixel points are combined into a binary group set, that is, the binary group set stores the binary group corresponding to every pixel point of one imaging picture.
And S204, calculating a two-dimensional maximum peak difference aiming at the binary set.
Specifically, the Otsu binarization method is briefly explained first. The algorithm assumes that the image contains two classes of pixels, foreground pixels and background pixels, and searches for the optimal threshold separating the two classes so that the intra-class variance is minimized; because the total variance is constant, this is equivalent to maximizing the inter-class variance. Optionally, the binarization method in the embodiments of the application also belongs to the Otsu family, but it is an improved Otsu binarization method, and the maximum peak difference in the application is positively correlated with the inter-class variance. Because the binary group set contains many binary groups and each candidate binary group threshold yields a peak difference, the maximum peak difference is calculated over the binary group set.
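For orientation, the classical one-dimensional Otsu relation underlying this equivalence can be written as follows; this is a standard textbook identity, not a formula from the patent: the total gray-level variance splits into within-class and between-class parts, so minimizing the one is the same as maximizing the other.

```latex
\sigma_T^2 \;=\; \sigma_W^2(T) + \sigma_B^2(T),
\qquad
\sigma_B^2(T) \;=\; \omega_1(T)\,\omega_2(T)\,\bigl[\mu_1(T)-\mu_2(T)\bigr]^2,
\qquad
T^{*} \;=\; \arg\max_{T}\ \sigma_B^2(T)
```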
S205, determining a target binary gray threshold according to the binary corresponding to the maximum peak difference and a preset binary demarcation point condition.
Specifically, the binary group corresponding to the maximum peak difference is found, and the target binary gray threshold is then determined according to a preset binary demarcation point condition. In one specific example, the preset binary demarcation point condition is mainly used to decide which of the two gray values in the binary group is taken as the target gray threshold. The preset binary demarcation point condition is a set rule that can be adjusted according to actual conditions.
And S206, performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
Specifically, after the target binary gray threshold is determined, binarization processing can be performed on the corresponding imaging picture by using it: for example, pixels larger than the target binary gray threshold are all foreground pixels, which form the foreground region of the imaging picture, and pixels smaller than the target binary gray threshold are all background pixels, which form the background region of the imaging picture.
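A minimal sketch of S206, assuming a NumPy grayscale array and the foreground/background convention described above (variable names are illustrative):

```python
import numpy as np

def binarize(gray: np.ndarray, target_threshold: float) -> np.ndarray:
    """Apply the target binary gray threshold: pixels above it become
    foreground (255), all others become background (0)."""
    return np.where(gray > target_threshold, 255, 0).astype(np.uint8)
```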
By adopting the above technical scheme, the imaging picture of the stack to be processed in a night vision environment is first acquired; each pixel point of the imaging picture is obtained according to a preset resolution; for each imaging picture, a corresponding binary group is calculated for each pixel point, and the binary groups corresponding to the pixel points are combined into a binary group set, wherein a binary group is a two-dimensional gray array; a two-dimensional maximum peak difference is calculated for the binary group set; a target binary gray threshold is determined according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition, so that the determined target threshold is more accurate; and binarization processing is performed on the corresponding imaging picture by applying the target binary gray threshold. This processing leaves less noise in the binarized image, so that edge information, corner information and the like of the object can be detected effectively in subsequent steps. By extracting object information with the binarization method of the embodiments of the application, the problem of the traditional one-dimensional Otsu threshold, which cannot take the information around each pixel point into account and therefore yields an inaccurate decomposition threshold containing noise, can be avoided.
Fig. 3 is a flowchart of another binarization method applied to warehouse stack "five-distance" detection according to another embodiment of the present invention, which is implemented on the basis of the above embodiment. Referring to fig. 3, the method may specifically include the following steps:
s301, obtaining an imaging picture of the stack to be processed in a night vision environment.
S302, obtaining each pixel point of the imaging picture according to a preset resolution.
And S303, aiming at each imaging picture, determining the gray value of each pixel point in the imaging picture as the first-dimension data of the binary group.
Specifically, one imaging picture is taken as an example here; the other imaging pictures are processed in the same way. Select any imaging picture and assume it has 480 x 480 pixel points; each pixel point corresponds to a binary group, and the binary groups of different pixel points may be identical, in which case they are called equal-value binary groups. Optionally, the number of equal-value binary groups can be counted. Taking the pixel point in the 1st pixel row and 1st pixel column as an example, the first-dimension data of its binary group is the gray value of that pixel point; that is, the first-dimension data of the binary group of each pixel point is the gray value of that point.
S304, taking the position of the pixel point as a center, and taking the average value of the gray values of all the pixel points in a preset window length as second-dimension data of the binary group.
Next, how the second-dimension data of the binary group of each pixel point is determined is described. The position of the pixel point currently under consideration is taken as the center; the position can be described with the concepts of pixel row and pixel column, for example the 1st row and 1st column. The preset window length may be 3 x 3 pixels; in that case all the pixel points within the preset window length are the 3 x 3 pixel points including the current pixel point, and the average of the gray values of these 9 pixel points is the second-dimension data of the binary group. In a specific example, the average of the 9 gray values may be computed directly as an arithmetic mean, or as a weighted mean with a weight assigned to each point. It should be noted that this is only an example; the preset window length and the way the window is chosen are not unique.
S305, combining the first dimension data and the second dimension data into a binary group.
Specifically, for each pixel point the first-dimension data and the second-dimension data are combined into a binary group, so that each pixel point corresponds to one binary group and each imaging picture corresponds to a binary group set composed of many binary groups. It should be noted that the gray values of different pixel points may be the same, and therefore the binary groups of different pixel points may also be the same.
In one specific example, the binary group of each pixel point can be calculated by a formula of the following form:

$$f(i,j)=\Bigl(D'_{i\cdot T+j},\ \frac{1}{x\cdot y}\sum_{m}\sum_{n} D'_{m\cdot T+n}\Bigr),\qquad 0\le i<\text{height},\ 0\le j<\text{wide}$$

where i and j are the iteration indices, wide and height are the pixel width and height of the imaging picture (for example 480 x 480), m and n are the positions, in length and width, of the window points offset from the pixel origin (optionally, the pixel origin may be the current pixel point), T is the memory row length actually occupied by the imaging picture, x and y are the length and width of the window function (the differences between the start and end points of m and n), $D'_{m\cdot T+n}$ is the gray value of a point of the window function, and $D'_{i\cdot T+j}$ is the gray value of the pixel point of the imaging picture.
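The computation of S303 to S305 can be sketched as follows; the 3 x 3 window, OpenCV's default border handling and the unweighted box mean are assumptions made for illustration and may differ from the exact window described above.

```python
import cv2
import numpy as np

def build_doublets(gray: np.ndarray, win: int = 3) -> np.ndarray:
    """Return an array of shape (H, W, 2): for each pixel point, its own gray
    value (first-dimension data) and the mean gray value of the win x win
    window centred on it (second-dimension data)."""
    first = gray.astype(np.float64)            # gray value of the pixel point itself
    second = cv2.blur(first, (win, win))       # window mean (box filter, default borders)
    return np.stack([first, second], axis=-1)  # one binary group per pixel point
```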
S306, combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary groups are two-dimensional gray array.
S307, calculating peak value differences corresponding to the binary threshold values aiming at the binary set.
Here, each binary group threshold is determined by the preset resolution and the number of occurrences of the same binary group. Specifically, each candidate binary group threshold is selected in turn and the peak difference corresponding to that threshold is calculated; for example, when the binary group threshold is 168 one peak difference is calculated, when it is 172 another peak difference is calculated, and so on, so that the peak difference corresponding to each binary group threshold is obtained.
Optionally, calculating the peak difference corresponding to each binary threshold may be implemented as follows: solving the probability distribution of the binary group set; dividing the imaging picture into a foreground region and a background region according to the binary threshold; calculating, from the probability distribution, the foreground joint probability density of the foreground region, the background joint probability density of the background region, the foreground region components of the foreground region, the background region components of the background region and the global region components; and calculating the peak difference according to the foreground joint probability density, the background joint probability density, the foreground region components, the background region components and the global region components.
Specifically, a binary threshold is taken as an example to illustrate the calculation process of the peak difference.
(1) The probability distribution of the binary group set can be calculated by the following formula:

$$P_{ij}=\frac{G_{ij}}{\text{wide}\times\text{height}},\qquad 0\le i,j\le L$$

where $P_{ij}$ is the probability distribution, L is the maximum gray value (255), and $G_{ij}$ is the number of occurrences of the same binary group (i, j).
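A sketch of step (1), assuming the binary groups produced above and L + 1 = 256 quantization levels on both axes; G_ij counts the occurrences of each (gray value, window mean) binary group and P_ij is its normalized frequency.

```python
import numpy as np

def joint_probability(doublets: np.ndarray, levels: int = 256) -> np.ndarray:
    """Build G_ij (occurrence counts of the same binary group) and return
    P_ij = G_ij / (wide * height)."""
    i = doublets[..., 0].astype(int).ravel()            # first-dimension data
    j = np.rint(doublets[..., 1]).astype(int).ravel()   # second-dimension data, rounded
    G = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(G, (i, j), 1.0)
    return G / G.sum()
```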
(2) And dividing the imaging picture into a foreground region and a background region according to a binary threshold.
Specifically, the picture region composed of pixel points whose gray values are larger than the binary threshold is the foreground region, and the picture region composed of pixel points whose gray values are smaller than the binary threshold is the background region. The foreground region and the background region are determined only by the current binary threshold; different binary thresholds therefore yield different foreground and background regions.
(3) And calculating foreground joint probability density distribution of the foreground region, background joint probability density distribution of the background region, foreground region components of the foreground region, background region components of the background region and global region components according to the probability distribution.
Specifically, the foreground joint probability density of the foreground region and the background joint probability density of the background region can be calculated by the following formulas:

$$\omega_1=\sum_{(i,j)\in F}P_{ij},\qquad \omega_2=\sum_{(i,j)\in B}P_{ij}$$

where $\omega_1$ is the foreground joint probability density, $\omega_2$ is the background joint probability density, F and B are the foreground and background regions, and s and t are the binary threshold values that separate them. The foreground mean vector components, the background mean vector components and the global mean vector components of i and j can then be calculated respectively by the following formulas:

$$\mu_1=(\mu_{1i},\mu_{1j})=\Bigl(\sum_{(i,j)\in F}\frac{i\,P_{ij}}{\omega_1},\ \sum_{(i,j)\in F}\frac{j\,P_{ij}}{\omega_1}\Bigr)$$

$$\mu_2=(\mu_{2i},\mu_{2j})=\Bigl(\sum_{(i,j)\in B}\frac{i\,P_{ij}}{\omega_2},\ \sum_{(i,j)\in B}\frac{j\,P_{ij}}{\omega_2}\Bigr)$$

$$\mu_T=(\mu_{Ti},\mu_{Tj})=\Bigl(\sum_{i=0}^{L}\sum_{j=0}^{L} i\,P_{ij},\ \sum_{i=0}^{L}\sum_{j=0}^{L} j\,P_{ij}\Bigr)$$

where $\mu_1$ gives the values of the foreground components i and j, with $\mu_{1i}$ the value of the foreground component i and $\mu_{1j}$ the value of the foreground component j; $\mu_2$ gives the values of the background components i and j, with $\mu_{2i}$ the value of the background component i and $\mu_{2j}$ the value of the background component j; and $\mu_T$ gives the values of the global components i and j, with $\mu_{Ti}$ the value of the global component i and $\mu_{Tj}$ the value of the global component j.
(4) The peak difference is calculated according to the foreground joint probability density of the foreground region, the background joint probability density of the background region, the foreground region components, the background region components and the global region components.
In one specific example, the maximum peak difference may be calculated by the following equation:

$$\mathrm{Tr}(s,t)=\omega_1\bigl[(\mu_{1i}-\mu_{Ti})^2+(\mu_{1j}-\mu_{Tj})^2\bigr]+\omega_2\bigl[(\mu_{2i}-\mu_{Ti})^2+(\mu_{2j}-\mu_{Tj})^2\bigr],\qquad \mathrm{Max}\{Tr_{\max}\}=\max_{s,t}\mathrm{Tr}(s,t)$$

In the formula, $\mathrm{Max}\{Tr_{\max}\}$ is the maximum peak difference after iterating over all candidate thresholds, and the pixel point corresponding to the two-dimensional maximum peak difference is the demarcation point between the foreground and the background.
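The search for the two-dimensional maximum peak difference in steps (2) to (4) can be sketched as follows. This follows the standard two-dimensional Otsu formulation using cumulative sums; the improved variant described in the embodiments may weight or restrict the terms differently, so the code is an illustration rather than the patented algorithm itself.

```python
import numpy as np

def max_peak_difference(P: np.ndarray):
    """Return (Max{TrMax}, (s, t)): the maximum peak difference and the binary
    group threshold achieving it, given the joint probability P_ij."""
    L = P.shape[0]
    idx = np.arange(L, dtype=np.float64)
    w = P.cumsum(0).cumsum(1)                    # probability mass of the class with i <= s, j <= t
    mi = (P * idx[:, None]).cumsum(0).cumsum(1)  # cumulative sum of i * P_ij
    mj = (P * idx[None, :]).cumsum(0).cumsum(1)  # cumulative sum of j * P_ij
    mu_ti, mu_tj = mi[-1, -1], mj[-1, -1]        # global mean vector components

    best, best_st = -1.0, (0, 0)
    for s in range(L):
        for t in range(L):
            w1 = w[s, t]           # one class
            w2 = 1.0 - w1          # the complementary class
            if w1 < 1e-12 or w2 < 1e-12:
                continue
            mu1i, mu1j = mi[s, t] / w1, mj[s, t] / w1
            mu2i, mu2j = (mu_ti - mi[s, t]) / w2, (mu_tj - mj[s, t]) / w2
            tr = (w1 * ((mu1i - mu_ti) ** 2 + (mu1j - mu_tj) ** 2)
                  + w2 * ((mu2i - mu_ti) ** 2 + (mu2j - mu_tj) ** 2))
            if tr > best:
                best, best_st = tr, (s, t)
    return best, best_st
```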
S308, determining the maximum peak difference in the peak differences to be a two-dimensional maximum peak difference.
S309, according to the preset binary demarcation point condition, calculating the numerical value obtained by processing the first-dimension data and the second-dimension data in the binary group corresponding to the maximum peak difference, and calculating the absolute value of the difference between that value and 1.
Specifically, in practical applications the binary group corresponding to the two-dimensional maximum peak difference contains first-dimension data and second-dimension data, and it must be decided which of the two is the target binary gray threshold. The first-dimension data i is the gray value of the current decomposition point, and the second-dimension data j is the mean gray value of the window function around that pixel point. In one specific example, the processing here refers to the ratio i/j, and the absolute value of its difference from 1, marked as A, is

$$A=\left|\frac{i}{j}-1\right|$$
S310, if the absolute value is smaller than a preset threshold, determining that the first-dimension data is a target binary gray threshold; otherwise, calculating the average value of the first dimension data and the second dimension data as a target binary gray threshold.
Specifically, if A is smaller than a preset threshold, where the preset threshold may be 0.000001, the first-dimension data is determined to be the target binary gray threshold, that is, the gray value of the current pixel point; and if A is larger than the preset threshold, the average of the first-dimension data and the second-dimension data is calculated as the target binary gray threshold. For example, if the first-dimension data is S1 and the second-dimension data is S2, the target binary gray threshold is (S1 + S2)/2.
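S309 and S310 can be condensed into the following helper; the ratio i/j as the "processing" of the two dimensions and the 0.000001 preset threshold follow the example above and are marked as assumptions.

```python
def target_gray_threshold(i: float, j: float, eps: float = 1e-6) -> float:
    """Pick the target binary gray threshold from the binary group (i, j)
    at the maximum peak difference. The ratio i / j is the assumed
    demarcation-point processing; eps is the preset threshold (0.000001)."""
    A = abs(i / j - 1.0)               # absolute difference of the ratio from 1
    return i if A < eps else (i + j) / 2.0
```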
And S311, performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
And S312, performing gradient dimension reduction edge extraction on the imaging picture after the binarization processing by applying a preset edge detection algorithm so as to perform corner point detection.
After binarization processing, gradient dimensionality reduction edge extraction is carried out on the imaging picture after binarization processing by using a preset edge detection algorithm so as to carry out corner point detection.
FIG. 4 shows a set of imaging pictures in a night vision environment; FIG. 5 shows a set of corner point detection diagrams for imaging pictures in a night vision environment; FIG. 6 shows a flow chart of the improved two-dimensional Otsu threshold and transformation extraction algorithm, where 501 is a detected corner point, shown only for illustration; other corner points can also be detected and are not marked in FIG. 5. The binarization method applied to storage stack five-distance detection in the application is an improved two-dimensional Otsu threshold and transformation extraction algorithm.
In the embodiments of the application, a way of acquiring the first-dimension data and the second-dimension data of the binary group is designed, in which the second-dimension data uses the average of the gray values of all pixel points within a preset window length; two-dimensional data can thus be used, which gives higher accuracy than using one-dimensional data alone. Then each binary group threshold is applied in turn to calculate the corresponding peak difference, the data in the binary group with the maximum peak difference is selected for further analysis, and the corresponding target binary gray threshold is determined for the cases where the absolute value is smaller than and larger than the preset threshold, respectively. Finally, binarization processing is performed with the target binary threshold, followed by gradient dimension-reduction edge extraction for operations such as corner detection. In addition, stacking feature extraction, corner detection and five-distance measurement in a night vision environment are realized; after the foreground and the background are extracted, the Sobel algorithm is used for gradient dimension-reduction edge extraction, so that subsequent steps such as corner detection can be carried out.
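As an illustration of this post-processing chain (binarized picture, then Sobel gradient edge extraction, then corner detection), the following OpenCV sketch can be used; the Harris corner detector and all parameter values are assumptions chosen for the example and are not taken from the patent.

```python
import cv2
import numpy as np

def edges_and_corners(binary_img: np.ndarray):
    """Sobel gradient-magnitude edge map plus Harris corner candidates
    computed on a binarized imaging picture."""
    gx = cv2.Sobel(binary_img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary_img, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(np.hypot(gx, gy))            # edge extraction result
    response = cv2.cornerHarris(np.float32(binary_img), 2, 3, 0.04)
    corners = np.argwhere(response > 0.01 * response.max())  # (row, col) corner candidates
    return edges, corners
```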
In order to make the technical solution of the present application easier to understand, the following description is made from the perspective of experimental results.
In a first aspect, a standard data set is tested.
First, a standard data set can be downloaded from a specified source, for example from GitHub, and partial screenshot pictures can be randomly selected from it. FIG. 7 shows a set of randomly selected partial screenshot pictures of the standard data set, identified as art11.jpg, art665.jpg, bost27.jpg, bost112.jpg, gre143.jpg, gre152.jpg, house20.jpg, house30.jpg, street18.jpg, street115.jpg, url244.jpg and url307.jpg. The binarization method of the embodiments of the application is applied to obtain the corresponding binarized pictures, and FIG. 8 shows the binarized pictures corresponding to the same set of randomly selected partial pictures of the standard data set.
Then, numerical analysis is carried out on the binarized images by projecting the three-dimensional points onto the xy plane and judging whether the target binary threshold is approximately distributed on the negative diagonal. FIG. 9 shows the three-dimensional point distribution diagrams corresponding to the randomly selected partial pictures of the standard data set, with picture identifiers art11.jpg, art665.jpg, bost27.jpg, bost112.jpg, gre143.jpg, gre152.jpg, house20.jpg, house30.jpg, street18.jpg, street115.jpg, url244.jpg and url307.jpg. As can be seen from FIG. 9, the binarized three-dimensional points of the standard data set are approximately distributed on the negative diagonal of the picture in up to 75% of cases.
In a second aspect, a simulated experimental scenario is tested.
Because the stacks to be processed include full stacks, over stacks and under stacks, the experimental simulation scenes cover the full-stack condition (the maximum three-dimensional standard size, i.e. the critical point of the five-distance alarm), the over-stack condition (exceeding the maximum three-dimensional standard size, an alarm state) and the under-stack condition (smaller than the three-dimensional standard size, a safe state). FIG. 10 shows a set of full-stack, over-stack and under-stack experimental simulation scene graphs, comprising the pictures identified as Y1R.jpg, Y2L.jpg, Y2R.jpg, Y3L.jpg, Y3R.jpg, Y4R.jpg, Y5L.jpg, Y5R.jpg, Y6L.jpg and Y6R.jpg. Binarization processing is performed on FIG. 10, and FIG. 11 shows the binarization maps of the same set of scene graphs, with the same picture identifiers. In FIG. 10, Y1R.jpg, Y6L.jpg and Y6R.jpg are respectively the left and right views under different light and shadow for the under-stack condition, where R represents the left view and L represents the right view (the same naming rule applies to the following pictures); Y2L.jpg, Y2R.jpg, Y5L.jpg and Y5R.jpg are respectively the left and right views under different light and shadow for the full-stack condition; and Y3L.jpg, Y3R.jpg and Y4R.jpg are respectively the left and right views under different light and shadow for the over-stack condition.
Numerical analysis is carried out on FIG. 11 by projecting the three-dimensional points onto the xy plane and judging whether the target binary threshold is approximately distributed on the negative diagonal. FIG. 12 shows the three-dimensional point distribution diagrams corresponding to the experimental simulation scene pictures, identified as Y1R.jpg, Y2L.jpg, Y2R.jpg, Y3L.jpg, Y3R.jpg, Y4R.jpg, Y5L.jpg, Y5R.jpg, Y6L.jpg and Y6R.jpg. As can be seen from FIG. 12, the proportion of pictures of the experimental simulation scene whose binarized three-dimensional points are approximately distributed on the negative diagonal reaches 100%, while for the standard data set of FIG. 8 the proportion of such pictures reaches 87.5%.
In the third aspect, edge extraction and corner detection are verified.
For the night vision situation of FIG. 4, where the image definition is low and the pictures are blurred, algorithms such as the gray-level histogram and the one-dimensional Otsu threshold are selected, edge detection and corner detection verification are carried out on the images, and the results are compared with the binarization method of the embodiments of the application.
FIG. 13 shows a set of edge detection and corner detection images after gray-level histogram equalization. As shown in FIG. 13, from left to right they are: the original night vision image, the night vision image after gray-level histogram equalization of the original image, the edge extraction image obtained with the Sobel algorithm after histogram equalization, and the night vision image in which the corner position information found from the Sobel result is mapped back onto the original image. As FIG. 13 shows, after the experimental night vision image is processed by gray-level histogram equalization, the gray range becomes concentrated, the extracted edges are unclear, and the corner points cannot be identified.
FIG. 14 shows a set of edge detection and corner detection images after one-dimensional Otsu thresholding. As shown in FIG. 14, from left to right they are: the original night vision image, the binary image obtained from the original image with the one-dimensional Otsu threshold, the edge detection image obtained with the Sobel algorithm after the one-dimensional Otsu threshold, and the image in which corner detection is carried out on the Sobel edge extraction result and the corner position information is mapped back onto the original night vision image. As FIG. 14 shows, although the edge and corner information of the object can be detected after one-dimensional Otsu threshold processing, the foreground information is not effectively extracted in the binarized image, so a large amount of noise information from the original night vision image remains.
FIG. 15 shows a set of edge detection and corner detection images after two-dimensional Otsu thresholding, that is, the images processed by the binarization method applied to storage stack five-distance detection in the embodiments of the application. From left to right they are: the original night vision image, the binary image obtained from the original image with the two-dimensional Otsu threshold, the edge detection image obtained with the Sobel algorithm after the two-dimensional Otsu threshold, and the night vision image with corner detection after the Sobel algorithm. As FIG. 15 shows, after two-dimensional Otsu threshold processing, the edge and corner information of the object can be detected effectively, and the foreground information is extracted normally from the binarized picture.
In summary, in the developed binocular five-distance measurement system, object feature extraction is carried out more accurately when the imaging of the binocular camera is blurred or the gray values are close, as in night vision conditions. The binarization method of the embodiments of the application effectively solves the problem that algorithms such as gray-value averaging and the one-dimensional Otsu threshold cannot accurately acquire object features, making subsequent five-distance measurement steps such as corner detection and matching impossible. In addition, for the binarized three-dimensional point information of the standard data set and of the simulated experiment scene pictures, the proportion of pictures whose points are approximately distributed on the negative diagonal reaches 87.5%.
Fig. 16 is a schematic structural diagram of a binarization device applied to storage stack five-distance detection according to an embodiment of the present invention, which is adapted to execute a binarization method applied to storage stack five-distance detection according to an embodiment of the present invention. As shown in fig. 16, the apparatus may specifically include: the image processing device comprises an image acquisition module 1601, a pixel point acquisition module 1602, a binary group calculation module 1603, a maximum peak difference calculation module 1604, a gray threshold determination module 1605 and a binarization processing module 1606.
The image acquisition module 1601 is configured to acquire an imaging image of a to-be-processed stack in a night vision environment; a pixel point obtaining module 1602, configured to obtain each pixel point of the imaging picture according to a preset resolution; a binary group calculating module 1603, configured to calculate, for each imaging picture, a corresponding binary group for each pixel point in the imaging picture, and combine the binary groups corresponding to the pixel points into a binary group set, where the binary group is a two-dimensional grayscale array; a maximum peak difference calculation module 1604, configured to calculate a two-dimensional maximum peak difference for the binary set; a gray threshold determination module 1605, configured to determine a target binary gray threshold according to the binary corresponding to the maximum peak difference and a preset binary demarcation point condition; and a binarization processing module 1606, configured to apply the target binary grayscale threshold to perform binarization processing on the corresponding imaging picture.
Further, the binary group calculating module 1603 is specifically configured to:
for each pixel point in the imaging picture, determining the gray value of the pixel point as the first dimension data of the binary group;
taking the position of the pixel point as a center, and taking the average value of the gray values of all the pixel points in a preset window length as second dimension data of the binary group;
combining the first dimension data and the second dimension data into a doublet.
Further, the maximum peak difference calculation module 1604 includes:
the peak value difference calculation submodule is used for calculating the peak value difference corresponding to each binary threshold value aiming at the binary set;
a maximum peak difference determining submodule for determining a maximum peak difference among the peak differences as a two-dimensional maximum peak difference;
wherein each binary threshold is determined by the preset resolution and the number of occurrences of the same binary.
Further, the peak difference calculation sub-module is specifically configured to:
solving the probability distribution of the binary group set;
dividing the imaging picture into a foreground area and a background area according to a binary threshold;
calculating foreground joint probability density distribution of the foreground region, background joint probability density distribution of the background region, foreground region components of the foreground region, background region components of the background region and global region components according to the probability distribution;
and calculating the peak difference according to the foreground joint probability density distribution of the foreground region, the background joint probability density distribution of the background region, the foreground region component of the foreground region, the background region component of the background region and the global region component.
Further, the gray threshold determination module 1605 is specifically configured to:
calculating a numerical value obtained after processing the first dimensional data and the second dimensional data in the binary group corresponding to the maximum peak difference according to a preset binary boundary condition, and calculating an absolute value of the difference between the numerical value and 1;
if the absolute value is smaller than a preset threshold, determining that the first-dimensional data is a target binary gray threshold;
otherwise, calculating the average value of the first dimension data and the second dimension data as a target binary gray threshold.
Further, the device also comprises a corner point detection module, which is configured to, after binarization processing is performed on the corresponding imaging picture with the target binary gray threshold, perform gradient dimension-reduction edge extraction on the binarized imaging picture with a preset edge detection algorithm so as to carry out corner point detection.
Further, the stacks to be processed comprise full stacks, over stacks and under stacks.
The binarization device applied to the five-distance detection of the warehouse stack provided by the embodiment of the invention can execute the binarization method applied to the five-distance detection of the warehouse stack provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
An embodiment of the present invention further provides an apparatus, please refer to fig. 17, where fig. 17 is a schematic structural diagram of an apparatus, and as shown in fig. 17, the apparatus includes: a processor 1710, and a memory 1720 connected to the processor 1710; the memory 1720 is used for storing a computer program for executing at least a binarization method applied to warehouse stack "five-pitch" detection in the embodiments of the present invention; the processor 1710 is used for calling and executing the computer program in the memory, and the method at least comprises the following steps: acquiring an imaging picture of a stack to be processed in a night vision environment; acquiring each pixel point of the imaging picture according to a preset resolution; for each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array; calculating a two-dimensional maximum peak difference for the binary set; determining a target binary gray threshold according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition; and performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
The embodiment of the present invention further provides a storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method implements each step in the binarization method applied to warehouse stack five-distance detection in the embodiment of the present invention, where the method at least includes the following steps: acquiring an imaging picture of a stack to be processed in a night vision environment; acquiring each pixel point of the imaging picture according to a preset resolution; for each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array; calculating a two-dimensional maximum peak difference for the binary set; determining a target binary gray threshold according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition; and performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (7)

1. A binarization method applied to warehouse stack five-distance detection, characterized by comprising the following steps:
acquiring an imaging picture of a stack to be processed in a night vision environment;
acquiring each pixel point of the imaging picture according to a preset resolution;
for each imaging picture, calculating a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array; calculating a corresponding binary group for each pixel point in the imaging picture comprises: for each pixel point in the imaging picture, determining the gray value of the pixel point as first dimension data of the binary group; taking the position of the pixel point as a center, taking the average value of the gray values of all pixel points within a preset window length as second dimension data of the binary group; and combining the first dimension data and the second dimension data into the binary group;
calculating a two-dimensional maximum peak difference for the binary group set, comprising: calculating a peak difference corresponding to each binary threshold for the binary group set; and determining the maximum of the peak differences as the two-dimensional maximum peak difference; wherein each binary threshold is determined by the preset resolution and the number of identical binary groups; calculating the peak difference corresponding to each binary threshold comprises: solving a probability distribution of the binary group set; dividing the imaging picture into a foreground region and a background region according to the binary threshold; calculating, according to the probability distribution, a foreground joint probability density distribution of the foreground region, a background joint probability density distribution of the background region, a foreground region component of the foreground region, a background region component of the background region, and a global component; and calculating the peak difference according to the foreground joint probability density distribution, the background joint probability density distribution, the foreground region component, the background region component, and the global component;
determining a target binary gray threshold according to the binary group corresponding to the maximum peak difference and a preset binary demarcation point condition;
and performing binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
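Purely as an illustration of the threshold scan in claim 1 above, the following Python sketch enumerates candidate thresholds over the joint probability distribution of the binary group set and keeps the pair with the largest separation. The patent's own peak-difference formula is not reproduced in this text, so the between-class criterion of the classical two-dimensional Otsu method is used here as an assumed stand-in; the surrounding bookkeeping (joint histogram, foreground/background split, region components) follows the steps listed in the claim.

import numpy as np

def search_threshold(groups, levels=256):
    # groups: (H, W, 2) array of binary groups (gray value, neighborhood mean).
    # Returns the (s, t) threshold pair that maximizes the separation criterion.
    g = groups[..., 0].astype(int).ravel()
    m = groups[..., 1].astype(int).ravel()
    # Probability distribution of the binary group set (joint 2-D histogram).
    hist, _, _ = np.histogram2d(g, m, bins=levels, range=[[0, levels], [0, levels]])
    p = hist / hist.sum()

    i = np.arange(levels)
    P = p.cumsum(axis=0).cumsum(axis=1)                    # cumulative background-block mass
    mu_i = (p * i[:, None]).cumsum(axis=0).cumsum(axis=1)  # cumulative i-weighted mass
    mu_j = (p * i[None, :]).cumsum(axis=0).cumsum(axis=1)  # cumulative j-weighted mass
    mu_Ti, mu_Tj = mu_i[-1, -1], mu_j[-1, -1]              # global mean vector

    best, best_st = -np.inf, (0, 0)
    for s in range(levels - 1):
        for t in range(levels - 1):
            w0 = P[s, t]               # background region component
            w1 = 1.0 - w0              # foreground region component
            if w0 < 1e-12 or w1 < 1e-12:
                continue
            m0 = np.array([mu_i[s, t] / w0, mu_j[s, t] / w0])
            m1 = np.array([(mu_Ti - mu_i[s, t]) / w1, (mu_Tj - mu_j[s, t]) / w1])
            crit = w0 * w1 * np.sum((m0 - m1) ** 2)        # between-class separation
            if crit > best:
                best, best_st = crit, (s, t)
    return best_st, best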
2. The method of claim 1, wherein determining the target binary gray threshold according to the binary group corresponding to the maximum peak difference and the preset binary demarcation point condition comprises:
calculating a numerical value by processing the first dimension data and the second dimension data in the binary group corresponding to the maximum peak difference according to the preset binary demarcation point condition, and calculating the absolute value of the difference between the numerical value and 1;
if the absolute value is smaller than a preset threshold, determining the first dimension data as the target binary gray threshold;
otherwise, calculating the average value of the first dimension data and the second dimension data as the target binary gray threshold.
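A minimal sketch of the selection rule in claim 2 above, assuming (since the text does not spell it out) that the preset binary demarcation point condition is the ratio of the first dimension data to the second dimension data:

def select_gray_threshold(s, t, eps=0.05):
    # s:   first dimension data (gray value) of the binary group at the maximum peak difference
    # t:   second dimension data (neighborhood mean) of that binary group
    # eps: preset threshold on |value - 1| (assumed value)
    value = s / t if t else float("inf")   # assumed demarcation point condition
    if abs(value - 1.0) < eps:
        return float(s)                    # use the first dimension data directly
    return (float(s) + float(t)) / 2.0     # otherwise average the two dimensions

Binarizing the picture with the resulting threshold then reduces to something like (gray_img >= thr) in NumPy terms.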
3. The method according to claim 1, wherein after performing the binarization processing on the corresponding imaging picture by applying the target binary gray threshold, the method further comprises:
performing gradient dimensionality-reduction edge extraction on the binarized imaging picture by applying a preset edge detection algorithm, so as to perform corner point detection.
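As an illustration of the post-processing in claim 3 above, the sketch below applies an edge detector and a corner detector to the binarized picture with OpenCV. Canny and Harris are assumed stand-ins only, since the claim does not name the preset edge detection algorithm or the corner detector.

import cv2
import numpy as np

def edges_and_corners(binary_img):
    # binary_img: uint8 picture with values 0/255 produced by the binarization step.
    edges = cv2.Canny(binary_img, 100, 200)                 # gradient-based edge map
    response = cv2.cornerHarris(np.float32(edges), 2, 3, 0.04)
    corner_mask = response > 0.01 * response.max()          # keep strong corner responses
    return edges, corner_mask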
4. The method according to claim 1, wherein the stacks to be processed comprise full stacks, over stacks, and under stacks.
5. A binarization device applied to warehouse stack five-distance detection, characterized by comprising:
the image acquisition module is used for acquiring an imaging picture of the stack to be processed in a night vision environment;
the pixel point acquisition module is used for acquiring each pixel point of the imaging picture according to a preset resolution;
the binary group calculating module is used for calculating, for each imaging picture, a corresponding binary group for each pixel point in the imaging picture, and combining the binary groups corresponding to the pixel points into a binary group set, wherein the binary group is a two-dimensional gray array; calculating a corresponding binary group for each pixel point in the imaging picture comprises: for each pixel point in the imaging picture, determining the gray value of the pixel point as first dimension data of the binary group; taking the position of the pixel point as a center, taking the average value of the gray values of all pixel points within a preset window length as second dimension data of the binary group; and combining the first dimension data and the second dimension data into the binary group;
the maximum peak difference calculation module is used for calculating a two-dimensional maximum peak difference for the binary group set, comprising: calculating a peak difference corresponding to each binary threshold for the binary group set; and determining the maximum of the peak differences as the two-dimensional maximum peak difference; wherein each binary threshold is determined by the preset resolution and the number of identical binary groups; calculating the peak difference corresponding to each binary threshold comprises: solving a probability distribution of the binary group set; dividing the imaging picture into a foreground region and a background region according to the binary threshold; calculating, according to the probability distribution, a foreground joint probability density distribution of the foreground region, a background joint probability density distribution of the background region, a foreground region component of the foreground region, a background region component of the background region, and a global component; and calculating the peak difference according to the foreground joint probability density distribution, the background joint probability density distribution, the foreground region component, the background region component, and the global component;
the gray threshold value determining module is used for determining a target binary gray threshold value according to the binary group corresponding to the maximum peak value difference and a preset binary demarcation point condition;
and the binarization processing module is used for carrying out binarization processing on the corresponding imaging picture by applying the target binary gray threshold value.
6. Binarization equipment applied to warehouse stack five-distance detection, characterized by comprising:
a processor, and a memory coupled to the processor;
the memory is used for storing a computer program at least for executing the binarization method applied to warehouse stack five-distance detection in any one of claims 1-4;
the processor is used for calling and executing the computer program in the memory.
7. A storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the binarization method applied to warehouse stack five-distance detection as claimed in any one of claims 1-4.
CN201910422405.7A 2019-05-21 2019-05-21 Binaryzation method, device, equipment and storage medium for stacking five-distance detection Active CN110163147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910422405.7A CN110163147B (en) 2019-05-21 2019-05-21 Binaryzation method, device, equipment and storage medium for stacking five-distance detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910422405.7A CN110163147B (en) 2019-05-21 2019-05-21 Binaryzation method, device, equipment and storage medium for stacking five-distance detection

Publications (2)

Publication Number Publication Date
CN110163147A CN110163147A (en) 2019-08-23
CN110163147B true CN110163147B (en) 2021-11-09

Family

ID=67631631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910422405.7A Active CN110163147B (en) 2019-05-21 2019-05-21 Binaryzation method, device, equipment and storage medium for stacking five-distance detection

Country Status (1)

Country Link
CN (1) CN110163147B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951811A (en) * 2017-03-20 2017-07-14 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN109741306A (en) * 2018-12-26 2019-05-10 北京石油化工学院 Image processing method applied to hazardous chemical storehouse stacking

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4634292B2 (en) * 2005-12-06 2011-02-16 株式会社リコー Image processing apparatus, image processing method, and program for causing computer to execute the method
CN108550101B (en) * 2018-04-19 2023-07-25 腾讯科技(深圳)有限公司 Image processing method, device and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951811A (en) * 2017-03-20 2017-07-14 深圳市金立通信设备有限公司 A kind of image processing method and terminal
CN109741306A (en) * 2018-12-26 2019-05-10 北京石油化工学院 Image processing method applied to hazardous chemical storehouse stacking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Gaussian mixture model method for target extraction of hazardous chemical stacks (一种高斯混合模型的危化品堆垛目标提取方法); Yuan Bixian et al.; Computers and Applied Chemistry; 2018-11-28; Vol. 35, No. 11; pp. 947-952 *

Also Published As

Publication number Publication date
CN110163147A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
US9754160B2 (en) Method and device for detecting gathering of objects based on stereo vision as well as non-transitory computer-readable medium
CN107392958B (en) Method and device for determining object volume based on binocular stereo camera
CN111222395A (en) Target detection method and device and electronic equipment
US9767383B2 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
US20210048530A1 (en) Apparatus and method for efficient point cloud feature extraction and segmentation framework
CN109447902B (en) Image stitching method, device, storage medium and equipment
CN111369611B (en) Image pixel depth value optimization method, device, equipment and storage medium thereof
KR20150041428A (en) Method for object detection and apparatus thereof
CN110163147B (en) Binaryzation method, device, equipment and storage medium for stacking five-distance detection
Albrecht et al. Visual maritime attention using multiple low-level features and naive bayes classification
CN109961092B (en) Binocular vision stereo matching method and system based on parallax anchor point
Wang et al. LBP-based edge detection method for depth images with low resolutions
CN116664829A (en) RGB-T semantic segmentation method, system, device and storage medium
CN113409334B (en) Centroid-based structured light angle point detection method
Fernandez et al. One-shot absolute pattern for dense reconstruction using DeBruijn coding and windowed Fourier transform
CN115424181A (en) Target object detection method and device
CN112270693B (en) Method and device for detecting motion artifact of time-of-flight depth camera
Fatichah et al. Optical flow feature based for fire detection on video data
CN115731256A (en) Vertex coordinate detection method, device, equipment and storage medium
Cao et al. Depth image vibration filtering and shadow detection based on fusion and fractional differential
CN112672052A (en) Image data enhancement method and system, electronic equipment and storage medium
CN114092850A (en) Re-recognition method and device, computer equipment and storage medium
CN111783648A (en) Method and device for extracting guardrail in road point cloud
Volkov et al. Straight Edge Segments Localization on Noisy Images.
Pirahansiah et al. Camera calibration for multi-modal robot vision based on image quality assessment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant