CN111539980B - Multi-target tracking method based on visible light - Google Patents


Info

Publication number
CN111539980B
CN111539980B (application CN202010343384.2A)
Authority
CN
China
Prior art keywords
image
value
target
gray
edge
Prior art date
Legal status
Active
Application number
CN202010343384.2A
Other languages
Chinese (zh)
Other versions
CN111539980A (en)
Inventor
武春风
刘洋
白明顺
秦建飞
王晓丹
陈黎
Current Assignee
CASIC Microelectronic System Research Institute Co Ltd
Original Assignee
CASIC Microelectronic System Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by CASIC Microelectronic System Research Institute Co Ltd
Priority to CN202010343384.2A
Publication of CN111539980A
Application granted
Publication of CN111539980B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T5/70
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Abstract

The invention relates to the technical field of image detection and discloses a multi-target tracking method based on visible light. The method comprises the following steps: convert the original target image to a gray image and shrink the gray image by a factor of N; apply a morphological dilation operation to the reduced image; subtract the preliminarily processed gray image from the dilated image to obtain the darker gray regions, stretch this region image, compute a binarization threshold and binarize; separately, compute the Canny edges of the reduced gray image; using known target information, delete connected regions whose length and width do not match the target size, and delete connected regions that contain no edges; finally, restore the coordinates of the remaining connected regions to the coordinate scale of the original target image. By combining the difference between the original image and the morphological-operation image for initial target localization, and adding a target-screening judgment mechanism, the method preserves real-time performance while improving accuracy, and is applicable to detection of both moving and stationary targets against the sky.

Description

Multi-target tracking method based on visible light
Technical Field
The invention relates to the technical field of image detection, in particular to a multi-target tracking method based on visible light.
Background
With the development of science and technology, social progress and rising living standards, the security awareness of groups and individuals has continuously strengthened, and video monitoring systems are ever more widely applied. At present they are widely used in security, automatic and remote monitoring for banks, museums, traffic roads, commerce, the military, public security, electric power, factories and mines, intelligent communities, and similar systems and fields. The function of monitoring systems has likewise developed from the original simple functions of manually monitoring video signals, multi-picture display and hard-disk recording to intelligent motion detection and target tracking performed by computer.
Target detection refers to detecting suspected target areas in a sequence of images and extracting the targets from the background image. The main target detection algorithms at present include the frame-difference method, the background-difference method, Haar+AdaBoost, HOG+SVM, SSD, YOLO, RCNN, morphological methods, and others. Each of these methods has advantages and disadvantages.
Frame-difference and background-difference methods: these are simple and easy to implement, computationally light, and can guarantee real-time performance. In general, however, imaging size is determined by the distance between the target and the lens, and targets of different sizes and different motion states (moving and stationary) can appear at the same moment; traditional methods such as frame difference and background difference therefore cannot detect targets in all motion states. In particular, they are ineffective for stationary targets and require several consecutive images to be associated.
Learning-based methods using Haar+AdaBoost and HOG+SVM feature models: these require a large sample set, are particularly sensitive to interference such as illumination changes and extraneous events, and have low time efficiency.
Deep-learning algorithms such as SSD, YOLO and RCNN: the detection effect is good and the precision is high, but the time efficiency is low and real-time processing cannot be achieved.
Top-hat algorithm: the top-hat algorithm is used to detect small visible-light targets, exploiting the visible-light imaging characteristics of the image to extract small targets; an accelerated, variance-based morphological extraction method for small visible-light targets has been proposed accordingly. The top-hat algorithm first dilates the image, then erodes it, and then subtracts the result from the original image to separate adjacent bright small areas. A small visible-light target images as a small area with a bright periphery, so the top-hat algorithm uses a template larger than the small target area to detect small bright regions. A key point of the algorithm is the choice of template size, which determines the size of the detectable region: if the template is too small, no region is detected; if it is too large, too much time is used. Its advantage is that the detection result is comparatively accurate.
Morphological methods:
Morphological object detection is a technique that uses the difference between the current image and its morphological-operation result to detect adjacent bright or dark areas. The size of the target area is controlled by the size of the morphological template, and area targets can be detected by further screening the related information of the target regions.
Morphology uses structuring elements of a certain shape to measure and extract the corresponding shapes in the image for the purpose of image analysis and recognition. The basic principle is to align the center of the structuring element with a pixel of the image; the pixels in the neighborhood covered by the structuring element are then the pixels to be analyzed.
Conventional morphological operations are a series of shape-based image operations, including dilation, erosion, binarization, opening, closing, the top-hat algorithm, the black-hat algorithm, morphological gradients, and so on. However, the conventional morphological method is not suitable for detecting targets against a sky background (because the sky background has low contrast), so a correspondingly updated algorithm is needed to achieve target detection under a sky background and under a moving background. In addition, for dynamic targets in the sky the target size is constantly changing, and such targets are not well recognized or localized; furthermore, the conventional morphological approach based on the frame-difference method cannot detect stationary targets.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to address the above problems, a multi-target tracking method based on visible light is provided.
The technical scheme adopted by the invention is as follows: a visible-light-based multi-target tracking method, comprising:
step S1, converting the original target image format into a gray image;
step S2, shrinking the gray image by a factor of N;
step S3, performing a morphological dilation operation on the image obtained in step S2 to enlarge the brighter areas;
step S4, subtracting the preliminarily processed gray image from the dilated image to obtain the darker gray regions;
step S5, stretching the region image obtained in step S4;
step S6, computing a binarization threshold and binarizing the image obtained in step S5;
step S7, computing the Canny edges of the image obtained in step S2;
step S8, for the image obtained in step S6, and combining the Canny edges of step S7, deleting connected regions whose length and width do not match the target size according to the target information, and deleting connected regions that contain no edges; retaining the regions that match the target features;
step S9, restoring the coordinates of the connected regions to the coordinate scale of the original target image.
Further, in step S3, a diamond template with template size 15 is selected to perform the dilation operation on the image.
Further, step S5 comprises the following process: stretch the region obtained in step S4 to the target range [lowOut, highOut], where the minimum gray value lowOut = 0 and the maximum gray value highOut = 255; the current image stretching range is [lowIn, highIn], with minimum gray value lowIn and maximum gray value highIn. Compute the average gray value mediaGray of the image; if it is less than or equal to 0, set it to 1. Set the minimum gray value lowIn to the current average gray value, and the maximum gray value highIn = mediaGray + (255 - mediaGray)/3. Let gama denote the gamma correction coefficient, old the gray value before stretching, and new the gray value after stretching:

new = lowOut + (highOut - lowOut) * ((old - lowIn) / (highIn - lowIn))^gama
Further, step S6 comprises the following process: step S61, compute the image histogram, find, counting from low gray values, the gray value at which the accumulated pixel count reaches 95% of the image area, and multiply the obtained gray value by 2 to obtain the current threshold; step S62, binarize the image: pixels below the current threshold are set to 0 as background, otherwise the pixel is selected as target and set to 255.
Further, in step S61, if the current threshold is greater than the maximum gray value highIn, set the current threshold = maximum gray value - 10.
Further, step S7 comprises the following process: step S71, smooth the image with a Gaussian filter to filter out noise; step S72, using 3×3 matrices, compute the horizontal gradient Gx and the vertical gradient Gy of the image respectively, giving the image gradient G = Gx + Gy and the gradient direction θ = arctan(Gx/Gy); step S73, point by point, taking each point as the center, judge whether the gradient magnitude at the point is larger than the values along the positive and negative gradient directions; if so, it is an extremum and is retained, otherwise it is not an extremum and is cleared; step S74, apply double-threshold (Double-Threshold) detection to determine real and potential edges, setting the double-threshold low threshold = grayMax/3 and the double-threshold high threshold = grayMax, where grayMax is the maximum value of the gray map obtained in step S5. If the gradient value of an edge pixel is higher than the high threshold, it is marked as a strong edge pixel; if the gradient value of an edge pixel is less than the high threshold and greater than the low threshold, it is marked as a weak edge pixel; if the gradient value of an edge pixel is less than the low threshold, the edge is suppressed.
Further, for the portion marked as weak edge pixels, weak edge pixels caused by true edges are connected to strong edge pixels, while weak edge pixels caused by noise responses are suppressed.
Further, the judgment method for weak edge pixels caused by real edges is as follows: examine each weak edge pixel and its 8-neighborhood pixels; as long as one of them is a strong edge pixel, the weak edge pixel is retained as part of a real edge.
Compared with the prior art, the beneficial effects of adopting this technical scheme are as follows. In the technical scheme of the invention, morphology is adopted as the basic algorithm to guarantee real-time performance. Since direct morphological operations cannot fully handle target detection against a complex background, the recognition algorithm performs initial target localization from the difference between the original image and the morphological-operation image, and adds a target-screening judgment mechanism. Real-time performance is thus guaranteed while recognition accuracy is improved. In addition, the technical scheme can detect dynamic targets in the sky, solving the problems that such targets are poorly recognized and localized and that sky contrast is low; furthermore, the technical scheme can also detect stationary targets.
Drawings
Fig. 1 is a schematic flow chart of a multi-target tracking method based on visible light.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, in the visible-light-based multi-target tracking method, the difference image between the original image and the image obtained after the morphological operation is binarized, screened and classified, and suspected targets are displayed on a monitor in real time. The method mainly comprises the following steps:
step S1: and (5) format conversion. The RGB format color image is converted into gray level image, namely three-channel color image is converted into single-channel image, thus reducing the calculated amount;
step S2: the image is reduced, the processing speed is increased, and the processing speed is slower because the image is too large, and the processing condition is accelerated by reducing the image by one time.
Step S3: morphological operation. The image is dilated with a diamond template of template size 15 so as to enlarge the brighter areas. A brighter area is an area above a particular gray value; here it may refer to an area with gray values greater than 120. Dilation is used to connect (join) adjacent elements, which is also the most intuitive appearance of a dilated image. The basic morphological operations are erosion and dilation. In morphology, the structuring element is the most important and fundamental concept; its role in a morphological transformation corresponds to the "filter window" in signal processing. Let B(x) denote the structuring element; for each point x in the workspace E, the result of eroding E with B(x) is the set of all points where B, after translation, is contained in E, and the result of dilating E with B(x) is the set of points to which B can be shifted such that the intersection of B and E is non-empty.
Step S4: acquire adjacent darker areas. The gray image obtained in step 1 is subtracted from the morphological-operation image to obtain the darker gray regions. A darker area is an area below a particular gray value; here it may refer to an area with gray values less than 120. Obtaining the dark-gray regions excludes the highlight areas whose gray values do not qualify, which speeds up subsequent processing.
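A minimal NumPy sketch of the dilation and subtraction in steps S3 and S4. A small diamond radius is used here for brevity; the patent's template size 15 corresponds to a much larger element, and the slow double loop stands in for an optimized morphology routine.

```python
import numpy as np

def diamond(radius):
    # Diamond-shaped structuring element, as selected in step S3.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (np.abs(x) + np.abs(y)) <= radius

def gray_dilate(img, se):
    # Grayscale dilation: each output pixel becomes the maximum of the
    # input under the structuring element, so brighter areas grow.
    r = se.shape[0] // 2
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 2 * r + 1, j:j + 2 * r + 1][se].max()
    return out

# Step S4: dilated image minus the gray image leaves the adjacent
# darker regions as bright residue.
gray = np.zeros((9, 9), dtype=np.int32)
gray[4, 4] = 200                  # one bright pixel on a dark background
dark = gray_dilate(gray, diamond(2)) - gray
print(dark[4, 4], dark[4, 3])  # 0 200
```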
Step S5: stretch the region image. Stretch the specified image region (the dark region obtained in step S4) to the target range [lowOut, highOut], where the minimum gray value lowOut = 0 and the maximum gray value highOut = 255; the current image stretching range is [lowIn, highIn], with minimum gray value lowIn and maximum gray value highIn. Compute the average gray value mediaGray of the image; if it is less than or equal to 0, set it to 1. Set the minimum gray value lowIn to the current average gray value and the maximum gray value highIn = mediaGray + (255 - mediaGray)/3. Here gama denotes the gamma correction coefficient (a display coefficient: a correction applied to the acquired image for better display on a monitor, similar to an inverse mapping); old denotes the gray value before stretching and new the gray value after stretching:

new = lowOut + (highOut - lowOut) * ((old - lowIn) / (highIn - lowIn))^gama

This stretches the intermediate gray values.
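The stretch of step S5 can be sketched as follows. Clamping values outside [lowIn, highIn] to the ends of the output range is an assumption about boundary handling, which the text does not spell out.

```python
import numpy as np

def stretch(img, gama=1.0, low_out=0, high_out=255):
    # lowIn = average gray (forced to at least 1), highIn = average +
    # (255 - average) / 3, as described in step S5.
    mean = max(1.0, float(img.mean()))
    low_in = mean
    high_in = mean + (255.0 - mean) / 3.0
    # Normalised position inside the input range, clipped to [0, 1]
    # (an assumed boundary rule), then gamma-weighted and mapped to
    # [low_out, high_out].
    x = np.clip((img.astype(np.float64) - low_in) / (high_in - low_in),
                0.0, 1.0)
    return (low_out + (high_out - low_out) * x ** gama).astype(np.uint8)

img = np.array([[0, 255]], dtype=np.uint8)
print(stretch(img))  # [[  0 255]]
```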
Step S6: obtain the binarization threshold and binarize. Compute the image histogram, find, counting from low gray, the gray value at which the accumulated pixel count reaches 95% of the image area, and multiply it by 2 to obtain the current threshold T; if the current threshold is greater than the maximum gray value, set the current threshold T = maximum gray value - 10. Binarize the image: pixels below the current threshold T are set to 0 as background, otherwise set to 255 as target.
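Steps S61 and S62 can be sketched as below; reading the 95% point off the cumulative histogram with `np.searchsorted` is one possible interpretation of "counting from low gray".

```python
import numpy as np

def binarize(img):
    # Step S61: gray value at which 95% of the pixels (counted from
    # low gray) are covered, doubled, gives the current threshold T.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    t = 2 * int(np.searchsorted(cdf, 0.95))
    # If T exceeds the maximum gray value, fall back to max - 10.
    if t > int(img.max()):
        t = int(img.max()) - 10
    # Step S62: below T -> background (0), otherwise target (255).
    return np.where(img < t, 0, 255).astype(np.uint8), t

img = np.zeros((10, 10), dtype=np.uint8)
img[0, :5] = 200        # five bright "target" pixels
img[img == 0] = 50      # background at gray level 50
binary, t = binarize(img)
print(t, int((binary == 255).sum()))  # 100 5
```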
Step S7: acquire Canny edges. The double threshold is set using the maximum value grayMax of the gray map obtained in step 5: double-threshold low threshold = grayMax/3, double-threshold high threshold = grayMax.
The method mainly comprises the following 5 steps:
1) A gaussian filter is used to smooth the image and filter out noise.
2) Compute the gradient strength and direction of each pixel in the image. Using the two 3×3 Sobel operator templates (Sx, Sy), compute the horizontal gradient Gx and the vertical gradient Gy of the image respectively, giving the image gradient G = Gx + Gy and the gradient direction θ = arctan(Gx/Gy); the gradient direction is the angle formed by the horizontal and vertical gradients.
3) Apply non-maximum suppression (Non-Maximum Suppression) to eliminate spurious responses from edge detection. That is, taking each point as the center, judge whether its gradient magnitude is larger than the values along the positive and negative gradient directions (θ is the gradient direction; the values along that direction are where the derivative along the gradient is maximal); if so, the value is an extremum, otherwise it is not an extremum and is cleared.
4) Double-Threshold (Double-Threshold) detection is applied to determine true and potential edges.
After applying non-maximum suppression, the remaining pixels represent the actual edges in the image more accurately. However, some edge pixels caused by noise and color variation remain. To address these spurious responses, edge pixels with weak gradient values must be filtered out while edge pixels with high gradient values are preserved, which is achieved by selecting a high and a low threshold. If the gradient value of an edge pixel is higher than the high threshold, it is marked as a strong edge pixel; if it is less than the high threshold and greater than the low threshold, it is marked as a weak edge pixel; if it is less than the low threshold, it is suppressed. The choice of thresholds depends on the content of the given input image.
5) Edge detection is ultimately accomplished by suppressing isolated weak edges.
Pixels classified as strong edges have already been determined to be edges, because they are extracted from true edges in the image. Weak edge pixels, however, may be extracted either from true edges or from noise and color variation. To obtain an accurate result, weak edges caused by the latter (noise or color change) should be suppressed. Typically, weak edge pixels caused by true edges are connected to strong edge pixels, while noise responses are unconnected. To track edge connections, each weak edge pixel and its 8-neighborhood pixels are examined; as long as one of them is a strong edge pixel, the weak edge point is retained as a true edge.
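Steps 4) and 5), the double threshold tied to grayMax and the 8-neighbourhood hysteresis, can be sketched as:

```python
import numpy as np

def classify(grad, gray_max):
    # Double threshold as specified: low = grayMax / 3, high = grayMax.
    strong = grad > float(gray_max)
    weak = (grad > float(gray_max) / 3.0) & ~strong
    return strong, weak

def hysteresis(strong, weak):
    # A weak pixel survives only if at least one of its 8 neighbours
    # is a strong pixel; isolated weak responses are suppressed.
    padded = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    h, w = strong.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            if dy == 1 and dx == 1:
                continue
            near_strong |= padded[dy:dy + h, dx:dx + w]
    return strong | (weak & near_strong)

grad = np.array([[200.0, 80.0, 10.0],
                 [  5.0,  5.0, 80.0]])
strong, weak = classify(grad, gray_max=150)
edges = hysteresis(strong, weak)
print(edges.astype(int))
# [[1 1 0]
#  [0 0 0]]
```

The weak response at (0, 1) touches the strong pixel at (0, 0) and is kept; the isolated weak response at (1, 2) is suppressed.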
Step S8: screen the connected regions, combining the edge map and the binary map. For the image obtained in step S6, and combining the Canny edges of step S7, delete connected regions whose length and width do not match the target size according to the target information, delete connected regions that contain no edges, and retain the regions matching the target features. Through edge detection, the method can adapt to targets of variable size and to dynamic targets that are otherwise poorly recognized.
A connected region (Connected Component) generally refers to an image region (Region, Blob) formed by foreground pixels that have similar pixel values and are adjacent in position.
Step S9: restore the connected-region coordinates to the original image coordinate scale. Because connected-region screening is performed on the reduced image, the coordinates are multiplied by the reduction factor to restore them to the original image size.
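Steps S8 and S9 can be sketched with a small BFS-based labelling routine. The screening bounds (`min_side`, `max_side`) are parameters here, since the patent only says the size limits come from the target information, and 4-connectivity is an assumed choice.

```python
import numpy as np
from collections import deque

def regions(binary):
    # 4-connected component labelling by breadth-first search
    # (a minimal stand-in for a full connected-region routine).
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    comps = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                q, pts = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    pts.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                comps.append(pts)
    return comps

def screen_and_restore(binary, edges, min_side, max_side, n):
    # Step S8: keep regions whose height/width fit the target size and
    # that contain at least one Canny edge pixel.
    # Step S9: multiply coordinates by the reduction factor n.
    boxes = []
    for pts in regions(binary):
        ys, xs = [p[0] for p in pts], [p[1] for p in pts]
        hgt, wid = max(ys) - min(ys) + 1, max(xs) - min(xs) + 1
        if not (min_side <= hgt <= max_side and min_side <= wid <= max_side):
            continue
        if not any(edges[y, x] for y, x in pts):
            continue
        boxes.append((min(xs) * n, min(ys) * n, wid * n, hgt * n))
    return boxes

binary = np.zeros((6, 6), dtype=bool)
binary[1:3, 1:3] = True      # 2x2 candidate region
binary[4, 4] = True          # 1x1 region, too small
edges = np.zeros((6, 6), dtype=bool)
edges[1, 1] = True           # Canny edge inside the first region
print(screen_and_restore(binary, edges, min_side=2, max_side=3, n=2))
# [(2, 2, 4, 4)]
```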
The invention is not limited to the specific embodiments described above. The invention extends to any novel feature, or any novel combination of features, disclosed in this specification, and to any novel step, or any novel combination of steps, of the method or process disclosed. Insubstantial changes or modifications of the invention as described herein, as viewed by a person skilled in the art, are intended to be covered by the claims below without departing from the true spirit of the invention.

Claims (6)

1. A visible-light-based multi-target tracking method, characterized by comprising:
step S1, converting the original target image format into a gray image;
step S2, shrinking the gray image by a factor of N;
step S3, performing a morphological dilation operation on the image obtained in step S2 to enlarge the brighter areas;
step S4, subtracting the preliminarily processed gray image from the dilated image to obtain the darker gray regions;
step S5, stretching the region image obtained in step S4;
step S6, computing a binarization threshold and binarizing the image obtained in step S5;
step S7, computing the Canny edges of the image obtained in step S2;
step S8, for the image obtained in step S6, and combining the Canny edges of step S7, deleting connected regions whose length and width do not match the target size according to the target information, and deleting connected regions that contain no edges;
step S9, restoring the coordinates of the connected regions to the coordinate scale of the original target image;
step S5 comprises the following process: stretching the region obtained in step S4 to the target range [lowOut, highOut], where the minimum gray value lowOut = 0 and the maximum gray value highOut = 255; the current image stretching range is [lowIn, highIn], with minimum gray value lowIn and maximum gray value highIn; computing the average gray value mediaGray of the image; if it is less than or equal to 0, setting it to 1; setting the minimum gray value lowIn to the current average gray value and the maximum gray value highIn = mediaGray + (255 - mediaGray)/3, where gama denotes the gamma correction coefficient, old the gray value before stretching, and new the gray value after stretching:

new = lowOut + (highOut - lowOut) * ((old - lowIn) / (highIn - lowIn))^gama
step S7 comprises the following process: step S71, smoothing the image with a Gaussian filter to filter out noise; step S72, computing, using 3×3 matrices, the horizontal gradient Gx and the vertical gradient Gy of the image respectively, giving the image gradient G = Gx + Gy and the gradient direction θ = arctan(Gx/Gy); step S73, taking each point as the center, judging whether the gradient magnitude at the point is larger than the values along the positive and negative gradient directions; if so, it is an extremum and is retained, otherwise it is not an extremum and is cleared; step S74, applying double-threshold detection to determine real and potential edges, setting the double-threshold low threshold = grayMax/3 and the double-threshold high threshold = grayMax, where grayMax refers to the maximum gray-map value obtained in step S5; if the gradient value of an edge pixel is higher than the high threshold, marking it as a strong edge pixel; if the gradient value of an edge pixel is less than the high threshold and greater than the low threshold, marking it as a weak edge pixel; if the gradient value of an edge pixel is less than the low threshold, the edge is suppressed.
2. The method of claim 1, wherein in step S3, a diamond-shaped template with a template size of 15 is selected to perform the expansion operation on the image.
3. The visible-light-based multi-target tracking method of claim 1, characterized in that step S6 comprises the following process: step S61, computing the image histogram, finding, counting from low gray values, the gray value at which the accumulated pixel count reaches 95% of the image area, and multiplying the obtained gray value by 2 to obtain the current threshold; step S62, binarizing the image: if the value is smaller than the current threshold, setting it to 0 as background, otherwise selecting it as target and setting it to 255.
4. The method according to claim 3, wherein in the step S61, if the current threshold is greater than the maximum gray value highIn, the current threshold = maximum gray value-10 is set.
5. The visible light-based multi-target tracking method of claim 1, wherein for a portion marked as a weak edge pixel, a weak edge pixel caused by a real edge is connected to a strong edge pixel, and a weak edge pixel caused by a noise response is suppressed.
6. The method of claim 5, wherein the method of determining weak edge pixels caused by real edges is: looking at the weak edge pixels and 8 neighborhood pixels thereof, as long as one of the pixels is a strong edge pixel, the weak edge pixel point remains as a real edge.
CN202010343384.2A 2020-04-27 2020-04-27 Multi-target tracking method based on visible light Active CN111539980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010343384.2A CN111539980B (en) 2020-04-27 2020-04-27 Multi-target tracking method based on visible light

Publications (2)

Publication Number Publication Date
CN111539980A CN111539980A (en) 2020-08-14
CN111539980B true CN111539980B (en) 2023-04-21

Family

ID=71978926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010343384.2A Active CN111539980B (en) 2020-04-27 2020-04-27 Multi-target tracking method based on visible light

Country Status (1)

Country Link
CN (1) CN111539980B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269841A (en) * 2021-05-18 2021-08-17 江西晶浩光学有限公司 Gray scale testing method and device, electronic equipment and storage medium
CN113989628B (en) * 2021-10-27 2022-08-26 哈尔滨工程大学 Underwater signal lamp positioning method based on weak direction gradient
CN115296738B (en) * 2022-07-28 2024-04-16 吉林大学 Deep learning-based unmanned aerial vehicle visible light camera communication method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000075856A1 (en) * 1999-06-07 2000-12-14 Metrologic Instruments, Inc. Unitary package identification and dimensioning system employing ladar-based scanning methods
CN1885317A (en) * 2006-07-06 2006-12-27 上海交通大学 Adaptive edge detection method based on morphology and information entropy
CN101383004A (en) * 2007-09-06 2009-03-11 上海遥薇实业有限公司 Passenger target detecting method combining infrared and visible light images
CN101976437A (en) * 2010-09-29 2011-02-16 中国资源卫星应用中心 High-resolution remote sensing image variation detection method based on self-adaptive threshold division
CN102663387A (en) * 2012-04-16 2012-09-12 南京大学 Cortical bone width automatic calculating method on basis of dental panorama
CN103903230A (en) * 2014-03-28 2014-07-02 哈尔滨工程大学 Video image sea fog removal and clearing method
CN107564041A (en) * 2017-08-31 2018-01-09 成都空御科技有限公司 A kind of detection method of visible images moving air target
CN107871324A (en) * 2017-11-02 2018-04-03 中国南方电网有限责任公司超高压输电公司检修试验中心 One kind is based on twin-channel method for tracking target and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401304B2 (en) * 2007-07-27 2013-03-19 Sportvision, Inc. Detecting an object in an image using edge detection and morphological processing

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Hou Qingyu et al., "Infrared and visible image registration based on steepest descent of the likelihood function," Acta Photonica Sinica, 2011, pp. 433-437. *
Liu Yang, "Research on moving target detection and tracking algorithms for border security," China Master's Theses Full-text Database, Information Science and Technology, 2017, p. I138-5240. *
Zhang Peiheng, "Research on moving target detection algorithms based on image fusion," China Master's Theses Full-text Database, Information Science and Technology, 2020, p. I138-1313. *
Zhang Lei, "Research on image matching and fusion algorithms," China Master's Theses Full-text Database, Information Science and Technology, 2017, p. I138-2709. *
Shi Junnan, "Research on monocular vision detection and tracking algorithms for maritime targets of unmanned surface vehicles," China Master's Theses Full-text Database, Engineering Science and Technology II, 2020, p. C036-69. *
Wang Yunhao, "Research on multi-fault detection of transmission line insulators based on visible and infrared aerial images," China Master's Theses Full-text Database, Engineering Science and Technology II, 2020, p. C042-360. *
Zou Ying, "Research on target recognition algorithms for bridges over water," China Excellent Doctoral and Master's Theses Full-text Database (Master's), Information Science and Technology, 2007, p. I138-393. *

Also Published As

Publication number Publication date
CN111539980A (en) 2020-08-14

Similar Documents

Publication Publication Date Title
CN111209810B (en) Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images
CN111539980B (en) Multi-target tracking method based on visible light
CN104978567B (en) Vehicle checking method based on scene classification
CN109934224B (en) Small target detection method based on Markov random field and visual contrast mechanism
WO2022027931A1 (en) Video image-based foreground detection method for vehicle in motion
CN106815583B (en) Method for positioning license plate of vehicle at night based on combination of MSER and SWT
CN109241973B (en) Full-automatic soft segmentation method for characters under texture background
CN110415208A (en) A kind of adaptive targets detection method and its device, equipment, storage medium
CN111369570B (en) Multi-target detection tracking method for video image
CN111104943A (en) Color image region-of-interest extraction method based on decision-level fusion
JP7450848B2 (en) Transparency detection method based on machine vision
Naufal et al. Preprocessed mask RCNN for parking space detection in smart parking systems
Lam et al. Highly accurate texture-based vehicle segmentation method
Avery et al. Investigation into shadow removal from traffic images
Aung et al. Automatic license plate detection system for myanmar vehicle license plates
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN114519694A (en) Seven-segment digital tube liquid crystal display screen identification method and system based on deep learning
CN110276260B (en) Commodity detection method based on depth camera
CN112949389A (en) Haze image target detection method based on improved target detection network
Kaur et al. An Efficient Method of Number Plate Extraction from Indian Vehicles Image
Abdusalomov et al. Robust shadow removal technique for improving image enhancement based on segmentation method
CN111666811A (en) Method and system for extracting traffic sign area in traffic scene image
Long et al. An Efficient Method For Dark License Plate Detection
Singh et al. Vehicle number plate recognition using matlab
CN112926676B (en) False target identification method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant