CN110033474B - Target detection method, target detection device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110033474B
CN110033474B (application CN201910091468.9A)
Authority
CN
China
Prior art keywords
image
target
local
target image
candidate
Prior art date
Legal status
Active
Application number
CN201910091468.9A
Other languages
Chinese (zh)
Other versions
CN110033474A (en)
Inventor
胡锦龙
李婷
Current Assignee
Xi'an Tianwei Electronic System Engineering Co ltd
Original Assignee
Xi'an Tianwei Electronic System Engineering Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Tianwei Electronic System Engineering Co ltd filed Critical Xi'an Tianwei Electronic System Engineering Co ltd
Priority to CN201910091468.9A priority Critical patent/CN110033474B/en
Publication of CN110033474A publication Critical patent/CN110033474A/en
Application granted granted Critical
Publication of CN110033474B publication Critical patent/CN110033474B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/215 Motion-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details


Abstract

The application relates to a target detection method, a target detection device, a computer device and a storage medium. A target image is acquired and processed according to the local minimum contrast and the local maximum difference of the target image to obtain a local contrast enhanced image; the local contrast enhanced image is then filtered to determine a real target region. With this method, interference from complex scenes can be reduced, the detection rate of weak and small targets improved, and false alarms reduced.

Description

Target detection method, target detection device, computer equipment and storage medium
Technical Field
The present application relates to the field of detection technologies, and in particular, to a target detection method, an apparatus, a computer device, and a storage medium.
Background
With the development of detection technology, complex-scene detection techniques have appeared; current methods mainly adopt background suppression, spatial filtering, spatio-temporal filtering and frequency-domain transformation.
However, these conventional methods suffer from a low detection rate.
Disclosure of Invention
In view of the above, it is necessary to provide a target detection method, an apparatus, a computer device and a storage medium that address the above technical problem.
A method of target detection, the method comprising:
acquiring a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image;
and filtering the local contrast enhanced image of the target image to determine a real target area.
In one embodiment, the obtaining a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image includes:
acquiring the maximum pixel value of a central image block, and obtaining the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of a plurality of neighborhood image blocks, wherein the neighborhood image blocks are adjacent to the central image block;
obtaining the local minimum contrast of the target image according to the pixel gray mean of each of the plurality of neighborhood image blocks and the maximum pixel value of the central image block;
and obtaining a local contrast enhanced image of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image.
In one embodiment, the obtaining the maximum pixel value of the central image block and obtaining the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of the plurality of neighborhood image blocks includes:
dividing the target image to obtain a central image block and a plurality of neighborhood image blocks respectively;
and acquiring the pixel gray mean of each of the plurality of neighborhood image blocks, and calculating the average gray value of the plurality of neighborhood image blocks according to the pixel gray mean of each neighborhood image block.
In one embodiment, the filtering the local contrast enhanced image of the target image and determining the real target region includes:
performing threshold segmentation on the local contrast enhanced image of the target image according to a preset threshold to obtain a binary image;
performing connected domain analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image;
and performing time-domain correlation analysis on the candidate target regions in the binary image according to the number and distribution of the candidate target regions to obtain a real target region among the candidate targets.
In one embodiment, the performing threshold segmentation on the local contrast enhanced image of the target image according to a preset threshold to obtain a binary image includes:
obtaining a mean value and a standard deviation of the local contrast enhanced image according to the local contrast enhanced image of the target image;
and determining a preset threshold for performing threshold segmentation on the local contrast enhanced image of the target image according to the mean value and the standard deviation of the local contrast enhanced image.
In one embodiment, the performing, according to the number and distribution of the candidate target regions, time-domain correlation analysis on the plurality of candidate target regions in the binary image to obtain a real target region among the plurality of candidate targets includes:
and removing non-target areas from the candidate target areas according to the motion continuity and the data correlation degree between adjacent frames to obtain the real target area.
In one embodiment, the removing non-target regions from the candidate target regions according to the motion continuity and the data association degree between adjacent frames to obtain the real target region includes:
if only one candidate target area is detected in the three continuous frames and the distance between the center positions of the candidate target areas between the two continuous frames is smaller than a preset pixel threshold value, determining that the candidate target is the real target area;
if a plurality of candidate target areas are detected by the first frame, calculating Euclidean distance and local contrast of each candidate target area detected by the second frame and each candidate area detected by the first frame respectively, and taking the candidate area with the minimum Euclidean distance and the closest local contrast as the real target area.
In one embodiment, the performing connected component analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image includes:
and carrying out cluster analysis on the binary image to obtain the number and the marks of the candidate target areas in the binary image.
An object detection apparatus, the apparatus comprising:
the enhanced image acquisition module is used for acquiring a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image;
and the real target area determining module is used for filtering the local contrast enhanced image of the target image and determining a real target area.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method as claimed in any one of the above when the computer program is executed.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of the preceding claims.
According to the target detection method, the target detection device, the computer equipment and the storage medium, the target image is obtained, and the local contrast enhanced image of the target image is obtained by processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image; and then filtering the local contrast enhanced image of the target image to determine a real target area. By the method, the interference of a complex scene can be reduced, the weak and small target detection rate is improved, and the false alarm is reduced.
Drawings
FIG. 1 is a diagram of an exemplary implementation of a target detection method;
FIG. 2 is a schematic flow chart diagram illustrating a method for target detection in one embodiment;
FIG. 3 is a flowchart illustrating step S1 according to another embodiment;
FIG. 4 is a flowchart illustrating step S11 according to another embodiment;
FIG. 5 is a flowchart illustrating step S2 according to another embodiment;
FIG. 6 is a flowchart illustrating step S21 according to another embodiment;
FIG. 7 is a flowchart illustrating step S231 in another embodiment;
FIG. 8(a) is a diagram of an aerial scene in another embodiment;
FIG. 8(b) is an enhanced image of an aerial scene obtained by using a conventional local contrast method in another embodiment;
FIG. 8(c) is an enhanced image of an aerial scene with improved and optimized local contrast using the method of the present application in another embodiment;
FIG. 9(a) is a seascape scene in another embodiment;
FIG. 9(b) is an enhanced image of a seascape scene with improved optimization of local contrast using the method of the present application in another embodiment;
FIG. 10(a) is a sea-surface scenario in another embodiment;
FIG. 10(b) is an enhanced sea scene image after local contrast is optimized by the method of the present application in another embodiment;
FIG. 11 is a block diagram of an object detection device in one embodiment;
FIG. 12 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The target detection method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The method comprises the steps that a terminal 102 obtains a target image and transmits the target image to a server 104, and the server 104 processes the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image; and then filtering the local contrast enhanced image of the target image to determine a real target area. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In an embodiment, as shown in fig. 2, a method for detecting an object is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S1: and acquiring a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image.
Specifically, the target image is a complex scene image such as a sea-sky-junction scene, a sea-surface scene, or a high-altitude scene. Sea-sky-junction and sea-surface scenes suffer from fish-scale glint and sea-clutter interference, so the local contrast of the target image is relatively low, and the image must be processed with the local minimum contrast and the local maximum difference to improve its local contrast. In addition, because some features have similar radiation intensities in the visible, near-infrared, or mid-infrared bands, the target image may also show low contrast when such features are relatively concentrated.
A local contrast enhanced image is obtained by dividing the target image into a number of contiguous small regions; the contrast of each small region is the local contrast of the target image. Contrast enhancement stretches or compresses the brightness range of the target image into a specified display range so as to improve the overall or local contrast of the image.
Step S2: and filtering the local contrast enhanced image of the target image to determine a real target area.
Specifically, the real target region refers to a weak and small target in the target image, where the weak target may be an infrared weak target, a near-infrared weak target, or the like. For example, for a real target region in a sea-sky-junction target image at sunset, the background region occupies a large part of the target image and interferes with the real target region.
According to the target detection method, the target detection device, the computer equipment and the storage medium, the target image is obtained, and the local contrast enhanced image of the target image is obtained by processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image; and then filtering the local contrast enhanced image of the target image to determine a real target area. By the method, the interference of a complex scene can be reduced, the weak and small target detection rate is improved, and the false alarm is reduced.
In one embodiment, in conjunction with fig. 3, the step S1 includes:
step S11: obtaining the maximum value of a pixel of a central image block, and obtaining the local maximum difference value of the target image according to the maximum value of the pixel of the central image block and the average gray value of a plurality of field image blocks, wherein the neighborhood image blocks are adjacent to the central image block.
In particular, the central image block refers to the smallest image block containing the central point of the target image. The neighborhood image block refers to each image block of the target image except the central image block.
Let the maximum pixel value of the central block be L_n and the average gray value of all neighborhood blocks be m_I. The local maximum difference Z_I of the target image is then computed as:

Z_I = L_n - m_I
step S12: and obtaining the local minimum contrast of the target image according to the pixel gray average value of each field image block in the plurality of field image blocks and the maximum value of the pixel of the central image block.
In particular, the local minimum contrast C_w of the target image is:

C_w = min_{i=1,...,8} (L_n / m_i)

where L_n is the maximum pixel value of the sub-block at the window center, and m_i is the pixel mean of the i-th neighborhood sub-block, i = 1, 2, ..., 8.
Step S13: and obtaining a local contrast enhanced image of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image.
In particular, the improved local minimum contrast C_w and the local maximum difference Z_I are multiplied to obtain the enhanced local contrast image, denoted EELCM:

EELCM = C_w * Z_I
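As a concrete illustration, the enhancement of steps S11-S13 can be sketched as follows. This is a minimal sketch and not the patent's implementation: it assumes a sliding window made of 3 x 3 sub-blocks of size b x b, writes the product C_w * Z_I to the window's center pixel, and the function name and the small epsilon guard are illustrative choices.

```python
import numpy as np

def eelcm_enhance(img, b=3):
    """Sketch of local contrast enhancement: slide a window of 3x3
    sub-blocks (each b x b) over the image; for each window compute
    L_n (center-block max), m_i (neighborhood-block means), m_I (their
    average), C_w = min_i(L_n / m_i), Z_I = L_n - m_I, and write
    C_w * Z_I to the window's center pixel."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w))
    eps = 1e-6  # guard against division by zero in flat regions
    for cy in range(b, h - 2 * b + 1):        # top-left row of center block
        for cx in range(b, w - 2 * b + 1):    # top-left col of center block
            L_n = img[cy:cy + b, cx:cx + b].max()
            means = [img[cy + dy:cy + dy + b, cx + dx:cx + dx + b].mean()
                     for dy in (-b, 0, b) for dx in (-b, 0, b)
                     if not (dy == 0 and dx == 0)]
            m_I = sum(means) / 8.0            # average gray of 8 neighbors
            C_w = min(L_n / (m + eps) for m in means)  # local minimum contrast
            Z_I = L_n - m_I                   # local maximum difference
            out[cy + b // 2, cx + b // 2] = C_w * Z_I
    return out
```

A bright small target then yields a large response, while uniform background regions map close to zero.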
in one embodiment, in conjunction with fig. 4, the step S11 is preceded by:
step S9: and dividing the target image to respectively obtain a central image block and a plurality of field image blocks.
In particular, segmenting the target image refers to geometric segmentation, i.e., dividing the target image into a number of equal or unequal image blocks. For example, the target image is equally divided into N x N image blocks with N set to 3 and each image block of size 3 x 3, i.e., one central image block and 8 neighborhood image blocks are obtained.
Step S10: acquiring the pixel gray mean of each of the plurality of neighborhood image blocks, and calculating the average gray value of the plurality of neighborhood image blocks according to the pixel gray mean of each neighborhood image block.
Specifically, the obtained central image block is marked as "0", and the remaining neighborhood image blocks are respectively marked as "1" - "8", which represent 8 neighborhoods of the central image block.
The pixel gray mean m_i of the i-th neighborhood block is:

m_i = (1/N_b) * Σ_{j=1}^{N_b} I_i(j)

where N_b is the number of pixels in each image sub-block, and I_i(j) is the gray value of the j-th pixel in the i-th neighborhood block, i = 1, 2, ..., 8.

The average gray value m_I of the 8 neighborhood blocks is then:

m_I = (1/8) * Σ_{i=1}^{8} m_i
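The block labeling and mean computation of steps S9-S10 can be illustrated as follows; a minimal sketch assuming a single 9 x 9 window divided into nine 3 x 3 blocks, with the center block as "0" and its 8 neighbors as "1"-"8". The function name and return convention are assumptions for illustration.

```python
import numpy as np

def block_means(window, b=3):
    """Split a (3b x 3b) window into a 3x3 grid of b x b blocks:
    block "0" is the center, blocks "1"-"8" are its neighbors.
    Returns (L_n, m_list, m_I): the center-block pixel maximum, the
    eight neighborhood-block means m_i, and their average m_I."""
    assert window.shape == (3 * b, 3 * b)
    grid = [[window[gy * b:(gy + 1) * b, gx * b:(gx + 1) * b]
             for gx in range(3)] for gy in range(3)]
    L_n = float(grid[1][1].max())                      # center block max
    m_list = [float(grid[gy][gx].mean())               # m_i, i = 1..8
              for gy in range(3) for gx in range(3) if (gy, gx) != (1, 1)]
    m_I = sum(m_list) / 8.0                            # average gray value
    return L_n, m_list, m_I
```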
In one embodiment, in conjunction with fig. 5, the step S2 includes:
step S21: and performing threshold segmentation on the local contrast enhanced image of the target image according to a preset threshold to obtain a binary image.
Specifically, the preset threshold is the threshold used for threshold segmentation of the local contrast enhanced image of the target image. To adapt to changes across different complex scenes, the preset threshold is selected adaptively according to the statistics of the contrast values.
Step S22: and carrying out connected domain analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image.
Specifically, the distribution of the candidate target regions includes the position of each region in the binary image and the mark corresponding to that position; the position can be recorded in coordinate or numerical form.
Connected-component analysis of the binary image here mainly adopts a clustering method. In practical engineering applications, real-time constraints matter: conventional connected-component analysis obtains labels by traversing the whole image, so its time complexity is high and it is unfavorable for embedded platforms. To accelerate processing, the invention therefore uses a clustering method that groups the nearest blocks of the white (foreground) regions into individual target blocks. Since clustering is performed only on the target regions, the time complexity is greatly reduced compared with operating on the entire image.
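A minimal sketch of the clustering idea: only the white (foreground) pixels are visited and grouped by proximity, rather than labeling the whole image. The distance threshold, the single-link grouping rule, and the function name are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def cluster_candidates(binary, dist_thr=2.0):
    """Group the white pixels of a binary image into candidate target
    regions by proximity: a pixel joins the first cluster that already
    contains a pixel within dist_thr; otherwise it starts a new cluster.
    Returns the clusters and their centroids."""
    pts = [tuple(p) for p in np.argwhere(binary > 0)]  # foreground only
    clusters = []
    for p in pts:
        for c in clusters:
            if any(np.hypot(p[0] - q[0], p[1] - q[1]) <= dist_thr for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    centroids = [tuple(np.mean(c, axis=0)) for c in clusters]
    return clusters, centroids
```

Because the loop runs only over foreground pixels, the cost scales with the number of candidate pixels instead of the full image size.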
Step S23: performing time-domain correlation analysis on the candidate target regions in the binary image according to the number and distribution of the candidate target regions to obtain a real target region among the candidate targets.
Specifically, the temporal correlation analysis includes analysis of motion continuity and data correlation between adjacent frames of a plurality of candidate target regions, so as to confirm a final real target region. The purpose of performing time domain correlation analysis on a plurality of candidate target regions in the binary image is to further confirm the candidate target regions to obtain a real target region, so that the detection accuracy is improved.
In one embodiment, in conjunction with fig. 6, the step S21 includes:
step S211: and obtaining the mean value and the standard deviation of the local contrast enhanced image according to the local contrast enhanced image of the target image.
Specifically, the mean and standard deviation of the local contrast enhanced image can be obtained in several ways: they can be computed directly from the mean and standard-deviation formulas, or obtained through software programming.
For example, the mean and standard deviation can be computed with OpenCV's meanStdDev function. (The original code listing is preserved only as images in the source.)
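A minimal sketch of this step, computed with NumPy (OpenCV's cv2.meanStdDev returns the same mean and standard deviation pair); the adaptive threshold thr = mu + k * sigma of step S212 and the binarization are included, with the coefficient k exposed as a parameter. Function and variable names are illustrative.

```python
import numpy as np

def segment_enhanced(enh, k=2.0):
    """Adaptive threshold segmentation of the local contrast enhanced
    image: thr = mu + k * sigma, where mu and sigma are the image mean
    and standard deviation (cv2.meanStdDev would return the same pair).
    Pixels above thr become 1 (candidate target), others 0."""
    mu = float(enh.mean())
    sigma = float(enh.std())
    thr = mu + k * sigma
    binary = (enh > thr).astype(np.uint8)
    return binary, thr
```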
step S212: and determining a preset threshold for performing threshold segmentation on the local contrast enhanced image of the target image according to the mean value and the standard deviation of the local contrast enhanced image.
Specifically, the preset threshold is calculated as follows:
thr=mu+k*sigma
where mu and sigma are the mean and standard deviation of the local contrast, respectively; the coefficient k ranges from 1 to 3 and is set to 2 here according to the actual scene.
In one embodiment, the step S22 includes:
step S221: and carrying out cluster analysis on the binary image to obtain the number and the marks of the candidate target areas in the binary image.
Specifically, the marks correspond to the candidate target region positions; for example, the marks of the candidate target region positions may be the numbers 1, 2, 3, and so on, and may also be recorded in the form of text, graphics, or the like.
In one embodiment, the step S23 includes:
step S231: and removing non-target areas from the candidate target areas according to the motion continuity and the data correlation degree between adjacent frames to obtain the real target area.
Specifically, false alarms (non-target regions) are further removed according to motion continuity and data association between adjacent frames to confirm the real target region. In general, the target region has high contrast. Actual scenes, however, are complex and varied, and the local contrast of a background block may exceed that of the target block; in sea-sky-junction and sea-surface scenes disturbed by fish-scale glint and sea clutter in particular, the local contrast of the target region may be lower than that of the sea clutter, so after contrast enhancement, sea clutter in the background may be taken as a candidate target region and strongly interfere with the real target. To improve detection accuracy, the candidate target regions therefore need further confirmation to obtain the real target region.
On the other hand, while the background appears randomly, the target has motion continuity between adjacent frames. Inter-frame continuity and data association are therefore used to further confirm the target.
In one embodiment, in conjunction with fig. 7, the step S231 includes:
step S2311: and if only one candidate target area is detected in the three continuous frames and the distance between the center positions of the candidate target areas between the two continuous frames is smaller than a preset pixel threshold value, determining that the candidate target is the real target area.
Specifically, the preset pixel threshold is an upper bound on the distance between the center positions of candidate target regions in two consecutive frames. It may be set according to the actual situation; in the present application it is set to 10 pixels.
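The continuity rule of step S2311 can be sketched as follows; a minimal illustration assuming each of three consecutive frames yields exactly one candidate center, with the 10-pixel threshold stated above. The function name is an assumption.

```python
import math

def confirm_single_target(centers, pix_thr=10.0):
    """centers: the single candidate region's center (y, x) in three
    consecutive frames. The candidate is confirmed as the real target
    if the center moves less than pix_thr pixels between every pair of
    consecutive frames."""
    assert len(centers) == 3
    return all(math.hypot(y1 - y0, x1 - x0) < pix_thr
               for (y0, x0), (y1, x1) in zip(centers, centers[1:]))
```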
Step S2312: if a plurality of candidate target areas are detected by the first frame, calculating Euclidean distance and local contrast of each candidate target area detected by the second frame and each candidate area detected by the first frame respectively, and taking the candidate area with the minimum Euclidean distance and the closest local contrast as the real target area.
Specifically, the present application uses the standard Euclidean distance; since the candidate target regions are two-dimensional, the Euclidean distance of each candidate region can be expressed as:

ρ = sqrt((x2 - x1)^2 + (y2 - y1)^2)

|X| = sqrt(x2^2 + y2^2)

where ρ is the Euclidean distance between point (x2, y2) and point (x1, y1), and |X| is the Euclidean distance from point (x2, y2) to the origin.
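The inter-frame association of step S2312 can be sketched as follows. The patent states only that the candidate with the minimum Euclidean distance and the closest local contrast is kept; the combined score used here (distance plus absolute contrast difference) is an illustrative assumption, as are the function names.

```python
import math

def associate(prev_cands, curr_cands):
    """prev_cands / curr_cands: lists of (y, x, local_contrast) for the
    first and second frame. Each current candidate is scored against every
    previous candidate by Euclidean distance plus local-contrast
    difference; the current candidate with the lowest score is kept."""
    def score(a, b):
        d = math.hypot(a[0] - b[0], a[1] - b[1])  # Euclidean distance
        dc = abs(a[2] - b[2])                     # contrast difference
        return d + dc                             # illustrative weighting
    return min(curr_cands,
               key=lambda c: min(score(c, p) for p in prev_cands))
```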
To verify the effectiveness of the method of the present application in detecting infrared weak and small targets in different actual scenes, actual scene data were used for testing, including aerial scenes, sea-sky-junction scenes and sea-surface scenes; the results are shown in figs. 8-10. Fig. 8(a) is an aerial scene with cloud-layer interference in the background, fig. 8(b) is the aerial scene enhancement obtained with the existing local contrast method, and fig. 8(c) is the aerial scene enhancement obtained after improving and optimizing the local contrast with the method of the present application. Fig. 9(a) is a sea-sky-junction scene with a weak target, and fig. 9(b) is the sea-sky-junction scene enhancement after improving and optimizing the local contrast with the method of the present application; fig. 10(a) is a sea-surface scene with sea-clutter interference, and fig. 10(b) is the sea-surface scene enhancement after improving and optimizing the local contrast with the method of the present application.
As can be seen from figs. 8(b) and 8(c), the local contrast method can enhance the target; however, besides enhancing the target, the result obtained with the conventional method also enhances bright edge portions of the background (such as the cloud-layer edges in the figure), which brings more false alarms to subsequent threshold segmentation and detection. In the enhancement result of fig. 8(c), the target is enhanced while strong background edges (cloud-layer edges) are suppressed, so false alarms are greatly reduced.
As can be seen from fig. 9(b) and 10(b), the target can be enhanced and the sea clutter in the sea surface background can be suppressed, and the method is suitable for extremely weak targets.
To quantitatively evaluate the benefit of the method over the prior art, the performance of the proposed method (denoted EELCM), the existing local contrast method (abbreviated ELCM) and the morphological method (abbreviated ETH) is compared using two indexes, detection rate and false-alarm rate; the results are shown in table 1. As the table shows, compared with the prior art, the proposed method greatly reduces the false-alarm rate while keeping a high detection rate, improving the adaptability and reliability of the product.
TABLE 1
(Table 1, preserved only as an image in the source, reports the detection rate and false-alarm rate of EELCM, ELCM and ETH.)
It should be understood that, although the steps in the flowchart of fig. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 11, there is provided an object detection apparatus including: an enhanced image acquisition module and a real target area determination module, wherein:
an enhanced image obtaining module 10, configured to obtain a target image, and process the target image according to a local minimum contrast of the target image and a local maximum difference of the target image to obtain a local contrast enhanced image of the target image;
and a real target region determining module 20, configured to filter the local contrast enhanced image of the target image, and determine a real target region.
In one embodiment, the enhanced image acquisition module 10 includes:
the maximum difference obtaining module 11 is configured to obtain the maximum pixel value of a central image block, and obtain the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of a plurality of neighborhood image blocks, where the neighborhood image blocks are adjacent to the central image block;
the local minimum contrast acquisition module 12 is configured to obtain the local minimum contrast of the target image according to the pixel gray mean of each of the plurality of neighborhood image blocks and the maximum pixel value of the central image block;
and the local contrast enhanced image obtaining module 13 is configured to obtain a local contrast enhanced image of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image.
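The pipeline of modules 11 to 13 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the block size, the 3x3 block layout (a centre block with its 8 neighbours), and the zero-mean guard are assumptions; the enhancement itself follows the stated rule of multiplying the local minimum contrast by the square of the local maximum difference.

```python
import numpy as np

def local_contrast_enhance(image, block=3):
    """Slide a (3*block)x(3*block) window over the image.  At each
    position the window is cut into a 3x3 grid of blocks: the centre
    block and its 8 neighbourhood blocks (layout is an assumption)."""
    img = image.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w))
    half = 3 * block // 2
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = img[y - half:y + half + 1, x - half:x + half + 1]
            blocks = [win[r * block:(r + 1) * block, c * block:(c + 1) * block]
                      for r in range(3) for c in range(3)]
            centre = blocks.pop(4)                      # index 4 of the 3x3 grid is the centre
            L_n = centre.max()                          # max pixel of the centre block
            means = np.array([b.mean() for b in blocks])
            D = L_n - means.mean()                      # local maximum difference
            C_w = (L_n / np.maximum(means, 1.0)).min()  # local minimum contrast (zero-mean guard assumed)
            out[y, x] = C_w * D * D                     # enhancement = C_w x D^2, as stated
    return out
```

On a flat background the centre-block maximum is low and the enhancement stays near zero, while a small bright target raises both factors, which is the intended suppression/enhancement behavior.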
In one embodiment, the maximum difference value obtaining module 11 comprises:
the target image segmentation module 14 is configured to segment the target image to obtain a central image block and a plurality of neighborhood image blocks respectively;
the average gray value calculating module 15 is configured to obtain the pixel gray mean of each neighborhood image block in the plurality of neighborhood image blocks, and to calculate the average gray value of the plurality of neighborhood image blocks according to the pixel gray mean of each neighborhood image block.
In one embodiment, the real target area determination module 20 includes:
a binary image obtaining module 21, configured to perform threshold segmentation on the local contrast enhanced image of the target image according to a preset threshold, so as to obtain a binary image;
a connected component analysis module 22, configured to perform connected component analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image;
and the time domain correlation analysis module 23 is configured to perform time domain correlation analysis on the multiple candidate target regions in the binary image according to the number and distribution of the candidate target regions, so as to obtain the real target region among the multiple candidate target regions.
In one embodiment, the binary image acquisition module 21 includes:
a mean and standard deviation obtaining module 211, configured to obtain the mean and standard deviation of the local contrast according to the local contrast-enhanced image of the target image;
a threshold segmentation module 212, configured to determine a preset threshold for performing threshold segmentation on the local contrast enhanced image of the target image according to the mean and the standard deviation of the local contrast enhanced image.
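The threshold determination in modules 211 and 212 can be sketched as below, assuming the common form T = mean + k * std; the multiplier k is an assumed tuning constant, as the text only says the threshold is derived from the mean and standard deviation.

```python
import numpy as np

def adaptive_threshold(enhanced, k=5.0):
    """Binarize a local-contrast-enhanced map.  The threshold is
    mean + k * std of the enhancement values; the multiplier k is an
    assumed tuning constant (the text only says the threshold is
    derived from the mean and standard deviation)."""
    t = enhanced.mean() + k * enhanced.std()
    return (enhanced > t).astype(np.uint8)
```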
In one embodiment, the time domain correlation analysis module 23 includes:
a real target region obtaining module 231, configured to remove a non-target region from the multiple candidate target regions according to the motion continuity and the data association between adjacent frames, so as to obtain the real target region.
In one embodiment, the real target area obtaining module 231 includes:
a first detecting module 2311, configured to determine that a candidate target region is the real target area if only one candidate target region is detected in each of three consecutive frames and the distance between the center positions of the candidate target regions in consecutive frames is smaller than a preset pixel threshold;
the second detecting module 2312 is configured to, if multiple candidate target regions are detected in the first frame, calculate the Euclidean distance and the local contrast between each candidate target region detected in the second frame and each candidate region detected in the first frame, and take the candidate region with the smallest Euclidean distance and the closest local contrast as the real target region.
In one embodiment, the connected domain analysis module 22 includes:
and the number and mark acquisition module 221 is configured to perform cluster analysis on the binary image to obtain the number and marks of the candidate target regions in the binary image.
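Connected-component counting of the kind module 221 performs can be sketched with a standard BFS labelling pass; 4-connectivity is an assumption, as the text does not fix the connectivity.

```python
from collections import deque

def label_components(binary):
    """Label connected foreground regions of a binary image with BFS.
    Returns (number of regions, label map).  4-connectivity is an
    assumption; the text does not fix the connectivity."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                count += 1                      # start a new region label
                labels[y][x] = count
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels
```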
For the specific definition of the target detection device, reference may be made to the above definition of the target detection method, which is not repeated here. The modules in the target detection device can be implemented wholly or partially by software, hardware, or a combination thereof. Each module can be embedded in hardware in, or independent of, the processor in the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing object detection data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of object detection.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of part of the structure related to the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, may combine certain components, or may have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of any of the methods described above when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of any of the methods described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of object detection, the method comprising:
acquiring a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image; wherein the local minimum contrast C_w of the target image is obtained by the following method:
dividing the target image into a plurality of image blocks;
calculating the local minimum contrast of the target image:
C_w = min_i ( L_n / m_i )
wherein L_n is the maximum pixel value of a central image block of the plurality of image blocks, and m_i is the pixel mean value of the ith neighborhood image block adjacent to the central image block;
filtering the local contrast enhanced image of the target image to determine a real target area;
the obtaining of the target image and the processing of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain the local contrast enhanced image of the target image includes:
acquiring the maximum pixel value of the central image block, and obtaining the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of a plurality of neighborhood image blocks, wherein each neighborhood image block is adjacent to the central image block;
obtaining the local minimum contrast of the target image according to the pixel gray mean of each neighborhood image block in the plurality of neighborhood image blocks and the maximum pixel value of the central image block;
obtaining a local contrast enhanced image of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image; wherein the local contrast enhanced image is obtained by multiplying a local minimum contrast by a square of a local maximum difference.
2. The method according to claim 1, wherein obtaining the maximum pixel value of the central image block and obtaining the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of the plurality of neighborhood image blocks comprises:
dividing the target image to obtain the central image block and the plurality of neighborhood image blocks respectively;
and acquiring the pixel gray mean of each neighborhood image block in the plurality of neighborhood image blocks, and calculating the average gray value of the plurality of neighborhood image blocks according to the pixel gray mean of each neighborhood image block.
3. The method of claim 2, wherein the filtering the local contrast enhanced image of the target image to determine a true target region comprises:
performing threshold segmentation on the local contrast enhanced image of the target image according to a preset threshold to obtain a binary image;
performing connected domain analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image;
and performing time domain correlation analysis on the candidate target regions in the binary image according to the number and distribution of the candidate target regions to obtain the real target region among the candidate target regions.
4. The method according to claim 3, wherein the threshold segmentation of the local contrast enhanced image of the target image according to a preset threshold value to obtain a binary image comprises:
obtaining a mean value and a standard deviation of the local contrast enhanced image according to the local contrast enhanced image of the target image;
and determining a preset threshold for performing threshold segmentation on the local contrast enhanced image of the target image according to the mean value and the standard deviation of the local contrast enhanced image.
5. The method according to claim 3, wherein performing time domain correlation analysis on the plurality of candidate target regions in the binary image according to the number and distribution of the candidate target regions to obtain the real target region among the plurality of candidate target regions comprises:
and removing non-target areas from the candidate target areas according to the motion continuity and the data correlation degree between adjacent frames to obtain the real target area.
6. The method of claim 5, wherein the removing non-target regions from the candidate target regions according to the motion continuity and the data correlation between adjacent frames to obtain the real target region comprises:
determining that a candidate target region is the real target area if only one candidate target region is detected in each of three consecutive frames and the distance between the center positions of the candidate target regions in consecutive frames is smaller than a preset pixel threshold;
and if a plurality of candidate target regions are detected in the first frame, calculating the Euclidean distance and the local contrast between each candidate target region detected in the second frame and each candidate region detected in the first frame, and taking the candidate region with the smallest Euclidean distance and the closest local contrast as the real target region.
7. The method according to claim 3, wherein the performing connected component analysis on the binary image to obtain the number and distribution of candidate target regions in the binary image comprises:
and carrying out cluster analysis on the binary image to obtain the number and the marks of the candidate target areas in the binary image.
8. An object detection apparatus, characterized in that the apparatus comprises:
the enhanced image acquisition module is used for acquiring a target image, and processing the target image according to the local minimum contrast of the target image and the local maximum difference of the target image to obtain a local contrast enhanced image of the target image; wherein the local minimum contrast C_w of the target image is obtained by the following method:
dividing the target image into a plurality of image blocks;
calculating the local minimum contrast of the target image:
C_w = min_i ( L_n / m_i )
wherein L_n is the maximum pixel value of the central image block, and m_i is the pixel mean value of the ith neighborhood image block adjacent to the central image block;
the real target area determining module is used for filtering the local contrast enhanced image of the target image and determining a real target area;
wherein the enhanced image acquisition module comprises:
the maximum difference acquisition module is used for acquiring the maximum pixel value of a central image block and obtaining the local maximum difference of the target image according to the maximum pixel value of the central image block and the average gray value of a plurality of neighborhood image blocks, wherein each neighborhood image block is adjacent to the central image block;
the local minimum contrast acquisition module is used for obtaining the local minimum contrast of the target image according to the pixel gray mean of each neighborhood image block in the plurality of neighborhood image blocks and the maximum pixel value of the central image block;
the local contrast enhanced image acquisition module is used for acquiring a local contrast enhanced image of the target image according to the local minimum contrast of the target image and the local maximum difference of the target image, wherein the local contrast enhanced image is acquired by multiplying the local minimum contrast and the square of the local maximum difference.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201910091468.9A 2019-01-30 2019-01-30 Target detection method, target detection device, computer equipment and storage medium Active CN110033474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910091468.9A CN110033474B (en) 2019-01-30 2019-01-30 Target detection method, target detection device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110033474A CN110033474A (en) 2019-07-19
CN110033474B true CN110033474B (en) 2021-09-03

Family

ID=67235513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910091468.9A Active CN110033474B (en) 2019-01-30 2019-01-30 Target detection method, target detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110033474B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113629574A (en) * 2021-08-18 2021-11-09 国网湖北省电力有限公司襄阳供电公司 Strong wind sand area transmission conductor galloping early warning system based on video monitoring technology

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682296A (en) * 2012-03-21 2012-09-19 北京航空航天大学 Self-adaption estimating method of size of infrared small dim target under complicated background condition
CN104834915A (en) * 2015-05-15 2015-08-12 中国科学院武汉物理与数学研究所 Small infrared object detection method in complex cloud sky background
CN107590496A (en) * 2017-09-18 2018-01-16 南昌航空大学 The association detection method of infrared small target under complex background
CN107992873A (en) * 2017-10-12 2018-05-04 西安天和防务技术股份有限公司 Object detection method and device, storage medium, electronic equipment
CN109191395A (en) * 2018-08-21 2019-01-11 深圳创维-Rgb电子有限公司 Method for enhancing picture contrast, device, equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101620728A (en) * 2009-06-19 2010-01-06 北京航空航天大学 New infrared background inhibiting method based on self-adaption background forecast
CN101893580B (en) * 2010-06-10 2012-01-11 北京交通大学 Digital image based detection method of surface flaw of steel rail
US9593365B2 (en) * 2012-10-17 2017-03-14 Spatial Transcriptions Ab Methods and product for optimising localised or spatial detection of gene expression in a tissue sample
CN103996209B (en) * 2014-05-21 2017-01-11 北京航空航天大学 Infrared vessel object segmentation method based on salient region detection
CN106056115B (en) * 2016-05-25 2019-01-22 西安科技大学 A kind of infrared small target detection method under non-homogeneous background


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Meng Bo. Research on Aerial Infrared Small Target Detection and Hardware Acceleration. China Masters' Theses Full-text Database, Information Science and Technology, 2017, pp. I138-5354. *
Meng Bo. Research on Aerial Infrared Small Target Detection and Hardware Acceleration. China Masters' Theses Full-text Database, Information Science and Technology; 20170315; abstract and pp. 27-40. *

Also Published As

Publication number Publication date
CN110033474A (en) 2019-07-19

Similar Documents

Publication Publication Date Title
US10467743B1 (en) Image processing method, terminal and storage medium
CN107194408B (en) Target tracking method of mixed block sparse cooperation model
CN109903272B (en) Target detection method, device, equipment, computer equipment and storage medium
Deng et al. Infrared small target detection based on the self-information map
CN109035287B (en) Foreground image extraction method and device and moving vehicle identification method and device
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN112464829B (en) Pupil positioning method, pupil positioning equipment, storage medium and sight tracking system
CN109255802B (en) Pedestrian tracking method, device, computer equipment and storage medium
CN110400294B (en) Infrared target detection system and detection method
CN111191533A (en) Pedestrian re-identification processing method and device, computer equipment and storage medium
CN107194896B (en) Background suppression method and system based on neighborhood structure
CN110502977B (en) Building change classification detection method, system, device and storage medium
WO2020062546A1 (en) Target tracking processing method and electronic device
CN112395944A (en) Multi-scale ratio difference combined contrast infrared small target detection method based on weighting
CN113228105A (en) Image processing method and device and electronic equipment
CN110033474B (en) Target detection method, target detection device, computer equipment and storage medium
CN116805327A (en) Infrared small target tracking method based on dynamic convolution kernel
CN109934777B (en) Image local invariant feature extraction method, device, computer equipment and storage medium
CN109934870B (en) Target detection method, device, equipment, computer equipment and storage medium
CN113762027B (en) Abnormal behavior identification method, device, equipment and storage medium
CN112084874B (en) Object detection method and device and terminal equipment
Zhang et al. Moving object detection algorithm based on pixel spatial sample difference consensus
CN113963017A (en) Real-time infrared small and weak target detection method and device and computer equipment
Zhao et al. ResFuseYOLOv4_Tiny: Enhancing detection accuracy for lightweight networks in infrared small object detection tasks
CN112652004B (en) Image processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant