CN107085828B - Image splicing and fusing method based on human visual characteristics - Google Patents


Info

Publication number
CN107085828B
Authority
CN
China
Prior art keywords
image
path
splicing
region
area
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710298017.3A
Other languages
Chinese (zh)
Other versions
CN107085828A (en)
Inventor
史再峰
高阳
庞科
高静
徐江涛
刘铭赫
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201710298017.3A
Publication of CN107085828A
Application granted
Publication of CN107085828B
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the fields of image processing and computer graphics. It aims to use visual saliency and the masking effect to improve the quality of spliced images, to reduce the quality degradation caused by broken edges, and to obtain high-quality spliced and fused images that conform to human vision. The technical scheme adopted is an image splicing and fusing method based on human visual characteristics, comprising the following steps: process the overlap region of the two images after the matching operation and search for a splicing path within that region: firstly, search the overlap region for areas with visual masking characteristics; secondly, solve the pixel weights of the smooth areas and the texture areas; thirdly, solve the visual saliency of the overlap region; fourthly, solve the splicing path; and fifthly, complete the image splicing along that path. The invention is mainly applied to image-processing applications.

Description

Image splicing and fusing method based on human visual characteristics
Technical Field
The invention relates to the fields of image processing and computer graphics, and to improving image quality based on the characteristics of the human eye. More specifically, it relates to optimizing the image fusion effect according to human visual characteristics when images are matched and spliced, and in particular to an image splicing and fusing method based on human visual characteristics.
Background
Image splicing and fusion technology is widely used in automotive electronics, unmanned aerial vehicles, military applications, remote sensing and other fields. The technology extracts, matches and computes the feature information of two or more images with overlapping regions to obtain splicing parameters, deforms the images according to those parameters, and fuses them into a single wide-angle image. In practical applications, limitations of the splicing algorithm, the shooting method and lens distortion introduce calculation errors, so the computed splicing parameters are inaccurate and the deformed image overlap regions cannot coincide exactly, producing broken edges in the spliced image. Broken edges captured by the human eye severely degrade the quality of the spliced image and the viewer's subjective impression of it.
Image quality evaluation based on the human visual system has received continuous attention in recent years, because it yields objective quality assessments that agree with subjective perception. The human visual system has many characteristics, among which visual saliency and masking are two prominent ones. Visual saliency is a research hotspot in the study of human attention mechanisms; in computer vision it mainly takes the form of simulating the human visual attention mechanism, which concentrates limited cognitive resources on the important stimuli in a scene while suppressing unimportant information. The graphics in an image have different shapes and colors, so different graphics stimulate the human eye differently and have different visual saliency. Masking is another characteristic of the human eye: the eye is highly sensitive to strong-contrast content, but cannot effectively recognize an object that resembles its background. This is a concrete embodiment of the eye's masking characteristic.
Disclosure of Invention
To overcome the defects of the prior art, the invention addresses the dislocation, broken edges, splicing gaps and similar problems of image fusion in common image splicing methods. Based on the characteristics of the human eye, it uses visual saliency and the masking effect to improve the quality of spliced images and to reduce the quality degradation caused by broken edges, so as to obtain high-quality spliced and fused images that conform to vision. The technical scheme adopted is an image splicing and fusing method based on human visual characteristics, comprising the following steps:
Process the overlap region of the two images after the matching operation, and search for a splicing path within this region:
Firstly, search the overlap region for areas with visual masking characteristics. These areas fall into two types: smooth regions, and regions with fine texture. A dislocation that occurs in the first type is invisible, while in the second type a fine dislocation is masked by the fine texture of the background and cannot effectively attract the attention of the human eye. The specific method is: extract the edge information of the image with an edge detection operator to obtain a binary image C in which edge pixels have brightness 1; perform a morphological dilation of magnitude r on the edges of image C to obtain a binary image D, in which regions with brightness value 0 can be regarded as regions of little change; then perform a morphological erosion of magnitude r on image D twice in succession to obtain an image E, in which regions with brightness 1 are the regions of fine texture;
Secondly, solve the pixel weights of the smooth areas and the texture areas. For each pixel of a smooth area, compute the range within its pixel window according to formula 1, obtaining a value that represents the degree of similarity between the pixel and its surroundings:
G = x_max - x_min    (1)
In formula 1, G is the range of the window centered on the pixel being calculated, x_max is the maximum value in the window and x_min is the minimum value. The smaller G is, the more similar the pixel is to its surroundings. For the pixels of the fine-texture areas, the local entropy is obtained by formula 2:
H = -∑_{i=0}^{m-1} ∑_{j=0}^{n-1} p(i, j)·log p(i, j)    (2)
In formula 2, (i, j) are the coordinates of a pixel within the calculation window, whose top-left vertex is the window origin (0, 0); p is the probability of the current pixel's gray level among the gray levels of the window; m and n are the length and width of the pixel window; ∑ is the summation sign; and H is the local entropy of the pixel, representing the degree of disorder at that point. From the local range and local entropy so obtained, the weight map mask of the masking regions is calculated according to formula 3:
mask = k1·(255 - G) in the smooth regions; mask = k2·H in the fine-texture regions    (3)
In formula 3, mask is the masking-characteristic weight map, and k1 and k2 are the weights of the local range and the local entropy, respectively.
Thirdly, compute the visual saliency of the overlap region: process the image of the overlap region with a widely applicable visual saliency algorithm to obtain a visual saliency weight map Saliency of the region; the higher the saliency value, the more easily a pixel draws the attention of the human eye;
Fourthly, solve the splicing path. First, obtain the selection weight of the splicing path according to formula 4:
cost = k3·mask + k4·Saliency    (4)
In formula 4, cost is the weight used to select the pixels of the splicing path, and k3 and k4 are the weights of the masking characteristic and the saliency, respectively;
Search the fusion path according to cost, taking the intersection point of the two image boundaries within the overlap region as the starting point and selecting path points column by column within the coordinate range of the overlap region. If the current path point has coordinates (x, y), search for the next path point in the range (x+1, y-a) to (x+1, y+b), where a and b are constants greater than 0; take the coordinates of the point with the minimum cost value in that range as the next path point, and so on, obtaining a splicing-fusion path optimized for the characteristics of the human eye;
Fifthly, complete the image splicing along the path.
In one specific example, the edges of the image are detected with the Canny operator, and dilation and erosion of magnitude 5 are then performed to obtain the regions with masking characteristics; the local range value of the smooth regions is subtracted from 255 and multiplied by the weight k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the weight k2 = 0.3, giving the masking-characteristic weight map mask;
mask multiplied by the weight k3 = 0.4 is added to Saliency multiplied by the weight k4 = 0.6 to give the selection weight cost of the splicing path;
when the splicing path is searched, the search range for the next path point after the current path point (x, y) is (x+1, y-3) to (x+1, y+5), and the point with the minimum cost is selected as the next path point.
The invention has the characteristics and beneficial effects that:
1. The invention routes the splicing-fusion path through regions of the image that mask dislocation, eliminating or weakening the visual attention that dislocation would attract and improving the quality of the fused image;
2. The invention uses pixels of low visual saliency as the splicing-fusion path, so that dislocations occur far from the regions of interest to the human eye rather than within the regions it focuses on; the key information of the image is preserved and its quality is improved.
Description of the drawings:
FIG. 1 Edges of a fine-textured pattern and the effect of the morphological dilation operation.
FIG. 2 Morphological dilation and erosion processing and the resulting fine-texture regions (the black and gray regions of the last image).
FIG. 3 Selection of the splicing path.
FIG. 4 is a flow chart of the present invention.
Detailed Description
Image splicing and fusion essentially gathers the pixels of the processed images onto one image, taking the overlap region of the two images as the reference. The common fusion method takes one of the input images as the reference and pads in the other after it has been transformed, so the splicing-fusion path is the boundary of the reference image. When the computed parameters deviate from their actual values and the edges of a figure intersect the splicing path, edge dislocation or edge breakage occurs. If these dislocations fall in areas of high visual saliency, the quality of the image is severely affected. Meanwhile, images contain highly disordered texture regions (such as sand or tree crowns), and when dislocated edges fall in such regions they cannot be perceived, owing to the masking characteristic of the human eye. The invention therefore uses these characteristics of the eye to confine the splicing path, where dislocation may occur, to regions of low visual saliency and to fine-texture regions with masking characteristics, reducing the number of dislocated edges the eye can identify and effectively improving image quality. The specific method is as follows:
and processing the overlapped area of the two images after matching operation processing. A splice path is sought within the region.
Firstly, search the overlap region for areas with visual masking characteristics. Such areas fall broadly into two types: smooth areas, such as the sky or a calm lake; and areas of fine texture, such as sand or dense leaves. Dislocations that occur in the first type are invisible, while in the second type small dislocations are masked by the fine texture of the background and do not effectively attract the attention of the human eye. The specific method is: extract the edge information of the image with an edge detection operator to obtain a binary image C in which edge pixels have brightness 1; as shown in FIG. 1, the black squares represent the detected edge pixels. Perform a morphological dilation of magnitude r on the edges in image C to obtain a binary image D; the areas of D with brightness value 0 can now be regarded as areas of little change (the white pixels in the right image of FIG. 1). As shown in FIG. 2, erode image D twice in succession with magnitude r to obtain image E. The areas of E with brightness 1 are the fine-texture areas, i.e. the gray and black regions in FIG. 2.
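A minimal sketch of this step in Python with OpenCV (the Canny thresholds and the square structuring element are assumptions; the patent fixes only the magnitude r):

    import cv2
    import numpy as np

    def masking_regions(gray, r=5):
        """Return boolean masks of the smooth and fine-texture regions."""
        # Binary edge image C: edge pixels have value 1.
        C = (cv2.Canny(gray, 100, 200) > 0).astype(np.uint8)
        kernel = np.ones((2 * r + 1, 2 * r + 1), np.uint8)
        # Dilate the edges by r to obtain image D; pixels of D that stay 0
        # lie far from every edge and form the smooth regions.
        D = cv2.dilate(C, kernel)
        smooth = D == 0
        # Erode D twice by r to obtain image E; pixels that remain 1 sit
        # inside densely edged areas, i.e. the fine-texture regions.
        E = cv2.erode(cv2.erode(D, kernel), kernel)
        texture = E == 1
        return smooth, texture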
Secondly, compute the pixel weights of the smooth areas and the texture areas. For each pixel of a smooth area, compute the range within its pixel window according to formula 1, obtaining a value that represents the degree of similarity between the pixel and its surroundings.
G = x_max - x_min    (1)
In formula 1, G is the range of the window centered on the pixel being calculated, x_max is the maximum value in the window and x_min is the minimum value. The smaller G is, the more similar the pixel is to its surroundings. For the pixels of the fine-texture areas, the local entropy is obtained by formula 2.
H = -∑_{i=0}^{m-1} ∑_{j=0}^{n-1} p(i, j)·log p(i, j)    (2)
In formula 2, (i, j) are the coordinates of a pixel within the calculation window, whose top-left vertex is the window origin (0, 0); p is the probability of the current pixel's gray level among the gray levels of the window; m and n are the length and width of the pixel window; and H is the local entropy of the pixel, representing the degree of disorder at that point. From the local range and local entropy so obtained, the weight map mask of the masking regions is calculated according to formula 3.
mask = k1·(255 - G) in the smooth regions; mask = k2·H in the fine-texture regions    (3)
In formula 3, mask is the masking-characteristic weight map, and k1 and k2 are the weights of the local range and the local entropy, respectively.
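Both weights can be computed per pixel as below, a sketch in which the 9×9 window size is an assumption (the patent leaves m and n open) and SciPy's sliding-window filters stand in for explicit loops:

    import numpy as np
    from scipy.ndimage import generic_filter, maximum_filter, minimum_filter

    def local_range(gray, win=9):
        """Formula 1: G = x_max - x_min over a win x win window."""
        return (maximum_filter(gray, win).astype(np.int16)
                - minimum_filter(gray, win))

    def local_entropy(gray, win=9):
        """Formula 2: H = -sum p(i, j) * log p(i, j) over a win x win window."""
        def entropy(window):
            p = np.bincount(window.astype(np.uint8), minlength=256) / window.size
            p = p[p > 0]                      # ignore absent gray levels
            return -np.sum(p * np.log2(p))
        # Slow but simple: evaluate the entropy of every window.
        return generic_filter(gray, entropy, size=win, output=np.float64)

    def masking_weight(gray, smooth, texture, k1=0.2, k2=0.3, win=9):
        """Formula 3: k1*(255 - G) on smooth regions, k2*H on texture regions."""
        mask = np.zeros(gray.shape, np.float64)
        mask[smooth] = k1 * (255.0 - local_range(gray, win)[smooth])
        mask[texture] = k2 * local_entropy(gray, win)[texture]
        return mask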
Thirdly, solve the visual saliency of the overlap region. Process the image of the overlap region with a widely applicable visual saliency algorithm to obtain a visual saliency weight map Saliency of the region. The higher the visual saliency, the more attention the human eye pays.
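The patent does not name a particular saliency algorithm; one widely applicable choice (an assumption on our part) is the spectral-residual static saliency detector shipped with opencv-contrib-python:

    import cv2

    def saliency_weight(bgr):
        """Visual saliency weight map in [0, 1]; higher draws more attention."""
        detector = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = detector.computeSaliency(bgr)
        if not ok:
            raise RuntimeError("saliency computation failed")
        return saliency_map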
Fourthly, solve the splicing path. First, obtain the selection weight of the splicing path according to formula 4.
cost = k3·mask + k4·Saliency    (4)
In formula 4, cost is the weight used to select the pixels of the splicing path, and k3 and k4 are the weights of the masking characteristic and the saliency, respectively.
Search the fusion path according to cost, taking the intersection point of the boundaries of the two images in the overlap region as the starting point, and selecting path points column by column within the coordinate range of the overlap region. As shown in FIG. 3, if the current path point has coordinates (x, y), the search range for the next path point is (x+1, y-a) to (x+1, y+b); the coordinates of the point with the minimum cost value in that range become the next path point. Continuing by analogy yields a splicing-fusion path optimized for the characteristics of the human eye.
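A greedy column-by-column search matching this description might look as follows (a sketch; treating x as the column index and clamping at the region border are assumptions):

    import numpy as np

    def find_seam(cost, start, a=3, b=5):
        """Trace the splicing path over a 2-D cost map, column by column.

        cost  : weight map of the overlap region (formula 4), indexed [y, x]
        start : (x, y) intersection of the two image boundaries
        a, b  : search extents below/above the previous path point
        """
        h, w = cost.shape
        x, y = start
        path = [(x, y)]
        while x + 1 < w:
            lo, hi = max(0, y - a), min(h - 1, y + b)
            # Next point: minimum-cost pixel in column x+1, rows y-a .. y+b.
            y = lo + int(np.argmin(cost[lo:hi + 1, x + 1]))
            x += 1
            path.append((x, y))
        return path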
Fifthly, complete the image splicing along the path.
In implementation, the Canny operator is used to detect the edges of the image. Dilation and erosion of magnitude 5 are then performed to obtain the regions with masking characteristics. The local range value of the smooth regions is subtracted from 255 and multiplied by the weight k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the weight k2 = 0.3, giving the masking-characteristic weight map mask.
mask multiplied by the weight k3 = 0.4 is added to Saliency multiplied by the weight k4 = 0.6 to give the selection weight cost of the splicing path.
When the splicing path is searched, the search range for the next path point after the current path point (x, y) is (x+1, y-3) to (x+1, y+5). The point with the lowest cost is selected as the next path point.
With these parameter settings, the splicing-fusion effect of the invention is at its best.
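The sketches above can be chained with the embodiment's parameters; `overlap_bgr` (the overlap-region image) and `start` (the boundary intersection) are hypothetical inputs:

    import cv2

    gray = cv2.cvtColor(overlap_bgr, cv2.COLOR_BGR2GRAY)
    smooth, texture = masking_regions(gray, r=5)          # step 1
    mask = masking_weight(gray, smooth, texture,          # step 2
                          k1=0.2, k2=0.3)
    sal = saliency_weight(overlap_bgr)                    # step 3
    cost = 0.4 * mask + 0.6 * sal                         # step 4, formula 4
    seam = find_seam(cost, start, a=3, b=5)               # optimized path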

Claims (2)

1. An image splicing and fusing method based on human visual characteristics is characterized by comprising the following steps:
processing the overlap region of the two images after the matching operation, and searching for a splicing path within this region:
1) searching the overlap region for areas with visual masking characteristics, the areas falling into two types: smooth regions, and regions with fine texture, wherein a dislocation occurring in the first type is invisible, while in the second type a fine dislocation is masked by the fine texture of the background and cannot effectively attract the attention of the human eye; the specific method being: extracting the edge information of the image with an edge detection operator to obtain a binary image C in which edge pixels have brightness 1; performing a morphological dilation of magnitude r on the edges of image C to obtain a binary image D, in which regions with brightness value 0 can be regarded as regions of little change; and performing a morphological erosion of magnitude r on image D twice in succession to obtain an image E, in which regions with brightness 1 are the regions of fine texture;
2) solving the pixel weights of the smooth areas and the texture areas: for each pixel of a smooth area, computing the range within its pixel window according to formula 1 to obtain a value representing the degree of similarity between the pixel and its surroundings:
G = x_max - x_min    (1)
in formula 1, G being the range of the window centered on the pixel being calculated, x_max the maximum value in the window and x_min the minimum value, a smaller G representing surroundings more similar to the pixel; the local entropy of the pixels of the fine-texture regions being obtained by formula 2:
H = -∑_{i=0}^{m-1} ∑_{j=0}^{n-1} p(i, j)·log p(i, j)    (2)
(i, j) being the coordinates of a pixel within the calculation window, whose top-left vertex is the window origin (0, 0); p the probability of the current pixel's gray level among the gray levels of the window; m and n the length and width of the pixel window; ∑ the summation sign; and H the local entropy of the pixel, representing the degree of disorder at that point; the weight map mask of the masking regions being calculated from the local range and local entropy so obtained, according to formula 3:
mask = k1·(255 - G) in the smooth regions; mask = k2·H in the fine-texture regions    (3)
in formula 3, mask representing the masking-characteristic weight map, k1 the weight of the local range, and k2 the weight of the local entropy;
3) computing the visual saliency of the overlap region: processing the image of the overlap region with a widely applicable visual saliency algorithm to obtain a visual saliency weight map Saliency of the region, wherein the higher the saliency value, the more easily a pixel draws the attention of the human eye;
4) solving the splicing path: firstly, obtaining the selection weight of the splicing path according to formula 4:
cost = k3·mask + k4·Saliency    (4)
cost representing the weight used to select the pixels of the splicing path, and k3 and k4 the weights of the masking characteristic and the saliency, respectively;
searching the fusion path according to cost, taking the intersection point of the two image boundaries within the overlap region as the starting point and selecting path points column by column within the coordinate range of the overlap region: if the current path point has coordinates (x, y), searching for the next path point in the range (x+1, y-a) to (x+1, y+b), a and b being constants greater than 0, and taking the coordinates of the point with the minimum cost value in that range as the next path point, and so on, to obtain a splicing-fusion path optimized for the characteristics of the human eye;
5) completing the image splicing along the path.
2. The image splicing and fusing method based on human visual characteristics as claimed in claim 1, wherein the Canny operator is used to detect the edges of the image, dilation and erosion of magnitude 5 are then performed to obtain the regions with masking characteristics, the local range value of the smooth regions is subtracted from 255 and multiplied by the weight k1 = 0.2, and the local entropy of the fine-texture regions is multiplied by the weight k2 = 0.3 to give the masking-characteristic weight map mask;
mask multiplied by the weight k3 = 0.4 is added to Saliency multiplied by the weight k4 = 0.6 to give the selection weight cost of the splicing path;
when the splicing path is searched, the search range for the next path point after the current path point (x, y) is (x+1, y-3) to (x+1, y+5), and the point with the minimum cost is selected as the next path point.
CN201710298017.3A 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics Expired - Fee Related CN107085828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710298017.3A CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710298017.3A CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Publications (2)

Publication Number Publication Date
CN107085828A CN107085828A (en) 2017-08-22
CN107085828B (en) 2020-06-26

Family

ID=59612216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710298017.3A Expired - Fee Related CN107085828B (en) 2017-04-29 2017-04-29 Image splicing and fusing method based on human visual characteristics

Country Status (1)

Country Link
CN (1) CN107085828B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064409B (en) * 2018-10-19 2023-04-11 广西师范大学 Visual image splicing system and method for mobile robot


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903093A (en) * 2012-09-28 2013-01-30 中国航天科工集团第三研究院第八三五八研究所 Poisson image fusion method based on chain code mask
CN105100579B (en) * 2014-05-09 2018-12-07 华为技术有限公司 A kind of acquiring and processing method and relevant apparatus of image data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514580A (en) * 2013-09-26 2014-01-15 香港应用科技研究院有限公司 Method and system used for obtaining super-resolution images with optimized visual experience
CN105023253A (en) * 2015-07-16 2015-11-04 上海理工大学 Visual underlying feature-based image enhancement method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Visual artifacts based image splicing detection in uncompressed images; V T Manu et al.; 2015 IEEE International Conference on Computer Graphics, Vision and Information Security; 2016-04-11; pp. 145-150 *
Image fusion method based on human visual characteristics in the NSST domain; Kong Weiwei; Journal of Harbin Engineering University; 2013-06-30; Vol. 34, No. 6; pp. 777-782 *
Image quality assessment based on visual saliency and contrast characteristics; Li Zhao et al.; Journal of Nankai University (Natural Science Edition); 2015-12-31; Vol. 48, No. 6; pp. 46-52 *

Also Published As

Publication number Publication date
CN107085828A (en) 2017-08-22

Similar Documents

Publication Publication Date Title
US10169664B2 (en) Re-identifying an object in a test image
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
CN106683100B (en) Image segmentation defogging method and terminal
CN107301624B (en) Convolutional neural network defogging method based on region division and dense fog pretreatment
US8831337B2 (en) Method, system and computer program product for identifying locations of detected objects
CN107194866B (en) Image fusion method for reducing spliced image dislocation
CN107705288A (en) Hazardous gas spillage infrared video detection method under pseudo- target fast-moving strong interferers
CN111209770A (en) Lane line identification method and device
CN104574366A (en) Extraction method of visual saliency area based on monocular depth map
CN111160291B (en) Human eye detection method based on depth information and CNN
CN110866926B (en) Infrared remote sensing image rapid and fine sea-land segmentation method
CN107808140B (en) Monocular vision road recognition algorithm based on image fusion
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN103034983A (en) Defogging method based on anisotropic filtering
Halmaoui et al. Contrast restoration of road images taken in foggy weather
CN115631116B (en) Aircraft power inspection system based on binocular vision
Choi et al. Fog detection for de-fogging of road driving images
CN110866889A (en) Multi-camera data fusion method in monitoring system
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
JP7096175B2 (en) Object extraction method and device
CN107085828B (en) Image splicing and fusing method based on human visual characteristics
CN110751068B (en) Remote weak and small target visual detection method based on self-adaptive space-time fusion
JP2010136207A (en) System for detecting and displaying pedestrian
Fatichah et al. Optical flow feature based for fire detection on video data
CN112598777B (en) Haze fusion method based on dark channel prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200626

Termination date: 20210429