CN108769550B - Image saliency analysis system and method based on DSP - Google Patents


Info

Publication number
CN108769550B
CN201810467233.0A · CN108769550B
Authority
CN
China
Prior art keywords
video
image
frame
visible light
infrared
Prior art date
Legal status
Active
Application number
CN201810467233.0A
Other languages
Chinese (zh)
Other versions
CN108769550A (en)
Inventor
王常勇
周瑾
徐葛森
韩久琦
柯昂
张华亮
Current Assignee
Institute of Pharmacology and Toxicology of AMMS
Original Assignee
Institute of Pharmacology and Toxicology of AMMS
Priority date
Filing date
Publication date
Application filed by Institute of Pharmacology and Toxicology of AMMS
Priority to CN201810467233.0A
Publication of CN108769550A
Application granted
Publication of CN108769550B

Classifications

    • H04N 5/265 Mixing (studio circuits, e.g. for mixing, switching-over, or special effects)
    • H04N 23/13 Cameras or camera modules generating image signals from different wavelengths, with multiple sensors
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N 9/64 Circuits for processing colour signals

Abstract

The invention discloses a DSP-based image saliency analysis system and method. The system comprises an infrared video camera, a visible light video camera, a pan-tilt driver, a photoelectric sensor, and a video management system. The infrared video camera collects infrared video information of a target image, and the visible light video camera collects visible light video information of the target image; the pan-tilt driver carries the infrared and visible light video cameras; the video management system first fuses the video information from the infrared and visible light cameras; and the photoelectric sensor detects whether the chassis of the video management system has been illegally opened. The invention solves the problem of inconsistent sample feature distributions across different cameras by means of a data fusion technique, enhances the adaptability of the system by means of a salient-feature adaptive weight adjustment technique, achieves a higher data security standard, and can be widely applied in fields such as video surveillance, artificial intelligence, and target detection and tracking.

Description

Image saliency analysis system and method based on DSP
Technical Field
The invention relates to the technical field of video image processing, and in particular to a DSP-based image saliency analysis system.
Background
Video fusion technology integrates the video information collected by different cameras, eliminates the redundancy and contradictions that may exist between the information from different cameras, makes the sources complement one another, improves the timeliness and reliability of target extraction from video, and raises the utilization efficiency of video data. Targets in a video image are usually in motion, and traditional image processing techniques often fall short when handling dynamic video images: video image processing must consider not only the static features of the image but also its dynamic features, and extraction of salient targets from video can only be completed by fusing the dynamic and static features of the target. Salient target extraction was first applied in the field of military target recognition and is now widely used in fields such as video surveillance, artificial intelligence, and target detection and tracking. An image saliency analysis system can be implemented with an ASIC chip, an FPGA chip, or a DSP chip; ASIC and FPGA implementations offer parallel algorithms and high computation speed, but porting video processing algorithms to them is difficult, development cycles are long, and project risk is high. In addition, existing image saliency analysis systems provide no data security protection mechanism, so an illegal user can obtain the image data inside the system through illegitimate means.
Disclosure of Invention
To solve the above technical problems, the invention provides a DSP (digital signal processor) based image saliency analysis system comprising an infrared video camera, a visible light video camera, a pan-tilt driver, a photoelectric sensor, and a video management system. The infrared video camera collects infrared video information of a target image, and the visible light video camera collects visible light video information of the target image; the pan-tilt driver carries the infrared and visible light video cameras; the video management system first fuses the video information from the infrared and visible light cameras; and the photoelectric sensor detects whether the chassis of the video management system has been illegally opened.
In the system, the video management system is further used to compute the shallow salient features of the fused video information (pixel density, HUE color, pixel orientation, inter-frame displacement, and inter-frame swing) and then, during the fusion of the shallow salient feature data, to complete the extraction of salient targets in the video using an adaptive weight adjustment technique, obtaining a frame-by-frame saliency distribution map of the target image.
In the system, when the photoelectric sensor detects that the chassis of the video management system has been illegally opened, the video management system is notified to erase the programs and video data stored in it.
The system further comprises a wireless transmission system, a visible light camera fill light, and a display screen. The fill light supplements illumination when the light around the target is weak; the wireless transmission system transmits the frame-by-frame saliency distribution map of the acquired target image wirelessly; and the display screen displays the frame-by-frame saliency distribution map of the acquired target image.
In the system, when the target moves, the video management system controls the pan-tilt driver to rotate the infrared video camera or the visible light video camera.
The invention also provides a DSP-based image saliency analysis method: detect whether the chassis of the video management system has been illegally opened; if it has not, collect infrared video information and visible light video information of the target image; fuse the video information from the infrared and visible light cameras; acquire frame information of the fused video information; compute saliency values for the acquired frame information; and obtain a frame-by-frame saliency distribution map of the target image from the computed saliency values.
In the method, when the chassis of the video management system is detected to have been illegally opened, the data and programs stored in the video management system are erased.
In the method, fusing the video information of the infrared and visible light cameras further comprises: performing wavelet transformation on the video information of each camera to obtain multi-resolution representations of the infrared and visible light images; performing data fusion on the multi-resolution representations to obtain a fused multi-scale image; and performing inverse wavelet transformation on the fused multi-scale image to obtain the fused image.
In the method, acquiring frame information of the fused video information comprises performing color space description and channel-unified frame description on the fused video information to acquire the frame information.
In the method, the step of calculating the saliency value of the acquired frame information and obtaining a frame-by-frame saliency distribution map of the target image according to the calculated saliency value further comprises: mapping the frame information acquired by the color space description to other color spaces and then performing a HUE color saliency value calculation process to obtain a HUE color saliency value;
performing inter-frame swing saliency value calculation, pixel orientation saliency value calculation, inter-frame displacement saliency value calculation, and pixel density saliency value calculation on the frame information acquired by the channel-unified frame description to obtain an inter-frame swing saliency value, a pixel orientation saliency value, an inter-frame displacement saliency value, and a pixel density saliency value; and performing adaptive weight adjustment on the obtained HUE color, inter-frame swing, pixel orientation, inter-frame displacement, and pixel density saliency values to obtain the frame-by-frame saliency distribution map of the target image.
The invention brings the following beneficial results: it completes salient target extraction using information and data fusion based on pixel density and orientation, color, inter-frame displacement, swing, and similar features; it solves the problems that a single-camera system captures only a single kind of target feature and that sample feature distributions are inconsistent across different cameras; it achieves a higher data security standard; and it can be widely applied in fields such as video surveillance, artificial intelligence, unmanned driving, and target detection and tracking.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a general block diagram of the DSP-based image saliency analysis system of the present invention.
Fig. 2 is a flowchart of the fusion of an infrared video image and a visible light video image.
FIG. 3 is a flow chart of the DSP-based image saliency analysis method of the present invention.
FIG. 4 is a flow chart of adaptive weight selection for multiple channels.
Detailed Description
Specific embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the specific embodiments.
Fig. 1 is a general block diagram of the DSP-based image saliency analysis system of the present invention, which comprises an infrared video camera, a visible light video camera, a pan-tilt driver, a visible light camera fill light, a photoelectric sensor, a video management system, a display screen, and a wireless transmission system. The infrared video camera collects infrared video information of the target image, and the visible light video camera collects visible light video information of the target image. After completing acquisition of the raw video, the two cameras transmit it to the video management system through the transmission interface. The video management system first fuses the video information of the infrared and visible light cameras; it then completes salient target extraction by computing the shallow salient features of the fused video on different channels (pixel density, HUE color, pixel orientation, inter-frame displacement, and inter-frame swing) and applying adaptive weight adjustment during the fusion of the shallow salient feature data. The resulting frame-by-frame saliency distribution map of the target image can be shown on the display screen or transmitted to other devices through the wireless transmission system, which may use wireless protocols such as Bluetooth, WiFi, or ZigBee.
The infrared and visible light video cameras in Fig. 1 photograph the area of the salient target. Both cameras are mounted on the pan-tilt driver, and shooting can be switched intelligently between them. When the light around the salient target is weak, illumination can be supplemented by the visible light camera fill light; when the salient target moves, the video management system can control the pan-tilt driver to rotate the infrared or visible light video camera; and when the salient target is occluded, the video management system can perform short-term least-squares linear recursive prediction of the target's motion trend from the preceding video information. In this way, all-weather monitoring, capture, and tracking of the salient target can be achieved.
In Fig. 1, the photoelectric sensor may be installed inside or outside the chassis of the video management system and detects whether the chassis has been illegally opened. When an illegal opening is detected, the photoelectric sensor generates a high-level pulse signal and delivers it to the video management system as an interrupt; on receiving this signal, the video management system starts a self-destruction program and erases the programs and video data it stores, thereby achieving a higher information security level.
Fig. 2 is a flowchart of the fusion of the infrared video image and the visible light video image in the embodiment of the present invention, which specifically includes the following steps:
(21) respectively performing wavelet transformation on the original infrared video image and the original visible light video image to obtain multi-resolution representation of the infrared image and the visible light image;
(22) performing data fusion on the multi-resolution representations to obtain a fused multi-scale image, where the fusion uses a proportional weighting method:
X_f(p) = V_x(p) · H_x(p) + V_y(p) · K_y(p), with V_x(p) = E_x(p) / (E_x(p) + E_y(p)) and V_y(p) = E_y(p) / (E_x(p) + E_y(p)),
where V_x(p) and V_y(p) are the weighting coefficients, X_f(p) is the gray value after image fusion, H_x(p) and K_y(p) are the gray values of the visible light and infrared images before fusion, and E_x(p) and E_y(p) are the expected values of the visible light and infrared images before fusion;
(23) performing inverse wavelet transformation on the fused multi-scale image to obtain the fused image, as sketched below.
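To make steps (21) to (23) concrete, the following Python sketch performs a single-level 2-D wavelet decomposition with PyWavelets and applies the proportional weighting rule to the approximation band. It assumes same-size grayscale frames; the function name, the choice of the Haar wavelet, and the larger-magnitude rule for the detail bands are illustrative assumptions, not details from the patent.

```python
# Sketch of the wavelet-domain fusion of steps (21)-(23); names are illustrative.
import numpy as np
import pywt

def fuse_ir_visible(visible: np.ndarray, infrared: np.ndarray) -> np.ndarray:
    """Fuse same-size grayscale visible-light and infrared frames."""
    # Step (21): wavelet transform -> multi-resolution representation of each image.
    v_approx, v_detail = pywt.dwt2(visible.astype(np.float64), "haar")
    i_approx, i_detail = pywt.dwt2(infrared.astype(np.float64), "haar")

    # Step (22): proportional weighting of the approximation band, with weights
    # V_x, V_y taken proportional to the expected gray values E_x, E_y (assumption).
    ex, ey = visible.mean(), infrared.mean()
    vx, vy = ex / (ex + ey), ey / (ex + ey)
    fused_approx = vx * v_approx + vy * i_approx

    # Detail bands: keep the larger-magnitude coefficient (an assumed rule).
    fused_detail = tuple(
        np.where(np.abs(dv) >= np.abs(di), dv, di)
        for dv, di in zip(v_detail, i_detail)
    )

    # Step (23): inverse wavelet transform yields the fused image.
    return pywt.idwt2((fused_approx, fused_detail), "haar")
```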
Fig. 3 is a flowchart of a DSP-based image saliency analysis method in a specific embodiment of the present application.
The DSP-based image saliency analysis method first detects whether the chassis of the video management system has been illegally opened. If an illegal opening is detected, the method enters the interrupt flow, immediately starts the self-destruction program, and erases the data and programs stored in the video management system. If no illegal opening is detected, the method enters the normal flow: image saliency analysis is performed on the video images collected by the infrared and visible light cameras, and the result is transmitted through the wireless system. The normal flow mainly comprises the following steps.
First, the infrared image video stream and the visible light image video stream are acquired; preliminary processing of the infrared and visible light video is completed by the infrared and visible light video processing flows respectively; and the preliminary results of the two streams are image-fused to obtain a fused image video stream. The detailed processing flow is shown in Fig. 2.
Second, a color space description process and a channel-unified frame description process are performed on the fused image video stream.
The HUE color saliency value calculation process comprises the calculation of the salient body feature S_O and the salient edge feature S_E, which are combined by linear weighting into a saliency map S = W1 × S_O + W2 × S_E, where W1 + W2 = 1 and W1 ≥ W2; here S denotes the HUE color saliency value, W1 the weight of the salient body feature, and W2 the weight of the salient edge feature.
The salient body feature S_O is calculated as follows: the local feature S_O_L of a sub-block image and the global feature S_O_G of the sub-block image are fused to obtain the image saliency feature S11 at a given scale,
S11 = w1 × S_O_L + w2 × S_O_G, with w1 + w2 = 1 and w1 ≥ w2. The local feature S_O_L and the global feature S_O_G are both obtained from the Euclidean distance of HUE between blocks, and a region with a large change in the inter-block HUE Euclidean distance is defined as a salient region. The saliency features S12, S13, S14 at the other scales are then calculated with the same algorithm, and finally the multi-scale saliency features are fused into the salient body feature S_O = w1 × S11 + w2 × S12 + w3 × S13 + w4 × S14, with w1 + w2 + w3 + w4 = 1, where S11, S12, S13, S14 are the saliency features of the image at scales 1 to 4.
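The following sketch illustrates the single-scale part of this computation, assuming square sub-blocks whose saliency is scored by HUE distances to the four neighboring blocks (local feature S_O_L) and to the global mean HUE (global feature S_O_G); the block size, the 0.6/0.4 weights, and the function name are illustrative assumptions. Repeating it with several block sizes and fusing the results would give the multi-scale feature S_O.

```python
# Sketch of single-scale HUE block saliency; parameters are illustrative.
import numpy as np
import cv2

def hue_block_saliency(bgr: np.ndarray, block: int = 16) -> np.ndarray:
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0].astype(np.float64)
    h, w = hue.shape
    gy, gx = h // block, w // block
    # Mean HUE of each sub-block.
    means = hue[:gy * block, :gx * block].reshape(gy, block, gx, block).mean(axis=(1, 3))

    # Local feature S_O_L: HUE distance of each block to its 4-neighbour blocks.
    local = np.zeros_like(means)
    local[1:, :] += np.abs(means[1:, :] - means[:-1, :])
    local[:-1, :] += np.abs(means[:-1, :] - means[1:, :])
    local[:, 1:] += np.abs(means[:, 1:] - means[:, :-1])
    local[:, :-1] += np.abs(means[:, :-1] - means[:, 1:])

    # Global feature S_O_G: HUE distance of each block to the global mean.
    global_ = np.abs(means - means.mean())

    # S11 = w1*S_O_L + w2*S_O_G with w1 >= w2, w1 + w2 = 1; upsample to pixel grid.
    s11 = 0.6 * local + 0.4 * global_
    s11 = (s11 / (s11.max() + 1e-9)).astype(np.float32)
    return cv2.resize(s11, (w, h), interpolation=cv2.INTER_LINEAR)
```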
The salient edge feature S_E is calculated as follows: before salient edge extraction, the image is Gaussian filtered; then, taking the N × N neighborhood centered on a pixel, the sums of the absolute chrominance differences between that pixel and the pixels in its neighborhood are computed. When one or more components exceed a preset threshold T1, or the sum of the four components exceeds a preset threshold T2, the pixel is considered to lie at an object edge in the original image. After this operation the image is Gaussian filtered a second time. The thresholds T1 and T2 are obtained from statistics over a large number of standard images.
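A minimal sketch of this edge test follows, assuming the smoothed BGR channels stand in for the chrominance components and using placeholder threshold values rather than the statistically calibrated T1 and T2 the text describes.

```python
# Sketch of the salient-edge test; thresholds and channel choice are assumptions.
import numpy as np
import cv2

def salient_edges(bgr: np.ndarray, n: int = 3, t1: float = 200.0, t2: float = 500.0) -> np.ndarray:
    # First Gaussian filtering pass before edge extraction.
    img = cv2.GaussianBlur(bgr, (5, 5), 0).astype(np.float64)
    pad = n // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w = img.shape[:2]
    # Per-component sums of absolute differences over the N x N neighbourhood.
    comp_sums = np.zeros_like(img)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            if dy == 0 and dx == 0:
                continue
            comp_sums += np.abs(img - padded[pad + dy:pad + dy + h, pad + dx:pad + dx + w])
    # Edge when any component sum exceeds T1 or the total exceeds T2.
    edge = ((comp_sums > t1).any(axis=2) | (comp_sums.sum(axis=2) > t2)).astype(np.float32)
    # Second Gaussian filtering pass after the operation.
    return cv2.GaussianBlur(edge, (5, 5), 0)
```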
The channel-unified frame description process realizes unified processing of the videos of the different channels. It is followed by an inter-frame swing saliency value calculation process, a pixel orientation saliency value calculation process, an inter-frame displacement saliency value calculation process, and a pixel density saliency value calculation process, as follows:
(31) Inter-frame swing saliency value calculation process
The input of the inter-frame swing saliency value calculation is the unified description of two consecutive frames. The difference information of the two frames is computed first; pixels below a certain threshold in the difference information are then set to zero; finally, the optimized difference information is passed through the pixel density saliency value calculation process to obtain the swing saliency distribution between the two consecutive frames.
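A minimal sketch of this step, assuming two consecutive unified grayscale frames and an illustrative noise threshold:

```python
# Sketch of the inter-frame swing difference step; the threshold is an assumption.
import numpy as np

def swing_difference(prev_frame: np.ndarray, cur_frame: np.ndarray,
                     threshold: float = 15.0) -> np.ndarray:
    # Difference information of the two consecutive frames.
    diff = np.abs(cur_frame.astype(np.float64) - prev_frame.astype(np.float64))
    # Zero out pixels below the threshold to suppress noise.
    diff[diff < threshold] = 0.0
    return diff  # fed to the pixel density saliency calculation downstream
```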
(32) Pixel orientation saliency value calculation process and inter-frame displacement saliency value calculation process
A region containing the moving object is first obtained by the frame-separation difference method, and that region is then computed precisely by the optical flow method to obtain the pixel orientation saliency value and the inter-frame displacement saliency value.
The frame-separation difference method obtains the region containing the moving object as follows. First, the fused video image is median-filtered and gray-level transformed to obtain a grayscale image. Next, three frames at an interval n are selected from the gray-transformed dynamic video, where n depends on the speed of the moving object (the faster the object, the larger n); the gray value of the previous frame is f_{i-n}(x, y), that of the current frame is f_i(x, y), and that of the following frame is f_{i+n}(x, y). The differences of the gray values of the adjacent frame pairs are then computed, giving F_B(x, y) = |f_{i-n}(x, y) - f_i(x, y)| and F_f(x, y) = |f_{i+n}(x, y) - f_i(x, y)|. A gray threshold T is then determined by a fast two-dimensional Otsu thresholding algorithm, and the frame-separation differences are thresholded with T to obtain binary frame-separation difference images. Finally, the intersection of the frame-difference images F_B(x, y) and F_f(x, y) is computed to obtain the region E_n(x, y) containing the moving object.
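The sketch below mirrors this flow, assuming uint8 grayscale frames that have already been median-filtered and gray-transformed; OpenCV's one-dimensional Otsu threshold stands in for the two-dimensional Otsu fast algorithm named in the text.

```python
# Sketch of the frame-separation difference method; the Otsu variant is a stand-in.
import cv2

def motion_region(f_prev, f_cur, f_next):
    """f_prev, f_cur, f_next: uint8 grayscale frames at interval n."""
    fb = cv2.absdiff(f_prev, f_cur)   # F_B(x, y) = |f_{i-n} - f_i|
    ff = cv2.absdiff(f_next, f_cur)   # F_f(x, y) = |f_{i+n} - f_i|
    # Threshold each difference image with an Otsu-determined T.
    _, bb = cv2.threshold(fb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, bf = cv2.threshold(ff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Intersection of the two binary masks: region E_n(x, y) with the moving object.
    return cv2.bitwise_and(bb, bf)
```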
The optical flow method performs the precise calculation on the region containing the moving object as follows. Let u and v be the two velocity components in the x and y directions at position (x, y) on the image at time t, where x and y denote pixel positions. Solving for u and v and integrating them over time gives the two displacements in the x and y directions at (x, y), from which the pixel orientation saliency value and the inter-frame displacement saliency value are obtained. u and v are calculated as follows:
u^(n+1) = ū^(n) - f_x · (f_x·ū^(n) + f_y·v̄^(n) + f_t) / (λ + f_x² + f_y²)
v^(n+1) = v̄^(n) - f_y · (f_x·ū^(n) + f_y·v̄^(n) + f_t) / (λ + f_x² + f_y²)
where (f_x, f_y) is the spatial gradient of the image gray level; ū and v̄ are the averages of u and v over the four-neighborhood of the point being solved; u^(n+1) and v^(n+1) are the values of u and v after n+1 iterations; and λ is the Lagrange multiplier. The initial conditions are u^(0) = v^(0) = 0. (f_x, f_y) can be obtained from the differences between adjacent pixels of the same image, and f_t from the differences between corresponding pixels of the two images: writing E_{i,j,k} for the pixel gray value at image position (i, j) at time k, f_t is the partial derivative of the pixel gray level at (i, j) with respect to time t, and (f_x, f_y) are the partial derivatives of the pixel gray values at (i, j) with respect to x and y.
(33) Pixel density saliency calculation process
The input of the pixel density saliency value calculation process is the output of the unified frame description process. First, representations of the image under multi-layer pyramids at different spatial scales are constructed. Let A_0 be the gray-level matrix of the original image, of size M × N; A_0 is the top layer of the pyramid, and layers 1, 2, 3, ..., n-1 are generated in sequence, denoted A_1, A_2, ..., A_{n-1}. For layer L, 1 ≤ L ≤ n-1, the gray level is computed as
A_L(x, y) = Σ_{m=-2..2} Σ_{j=-2..2} w(m, j) · A_{L-1}(2x + m, 2y + j),
where w(m, j) is the 5 × 5 generating kernel whose coefficients sum to 1. Then the raw density distribution of the pixels is obtained from the differences between the pyramid layers; next, the raw density distribution is edge-weakened and modified; the edge-weakened density distribution is then resampled; the resampled density distribution is locally suppressed; and finally all the density distributions are superposed and averaged to obtain the pixel density saliency value. Here A_1, A_2, ..., A_{n-1} are the image gray-value matrices of layers 1, 2, 3, ..., n-1 of the pyramid, and A_L(x, y) denotes the image gray value at pixel position (x, y) of layer L.
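The sketch below illustrates the pyramid-difference stage, assuming OpenCV's pyrDown (a 5 × 5 Gaussian generating kernel) as the layer generator; edge weakening, resampling, and local suppression are collapsed into a single Gaussian attenuation for brevity, so this is an outline of the stage rather than the full pipeline.

```python
# Sketch of the pixel-density stage; post-processing is simplified to one blur.
import cv2
import numpy as np

def density_saliency(gray: np.ndarray, levels: int = 4) -> np.ndarray:
    a = gray.astype(np.float32)
    pyramid = [a]                       # A_0 at the top of the pyramid
    for _ in range(1, levels):
        a = cv2.pyrDown(a)              # A_L generated from A_{L-1}
        pyramid.append(a)
    h, w = gray.shape
    maps = []
    for layer in pyramid[1:]:
        up = cv2.resize(layer, (w, h), interpolation=cv2.INTER_LINEAR)
        maps.append(np.abs(pyramid[0] - up))   # difference between pyramid layers
    sal = np.mean(maps, axis=0)                # superpose and average
    sal = cv2.GaussianBlur(sal, (9, 9), 0)     # stand-in for edge weakening/suppression
    return sal / (sal.max() + 1e-9)
```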
The HUE color, inter-frame swing, pixel orientation, inter-frame displacement, and pixel density saliency value features obtained by the above calculations are then passed through the adaptive weight adjustment process to obtain the frame-by-frame saliency distribution map of the target image.
Fig. 4 is a flowchart of the adaptive weight adjustment performed on the saliency features of the multiple channels. The multi-channel saliency features are fused by adaptive weighted combination to obtain the saliency distribution map of the potential target: g = λ_1·L_1 + λ_2·L_2 + λ_3·L_3 + λ_4·L_4 + λ_5·L_5, with λ_1 + λ_2 + λ_3 + λ_4 + λ_5 = 1, where L_1, L_2, L_3, L_4, L_5 denote the pixel density, HUE color, pixel orientation, inter-frame displacement, and inter-frame swing shallow salient feature values, respectively. The weight λ_k of each channel, k = 1, 2, 3, 4, 5, is computed from a quantity σ_k that represents how likely a sample is to lie in the overlapping region; with suitable values of σ_k, the sample values of the pixel density, HUE color, pixel orientation, inter-frame displacement, and inter-frame swing shallow salient features can be divided into overlapping and non-overlapping regions. [The formula for λ_k appears as an image in the original.] Given a sample, the posterior probability P_i, i = 1, 2, ..., I, that it belongs to each class can be estimated in advance from the Gaussian parameters of each class; these probabilities form a vector M = {P_1, P_2, P_3, ..., P_I}, and d(M, M̄) denotes the spatial distance between M and the reference vector M̄. [The formulas for M̄ and d(M, M̄) appear as images in the original.]
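A minimal sketch of the weighted combination g = λ1·L1 + ... + λ5·L5 follows. Because the formula for λ_k appears only as an image in the original, a plain normalization of assumed per-channel scores σ_k is used here as a stand-in; the function name and scoring are illustrative.

```python
# Sketch of the adaptive weighted fusion; the lambda_k rule is a stand-in.
import numpy as np

def fuse_channels(channels, sigmas):
    """channels: five saliency maps L_1..L_5; sigmas: five per-channel scores."""
    sigmas = np.asarray(sigmas, dtype=np.float64)
    lambdas = sigmas / sigmas.sum()     # enforce lambda_1 + ... + lambda_5 = 1
    g = sum(lam * ch.astype(np.float64) for lam, ch in zip(lambdas, channels))
    return g / (g.max() + 1e-9)         # frame saliency distribution map
```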
The weights of the visible light video and the infrared video in the image are then determined from the result of the adaptive weight adjustment process, and acquisition of the visible light and infrared video is feedback-controlled through the pan-tilt driver control process and the fill light control process.
Finally, the video management system can transmit the frame-by-frame saliency distribution map of the image wirelessly through the video wireless transmission process.
It will be appreciated by those of ordinary skill in the art that the examples described herein are intended to assist the reader in understanding the manner in which the invention is practiced, and it is to be understood that the scope of the invention is not limited to such specifically recited statements and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (7)

1. A DSP-based image saliency analysis system, characterized by comprising an infrared video camera, a visible light video camera, a pan-tilt driver, a photoelectric sensor, and a video management system, wherein,
the infrared video camera is used for collecting infrared video information of a target image, and the visible light video camera is used for collecting visible light video information of the target image;
the pan-tilt driver is used for mounting the infrared video camera and the visible light video camera;
the video management system firstly fuses video information of the infrared camera and the visible light video camera;
the photoelectric sensor is used for detecting whether the chassis of the video management system is illegally opened;
the video management system is further used for computing the pixel density, HUE color, pixel orientation, inter-frame displacement, and inter-frame swing shallow salient features of the fused video information, and then adopting an adaptive weight adjustment technique to complete salient target extraction in the video during the fusion of the shallow salient feature data, so as to obtain a frame-by-frame saliency distribution map of the target image.
2. The DSP-based image saliency analysis system of claim 1, characterized in that when said photosensor detects an illegal opening of a chassis of said video management system, said video management system is notified to erase programs and video data stored therein.
3. The DSP-based image saliency analysis system of claim 1 further comprising a wireless transmission system, a visible light camera fill light, and a display screen, wherein,
the visible light camera light supplementing lamp is used for supplementing light when light around a target is weak;
the wireless transmission system is used for transmitting the frame-by-frame saliency distribution map of the acquired target image in a wireless manner;
the display screen is used for displaying the frame-by-frame saliency distribution map of the acquired target image.
4. The DSP-based image saliency analysis system of claim 1, wherein the video management system controls the pan-tilt driver to rotate the infrared video camera or the visible light video camera as the target image moves.
5. A DSP-based image saliency analysis method, characterized by comprising:
detecting whether the chassis of the video management system is illegally opened, and if it is not, collecting infrared video information and visible light video information of a target image;
fusing video information of the infrared camera and the visible light video camera;
acquiring frame information of the fused video information;
calculating a saliency value of the acquired frame information, and obtaining a frame-by-frame saliency distribution map of the target image according to the calculated saliency value;
wherein the acquiring of frame information of the fused video information comprises performing color space description and channel-unified frame description on the fused video information to acquire the frame information;
the color space description and channel-unified frame description of the fused video information comprise: acquiring the frame information after the RGB three channels are unified, and carrying out unified processing on the videos of different channels;
the calculating of the saliency value of the acquired frame information and the obtaining of the frame-by-frame saliency distribution map of the target image according to the calculated saliency value further comprise:
mapping the frame information acquired by the color space description to other color spaces, and then performing a HUE color saliency value calculation process to obtain a HUE color saliency value;
performing inter-frame swing saliency value calculation, pixel orientation saliency value calculation, inter-frame displacement saliency value calculation, and pixel density saliency value calculation on the frame information acquired by the channel-unified frame description to obtain an inter-frame swing saliency value, a pixel orientation saliency value, an inter-frame displacement saliency value, and a pixel density saliency value;
and performing adaptive weight adjustment on the obtained HUE color, inter-frame swing, pixel orientation, inter-frame displacement, and pixel density saliency values to obtain the frame-by-frame saliency distribution map of the target image.
6. The DSP-based image saliency analysis method of claim 5, characterized in that when an illegal opening of the chassis of the video management system is detected, the data and programs stored in the video management system are erased.
7. The DSP-based image saliency analysis method of claim 5, wherein the fusing of the video information of the infrared camera and the visible light video camera further comprises:
performing wavelet transformation on the video information of the infrared camera and the visible light video camera respectively to obtain multi-resolution representations of the infrared image and the visible light image;
performing data fusion on the multi-resolution representations to obtain a fused multi-scale image;
and performing inverse wavelet transformation on the fused multi-scale image to obtain the fused image.
CN201810467233.0A 2018-05-16 2018-05-16 Image saliency analysis system and method based on DSP Active CN108769550B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810467233.0A CN108769550B (en) 2018-05-16 2018-05-16 Image saliency analysis system and method based on DSP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810467233.0A CN108769550B (en) 2018-05-16 2018-05-16 Image saliency analysis system and method based on DSP

Publications (2)

Publication Number Publication Date
CN108769550A CN108769550A (en) 2018-11-06
CN108769550B true CN108769550B (en) 2020-07-07

Family

ID=64008120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810467233.0A Active CN108769550B (en) 2018-05-16 2018-05-16 Image saliency analysis system and method based on DSP

Country Status (1)

Country Link
CN (1) CN108769550B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109788170A (en) * 2018-12-25 2019-05-21 合肥芯福传感器技术有限公司 Video image processing system and method based on infrared and visible light
CN110213501A (en) * 2019-06-25 2019-09-06 浙江大华技术股份有限公司 Snapshot method and apparatus, electronic device, and storage medium
CN111988540A (en) * 2020-08-20 2020-11-24 合肥维信诺科技有限公司 Image acquisition method and system and display panel
CN112578675B (en) * 2021-02-25 2021-05-25 中国人民解放军国防科技大学 High-dynamic vision control system and task allocation and multi-core implementation method thereof
CN113263149B (en) * 2021-05-12 2022-07-19 燕山大学 Device and method for detecting and controlling liquid level of molten pool in double-roller thin strip vibration casting and rolling
CN113159229B (en) * 2021-05-19 2023-11-07 深圳大学 Image fusion method, electronic equipment and related products

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100846498B1 (en) * 2006-10-18 2008-07-17 삼성전자주식회사 Image analysis method and apparatus, motion segmentation system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101510007A (en) * 2009-03-20 2009-08-19 北京科技大学 Real time shooting and self-adapting fusing device for infrared light image and visible light image
CN202541437U (en) * 2012-01-20 2012-11-21 南京航空航天大学 Vehicle-mounted infrared night view driving device
CN103024281A (en) * 2013-01-11 2013-04-03 重庆大学 Infrared and visible video integration system
CN103200394A (en) * 2013-04-07 2013-07-10 南京理工大学 Target image real time transmission and tracking method based on digital signal processor (DSP) and target image real time transmission and tracking device based on digital signal processor (DSP)
CN204331736U (en) * 2014-12-08 2015-05-13 成都三零凯天通信实业有限公司 A kind of cloud terminal integrative machine with security function
CN104700381A (en) * 2015-03-13 2015-06-10 中国电子科技集团公司第二十八研究所 Infrared and visible light image fusion method based on salient objects
CN106385530A (en) * 2015-07-28 2017-02-08 杭州海康威视数字技术股份有限公司 Double-spectrum camera

Also Published As

Publication number Publication date
CN108769550A (en) 2018-11-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant