CN117934571B - 4K high-definition KVM seat management system - Google Patents


Info

Publication number
CN117934571B
CN117934571B (application CN202410322919.6A)
Authority
CN
China
Prior art keywords
matching
pair
point
image
angles
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410322919.6A
Other languages
Chinese (zh)
Other versions
CN117934571A (en)
Inventor
陈聪
杨静
廖超
张忠
李治强
何杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Aesop Technology Co ltd
Original Assignee
Guangzhou Aesop Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Aesop Technology Co ltd filed Critical Guangzhou Aesop Technology Co ltd
Priority to CN202410322919.6A priority Critical patent/CN117934571B/en
Publication of CN117934571A publication Critical patent/CN117934571A/en
Application granted granted Critical
Publication of CN117934571B publication Critical patent/CN117934571B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 7/337: Image registration using feature-based methods involving reference images or patches
    • G06T 7/35: Image registration using statistical methods
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/761: Proximity, similarity or dissimilarity measures
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/48: Matching video sequences
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H04N 19/85: Video coding using pre-processing or post-processing specially adapted for video compression
    • G06T 2207/10016: Image acquisition modality: video; image sequence
    • G06T 2207/20221: Image combination: image fusion; image merging
    • G06T 2207/30232: Subject of image: surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a 4K high-definition KVM seat management system. The system includes a memory and a processor that executes a computer program stored in the memory to perform the following steps: acquire 4K high-definition monitoring images of the area to be monitored at different angles, and process the monitoring images from each pair of angles with a SIFT feature-point matching algorithm to obtain feature points and matching pairs; process the monitoring images from each pair of angles with different magnifications and different rotation angles to obtain initial abnormality degrees; determine a corrected fitting-degree threshold from the positions of feature points in earlier monitoring images at the same angle as the image containing each matched feature point; then fuse the monitoring images to obtain a fused image, and compress the fused image. The invention improves the fusion effect of the monitoring images.

Description

4K high-definition KVM seat management system
Technical Field
The invention relates to the technical field of image processing, in particular to a 4K high-definition KVM seat management system.
Background
A 4K high-definition KVM seat management system manages and controls multiple monitoring cameras, display screens, computers and other devices; monitoring personnel can view monitoring pictures in real time through the KVM system and perform remote operation and control. When an area is monitored and surveyed, the same area is often covered simultaneously by cameras at several different angles, and during transmission the images from some of those angles carry redundant information, which reduces compression and transmission efficiency. Image fusion technology is therefore generally used to merge the images from different angles before compression and transmission: the amount of data to be transmitted is reduced, the bandwidth required for transmission is greatly lowered, and network resources are saved.
In conventional image fusion, a RANSAC (RANdom SAmple Consensus) algorithm is often used to screen matching points. However, because RANSAC randomly selects a sub-sample set for regression analysis in each iteration, every sample point in every fitting round is judged against the same fitting-degree threshold when selecting inliers. This ignores the spatial structure information in the neighborhoods of some pixels, so the fitting effect is poor; and because the choice of the best fitting-degree threshold depends on the spatial distribution of all feature points, an accurate threshold cannot easily be selected. The selected inliers then represent the true regression model poorly, the fusion of the monitoring images deteriorates, and the compression of the monitoring images is affected.
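The uniform-threshold behaviour criticised here can be seen in a minimal NumPy sketch of RANSAC. A 2D line model stands in for the patent's homography setting, and the threshold value is illustrative; note that every point is judged against the same `fit_threshold`:

```python
import numpy as np

def ransac_line(points, n_iters=200, fit_threshold=1.0, rng=None):
    """Classic RANSAC line fit: every point is judged against the SAME
    fixed fitting threshold, which is the limitation the patent targets."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n = n / norm
        # Distance of every point to the candidate line; one global threshold.
        dist = np.abs((points - p) @ n)
        inliers = np.flatnonzero(dist < fit_threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (p, n)
    return best_model, best_inliers
```

With collinear points plus a few gross outliers, the consensus set recovers the line, but the single threshold offers no way to treat individually suspect points more strictly.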
Disclosure of Invention
In order to solve the problem that the fusion effect is poor when existing methods fuse the monitoring images of the area to be monitored at different angles, the invention provides a 4K high-definition KVM seat management system. The adopted technical scheme is as follows:
the invention provides a 4K high-definition KVM seat management system, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the following steps:
Acquiring monitoring videos of an area to be monitored under different angles, wherein the monitoring videos are formed by continuous multi-frame 4K high-definition monitoring images;
Respectively processing the monitoring images at every two angles at the same moment with a SIFT feature-point matching algorithm to obtain the feature points and matching pairs in the monitoring images; processing the monitoring images at every two angles at the same moment with different preset magnifications and different preset rotation angles, and obtaining the optimal magnification and optimal rotation angle corresponding to each frame of monitoring image, as well as the similarity of each matching point pair, based on the similarity of the gray values of the pixels in the neighborhoods of the two feature points of each matching point pair after processing; obtaining the initial abnormality degree of each matching point pair according to the similarity corresponding to the pixels in the neighborhood of the feature point in each matching point pair, the positions of the feature points in each matching point pair, the optimal magnification and the optimal rotation angle;
Correcting the initial abnormality degree according to the position distribution of the characteristic points in each matching point pair and the position distribution among the characteristic points in the monitoring image at the same angle before the moment corresponding to the monitoring image of the characteristic points in each matching point pair, and obtaining the target abnormality degree of each matching point pair; determining a fitting degree threshold value after each feature point in each matching point pair is corrected based on the target abnormality degree;
And selecting an inner point set based on the corrected fitting degree threshold, fusing every two frames of monitoring images under different angles to obtain a fused image, and compressing the fused image.
Preferably, the processing of the monitored images under each two angles at the same time by using different preset amplification factors and different preset rotation angles, and obtaining the optimal amplification factor and the optimal rotation angle corresponding to each frame of monitored image based on the similarity of gray values of the pixels in the neighborhood of the two pixels in each matching point pair after the processing, includes:
combining each preset magnification with each preset rotation angle to obtain each parameter combination;
For a monitored image at any two angles at any instant:
Processing the monitoring image under the first angle of the two angles based on the data in each parameter combination to obtain a first image corresponding to each parameter combination; processing the monitoring image under the second angle of the two angles based on the data in each parameter combination to obtain a second image corresponding to each parameter combination;
For the ith matching pair in the monitored image at two angles: in a first image corresponding to each parameter combination, constructing a first gray sequence of an ith matching pair under each parameter combination based on gray values of all pixel points in a preset neighborhood of a feature point of the ith matching pair; in a second image corresponding to each parameter combination, constructing a second gray sequence of the ith matching pair under each parameter combination based on gray values of all pixel points in a preset neighborhood of the feature point of the ith matching pair;
Calculating the correlation coefficient of each first gray sequence with each second gray sequence; recording the parameter combination corresponding to the first gray sequence when the correlation coefficient is at its maximum as the first target combination, and the parameter combination corresponding to the second gray sequence when the correlation coefficient is at its maximum as the second target combination; taking the preset magnification in the first target combination as the optimal magnification corresponding to the monitoring image at the first of the two angles, the preset rotation angle in the first target combination as the optimal rotation angle corresponding to the monitoring image at the first of the two angles, the preset magnification in the second target combination as the optimal magnification corresponding to the monitoring image at the second of the two angles, and the preset rotation angle in the second target combination as the optimal rotation angle corresponding to the monitoring image at the second of the two angles.
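The selection step above can be sketched in NumPy: build a gray sequence from a feature point's eight-neighborhood and pick the parameter combination whose two sequences have the largest Pearson correlation. The function names and the neighborhood layout are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def gray_sequence(image, pt):
    """Gray values of the 8-neighborhood of a feature point (y, x) in a
    transformed image, with the centre pixel removed."""
    y, x = pt
    patch = image[y - 1:y + 2, x - 1:x + 2].astype(float).ravel()
    return np.delete(patch, 4)  # drop the centre pixel, keep 8 neighbours

def best_parameter_combination(seqs_a, seqs_b):
    """seqs_a[k] / seqs_b[k]: first / second gray sequence of one matching
    pair under parameter combination k.  Returns the index of the combination
    with the largest Pearson correlation, and that correlation value."""
    best_k, best_r = -1, -np.inf
    for k, (a, b) in enumerate(zip(seqs_a, seqs_b)):
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k, best_r
```

The magnification and rotation angle of the winning combination would then be read off as the optimal parameters for that frame.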
Preferably, the obtaining of the similarity between each matching point pair includes:
For the i-th matching pair in the monitoring images at the two angles: the normalization result of the maximum value of the correlation coefficient is taken as the similarity of the i-th matching pair.
Preferably, the obtaining the initial anomaly degree of each matching point pair according to the similarity corresponding to the pixel point in the neighborhood of the feature point in each matching point pair, the position of the feature point in each matching point pair, the optimal magnification and the optimal rotation angle includes:
For any matching pair: recording the optimal rotation angle corresponding to the monitoring image in which the 2nd feature point of the matching pair is located as the reference angle, and constructing the rotation transformation matrix of the matching pair from the reference angle, wherein the rotation transformation matrix has size 2×2; the elements in the first row, first column and the second row, second column of the rotation transformation matrix are the cosine of the reference angle, the element in the first row, second column is the negative of the sine of the reference angle, and the element in the second row, first column is the sine of the reference angle;
And obtaining the initial abnormality degree of each matching pair based on the similarity corresponding to the pixels in the neighborhood of the feature point in each matching pair, the positions of the feature points in each matching pair, the optimal magnification and the rotation transformation matrix.
Preferably, the initial degree of abnormality of the i-th matching pair is calculated using the following formula:

Y_i = Σ_{j=1}^{n_i} s_j · ‖ (f_{j,2} / f_{j,1}) · R_j · (p_{i,1} − p_{j,1}) − (p_{i,2} − p_{j,2}) ‖₂

wherein Y_i represents the initial degree of abnormality of the i-th matching pair; n_i represents the number of feature points belonging to a matching pair contained within the neighborhood of the 1st feature point of the i-th matching pair; s_j represents the similarity of the j-th matching pair whose feature point is contained in the neighborhood of the 1st feature point of the i-th matching pair; f_{j,1} and f_{j,2} represent the optimal magnifications corresponding to the monitoring images in which the 1st and 2nd feature points of that j-th matching pair are located; R_j represents the rotation transformation matrix of that j-th matching pair; p_{i,1} and p_{i,2} represent the coordinates of the 1st and 2nd feature points of the i-th matching pair; p_{j,1} and p_{j,2} represent the coordinates of the 1st and 2nd feature points of that j-th matching pair; and ‖·‖₂ represents the L2 norm of a vector.
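A small NumPy sketch of this neighbourhood-consistency computation follows. The formula in the source is garbled, so the exact weighting here is a reconstruction from the surrounding definitions, and key names such as `f1`, `f2` and `theta` are invented for illustration:

```python
import numpy as np

def rotation_matrix(theta):
    # 2x2 rotation built from the reference angle, as described in the text.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def initial_anomaly(pair_i, neighbours):
    """pair_i: (p1, p2) coordinates of the i-th matching pair.
    neighbours: one dict per matching pair j found in the neighbourhood of p1,
    with keys 's' (similarity), 'f1'/'f2' (optimal magnifications),
    'theta' (reference angle), and 'p1'/'p2' (coordinates)."""
    p_i1, p_i2 = map(np.asarray, pair_i)
    total = 0.0
    for nb in neighbours:
        R = rotation_matrix(nb['theta'])
        # The offset in the first view, rescaled and rotated, should match the
        # offset in the second view for a geometrically consistent pair.
        predicted = (nb['f2'] / nb['f1']) * R @ (p_i1 - np.asarray(nb['p1']))
        observed = p_i2 - np.asarray(nb['p2'])
        total += nb['s'] * np.linalg.norm(predicted - observed)
    return total
```

A perfectly consistent neighbourhood yields an anomaly of zero; any residual between the predicted and observed offsets raises it.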
Preferably, the correcting the initial anomaly degree according to the position distribution of the feature points in each matching point pair and the position distribution between the feature points in the monitored image at the same angle before the moment corresponding to the monitored image in which the feature points in each matching point pair are located, to obtain the target anomaly degree of each matching point pair includes:
for the i-th matching pair:
The preset time period before the moment corresponding to the monitoring image where the ith matching pair is positioned is recorded as a reference time period, and the monitoring image with the same angle as the monitoring image corresponding to the monitoring image where the ith matching pair is positioned in the reference time period is recorded as a reference image;
respectively acquiring a feature point with the smallest position difference with a1 st feature point in an i-th matching pair in each frame of reference image, and taking the feature point as a first reference point in each frame of reference image; based on the coordinates of a first reference point in each two adjacent reference images, coordinate difference vectors corresponding to the first reference points of each two adjacent reference images are obtained, and an included angle between the coordinate difference vectors corresponding to the first reference points of each two adjacent reference images and a preset direction is used as a first angle; respectively marking the modular length of the coordinate difference vector corresponding to the first reference point of each two adjacent frames of reference images as a first modular length; respectively taking the negative correlation normalization result of the difference between each first module length and the average value of all the first module lengths as a weight coefficient corresponding to each first angle, and recording the product between each first angle and the corresponding weight coefficient as a first product corresponding to each first angle;
Respectively acquiring a feature point with the minimum position difference with a2 nd feature point in an i-th matching pair in each frame of reference image, and taking the feature point as a second reference point in each frame of reference image; based on the coordinates of a second reference point in each two adjacent reference images, coordinate difference vectors corresponding to the second reference points of each two adjacent reference images are obtained, and an included angle between the coordinate difference vectors corresponding to the second reference points of each two adjacent reference images and a preset direction is used as a second angle; respectively marking the modular length of the coordinate difference vector corresponding to the second reference point of each two adjacent frames of reference images as a second modular length; respectively taking the negative correlation normalization result of the difference between each second module length and the average value of all second module lengths as a weight coefficient corresponding to each second angle, and recording the product between each second angle and the corresponding weight coefficient as a second product corresponding to each second angle;
All the first products form a first product sequence, all the second products form a second product sequence, and a pearson correlation coefficient between the first product sequence and the second product sequence is calculated;
and correcting the initial abnormality degree of the i-th matching point pair based on the Pearson correlation coefficient to obtain the target abnormality degree of the i-th matching point pair.
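The track-based correction above can be sketched as follows. The choice of exp(−|x|) as the negative-correlation normalisation for the weights, and the form of the final correction, are assumptions consistent with the text rather than the patent's exact expressions:

```python
import numpy as np

def product_sequence(track, preset_dir=np.array([1.0, 0.0])):
    """track: per-frame coordinates of the reference point closest to one
    feature of the pair.  Builds the angle-times-weight products described
    in the text for each pair of adjacent frames."""
    track = np.asarray(track, dtype=float)
    diffs = np.diff(track, axis=0)                  # coordinate difference vectors
    lengths = np.linalg.norm(diffs, axis=1)         # module lengths
    cosang = diffs @ preset_dir / np.maximum(lengths * np.linalg.norm(preset_dir), 1e-12)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))  # angle to the preset direction
    # Negative-correlation normalisation of the deviation from the mean length.
    weights = np.exp(-np.abs(lengths - lengths.mean()))
    return angles * weights

def target_anomaly(initial, seq1, seq2):
    # Consistent motion of the two reference-point tracks (|Pearson| near 1)
    # shrinks the anomaly degree; uncorrelated motion leaves it large.
    r = np.corrcoef(seq1, seq2)[0, 1]
    return initial * (1.0 - abs(r))
```

When both reference points move identically across frames, the two product sequences correlate perfectly and the target anomaly collapses to zero.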
Preferably, the target abnormality degree of the i-th matching point pair is calculated using the following formula:

G_i = Y_i × (1 − |ρ(Z₁, Z₂)|)

wherein G_i represents the target abnormality degree of the i-th matching point pair; Y_i represents the initial degree of abnormality of the i-th matching pair; Z₁ represents the first product sequence; Z₂ represents the second product sequence; ρ(Z₁, Z₂) represents the Pearson correlation coefficient between the two sequences; and |·| represents taking the absolute value.
Preferably, the determining the threshold of the degree of fit after each feature point correction in each matching point pair based on the target degree of abnormality includes:
and respectively taking the product of the negative-correlation normalization result of the target abnormality degree of each matching point pair and the initial fitting-degree threshold as the corrected fitting-degree threshold of each feature point in each matching point pair.
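A one-function sketch of this per-point threshold correction, using exp(−x) as one possible negative-correlation normalisation (the source does not specify which normalisation is used):

```python
import numpy as np

def corrected_thresholds(anomalies, base_threshold):
    """Per-point fitting-degree thresholds: points with a higher target
    anomaly degree receive a tighter threshold, so suspect matches are
    less likely to be accepted as inliers by the subsequent RANSAC step."""
    anomalies = np.asarray(anomalies, dtype=float)
    return base_threshold * np.exp(-anomalies)
```

A point with zero anomaly keeps the initial threshold; increasingly anomalous points are held to increasingly strict thresholds.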
Preferably, the selecting the inner point set based on the corrected fitting degree threshold value, and fusing each two frames of monitoring images under different angles to obtain a fused image includes:
selecting an inner point set by adopting a RANSAC algorithm based on the corrected fitting degree threshold value, and obtaining a homography transformation matrix corresponding to the inner point set;
and fusing every two frames of monitoring images under different angles at the same moment based on the homography transformation matrix corresponding to the interior point set to obtain a fused image.
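Once the homography transformation matrix has been estimated from the inlier set, the fusion step amounts to warping one view into the other's frame and combining the overlap. This sketch shows only that final step, with a plain weighted average standing in for whatever blending a production system would use:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography to Nx2 points via homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def fuse_pair(img_a, img_b_warped, alpha=0.5):
    # Weighted average where the two views overlap; a real system would
    # blend seams, but averaging illustrates the fusion step.
    return alpha * img_a + (1 - alpha) * img_b_warped
```

With the identity homography, points map to themselves and fusing two identical images reproduces the image.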
Preferably, the compressing the fused image includes:
and adopting an H.265 algorithm to compress the fused images of the consecutive frames to obtain the compression result.
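The patent names only H.265; one common way to encode a fused frame sequence is the ffmpeg CLI with the libx265 encoder. The file names, frame-naming pattern, and CRF value below are illustrative assumptions, not the patent's settings:

```python
def h265_command(input_pattern, output_path, crf=28, fps=20):
    """Build an ffmpeg command line that encodes a numbered sequence of
    fused frames (e.g. a hypothetical fused_%04d.png) with libx265."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", input_pattern,
        "-c:v", "libx265",
        "-crf", str(crf),
        "-pix_fmt", "yuv420p",
        output_path,
    ]
```

The fps default of 20 mirrors the acquisition frequency used in the embodiment; the command would be handed to `subprocess.run` in practice.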
The invention has at least the following beneficial effects:
The invention considers that, when the traditional RANSAC algorithm processes monitoring images, every point is judged as an inlier against the same threshold, so some abnormal data points may be misjudged. The structural similarity between matched point pairs after SIFT feature matching is therefore analysed: the similarity of the points in the neighborhood of the feature points of a matching pair, together with the corresponding affine-transformation information, is used to estimate the difference between the transformed positions of neighboring points and the positions of the feature points in the pair, and to evaluate the probability that a feature point is spurious because of occlusion, yielding the initial abnormality degree of each matching point pair. Because a monitoring video consists of monitoring images at successive moments, the positions of some objects in the images change over time, and the corresponding feature points may appear or disappear across consecutive frames; the position mapping between images at the same moment is therefore combined with the position distribution of the feature points in earlier frames at the same angle to correct the initial abnormality degree into a target abnormality degree. The per-point fitting-degree threshold corrected on the basis of the target abnormality degree reduces the influence of abnormal data points on inlier selection, so the monitoring images at different angles are fused more accurately and the compression of the fused image is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method executed by a 4K high definition KVM seat management system according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve the preset purpose, the following detailed description is given below of a 4K high-definition KVM seat management system according to the present invention with reference to the accompanying drawings and the preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the 4K high-definition KVM seat management system provided by the present invention with reference to the accompanying drawings.
A 4K high definition KVM seat management system embodiment:
The specific scene aimed at by this embodiment is: in the process of transmitting and storing the monitoring videos collected by the cameras under different angles of the area to be monitored, a large amount of image information redundancy exists, so that the collected monitoring images under different angles are required to be fused, and then the fusion result is further compressed and transmitted, so that the bandwidth required by transmission is effectively reduced, and network resources are saved.
The embodiment provides a 4K high-definition KVM seat management system, which implements the steps shown in fig. 1, and specifically includes the following steps:
Step S1, acquiring monitoring videos of an area to be monitored under different angles, wherein the monitoring videos are formed by continuous multi-frame 4K high-definition monitoring images.
In this embodiment, 4K high-definition cameras are first installed at different suitable positions around the area to be monitored to collect 4K high-definition monitoring videos of the area at different angles. Cameras are installed at four positions, i.e. 4K high-definition monitoring videos are collected at four angles in total; in a specific application, the implementer sets the number and installation positions of the cameras according to the specific situation. The acquisition frequency of all cameras is the same, 20 frames per second, i.e. at every acquisition moment monitoring images of the area are captured at all angles simultaneously. An existing image-denoising method is applied to each collected monitoring image to obtain denoised images. It should be noted that all subsequent monitoring images are the denoised images, and all subsequent monitoring videos are composed of the denoised images. All monitoring images at the same angle form one 4K high-definition monitoring video; in a specific application, the implementer can set the acquisition frequency of the cameras according to the specific situation.
So far, the embodiment obtains the monitoring video of the area to be monitored under different angles, and the monitoring video under each angle is composed of continuous multi-frame 4K high-definition monitoring images.
Step S2, respectively processing the monitoring images at every two angles at the same moment with a SIFT feature-point matching algorithm to obtain the feature points and matching pairs in the monitoring images; processing the monitoring images at every two angles at the same moment with different preset magnifications and different preset rotation angles, and obtaining the optimal magnification and optimal rotation angle corresponding to each frame of monitoring image, as well as the similarity of each matching point pair, based on the similarity of the gray values of the pixels in the neighborhoods of the two feature points of each matching point pair after processing; and obtaining the initial abnormality degree of each matching point pair according to the similarity corresponding to the pixels in the neighborhood of the feature point in each matching point pair, the positions of the feature points in each matching point pair, the optimal magnification and the optimal rotation angle.
After monitoring images at several different angles are obtained, the feature points in the different images must be matched before the images can be fused. The traditional RANSAC algorithm simply screens inliers from the matching points by random sampling to form the final transformation sample set, so the selection of inliers is particularly important. In monitoring images from different angles, however, some feature points are inevitably occluded by other objects, so that a feature appears in only a single monitoring image; during matching, such pixels cause substantial interference with image fusion. This embodiment therefore uses the neighborhood information in the monitoring images and the structural similarity of the matching points, and feeds the resulting prior abnormality degree of each point into the inlier selection of the RANSAC algorithm, reducing the influence of abnormal data points on the final image fusion.
Specifically, the sift feature point matching algorithm is first applied to the monitoring images at every two angles at the same moment to obtain the feature points and a plurality of matching pairs in the monitoring images. The sift feature point matching algorithm is prior art and will not be described in detail here. It should be noted that each matching pair contains two feature points, and the two feature points come from monitoring images at different angles. Then, the preset magnifications and preset rotation angles are set. In this embodiment the preset magnifications are 1.5, 2, 2.5, 3, 3.5 and 4, and the preset rotation angles are 0.25π, 0.5π, 0.75π, π, 1.25π, 1.5π and 1.75π; in a specific application, the practitioner can set them according to the specific situation. Each preset magnification is combined with each preset rotation angle to give a parameter combination, e.g. (1.5, 0.25π), (1.5, 0.5π), (2, 0.5π), (2, π), and so on, so that every magnification and every rotation angle together form one parameter combination.
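As a minimal sketch, the Cartesian pairing of this embodiment's preset magnifications and rotation angles can be enumerated as follows (variable names are illustrative, not from the patent):

```python
from itertools import product
from math import pi

# Preset values used in this embodiment; a practitioner may set others.
preset_magnifications = [1.5, 2, 2.5, 3, 3.5, 4]
preset_rotation_angles = [k * 0.25 * pi for k in range(1, 8)]  # 0.25*pi ... 1.75*pi

# Every preset magnification is paired with every preset rotation angle.
parameter_combinations = list(product(preset_magnifications, preset_rotation_angles))
# 6 magnifications x 7 angles gives 42 parameter combinations.
```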
For a monitored image at any two angles at any instant:
The monitoring image at the first of the two angles is magnified and rotated according to the data in each parameter combination to obtain a first image for that combination; likewise, the monitoring image at the second of the two angles is magnified and rotated according to each parameter combination to obtain a second image. For the i-th matching pair in the monitoring images at the two angles: in the first image of each parameter combination, a first gray sequence of the i-th matching pair under that combination is built from the gray values of all pixel points in the preset neighborhood of the pair's feature point; in the second image of each parameter combination, a second gray sequence is built in the same way from the preset neighborhood of the pair's other feature point. The correlation coefficient of every first gray sequence with every second gray sequence is then computed, where the correlation coefficient is the Pearson correlation coefficient; its calculation is prior art and is not repeated here. In this embodiment the preset neighborhood is the eight-neighborhood; in a specific application, the practitioner may set it according to the specific situation.
If a matching pair represents a real corresponding region, the images around the two feature points at the different angles can be approximated by an affine transformation, i.e. perspective distortion between the regions is not considered. The correlation of the neighborhood gray values of the two feature points is therefore analyzed under different scales and rotation angles, and when the correlation reaches its maximum, that maximum can serve as a preliminary measure of the similarity of the matching pair. Accordingly, in this embodiment the parameter combination of the first gray sequence at which the correlation coefficient attains its maximum is recorded as the first target combination, and the parameter combination of the second gray sequence at the maximum as the second target combination. The preset magnification and preset rotation angle in the first target combination are taken as the optimal magnification and optimal rotation angle of the monitoring image at the first of the two angles, and those in the second target combination as the optimal magnification and optimal rotation angle of the monitoring image at the second of the two angles.
For the i-th matching pair in the monitoring images at the two angles: the normalized maximum of the correlation coefficient is taken as the similarity of the i-th matching pair, where the maximum is normalized with a normalization function. In this way the similarity of every matching pair can be obtained.
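The neighborhood gray sequences and their Pearson correlation can be sketched as below. A full implementation would first magnify and rotate the images for each parameter combination (e.g. with an image library), which is omitted here, and the (r+1)/2 mapping of the maximum correlation onto [0, 1] is an assumption, since the patent only states that a normalization function is applied:

```python
import numpy as np

def gray_sequence(image, point):
    """Gray values of the eight-neighborhood (the preset neighborhood in
    this embodiment) of a feature point, collected as one sequence."""
    y, x = point
    neighbors = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    return np.array([image[p] for p in neighbors], dtype=float)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    return float(np.corrcoef(a, b)[0, 1])

def pair_similarity(first_sequences, second_sequences):
    """Similarity of a matching pair: the normalized maximum Pearson
    correlation over all (magnification, rotation) parameter combinations."""
    best = max(pearson(a, b) for a in first_sequences for b in second_sequences)
    return (best + 1) / 2  # assumed normalization onto [0, 1]
```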
For any matching pair: the optimal rotation angle corresponding to the monitoring image where the 2nd feature point of the pair is located is recorded as the reference angle, and a rotation transformation matrix of the matching pair, of size $2\times 2$, is constructed from it: the elements in the first row, first column and the second row, second column are the cosine of the reference angle, the element in the first row, second column is the opposite number of the sine of the reference angle, and the element in the second row, first column is the sine of the reference angle. In this way the rotation transformation matrix of every matching pair can be obtained. The initial abnormality degree of each matching pair is then obtained from the similarities of the matching pairs in the neighborhood of the pair's feature point, the positions of the feature points in each matching pair, the optimal magnifications and the rotation transformation matrices. The initial abnormality degree of the i-th matching pair and the rotation transformation matrix of the j-th matching pair whose feature point lies in the neighborhood of the 1st feature point of the i-th matching pair are respectively:

$$Y_i=\frac{1}{N_i}\sum_{j=1}^{N_i} S_j\left\|\left(x_{j,2}+\frac{r_{j,2}}{r_{j,1}}A_j\left(x_{i,1}-x_{j,1}\right)\right)-x_{i,2}\right\|_2$$

$$A_j=\begin{pmatrix}\cos\theta_j & -\sin\theta_j\\ \sin\theta_j & \cos\theta_j\end{pmatrix}$$

wherein $Y_i$ represents the initial abnormality degree of the i-th matching pair; $N_i$ represents the number of feature points belonging to a matching pair contained within the neighborhood of the 1st feature point of the i-th matching pair; $S_j$ represents the similarity of the j-th matching pair whose feature point is contained in that neighborhood; $r_{j,1}$ and $r_{j,2}$ represent the optimal magnifications corresponding to the monitoring images where the 1st and 2nd feature points of the j-th matching pair are located; $A_j$ represents the rotation transformation matrix of the j-th matching pair; $x_{i,1}$ and $x_{i,2}$ represent the coordinates of the 1st and 2nd feature points of the i-th matching pair; $x_{j,1}$ and $x_{j,2}$ represent the coordinates of the 1st and 2nd feature points of the j-th matching pair; $\theta_j$ represents the optimal rotation angle corresponding to the monitoring image where the 2nd feature point of the j-th matching pair is located; $\|\cdot\|_2$ represents the L2 norm of a vector; and $\cos(\cdot)$ and $\sin(\cdot)$ represent the cosine and sine.
The term $x_{j,2}+\frac{r_{j,2}}{r_{j,1}}A_j(x_{i,1}-x_{j,1})$ represents the predicted coordinates of the i-th pair's feature point in the monitoring image at the second angle: it estimates the position of the matching point from the matching pairs of neighboring points in the first image, without using the actual position of the i-th pair's matching point in the second image. The L2 norm then measures the distance between this estimated point and the actual matching point.
By adopting the method, the initial abnormality degree of each matching pair can be obtained.
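One plausible reading of the initial abnormality degree formula can be sketched as follows: each neighboring matching pair predicts, by a scaled rotation, where the i-th pair's point should land in the second image, and the similarity-weighted prediction errors are averaged. The dictionary layout of the neighbor data is illustrative, not from the patent:

```python
import numpy as np

def rotation_matrix(theta):
    """2x2 rotation transformation matrix built from the reference angle."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def initial_abnormality(pair_i, neighbour_pairs):
    """Initial abnormality degree of matching pair i.  pair_i is
    (x_i1, x_i2); each neighbour j is a dict with keys 's' (similarity),
    'r1'/'r2' (optimal magnifications of the two images), 'theta'
    (reference angle) and 'x1'/'x2' (its feature-point coordinates)."""
    x_i1, x_i2 = (np.asarray(p, dtype=float) for p in pair_i)
    if not neighbour_pairs:
        return 0.0
    total = 0.0
    for j in neighbour_pairs:
        A = rotation_matrix(j['theta'])
        # Neighbour j's affine prediction of pair i's point in image 2.
        predicted = (np.asarray(j['x2'], dtype=float)
                     + (j['r2'] / j['r1']) * (A @ (x_i1 - np.asarray(j['x1'], dtype=float))))
        total += j['s'] * float(np.linalg.norm(predicted - x_i2))
    return total / len(neighbour_pairs)
```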
Step S3, correcting the initial abnormality degree according to the position distribution of the characteristic points in each matching point pair and the position distribution among the characteristic points in the monitoring image at the same angle before the moment corresponding to the monitoring image of the characteristic points in each matching point pair, and obtaining the target abnormality degree of each matching point pair; and determining a fitting degree threshold value after each feature point in each matching point pair is corrected based on the target abnormality degree.
Since a monitoring video consists of monitoring images at many different moments, the positions of some objects in the images may change to a certain extent over time, so corresponding feature points may appear or disappear across consecutive frames. Image registration considers the positional mapping between images at the same moment, so the positional changes of the feature points across different frames must also be analyzed to avoid the influence of abnormal points with large inter-frame changes on the subsequent registration; this effectively improves registration accuracy. This embodiment therefore corrects the initial abnormality degree using the positions of the feature points in the monitoring images at the same angle before the moment corresponding to the image containing each matching point pair's feature points.
Specifically, for the i-th matching pair:
The preset time period before the moment corresponding to the monitoring image where the i-th matching pair is located is recorded as the reference period, and the monitoring images within the reference period taken at the same angle as that image are recorded as reference images. In this embodiment the duration of the preset time period is 1 minute; in a specific application, the practitioner may set it according to the specific situation.
In each frame of reference image, the feature point with the smallest position difference from the 1st feature point of the i-th matching pair is taken as the first reference point of that frame. The position difference between feature points is measured in this embodiment by the Euclidean distance of their coordinates: the larger the distance, the larger the position difference. From the coordinates of the first reference points of every two adjacent reference frames, the corresponding coordinate difference vector is obtained, and the angle between this vector and a preset direction is taken as a first angle; the modulus of each such vector is recorded as a first modulus. The negatively correlated normalization of the difference between each first modulus and the mean of all first moduli is used as the weight coefficient of the corresponding first angle; in this embodiment, for any first modulus, the difference is the absolute value of its deviation from the mean of all first moduli, and the weight coefficient is the exponential function with the natural constant as base and the negative of that difference as exponent. The product of each first angle and its weight coefficient is recorded as the first product of that angle. Analogously, in each frame of reference image the feature point with the smallest position difference from the 2nd feature point of the i-th matching pair is taken as the second reference point of that frame; the coordinate difference vectors of the second reference points of every two adjacent reference frames give the second angles and second moduli; and, for any second modulus, the weight coefficient of the corresponding second angle is the exponential function with the natural constant as base and the negative absolute deviation of that modulus from the mean of all second moduli as exponent.
The product of each second angle and its weight coefficient is recorded as the second product of that angle. All first products form a first product sequence and all second products form a second product sequence, and the Pearson correlation coefficient between the two sequences is computed. This coefficient reflects the stability of the positional changes of the feature points in the two monitoring images, so the initial abnormality degree of the i-th matching pair is corrected based on it to obtain the target abnormality degree of the i-th matching point pair. The target abnormality degree of the i-th matching point pair is calculated as:
Wherein, Representing the target abnormality degree of the ith matching point pair,/>Represents the initial degree of abnormality of the ith matching pair,/>Representing a first product sequence,/>Representing a second product sequence,/>Representing the pearson correlation coefficient between two sequences,/>Representing taking absolute value symbols.
The characteristic is the stability between the characteristic points in the matching point pairs, and if the value is larger, the higher the stability of the ith matching point pair is, the lower the target abnormality degree of the ith matching point pair is.
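The product sequences and the Pearson-based correction can be sketched as follows; this is a sketch under the stated formulas, with the inter-frame angles and modulus lengths assumed to be precomputed from the adjacent reference frames:

```python
import numpy as np

def weighted_products(angles, mod_lengths):
    """First/second products: each inter-frame angle weighted by
    exp(-|m - mean(m)|), the negatively correlated normalization of the
    modulus deviation used in this embodiment."""
    m = np.asarray(mod_lengths, dtype=float)
    weights = np.exp(-np.abs(m - m.mean()))
    return np.asarray(angles, dtype=float) * weights

def target_abnormality(initial, first_products, second_products):
    """Correct the initial abnormality degree with the Pearson correlation
    of the two product sequences: stable, correlated motion of the two
    feature points lowers the abnormality."""
    rho = np.corrcoef(first_products, second_products)[0, 1]
    return float(initial * (1.0 - abs(rho)))
```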
By adopting the method provided by this embodiment, the target abnormality degree of every matching pair can be obtained. The feature points of a matching pair with a higher target abnormality degree are more likely not to belong to a real region, so during RANSAC adaptive sampling the distance threshold of sample points with a higher target abnormality degree must be reduced; otherwise such points would be fitted too readily, unreal feature points would mistakenly be taken as inliers, and the accuracy of the subsequent image fusion would drop. Accordingly, the product of the negatively correlated normalization of the target abnormality degree of each matching point pair and the initial fitting degree threshold is used as the corrected fitting degree threshold of each feature point in the pair; in this embodiment, for any matching pair, the negatively correlated normalization is the exponential function with the natural constant as base and the negative target abnormality degree of the matching point pair as exponent. The initial fitting degree threshold in this embodiment is 1.7; in a specific application, the practitioner may set it according to the specific situation.
By adopting the method provided by this embodiment, the corrected fitting degree threshold of every feature point in every matching point pair can be obtained. Feature points with a higher target abnormality degree receive a smaller fitting degree threshold, so they are less likely to be accepted as inliers in the fitting result, which improves the accuracy of the model.
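The corrected fitting degree threshold can then be sketched as the negative-exponential normalization of the target abnormality degree times the initial threshold of 1.7:

```python
import math

INITIAL_FITTING_THRESHOLD = 1.7  # value used in this embodiment

def corrected_threshold(target_abnormality_degree, tau0=INITIAL_FITTING_THRESHOLD):
    """Per-point RANSAC distance threshold: exp(-Y') of the target
    abnormality degree scales the initial threshold, so more anomalous
    matching pairs receive a tighter threshold."""
    return math.exp(-target_abnormality_degree) * tau0
```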
Step S4, selecting an inlier set based on the corrected fitting degree threshold, fusing every two frames of monitoring images under different angles to obtain a fused image, and compressing the fused image.
The corrected fitting degree threshold of every feature point in every matching point pair was obtained in step S3. This embodiment then uses the RANSAC algorithm with the corrected fitting degree thresholds to select the inlier set and obtain the corresponding homography transformation matrix; based on this matrix, every two frames of monitoring images at different angles at the same moment are fused to obtain the final fusion result, i.e. the corresponding fused image. It should be noted that, for any moment, the monitoring images at all angles at that moment are sorted in descending order of their number of feature points to form the monitoring image sequence for that moment, and every two adjacent images in the sequence are fused in turn from left to right. The RANSAC algorithm is prior art and will not be described in detail here.
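A sketch of RANSAC inlier selection with a per-point corrected threshold follows; standard RANSAC uses one global distance threshold, so the per-point `thresholds` array is the modification this embodiment describes. The DLT homography fit is a standard textbook routine, not code from the patent:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT) from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_inliers(src, dst, thresholds, iters=200, seed=0):
    """RANSAC inlier selection where each matching pair carries its own
    corrected fitting degree threshold instead of one global threshold."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresholds  # per-point threshold test
        if inliers.sum() > best.sum():
            best = inliers
    return best
```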
The fused images of consecutive frames are compressed with the H.265 algorithm to obtain the compression result, achieving more efficient compression, which reduces the size of the video file and the bandwidth required for transmission and thus improves the performance of the KVM seat management system in video transmission and monitoring. The H.265 algorithm is prior art and will not be described in detail here.
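H.265/HEVC compression is typically delegated to an encoder such as ffmpeg's libx265. The helper below only assembles a plausible command line; the CRF and preset values are assumptions, not taken from the patent:

```python
def h265_command(input_pattern, output_path, crf=28, preset="medium"):
    """Assemble an ffmpeg invocation that encodes the fused frames with
    libx265 (H.265/HEVC).  Lower CRF means higher quality/larger file."""
    return ["ffmpeg", "-i", input_pattern,
            "-c:v", "libx265",
            "-crf", str(crf),
            "-preset", preset,
            output_path]
```

The command list can be handed to `subprocess.run` once ffmpeg with libx265 support is available on the host.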
So far, the method provided by the embodiment completes the compression processing of the 4K high-definition video of the area to be monitored.
In the conventional RANSAC algorithm, the threshold for accepting a point as an inlier is the same for every point, so some abnormal data points may be misjudged. This embodiment therefore analyzes the structural similarity between the matching point pairs produced by sift feature point matching: the similarities of the points in the neighborhood of a pair's feature point and the corresponding affine transformation information are used to estimate the difference between the transformed positions of those points and the positions of the feature points in the matching pair, the probability that a matching point exists only because of occlusion is analyzed, and the initial abnormality degree of each matching point pair is obtained. Because the monitoring video is formed from monitoring images at different moments, the positions of some objects may change to a certain extent and corresponding feature points may appear or disappear across consecutive frames; when registering the images, this embodiment combines the positional mapping between images at the same moment with the positional stability of the feature points in earlier monitoring images at the same angle, and uses this stability to correct the initial abnormality degree into the target abnormality degree of each matching point pair. The corrected fitting degree thresholds derived from the target abnormality degree then guide the selection of the inlier set, so the fused image is obtained more accurately, the influence of abnormal points on the fusion of the final image is further reduced, and the fused image is finally compressed.
It should be noted that: the foregoing description of the preferred embodiments of the present invention is not intended to be limiting, but rather, any modifications, equivalents, improvements, etc. that fall within the principles of the present invention are intended to be included within the scope of the present invention.

Claims (6)

1. A 4K high definition KVM seat management system comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the steps of:
Acquiring monitoring videos of an area to be monitored under different angles, wherein the monitoring videos are formed by continuous multi-frame 4K high-definition monitoring images;
Respectively processing the monitoring images under every two angles at the same moment by adopting a sift characteristic point matching algorithm to obtain characteristic points and matching pairs in the monitoring images; processing the monitoring images under every two angles at the same moment by using different preset amplification factors and different preset rotation angles, and obtaining the optimal amplification factor and the optimal rotation angle corresponding to each frame of monitoring image and the similarity between each matching point pair based on the similarity of gray values of pixel points in the neighborhood of two pixel points in each matching point pair after processing; obtaining initial abnormality degree of each matching point pair according to similarity corresponding to pixel points in the neighborhood of the characteristic point in each matching point pair, the position of the characteristic point in each matching point pair, the optimal magnification and the optimal rotation angle;
Correcting the initial abnormality degree according to the position distribution of the characteristic points in each matching point pair and the position distribution among the characteristic points in the monitoring image at the same angle before the moment corresponding to the monitoring image of the characteristic points in each matching point pair, and obtaining the target abnormality degree of each matching point pair; determining a fitting degree threshold value after each feature point in each matching point pair is corrected based on the target abnormality degree;
Selecting an inner point set based on the corrected fitting degree threshold, fusing every two frames of monitoring images under different angles to obtain a fused image, and compressing the fused image;
obtaining the initial anomaly degree of each matching point pair according to the similarity corresponding to the pixel points in the neighborhood of the characteristic point in each matching point pair, the position of the characteristic point in each matching point pair, the optimal magnification and the optimal rotation angle, wherein the initial anomaly degree comprises the following steps:
For any matching pair: recording an optimal rotation angle corresponding to the monitoring image where the 2nd feature point in the matching pair is located as a reference angle, and constructing a rotation transformation matrix of the matching pair based on the reference angle, wherein the size of the rotation transformation matrix is $2\times 2$; the elements of the first row, first column and the second row, second column in the rotation transformation matrix are the cosine of the reference angle, the element of the first row, second column is the opposite number of the sine of the reference angle, and the element of the second row, first column is the sine of the reference angle;
Obtaining initial abnormality degree of each matching pair based on similarity corresponding to pixel points in a neighborhood of the pixel point in each matching pair, the position of a characteristic point in each matching pair, the optimal magnification factor and the rotation transformation matrix;
the initial abnormality degree of the ith matching pair is calculated using the following formula:

$$Y_i=\frac{1}{N_i}\sum_{j=1}^{N_i} S_j\left\|\left(x_{j,2}+\frac{r_{j,2}}{r_{j,1}}A_j\left(x_{i,1}-x_{j,1}\right)\right)-x_{i,2}\right\|_2$$

wherein $Y_i$ represents the initial abnormality degree of the ith matching pair, $N_i$ represents the number of feature points belonging to a matching pair contained within the neighborhood of the 1st feature point of the ith matching pair, $S_j$ represents the similarity of the jth matching pair whose feature point is contained in that neighborhood, $r_{j,1}$ and $r_{j,2}$ represent the optimal magnifications corresponding to the monitoring images where the 1st and 2nd feature points of the jth matching pair are located, $A_j$ represents the rotation transformation matrix of the jth matching pair, $x_{i,1}$ and $x_{i,2}$ represent the coordinates of the 1st and 2nd feature points of the ith matching pair, $x_{j,1}$ and $x_{j,2}$ represent the coordinates of the 1st and 2nd feature points of the jth matching pair, and $\|\cdot\|_2$ represents the L2 norm of a vector;
the correcting the initial abnormality degree according to the position distribution of the feature points in each matching point pair and the position distribution between the feature points in the monitoring image at the same angle before the moment corresponding to the monitoring image of the feature points in each matching point pair, to obtain the target abnormality degree of each matching point pair, including:
for the i-th matching pair:
The preset time period before the moment corresponding to the monitoring image where the ith matching pair is positioned is recorded as a reference time period, and the monitoring image with the same angle as the monitoring image corresponding to the monitoring image where the ith matching pair is positioned in the reference time period is recorded as a reference image;
respectively acquiring a feature point with the smallest position difference with a1 st feature point in an i-th matching pair in each frame of reference image, and taking the feature point as a first reference point in each frame of reference image; based on the coordinates of a first reference point in each two adjacent reference images, coordinate difference vectors corresponding to the first reference points of each two adjacent reference images are obtained, and an included angle between the coordinate difference vectors corresponding to the first reference points of each two adjacent reference images and a preset direction is used as a first angle; respectively marking the modular length of the coordinate difference vector corresponding to the first reference point of each two adjacent frames of reference images as a first modular length; respectively taking the negative correlation normalization result of the difference between each first module length and the average value of all the first module lengths as a weight coefficient corresponding to each first angle, and recording the product between each first angle and the corresponding weight coefficient as a first product corresponding to each first angle;
Respectively acquiring a feature point with the minimum position difference with a2 nd feature point in an i-th matching pair in each frame of reference image, and taking the feature point as a second reference point in each frame of reference image; based on the coordinates of a second reference point in each two adjacent reference images, coordinate difference vectors corresponding to the second reference points of each two adjacent reference images are obtained, and an included angle between the coordinate difference vectors corresponding to the second reference points of each two adjacent reference images and a preset direction is used as a second angle; respectively marking the modular length of the coordinate difference vector corresponding to the second reference point of each two adjacent frames of reference images as a second modular length; respectively taking the negative correlation normalization result of the difference between each second module length and the average value of all second module lengths as a weight coefficient corresponding to each second angle, and recording the product between each second angle and the corresponding weight coefficient as a second product corresponding to each second angle;
All the first products form a first product sequence, all the second products form a second product sequence, and a pearson correlation coefficient between the first product sequence and the second product sequence is calculated;
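The weighted angle-product construction described above can be sketched in a few lines of numpy. This is a minimal sketch, not the claimed implementation: the exponential form `exp(-|m - mean|)` of the negative-correlation normalization and the x-axis as the preset direction are assumptions, since the claim fixes neither choice.

```python
import numpy as np

def product_sequence(ref_points):
    """Build the weighted angle-product sequence for one reference point tracked
    across consecutive reference frames (the claim's first/second products).
    `ref_points` is an (N, 2) array of (x, y) coordinates, one row per frame."""
    diffs = np.diff(ref_points, axis=0)            # coordinate difference vectors
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])  # angle w.r.t. the preset (x-axis) direction
    mods = np.linalg.norm(diffs, axis=1)           # module lengths
    # assumed negative-correlation normalization of |module length - mean|
    weights = np.exp(-np.abs(mods - mods.mean()))
    return angles * weights                        # products = angle x weight

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# toy trajectories of the two reference points over 5 reference frames
p1 = np.array([[0, 0], [1, 1], [2, 2], [3, 2], [4, 3]], dtype=float)
p2 = p1 + np.array([10.0, 5.0])   # second point moves consistently with the first
s1, s2 = product_sequence(p1), product_sequence(p2)
print(round(pearson(s1, s2), 3))  # consistent motion -> 1.0
```

Because the two toy trajectories differ only by a constant offset, their difference vectors, angles, and weights coincide, so the Pearson coefficient of the two product sequences is 1.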
correcting the initial abnormality degree of the i-th matching point pair based on the Pearson correlation coefficient to obtain the target abnormality degree of the i-th matching point pair;
the target abnormality degree of the i-th matching point pair is calculated by adopting the following formula:
Y_i = X_i * (1 - |ρ(F_1, F_2)|), wherein Y_i represents the target abnormality degree of the i-th matching point pair, X_i represents the initial abnormality degree of the i-th matching pair, F_1 represents the first product sequence, F_2 represents the second product sequence, ρ(F_1, F_2) represents the Pearson correlation coefficient between the two sequences, and |·| represents taking the absolute value.
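The equation image itself does not survive extraction; only its symbol definitions remain (initial anomaly, the two product sequences, the Pearson coefficient, and an absolute value). One correction consistent with those symbols, damping the initial anomaly by the absolute correlation so that consistently moving reference points lower it, is sketched below. The functional form `initial * (1 - |rho|)` is an assumption.

```python
import numpy as np

def corrected_anomaly(initial_anomaly, seq1, seq2):
    """Hypothetical reconstruction of the correction step: the initial anomaly
    of a matching pair is damped by the absolute Pearson correlation of its two
    product sequences; consistent motion of the two reference points lowers it."""
    rho = np.corrcoef(seq1, seq2)[0, 1]
    return initial_anomaly * (1.0 - abs(rho))

# perfectly correlated product sequences drive the target anomaly toward zero
print(corrected_anomaly(0.8, [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```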
2. The 4K high-definition KVM seat management system according to claim 1, wherein the processing of the monitored images at each two angles at the same time by using different preset magnifications and different preset rotation angles, and obtaining the optimal magnifications and the optimal rotation angles corresponding to each frame of the monitored images based on the similarity of the gray values of the pixels in the neighborhood of the two pixels in each matching pair after the processing, comprises:
combining each preset magnification with each preset rotation angle to obtain each parameter combination;
For a monitored image at any two angles at any instant:
Processing the monitoring image under the first angle of the two angles based on the data in each parameter combination to obtain a first image corresponding to each parameter combination; processing the monitoring image under the second angle of the two angles based on the data in each parameter combination to obtain a second image corresponding to each parameter combination;
For the ith matching pair in the monitored image at two angles: in a first image corresponding to each parameter combination, constructing a first gray sequence of an ith matching pair under each parameter combination based on gray values of all pixel points in a preset neighborhood of a feature point of the ith matching pair; in a second image corresponding to each parameter combination, constructing a second gray sequence of the ith matching pair under each parameter combination based on gray values of all pixel points in a preset neighborhood of the feature point of the ith matching pair;
Calculating the correlation coefficient between each first gray sequence and each second gray sequence respectively; recording the parameter combination corresponding to the first gray sequence when the correlation coefficient reaches its maximum value as a first target combination, and recording the parameter combination corresponding to the second gray sequence when the correlation coefficient reaches its maximum value as a second target combination; taking the preset magnification in the first target combination as the optimal magnification corresponding to the monitoring image under the first angle of the two angles, taking the preset rotation angle in the first target combination as the optimal rotation angle corresponding to the monitoring image under the first angle of the two angles, taking the preset magnification in the second target combination as the optimal magnification corresponding to the monitoring image under the second angle of the two angles, and taking the preset rotation angle in the second target combination as the optimal rotation angle corresponding to the monitoring image under the second angle of the two angles.
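The brute-force search of claim 2 over the preset (magnification, rotation) combinations might look like the sketch below. The nearest-neighbour transform, the 3x3 preset neighbourhood, and the toy parameter values are all assumptions made for illustration.

```python
import numpy as np

def transform(img, scale, angle_deg):
    """Nearest-neighbour magnification + rotation about the image centre,
    standing in for the claim's preset magnification / preset rotation angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = (ys - cy) / scale, (xs - cx) / scale                    # inverse scaling
    sy = np.round(cy + dy * np.cos(a) - dx * np.sin(a)).astype(int)  # inverse rotation
    sx = np.round(cx + dy * np.sin(a) + dx * np.cos(a)).astype(int)
    out = np.zeros_like(img)
    ok = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[ok] = img[sy[ok], sx[ok]]
    return out

def gray_sequence(img, pt, r=1):
    """Gray values of all pixels in the (2r+1)x(2r+1) preset neighbourhood of pt."""
    y, x = pt
    return img[y - r:y + r + 1, x - r:x + r + 1].ravel()

def best_combination(img_a, img_b, pt_a, pt_b, scales, angles):
    """Try every (scale, angle) parameter combination on both images and return
    the pair of combinations whose neighbourhood gray sequences correlate best."""
    combos = [(s, t) for s in scales for t in angles]
    best, best_rho = None, -2.0
    for ca in combos:
        seq_a = gray_sequence(transform(img_a, *ca), pt_a)
        for cb in combos:
            seq_b = gray_sequence(transform(img_b, *cb), pt_b)
            if seq_a.std() == 0 or seq_b.std() == 0:
                continue   # correlation undefined for a flat neighbourhood
            rho = np.corrcoef(seq_a, seq_b)[0, 1]
            if rho > best_rho:
                best, best_rho = (ca, cb), rho
    return best, best_rho

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(15, 15)).astype(float)
(ca, cb), rho = best_combination(img, img, (7, 7), (7, 7), [1.0, 1.2], [0, 15])
print(ca, cb, round(rho, 3))  # identical images: matching combinations correlate perfectly
```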
3. The 4K high-definition KVM seat management system according to claim 2, wherein the obtaining of the similarity between each pair of matching points comprises:
For the i-th matching pair in the monitoring images at two angles: taking the normalization result of the maximum value of the correlation coefficient as the similarity of the i-th matching pair.
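The claim says only "normalization result" without fixing the function; mapping the maximum Pearson coefficient linearly from [-1, 1] onto [0, 1] is one assumed choice:

```python
def similarity(max_corr):
    """Map the maximum correlation coefficient from [-1, 1] onto a [0, 1]
    similarity.  The linear map is an assumption; the claim does not fix it."""
    return (max_corr + 1.0) / 2.0

print(similarity(1.0), similarity(-1.0))  # -> 1.0 0.0
```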
4. The 4K high-definition KVM seat management system according to claim 1, wherein the determining of the corrected fitting degree threshold for each feature point in each matching point pair based on the target abnormality degree comprises:
and respectively taking the product of the negative-correlation normalization result of the target abnormality degree of each matching point pair and the initial fitting degree threshold as the corrected fitting degree threshold of each feature point in that matching point pair.
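A minimal sketch of this per-point threshold correction, assuming `exp(-x)` as the negative-correlation normalization (the claim does not fix the function):

```python
import math

def corrected_threshold(target_anomaly, base_threshold):
    """Per-feature-point fitting threshold: matching pairs with a higher anomaly
    degree get a tighter RANSAC threshold.  exp(-x) is one assumed choice of
    negative-correlation normalization."""
    return math.exp(-target_anomaly) * base_threshold

print(corrected_threshold(0.0, 3.0))  # zero anomaly keeps the full threshold: 3.0
```

The effect is that suspicious matching pairs must fit the model more tightly to be counted as interior points, while trusted pairs keep the full tolerance.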
5. The 4K high-definition KVM seat management system according to claim 1, wherein the selecting of the interior point set based on the corrected fitting degree threshold, and the fusing of each two frames of monitoring images under different angles to obtain a fused image, comprise:
selecting the interior point set by adopting the RANSAC algorithm based on the corrected fitting degree threshold, and obtaining the homography transformation matrix corresponding to the interior point set;
and fusing each two frames of monitoring images under different angles at the same moment based on the homography transformation matrix corresponding to the interior point set to obtain the fused image.
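The interior-point selection of claim 5 with corrected per-point thresholds can be illustrated with a stripped-down RANSAC. A pure translation stands in for the homography to keep the sketch short; on real data `cv2.findHomography(src, dst, cv2.RANSAC, ...)` followed by `cv2.warpPerspective` for fusion would play this role.

```python
import numpy as np

def ransac_translation(src, dst, thresholds, iters=200, seed=0):
    """RANSAC inlier (interior point) selection with a per-point fitting
    threshold.  A translation model replaces the homography for brevity."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                         # model from one minimal sample
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresholds                  # corrected per-point thresholds
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refit on inliers
    return t, best_inliers

src = np.array([[0, 0], [1, 0], [0, 1], [5, 5]], dtype=float)
dst = src + np.array([2.0, 3.0])
dst[3] += 10.0                                      # one gross mismatch
t, inl = ransac_translation(src, dst, thresholds=np.full(4, 0.5))
print(t, inl)  # recovers the (2, 3) shift and rejects the outlier
```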
6. The 4K high-definition KVM seat management system according to claim 1, wherein the compressing of the fused image comprises:
and compressing the fused images of consecutive frames by adopting the H.265 algorithm to obtain a compression processing result.
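The claim names the H.265 algorithm without fixing an encoder or container. As one illustration, the command line below uses ffmpeg's libx265 encoder; the file names, frame rate, and quality settings are placeholders, not part of the claimed system.

```python
# Sketch of the compression step: an ffmpeg/libx265 command line for encoding
# the sequence of fused frames with H.265/HEVC.
cmd = [
    "ffmpeg",
    "-framerate", "25",        # fused frames per second (assumed)
    "-i", "fused_%04d.png",    # consecutive fused images (placeholder pattern)
    "-c:v", "libx265",         # H.265/HEVC encoder
    "-crf", "28",              # constant-quality factor: size/quality trade-off
    "-preset", "medium",
    "output.mp4",              # placeholder output name
]
print(" ".join(cmd))
# import subprocess; subprocess.run(cmd, check=True)  # runs where ffmpeg is installed
```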
CN202410322919.6A 2024-03-21 2024-03-21 4K high-definition KVM seat management system Active CN117934571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410322919.6A CN117934571B (en) 2024-03-21 2024-03-21 4K high-definition KVM seat management system

Publications (2)

Publication Number Publication Date
CN117934571A CN117934571A (en) 2024-04-26
CN117934571B true CN117934571B (en) 2024-06-07

Family

ID=90754127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410322919.6A Active CN117934571B (en) 2024-03-21 2024-03-21 4K high-definition KVM seat management system

Country Status (1)

Country Link
CN (1) CN117934571B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101007409B1 (en) * 2010-05-26 2011-01-14 삼성탈레스 주식회사 Apparatus and method for processing image fusion signal for improvement of target detection
CN104240212A (en) * 2014-09-03 2014-12-24 西安电子科技大学 ISAR image fusion method based on target characteristics
CN111383204A (en) * 2019-12-19 2020-07-07 北京航天长征飞行器研究所 Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN113470085A (en) * 2021-05-19 2021-10-01 西安电子科技大学 Image registration method based on improved RANSAC
CN113792788A (en) * 2021-09-14 2021-12-14 安徽工业大学 Infrared and visible light image matching method based on multi-feature similarity fusion
WO2022217794A1 (en) * 2021-04-12 2022-10-20 深圳大学 Positioning method of mobile robot in dynamic environment
CN115294409A (en) * 2022-10-08 2022-11-04 南通商翼信息科技有限公司 Video compression method, system and medium for security monitoring
WO2023236733A1 (en) * 2022-06-08 2023-12-14 珠海一微半导体股份有限公司 Visual tracking method of robot
CN117692649A (en) * 2024-02-02 2024-03-12 广州中海电信有限公司 Ship remote monitoring video efficient transmission method based on image feature matching

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10970425B2 (en) * 2017-12-26 2021-04-06 Seiko Epson Corporation Object detection and tracking

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Multi-camera video fusion method based on extended SURF under CUDA parallel computing; 崔哲, 孟凡荣, 姚睿, 石记红; Journal of Nanjing University (Natural Science); 2016-07-30 (No. 04); full text *
A video stitching method based on invariant features; 邓军, 甘新胜; Command Control & Simulation; 2009-06-15 (No. 03); full text *
Adaptive image matching algorithm based on the SIFT operator fused with the maximum dissimilarity coefficient; 陈虹, 肖越, 肖成龙, 宋好; Journal of Computer Applications; 2017-12-26 (No. 05); full text *
Research on image stitching algorithms based on local structure; 赵亚茹, 刘洲峰, 张弘, 李碧草; Journal of Zhongyuan University of Technology; 2020-08-25 (No. 04); full text *
Image stitching algorithm based on improved SIFT feature point matching; 宋佳乾, 汪西原; Computer Measurement & Control; 2015-02-25 (No. 02); full text *
Power corridor image stitching with the grid-based motion statistics algorithm; 葛继空 et al.; Beijing Surveying and Mapping; 2022-04-25; Vol. 36 (No. 4); full text *


Similar Documents

Publication Publication Date Title
CN115294409B (en) Video processing method, system and medium for security monitoring
CN112017135B (en) Method, system and equipment for spatial-temporal fusion of remote sensing image data
US8964041B2 (en) System and method for video stabilization of rolling shutter cameras
Yang et al. Progressively complementary network for fisheye image rectification using appearance flow
US7523078B2 (en) Bayesian approach for sensor super-resolution
CN110113560B (en) Intelligent video linkage method and server
CN103841298B (en) Video image stabilization method based on color constant and geometry invariant features
CN108492263B (en) Lens radial distortion correction method
US11488279B2 (en) Image processing apparatus, image processing system, imaging apparatus, image processing method, and storage medium
CN113159466A (en) Short-time photovoltaic power generation prediction system and method
JP2020149641A (en) Object tracking device and object tracking method
CN114529593A (en) Infrared and visible light image registration method, system, equipment and image processing terminal
CN110555866A (en) Infrared target tracking method for improving KCF feature descriptor
CN115131346B (en) Fermentation tank processing procedure detection method and system based on artificial intelligence
CN117934571B (en) 4K high-definition KVM seat management system
CN113705393A (en) 3D face model-based depression angle face recognition method and system
CN112465702A (en) Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
CN108076341A (en) A kind of video satellite is imaged in-orbit real-time digital image stabilization method and system
CN114820376A (en) Fusion correction method and device for stripe noise, electronic equipment and storage medium
CN108426566B (en) Mobile robot positioning method based on multiple cameras
CN112419172A (en) Remote sensing image processing method for correcting and deblurring inclined image
CN112017108A (en) Satellite ortho-image color relative correction method based on independent model method block adjustment
CN112241640B (en) Graphic code determining method and device and industrial camera
Tian et al. Research on Super-Resolution Enhancement Technology Using Improved Transformer Network and 3D Reconstruction of Wheat Grains
CN114648564B (en) Visible light and infrared image optimization registration method and system for unsteady state target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant