CN117893534A - Bus multimedia intelligent display screen detection method based on image feature analysis - Google Patents

Info

Publication number: CN117893534A (application CN202410290071.3A)
Authority: CN (China)
Prior art keywords: sub, image, gray, block, frame
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN117893534B
Inventors: 王春平, 谢凯
Current and original assignee: Zhangjiagang Leda Automobile Electrical Appliance Co., Ltd. (the listed assignees may be inaccurate)
Application filed by Zhangjiagang Leda Automobile Electrical Appliance Co., Ltd.; priority to CN202410290071.3A; published as CN117893534A and, upon grant, as CN117893534B.

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of image feature recognition, in particular to a bus multimedia intelligent display screen detection method based on image feature analysis. The method first acquires multiple frames of gray images of the surface of a bus multimedia display screen in the working state and extracts an associated image for each frame of gray image. Each gray image and its associated image are partitioned into sub-blocks, and the pixel gray-value distributions of sub-blocks at the same positions in the two images are analyzed; whether a coherent point exists in a sub-block is judged from the obtained coherence degree, and the coherent point is extracted. Initial extreme points are then obtained in each frame of gray image based on the SIFT algorithm, and real extreme points are screened out based on the distance between each initial extreme point and the coherent point in its sub-block together with the coherence degree of that sub-block; the bus multimedia display screen is detected based on the real extreme points. The invention can reduce the error of the extreme points determined by the SIFT algorithm in the gray images and improve the accuracy of bus multimedia display screen detection.

Description

Bus multimedia intelligent display screen detection method based on image feature analysis
Technical Field
The invention relates to the field of image feature recognition, in particular to a bus multimedia intelligent display screen detection method based on image feature analysis.
Background
The multimedia display screen of a bus displays various types of picture information, such as publicity advertisements and routes. Accurately detecting the picture content features in the display screen can improve service-information transmission and the passenger experience. However, because the environment inside the bus is affected by factors such as illumination and weather conditions, the accuracy of extracting picture features of the display screen can be reduced; the display screen detection algorithm therefore needs to be guaranteed a certain stability.
In the related art, the Scale-Invariant Feature Transform (SIFT) algorithm is generally used to extract features from collected images of the content picture of the bus multimedia display screen and thereby complete detection of the display screen. However, while the bus is running, the illumination intensity inside the bus changes easily, so the illumination intensity on the multimedia display screen changes frequently; the extreme points of the image determined by the SIFT algorithm then carry large errors, which reduces the detection accuracy of the bus multimedia display screen.
Disclosure of Invention
In order to solve the technical problem that frequent changes of illumination intensity on the multimedia display screen during bus driving cause large errors in the extreme points of the image determined by the SIFT algorithm, and thereby reduce the accuracy of bus multimedia display screen detection, the invention provides a bus multimedia intelligent display screen detection method based on image feature analysis. The adopted technical scheme is as follows:
the invention provides a bus multimedia intelligent display screen detection method based on image feature analysis, which comprises the following steps:
Acquiring each frame of gray level image in a preset time period on the surface of a multimedia display screen of the bus in a working state;
Taking other gray images of a preset first number of frames which are nearest to each frame of gray image as reference images of the corresponding gray images in time sequence; according to the difference of pixel point gray value distribution between each frame of gray image and each corresponding reference image, screening out the associated image of each frame of gray image from all the reference images;
Partitioning each frame of gray image and the corresponding associated image to obtain different sub-blocks; randomly selecting one sub-block from each frame of gray image as a sub-block to be detected, taking the sub-block at the same position in the associated image as the reference sub-block, and obtaining the coherence degree of the sub-block to be detected according to the difference in pixel gray-value distribution between the sub-block to be detected and the reference sub-block and the distribution of pixel gray values within the sub-block to be detected;
Judging whether a coherent point exists in the sub-block to be detected based on the coherence degree, and if so, extracting the coherent point in the sub-block to be detected according to the gradient information of the pixels in the sub-block to be detected; carrying out local detection on each frame of gray image to obtain different initial extreme points, and screening out target extreme points from all initial extreme points of each frame of gray image, where the sub-blocks in which the target extreme points lie contain coherent points; according to the distance between each target extreme point and the coherent point in its sub-block and the coherence degree of that sub-block, screening out the real extreme points from all target extreme points of each frame of gray image;
And detecting the multimedia display screen of the bus based on the real extreme point.
Further, the screening the associated image of each frame gray image from all the reference images according to the difference of the pixel gray value distribution between each frame gray image and each corresponding reference image includes:
screening out a preset second number of pixel points with the maximum gradient value from each frame of gray level image to serve as first image key pixel points of the gray level image, and sequencing gray level values of the first image key pixel points according to the sequence from big to small to serve as a first image gray level sequence of the gray level image;
Screening out a preset second number of pixel points with the maximum gradient value from each reference image corresponding to the gray level image to serve as second image key pixel points of the reference image, and sequencing the gray level values of the second image key pixel points according to the sequence from large to small to serve as a second image gray level sequence of the reference image;
obtaining the image variability between the gray image and each reference image according to the difference of gray values at the same positions of the first image gray sequence and the second image gray sequence, the distribution of pixel gray values in the gray image, and the distribution of pixel gray values in the corresponding reference image;
And taking the reference image corresponding to the minimum value of the image variability as the associated image of the gray image.
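The construction of a key-pixel gray sequence described above can be sketched in Python as follows. This is a minimal illustration: the function name is made up, and `np.gradient` stands in for the Sobel gradient operator named in the detailed description; the default of 100 for the preset second number follows the embodiment.

```python
import numpy as np

def image_gray_sequence(gray_img, n=100):
    # Gradient magnitude; np.gradient is an illustrative stand-in for
    # the Sobel operator named in the embodiment.
    gy, gx = np.gradient(gray_img.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # Indices of the n pixels with the largest gradient values
    # (the "key pixels").
    flat_idx = np.argsort(magnitude, axis=None)[-n:]
    key_gray = gray_img.reshape(-1)[flat_idx].astype(np.float64)
    # Gray values of the key pixels, sorted from large to small.
    return np.sort(key_gray)[::-1]
```

The same routine serves for both the gray image (first image gray sequence) and each reference image (second image gray sequence).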
Further, the obtaining the image variability between the gray image and each reference image according to the difference of gray values at the same positions of the first image gray sequence and the second image gray sequence, the distribution of pixel gray values in the gray image, and the distribution of pixel gray values in the corresponding reference image comprises:
The calculation formula of the image variability is as follows (the equation is reconstructed from the variable definitions below and the explanation in the detailed description; it is the structural-similarity ratio adjusted by the statistics of the key-pixel gray sequences):

$$R_{i,j}=\frac{\left(2\mu_i\mu_{i,j}+\bar{A}\right)\left(2\operatorname{Cov}_{i,j}+S_A\right)}{\left(\mu_i^{2}+\mu_{i,j}^{2}\right)\left(\sigma_i^{2}+\sigma_{i,j}^{2}\right)},\qquad A=\left\{\,\left|x_k-y_k\right| : k=1,\dots,n\,\right\},\qquad D_{i,j}=\left|R_{i,j}-1\right|$$

Wherein $D_{i,j}$ represents the image variability between the $i$-th frame gray image and its corresponding $j$-th reference image; $R_{i,j}$ represents the correlation degree between them; $\mu_i$ represents the average gray value of all pixels in the $i$-th frame gray image; $\mu_{i,j}$ represents the average gray value of all pixels in the $j$-th reference image of the $i$-th frame gray image; $A$ represents the first set; $\operatorname{Cov}_{i,j}$ represents the covariance of pixel gray values between the $i$-th frame gray image and its $j$-th reference image; $\sigma_i$ and $\sigma_{i,j}$ represent the corresponding standard deviations of pixel gray values; $x_k$ represents the $k$-th gray value in the first image gray sequence of the $i$-th frame gray image; $y_k$ represents the $k$-th gray value in the second image gray sequence of the $j$-th reference image; $n$ represents the preset second number; $\bar{A}$ represents the average value of all elements in the first set $A$; and $S_A$ represents the variance of all elements in $A$.
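A sketch of one plausible reading of the image-variability computation, assuming the correlation degree is the structural-similarity ratio adjusted in the numerator by the mean and variance of the sequence-difference set, as the detailed description explains. The function name is illustrative.

```python
import numpy as np

def image_variability(gray_img, ref_img, seq1, seq2):
    g = gray_img.astype(np.float64).ravel()
    r = ref_img.astype(np.float64).ravel()
    # First set A: gray-value differences at the same sequence positions.
    A = np.abs(np.asarray(seq1, float) - np.asarray(seq2, float))
    a_mean, a_var = A.mean(), A.var()
    mu_g, mu_r = g.mean(), r.mean()
    sd_g, sd_r = g.std(), r.std()
    cov = ((g - mu_g) * (r - mu_r)).mean()
    # Correlation degree: structural-similarity ratio adjusted in the
    # numerator by the statistics of A (so it can exceed 1).
    corr = ((2 * mu_g * mu_r + a_mean) * (2 * cov + a_var)) / \
           ((mu_g ** 2 + mu_r ** 2) * (sd_g ** 2 + sd_r ** 2))
    # Image variability: distance of the correlation degree from 1.
    return abs(corr - 1.0)
```

For an image compared with itself (identical sequences), $A$ is all zeros, the correlation degree collapses to the structural similarity of 1, and the variability is 0.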
Further, the obtaining the coherence degree of the sub-block to be measured according to the difference of the gray value distribution of the pixel points between the sub-block to be measured and the reference sub-block and the gray value distribution of the pixel points in the sub-block to be measured includes:
Screening out a preset third number of pixel points with the maximum gradient value from the sub-block to be tested of the gray image to serve as first sub-block key pixel points of the sub-block to be tested, and sequencing gray values of the first sub-block key pixel points according to the sequence from big to small to serve as a first sub-block gray sequence of the sub-block to be tested;
Screening out a preset third number of pixel points with the maximum gradient value from a reference sub-block of the associated image, taking the pixel points as second sub-block key pixel points of the reference sub-block, and taking a sequence of the gray values of the second sub-block key pixel points after sequencing from the big to the small as a second sub-block gray sequence of the reference sub-block;
obtaining the sub-block difference between the sub-block to be detected and the reference sub-block according to the difference of gray values at the same position between the first sub-block gray sequence and the second sub-block gray sequence, the distribution of the gray values of the pixels of the sub-block to be detected in the gray image and the distribution of the gray values of the pixels of the reference sub-block in the associated image;
And obtaining the coherence degree of the sub-block to be detected in the gray image according to the sub-block difference between the sub-block to be detected and the reference sub-block, the distribution of pixel gray values in the sub-block to be detected, and the gray values of the first sub-block key pixels.
Further, the obtaining the sub-block difference between the sub-block to be measured and the reference sub-block according to the difference of the gray values at the same position between the first sub-block gray sequence and the second sub-block gray sequence, the distribution of the gray values of the pixels of the sub-block to be measured in the gray image, and the distribution of the gray values of the pixels of the reference sub-block in the associated image includes:
The calculation formula of the sub-block difference is as follows (reconstructed by direct analogy with the image-variability formula, using the sub-block statistics defined below):

$$D_{k}^{i}=\left|\frac{\left(2\mu_{k}^{i}\hat{\mu}_{k}^{i}+\bar{B}\right)\left(2\operatorname{Cov}_{k}^{i}+S_B\right)}{\left(\left(\mu_{k}^{i}\right)^{2}+\left(\hat{\mu}_{k}^{i}\right)^{2}\right)\left(\left(\sigma_{k}^{i}\right)^{2}+\left(\hat{\sigma}_{k}^{i}\right)^{2}\right)}-1\right|,\qquad B=\left\{\,\left|u_t-v_t\right| : t=1,\dots,m\,\right\}$$

Wherein $D_{k}^{i}$ represents the sub-block difference between the $k$-th sub-block of the $i$-th frame gray image and the corresponding sub-block of the associated image; $\mu_{k}^{i}$ and $\hat{\mu}_{k}^{i}$ represent the average gray values of all pixels in the $k$-th sub-block of the $i$-th frame gray image and of its associated image, respectively; $B$ represents the second set; $\operatorname{Cov}_{k}^{i}$ represents the covariance of pixel gray values between the two sub-blocks; $\sigma_{k}^{i}$ and $\hat{\sigma}_{k}^{i}$ represent the corresponding standard deviations of pixel gray values in the two sub-blocks; $u_t$ represents the $t$-th gray value in the first sub-block gray sequence; $v_t$ represents the $t$-th gray value in the second sub-block gray sequence; $m$ represents the preset third number; $\bar{B}$ represents the average value of all elements in the second set $B$; and $S_B$ represents the variance of all elements in $B$.
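The partitioning of each gray image and its associated image into sub-blocks, on which the sub-block statistics above are computed, can be sketched as follows. The block size is an assumption (this excerpt does not fix it), and incomplete border blocks are simply discarded here for brevity.

```python
import numpy as np

def partition(img, block_h, block_w):
    # Split the image into non-overlapping sub-blocks of the given size,
    # discarding any incomplete border blocks (a simplifying assumption).
    h, w = img.shape
    return [img[r:r + block_h, c:c + block_w]
            for r in range(0, h - block_h + 1, block_h)
            for c in range(0, w - block_w + 1, block_w)]
```

Sub-blocks at the same list index in the gray image and the associated image then occupy the same position, as the reference-sub-block definition requires.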
Further, the obtaining the coherence degree of the sub-block to be detected in the gray image according to the sub-block difference between the sub-block to be detected and the reference sub-block, the distribution of pixel gray values in the sub-block to be detected, and the gray values of the first sub-block key pixels comprises:
Performing negative-correlation mapping on the sub-block difference between the sub-block to be detected and the reference sub-block to obtain the first coherent parameter of the sub-block to be detected in the gray image;
dividing the average value of gray values of all first sub-block key pixel points in the sub-block to be tested of the gray image by the average value of gray values of all pixel points of the gray image to obtain key characteristic parameters of the sub-block to be tested;
normalizing the information entropy of the gray value of the pixel point in the sub-block to be detected in the gray image to obtain the gray chaotic parameter of the sub-block to be detected;
taking the product value of the gray chaotic parameter and the key characteristic parameter as a second coherent parameter of the sub-block to be measured;
Taking the preset weight as the weight of the first coherent parameter, and taking the result of negative correlation mapping of the preset weight as the weight of the second coherent parameter;
And normalizing the weighted sum result of the first coherent parameter and the second coherent parameter to obtain the coherence degree of the sub-block to be detected in the gray level image.
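The weighted combination described in this claim might be computed as follows. Several concrete choices here are assumptions the patent leaves open: $e^{-x}$ as the negative-correlation mapping, $1-w$ as the negative-correlation mapping of the preset weight, and $1-e^{-x}$ as the final normalization.

```python
import numpy as np

def coherence_degree(subblock_diff, key_gray_vals, block, image_mean, w=0.5):
    # First coherent parameter: negative-correlation mapping of the
    # sub-block difference (exp(-x) chosen as the mapping; an assumption).
    p1 = np.exp(-subblock_diff)
    # Key feature parameter: mean gray of the first sub-block key pixels
    # divided by the mean gray of the whole image.
    key = np.mean(key_gray_vals) / image_mean
    # Gray chaos parameter: normalized entropy of the sub-block gray values.
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    chaos = entropy / 8.0            # log2(256) normalizes to [0, 1]
    # Second coherent parameter: product of chaos and key parameters.
    p2 = chaos * key
    # Weighted sum with weights w and (1 - w), squashed into (0, 1).
    return 1.0 - np.exp(-(w * p1 + (1.0 - w) * p2))
```

Any monotone-decreasing mapping and any normalization to $(0,1)$ would satisfy the claim wording equally well.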
Further, the judging whether a coherent point exists in the sub-block to be detected based on the coherence degree, and if so, extracting the coherent point in the sub-block to be detected according to the gradient information of the pixels in the sub-block to be detected includes:
If the coherence degree of the sub-block to be detected is larger than a preset coherence threshold, a coherent point exists in the sub-block to be detected, and the pixel with the largest gradient value in the sub-block is taken as the coherent point; otherwise, no coherent point exists in the sub-block to be detected.
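This threshold test and extraction step can be sketched as follows. The threshold value 0.5 is an assumption (the patent only requires a preset threshold), and `np.gradient` again stands in for the Sobel operator.

```python
import numpy as np

def coherent_point(block, coherence, threshold=0.5):
    # Below the preset coherence threshold the sub-block has no coherent
    # point (the value 0.5 is an assumed choice of threshold).
    if coherence <= threshold:
        return None
    # Otherwise take the pixel with the largest gradient value.
    gy, gx = np.gradient(block.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    return np.unravel_index(np.argmax(magnitude), magnitude.shape)
```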
Further, the locally detecting each frame of gray level image, and obtaining different initial extreme points in the gray level image includes:
and carrying out local detection on each frame of gray level image based on the SIFT algorithm to obtain different initial extreme points in the gray level image.
Further, the screening the real extreme points from all target extreme points of each frame of gray image according to the distance between each target extreme point and the coherent point in its sub-block and the coherence degree of the corresponding sub-block includes:
Performing negative-correlation mapping on the distance between the target extreme point and the coherent point in its sub-block to obtain the trusted parameter of the target extreme point; normalizing the product of the trusted parameter and the coherence degree of the sub-block where the target extreme point is located to obtain the credibility of the target extreme point;
And in each frame of gray level image, taking the target extreme point with the credibility larger than a preset credibility threshold value as a real extreme point.
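The credibility screening could look like the sketch below. As before, $e^{-d}$ is an assumed negative-correlation mapping, and the credibility threshold value is an assumption; with both factors already in $(0,1]$, their product needs no further normalization.

```python
import numpy as np

def credibility(extreme_pt, coh_pt, coherence):
    # Trusted parameter: negative-correlation mapping of the Euclidean
    # distance between the target extreme point and the coherent point.
    d = np.hypot(extreme_pt[0] - coh_pt[0], extreme_pt[1] - coh_pt[1])
    trusted = np.exp(-d)
    # Credibility: product of the trusted parameter and the coherence
    # degree of the sub-block containing the target extreme point.
    return trusted * coherence

def real_extreme_points(candidates, coh_pts, coherences, thresh=0.5):
    # Keep target extreme points whose credibility exceeds the preset
    # credibility threshold (threshold value assumed).
    return [p for p, q, c in zip(candidates, coh_pts, coherences)
            if credibility(p, q, c) > thresh]
```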
Further, the detecting the bus multimedia display screen based on the real extreme point includes:
screening different key points from the real extreme points of each frame of gray image based on the SIFT algorithm, and constructing a feature descriptor for each key point;
inputting the feature descriptors of the key points of all gray images into a semantic segmentation network for training to obtain a trained semantic segmentation network; inputting all gray images into the trained network, and taking the output as the detection result of the bus multimedia display screen.
The invention has the following beneficial effects:
The invention first collects each frame of gray image of the display screen surface in the working state. A single frame may carry large errors caused by changes in the illumination intensity on the screen surface, while gray images that are close in time sequence do not change greatly because of illumination and present similar display-screen picture content; therefore each frame of gray image can subsequently be analyzed jointly with its associated image to obtain more accurate extreme points. Because the extreme points are obtained by local detection of the gray image and may be distributed at different positions, the gray image and the associated image are partitioned into sub-blocks, and the obtained coherence degree reflects the degree to which illumination influences the region where each sub-block lies; coherent points, which are less affected by illumination, are then extracted in those sub-blocks. By screening the target extreme points according to their distance from the coherent points and the coherence degree of the corresponding sub-blocks, the real extreme points that are little affected by illumination can be determined more accurately, so that the subsequent detection of the bus multimedia display screen becomes more accurate.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for detecting a multimedia intelligent display screen of a bus based on image feature analysis according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and effects adopted by the invention to achieve the preset aim, the following is a detailed description of specific implementation, structure, characteristics and effects of the method for detecting the multimedia intelligent display screen of the bus based on image characteristic analysis, which is provided by the invention, with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a specific scheme of a bus multimedia intelligent display screen detection method based on image feature analysis, which is specifically described below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of a method for detecting a multimedia intelligent display screen of a bus based on image feature analysis according to an embodiment of the present invention is shown, where the method includes:
Step S1: and acquiring each frame of gray level image in a preset time period on the surface of the multimedia display screen of the bus in the working state.
The invention provides a bus multimedia intelligent display screen detection method based on image feature analysis, which aims to solve the problem that the illumination intensity inside a bus changes easily while the bus is driving, so that the illumination intensity on the multimedia display screen changes frequently and the extreme points of the image determined by the SIFT algorithm carry large errors, reducing the accuracy of detecting the bus multimedia display screen.
In the embodiment of the invention, an industrial camera is first arranged directly in front of the bus multimedia display screen to record a video of the screen. The video duration is a preset time period, set to 1 minute in one embodiment of the invention; the specific value can also be set by the implementer according to the specific implementation scene and is not limited here. Since the invention detects the picture content of the multimedia display screen, it must be ensured that the screen is in a powered-on or working state during recording. After recording is completed, the collected video data of the display screen surface is imported into professional video software, such as Premiere, and the single-frame extraction function of the software is used to extract the continuous frames of the whole video. Subsequently, by analyzing the differences between images of adjacent frames, the influence of illumination on the detection and recognition of the picture features of the bus multimedia display screen is reduced.
In order to reduce the calculation amount of the subsequent image processing and improve the processing speed, in one embodiment of the invention, the image of each frame is subjected to graying processing and converted into a single-channel gray image, so that the gray image of each frame of the monitoring video is obtained. It should be noted that the graying process is a technical means well known to those skilled in the art, and will not be described herein.
In the embodiment of the invention, when local detection is performed on each frame of gray image based on the SIFT algorithm, extreme points are detected according to the differences of gray values among pixels in the gray image. Because the differences of pixel gray values in some regions of each frame are small under the influence of illumination, the embodiment of the invention processes the obtained gray images frame by frame with a histogram equalization algorithm, which helps reduce the error of extreme-point detection and improves the accuracy of detecting the bus multimedia display screen picture. The histogram equalization algorithm is a technical means well known to those skilled in the art and is not repeated herein.
Step S2: taking other gray images of a preset first number of frames which are nearest to each frame of gray image as reference images of the corresponding gray images in time sequence; and screening out the associated images of each frame of gray level image from all the reference images according to the difference of pixel point gray level value distribution between each frame of gray level image and each corresponding reference image.
In the embodiment of the invention, feature extraction is subsequently performed on each frame of gray image by the SIFT algorithm to realize detection of the multimedia display screen. However, during bus driving the interior illumination environment changes irregularly, so the illumination intensity on the surface of the multimedia display screen changes frequently. The SIFT algorithm is sensitive to illumination: pixels affected by illumination may be falsely detected as extreme points, so the key points extracted by the SIFT algorithm and their feature descriptors carry large errors, which reduces the detection accuracy of the display screen.
Considering that gray images close to each other in time sequence do not differ greatly because of illumination changes, and that the display-screen picture information they present is similar, the invention first takes, for each frame of gray image, the preset first number of other gray images nearest to it in time sequence as its reference images. The first number is set to 2 in one embodiment; its specific value can be set by the implementer according to the specific implementation scene. The difference between the display-screen picture information of a gray image and that of each of its reference images is mainly reflected in the difference of pixel gray-value distribution. Therefore, in order to obtain the reference image most similar to the picture information displayed by each gray image, the difference of pixel gray-value distribution between each frame of gray image and each of its reference images is analyzed, and the associated image of the gray image is selected from all its reference images. Each gray image is most similar in displayed picture information to its associated image and least affected by illumination relative to it, so the subsequent joint analysis of each gray image with its associated image reduces the influence of illumination.
Preferably, in one embodiment of the present invention, the method for acquiring the associated image of each frame of gray scale image specifically includes:
Because the image features at the pixel points with larger gradient values are obvious, and the difference between the images can be more obviously reflected, the gradient value of each pixel point in each frame of gray level image and the corresponding associated image can be calculated based on a sobel gradient operator, which is a technical means well known to the skilled person, and is not repeated herein, firstly, the pixel points with the preset second number of maximum gradient values are screened out from each frame of gray level image to be used as the first image key pixel points of the gray level image, and the sequence after the gray values of the first image key pixel points are sequenced in the sequence from large to small is used as the first image gray level sequence of the gray level image; then, a preset second number of pixel points with the maximum gradient value are screened out from the reference image corresponding to the gray level image, the gray level value of the second image key pixel point is sequenced from the big to the small as the second image key pixel point of the reference image, the preset second number is set to be 100 as the second image gray level sequence of the reference image, the specific value of the preset second number can be set by an operator according to the specific implementation scene, the method is not limited, and the sequencing mode can be performed from the small to the big, and the method is not limited; because the structural similarity in the existing SSIM algorithm is an index for evaluating the similarity of two images, the calculation process of the structural similarity between the two images can be adjusted to a certain extent based on the difference of gray values at the same position between the gray sequence of the first image and the gray sequence of the second image, so that the image difference between the gray image and each reference image is obtained; the smaller the image disparity is, the 
more similar the frame gray image and the picture information of the display screen presented by the corresponding reference image are, so that the reference image corresponding to the minimum value of the image disparity can be used as the associated image of the gray image. The expression of the image specificity can be specifically, for example:
Wherein,, Represents the/> Frame gray image and corresponding/> Image variability between the individual reference images; /(I) Represents the/> Frame gray image and corresponding/> correlation between the individual reference images; /(I) Represents the/> Average value of gray values of all pixel points in the frame gray image; /(I) Represents the/> Frame gray image/> average value of gray values of all pixel points in the reference images; /(I) Representing a first set; /(I) Represents the/> Frame gray image and corresponding/> Covariance of pixel gray values between the reference images; /(I) Represents the/> Standard deviation of gray values of all pixel points in the frame gray image; /(I) Represents the/> Frame gray image/> Standard deviation of gray values of all pixel points in the reference images; /(I) Represents the/> First image gray sequence of frame gray image/> Gray values; /(I) Represents the/> Frame gray image/> the/>, in the second image gray sequence of the reference images Gray values; /(I) Representing a preset second number; /(I) Representing the first set/> Average value of all elements in (a); /(I) Representing the first set/> Variance of all elements in (a); /(I) Representing an averaging function,/> Representing the difference function.
In the process of acquiring the image variability, the smaller the image variability, the smaller the difference between the display screen picture information presented by the gray image and by the reference image, i.e. the more similar the two images. In the correlation, the part remaining after removing the mean and the variance of the first set is the structural similarity formula between the frame gray image and the reference image; the value range of the structural similarity is [0,1], and the larger the structural similarity, i.e. the closer it is to 1, the more similar the gray image is to the reference image. The structural similarity evaluates the overall similarity between images, so the embodiment of the invention reflects the difference of key features between the gray image and the reference image through the mean and the variance of the first set, and uses them to adjust the structural similarity formula to a certain degree, so that the resulting correlation can evaluate the similarity between images in the scene of the invention more accurately. Since the range of the structural similarity is [0,1], after adding the mean and the variance of the first set, the value of the correlation may be greater than 1. When the mean and the variance of the first set are larger, the difference of key features between the gray image and the reference image is larger, the correlation exceeds 1 by more, and the more dissimilar the two images are. When the mean and the variance of the first set are smaller, the difference of key features is smaller; when they approach 0, the correlation approaches the structural similarity, in which case the further the correlation falls below 1, the larger the difference between the gray image and the reference image and the more dissimilar the two. Therefore, the closer the correlation is to 1, the more similar the gray image is to the reference image, and the absolute value of the difference between the correlation and 1 can be taken as the image variability: the smaller the image variability, the more similar the gray image is to the reference image.
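As a concrete illustration, the adjusted similarity above can be sketched in a few lines of NumPy. The function name, the finite-difference gradient used in place of the Sobel operator, the small stabilising constant, and the absolute-difference definition of the first set are assumptions of this sketch rather than details fixed by the text:

```python
import numpy as np

def image_variability(gray, ref, n=100):
    """Adjusted-SSIM image variability between a gray frame and one
    reference frame, as described in step S2 (a sketch, with assumed
    names and an assumed absolute-difference first set)."""
    gray = gray.astype(np.float64)
    ref = ref.astype(np.float64)

    # Finite-difference gradient magnitude, standing in for Sobel.
    def grad_mag(img):
        gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
        gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
        return gx + gy

    # Key pixels: the n pixels with the largest gradient magnitude,
    # their gray values sorted from large to small.
    def key_sequence(img):
        idx = np.argsort(grad_mag(img), axis=None)[-n:]
        return np.sort(img.ravel()[idx])[::-1]

    g, h = key_sequence(gray), key_sequence(ref)
    u = np.abs(g - h)                      # the "first set"

    mu1, mu2 = gray.mean(), ref.mean()
    s1, s2 = gray.std(), ref.std()
    cov = ((gray - mu1) * (ref - mu2)).mean()
    eps = 1e-6                             # guards an all-flat image
    ssim = (2 * mu1 * mu2 + eps) * (2 * cov + eps) / \
           ((mu1 ** 2 + mu2 ** 2 + eps) * (s1 ** 2 + s2 ** 2 + eps))
    correlation = ssim + u.mean() + u.var()
    return abs(correlation - 1.0)
```

For identical images the two key sequences coincide and the structural-similarity part equals 1, so the variability is 0, matching the rule that the reference image with minimum variability is the most similar one.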
After the associated image of each frame of gray level image is obtained, each frame of gray level image and the corresponding associated image can be combined for analysis in the follow-up, so that the error of the determined extreme point in each frame of gray level image due to the illumination change of the surface of the multimedia display screen is reduced, and the detection accuracy of the multimedia display screen of the bus is improved.
Step S3: partitioning each frame of gray level image and the corresponding associated image to obtain different sub-blocks; and randomly selecting one sub-block from each frame of gray level image as a sub-block to be detected, taking the sub-block at the same position as the sub-block to be detected as a reference sub-block in the associated image, and obtaining the consistency degree of the sub-block to be detected according to the difference of pixel point gray level value distribution between the sub-block to be detected and the reference sub-block and the distribution of pixel point gray level values in the sub-block to be detected.
In the embodiment of the invention, a plurality of extreme points subsequently need to be extracted from each frame of gray image based on the SIFT algorithm, and these extreme points are distributed at different positions in the gray image. Because the gray value of each pixel point can be influenced by illumination, the acquired extreme points may have larger errors, i.e. pixel points under the influence of illumination are wrongly detected as extreme points. Meanwhile, the display screen pictures presented by certain local areas in the gray image are basically not influenced by illumination change, i.e. under the influence of illumination, the image features of certain local areas at the same positions in the gray image and in the associated image do not change greatly, which means that pixel points basically unaffected by illumination change exist in these areas, and such pixel points can subsequently be used to reduce the acquisition error of the extreme points. Therefore, each frame of gray image and the corresponding associated image can first be respectively divided into a plurality of sub-blocks of the same size; then, based on the idea of analyzing the similarity between the gray image and the reference images in step S2, the difference of pixel point gray value distribution between sub-blocks at the same position of the gray image and the associated image, together with the distribution of pixel point gray values within the sub-block of the gray image itself, is analyzed to evaluate the degree to which the region where each sub-block is located is influenced by illumination change, recorded as the coherence degree of the sub-block. For convenience of description, in the embodiment of the present invention one sub-block is arbitrarily selected in each frame of gray image as a sub-block to be detected, and the sub-block at the same position in the associated image is taken as the reference sub-block. In the embodiment of the present invention the number of sub-blocks in each image is 400; the number of sub-blocks can also be set by an implementer according to a specific implementation scenario, which is not limited herein.
It should be noted that, when the gray image and the corresponding associated image of each frame are not enough to divide a complete sub-block due to the boundary problem, the boundary between the gray image and the associated image may be filled with pixels, where the filling of the boundary between the images is a technical means well known to those skilled in the art, and will not be described herein.
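A minimal sketch of the partitioning with boundary filling might look as follows. Splitting into a 20 x 20 grid and padding with edge replication are assumptions of this sketch, since the text fixes only the total of 400 sub-blocks and leaves the filling method open:

```python
import numpy as np

def split_into_blocks(img, n_blocks=400):
    """Pad an image on its right/bottom edges and split it into a
    square grid of equally sized sub-blocks (20 x 20 = 400 here).
    Edge padding is an assumed boundary-filling strategy."""
    side = int(round(n_blocks ** 0.5))            # blocks per axis
    h, w = img.shape
    bh = -(-h // side)                            # ceil division
    bw = -(-w // side)
    padded = np.pad(img, ((0, bh * side - h), (0, bw * side - w)),
                    mode="edge")
    blocks = [padded[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
              for r in range(side) for c in range(side)]
    return blocks
```

Applying the same split to a frame and to its associated image yields co-located pairs of sub-blocks for the comparison described above.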
Preferably, in one embodiment of the present invention, the method for acquiring the coherence degree of the sub-block to be detected in the gray image specifically includes:
Based on the idea of solving the image variability in step S2, the sub-blocks can be treated analogously to images. A preset third number of pixel points with the maximum gradient values are screened out of the sub-block to be detected of the gray image as the first sub-block key pixel points of the sub-block to be detected, and their gray values, sorted from large to small, form the first sub-block gray sequence of the sub-block to be detected. A preset third number of pixel points with the maximum gradient values are likewise screened out of the reference sub-block of the associated image as the second sub-block key pixel points of the reference sub-block, and their gray values, sorted from large to small, form the second sub-block gray sequence of the reference sub-block. In the embodiment of the invention the preset third number is set to 30; its specific value can be set by an implementer according to the specific implementation scenario, and the sorting may equally be performed from small to large, neither of which is limited herein. Further, according to the difference of gray values at the same positions of the first sub-block gray sequence and the second sub-block gray sequence, the calculation of the structural similarity between the sub-block to be detected and the reference sub-block is adjusted to a certain extent, so as to obtain the sub-block difference between the sub-blocks at the same position of the gray image and the associated image. The expression of the sub-block difference can specifically be, for example:
E(i,m) = |Y(i,m) − 1|

Y(i,m) = [2μ(i,m)μ′(i,m)·2cov(i,m)] / {[μ(i,m)² + μ′(i,m)²]·[σ(i,m)² + σ′(i,m)²]} + mean(V(i,m)) + var(V(i,m))

V(i,m) = { |p(i,m,k) − q(i,m,k)| | k = 1, 2, …, n′ }

Wherein E(i,m) represents the sub-block difference between the m-th sub-block of the i-th frame gray image and the corresponding sub-block of the associated image, where the m-th sub-block of the gray image and the m-th sub-block of the associated image are at the same position; μ(i,m) represents the average value of the gray values of all pixel points in the m-th sub-block of the i-th frame gray image; μ′(i,m) represents the average value of the gray values of all pixel points in the m-th sub-block of the associated image of the i-th frame gray image; V(i,m) represents the second set; cov(i,m) represents the covariance of pixel gray values between the m-th sub-block of the i-th frame gray image and the corresponding sub-block of the associated image; σ(i,m) represents the standard deviation of the gray values of all pixel points in the m-th sub-block of the i-th frame gray image; σ′(i,m) represents the standard deviation of the gray values of all pixel points in the m-th sub-block of the associated image; p(i,m,k) represents the k-th gray value in the first sub-block gray sequence of the m-th sub-block of the i-th frame gray image; q(i,m,k) represents the k-th gray value in the second sub-block gray sequence of the m-th sub-block of the associated image; n′ represents the preset third number; mean(V(i,m)) represents the average value of all elements in the second set; var(V(i,m)) represents the variance of all elements in the second set.
In the process of acquiring the sub-block difference, the smaller the sub-block difference, the more similar the display screen picture information presented by the sub-blocks at the same position of the frame gray image and the associated image, and the smaller the influence of illumination change on the region where the sub-block is located in the gray image. The analysis of the acquisition process of the sub-block difference can refer to the analysis of the image variability in step S2, and is not repeated here.
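By analogy, the sub-block difference can be sketched as a scaled-down version of the image variability computation. The function name, the finite-difference gradient standing in for the Sobel operator, the stabilising constant, and the absolute-difference definition of the second set are this sketch's assumptions:

```python
import numpy as np

def subblock_difference(block, ref_block, n=30):
    """Sub-block difference between two co-located sub-blocks, built
    the same way as the step-S2 image variability; n is the preset
    third number (30 key pixels per sub-block)."""
    a, b = block.astype(float), ref_block.astype(float)

    def key_seq(img):          # top-n gradient pixels, gray values sorted desc.
        gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
        gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
        idx = np.argsort(gx + gy, axis=None)[-n:]
        return np.sort(img.ravel()[idx])[::-1]

    v = np.abs(key_seq(a) - key_seq(b))          # the "second set"
    mu1, mu2, s1, s2 = a.mean(), b.mean(), a.std(), b.std()
    cov = ((a - mu1) * (b - mu2)).mean()
    eps = 1e-6
    ssim = (2 * mu1 * mu2 + eps) * (2 * cov + eps) / \
           ((mu1 ** 2 + mu2 ** 2 + eps) * (s1 ** 2 + s2 ** 2 + eps))
    return abs(ssim + v.mean() + v.var() - 1.0)
```

As with the whole-image case, identical sub-blocks give a difference of 0.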
After the sub-block difference between the sub-block to be detected and the reference sub-block is obtained, the distribution of pixel point gray values in the sub-block to be detected of the frame gray image and the gray values of the first sub-block key pixel points can further be combined to analyze the key information contained in the sub-block to be detected, so as to obtain the coherence degree of the sub-block to be detected in the gray image. The coherence degree reflects the degree to which the region where the sub-block to be detected is located is influenced by illumination change, and can also reflect the possibility that pixel points unaffected by illumination change exist in the sub-block, which facilitates the subsequent extraction of coherent points in the sub-block and reduces the acquisition error of the extreme points. Further, the acquisition mode of the coherence degree comprises:
Performing negative correlation mapping on the sub-block difference between the sub-block to be detected and the reference sub-block to obtain the first coherent parameter of the sub-block to be detected in the gray image; dividing the average gray value of all first sub-block key pixel points in the sub-block to be detected by the average gray value of all pixel points of the gray image to obtain the key feature parameter of the sub-block to be detected in the gray image; normalizing the information entropy of the gray values of the pixel points in the sub-block to be detected to obtain the gray chaotic parameter of the sub-block to be detected, wherein the information entropy can be obtained through the existing formula and is not described in detail herein; taking the product of the gray chaotic parameter and the key feature parameter as the second coherent parameter of the sub-block to be detected in the gray image; taking the preset weight as the weight of the first coherent parameter, and the result of negative correlation mapping of the preset weight as the weight of the second coherent parameter; and normalizing the weighted sum of the first coherent parameter and the second coherent parameter to obtain the coherence degree of the sub-block to be detected in the gray image. The expression of the coherence degree can specifically be, for example:
C(i,m) = norm( α·1/(E(i,m) + ε1) + (1 − α)·[H(i,m) / Σ(m′=1..M) H(i,m′)]·[ḡ(i,m)/ḡ(i)] )

Wherein C(i,m) represents the coherence degree of the m-th sub-block of the i-th frame gray image; E(i,m) represents the sub-block difference between the m-th sub-block of the i-th frame gray image and the corresponding sub-block of the associated image; ḡ(i,m) represents the average of all gray values in the first sub-block gray sequence of the m-th sub-block, i.e. the average value of the gray values of all first sub-block key pixel points in the m-th sub-block of the i-th frame gray image; ḡ(i) represents the average value of the gray values of all pixel points in the i-th frame gray image; H(i,m) represents the information entropy of the m-th sub-block of the i-th frame gray image; H(i,m′) represents the information entropy of the m′-th sub-block of the i-th frame gray image; M represents the number of sub-blocks in the frame gray image; α represents the preset weight, which is set to 0.6, and its specific value can be set by an implementer according to a specific implementation scenario and is not limited herein, but needs to be ensured to be greater than 0.5; norm(·) represents the normalization function; ε1 represents the first adjustment parameter, which prevents the denominator from being 0 and is set to 0.01, and its specific value can be set by an implementer according to a specific implementation scenario, which is not limited herein.
In the acquisition of the coherence degree of each sub-block in the gray image, the larger the coherence degree, the more similar the display screen picture information presented by the sub-blocks at the same position of the gray image and the associated image under the influence of illumination, and the smaller the influence of illumination change on the region where the sub-block is located in the gray image. The smaller the sub-block difference, the more similar the display screen picture information presented by the sub-blocks at the same position of the gray image and the associated image, so the larger the first coherent parameter and the larger the coherence degree. The larger the key feature parameter, the higher the overall proportion taken up by the gray values of the pixel points with obvious features in the sub-block, the more important the information in the sub-block, and the larger the coherence degree. The larger the gray chaotic parameter, the more important the information in the sub-block, the more likely the sub-block is to contain key features of the gray image, and the larger the coherence degree. With the preset weight, the first coherent parameter has the larger proportion in the calculation and the second coherent parameter the smaller one, and the coherence degree is normalized to the range [0,1], which is convenient for subsequent evaluation and analysis.
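The weighted combination just described can be sketched as follows for all sub-blocks of one frame at once. Normalising each entropy by the frame-wide entropy sum and using min-max normalisation for the final score are assumptions of this sketch:

```python
import numpy as np

def coherence_degree(subblock_diffs, key_means, image_mean, entropies,
                     alpha=0.6, eps=0.01):
    """Coherence degree of every sub-block of one frame, following the
    weighted-sum construction of step S3 (entropy and final-score
    normalisations are assumed forms)."""
    subblock_diffs = np.asarray(subblock_diffs, dtype=float)
    first = 1.0 / (subblock_diffs + eps)          # negative-correlation map
    key_feat = np.asarray(key_means, dtype=float) / image_mean
    chaos = np.asarray(entropies, dtype=float) / np.sum(entropies)
    second = chaos * key_feat                     # chaotic x key-feature
    score = alpha * first + (1.0 - alpha) * second
    lo, hi = score.min(), score.max()
    return (score - lo) / (hi - lo + 1e-12)       # min-max normalisation
```

The sub-block with the smallest difference dominates through the first coherent parameter, consistent with the weight being greater than 0.5.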
In one embodiment of the present invention, the normalization process may specifically be, for example, maximum and minimum normalization processes, and the normalization in the subsequent steps may be performed by using the maximum and minimum normalization processes, and in other embodiments of the present invention, other normalization methods may be selected according to a specific range of values, which will not be described herein.
Step S4: judging whether a coherent point exists in the sub-block to be detected based on the coherence degree, and if so, extracting the coherent point in the sub-block to be detected according to the gradient information of the pixel point in the sub-block to be detected; carrying out local detection on each frame of gray level image to obtain different initial extremum points in the gray level image, screening out target extremum points from all initial extremum points of each frame of gray level image, wherein a sub-block where the target extremum points are located has a coherent point, and screening out real extremum points from all target extremum points of each frame of gray level image according to the distance between the target extremum points and the coherent point in the sub-block and the coherence degree of the corresponding sub-block.
The larger the coherence degree, the less the picture information of the multimedia display screen presented by the region where the sub-block to be detected is located is affected by illumination change, and the more likely it is that pixel points unaffected by illumination change, namely coherent points, exist in the sub-block to be detected. Since the gradient value at a pixel point with a larger gradient is more likely to be caused by the image content itself than by illumination, whether coherent points exist in the sub-block to be detected can be judged based on the coherence degree, and if so, the coherent points in the corresponding sub-block are extracted according to the gradient values of the pixel points in the sub-block to be detected. The distance between an acquired extreme point and the coherent points can subsequently be used to analyze the credibility of the extreme point, so that the real extreme points in each frame of gray image can be accurately extracted.
Preferably, in one embodiment of the present invention, the method for judging whether coherent points exist in the sub-block to be detected and extracting the coherent points specifically includes:
If the coherence degree of the sub-block to be detected is greater than a preset coherence threshold, coherent points exist in the sub-block to be detected. For a pixel point with a larger gradient value, the gradient value is more likely to be caused by a change in the picture of the image and less likely to be caused by the influence of illumination, so the pixel point with the maximum gradient value in the sub-block to be detected can be used as the coherent point; otherwise, no coherent point exists in the sub-block to be detected. In the embodiment of the invention the preset coherence threshold is set to 0.8; its specific value can also be set by an implementer according to a specific implementation scenario, which is not limited herein.
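A minimal sketch of this judgment and extraction step, with a finite-difference gradient standing in for the Sobel operator (an assumption of this sketch):

```python
import numpy as np

def extract_coherent_point(subblock, coherence, threshold=0.8):
    """Return the (row, col) of the coherent point in a sub-block, or
    None when the coherence degree does not exceed the threshold."""
    if coherence <= threshold:
        return None                       # no coherent point in this sub-block
    img = subblock.astype(float)
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    grad = gx + gy
    # Coherent point: the pixel with the maximum gradient magnitude.
    return np.unravel_index(np.argmax(grad), grad.shape)
```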
Through the above steps, the coherent point in each sub-block of each frame of gray image can be obtained. Extreme points in the gray image can be initially extracted based on the SIFT algorithm to obtain the initial extreme points; the initial extreme points with larger errors can subsequently be removed, and the feature descriptors constructed by the SIFT algorithm can be used to accurately detect the multimedia display screen of the bus. The SIFT algorithm is a technical means well known to those skilled in the art, and is not repeated herein.
Because of the influence of illumination change, the initial extreme points extracted from each frame of gray image by the SIFT algorithm may have larger errors, and the initial extreme points with larger errors need to be removed. For a sub-block with a larger coherence degree, the region where the sub-block is located is influenced by illumination to a smaller degree; meanwhile, for a sub-block in which no coherent point exists, the influence of illumination on the sub-block is larger, the errors of the initial extreme points in that sub-block are larger, and those initial extreme points can be directly screened out. Therefore, the target extreme points, whose sub-blocks contain a coherent point, are screened out of all initial extreme points of each frame of gray image. The smaller the distance between a target extreme point and the coherent point in its sub-block, the smaller the degree to which the target extreme point is influenced by illumination, so the distance between each target extreme point and the coherent point in its sub-block, together with the coherence degree of the corresponding sub-block, can be analyzed to screen the real extreme points out of all target extreme points of each frame of gray image.
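The initial extreme points can be illustrated with a simplified single-scale local detection. SIFT itself searches a difference-of-Gaussians scale space, so reducing this to a strict 3 x 3 neighbourhood extremum test is an assumption made purely for illustration:

```python
import numpy as np

def initial_extreme_points(img):
    """Single-scale stand-in for SIFT's scale-space extrema: a pixel is
    an initial extreme point when it is the strict maximum or strict
    minimum of its 3 x 3 neighbourhood."""
    img = img.astype(float)
    h, w = img.shape
    points = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            centre = img[r, c]
            others = np.delete(patch.ravel(), 4)   # the 8 neighbours
            if centre > others.max() or centre < others.min():
                points.append((r, c))
    return points
```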
Preferably, in one embodiment of the present invention, the method for acquiring the true extremum point specifically includes:
The distance between a target extreme point and the coherent point in its sub-block is mapped with negative correlation to obtain the credible parameter of the target extreme point; the product of the credible parameter and the coherence degree of the sub-block where the target extreme point is located is normalized to obtain the credibility degree of the corresponding target extreme point. The larger the credibility degree, the smaller the error of the target extreme point, so in each frame of gray image the target extreme points whose credibility degree is greater than a preset credibility threshold can be used as the real extreme points. In the embodiment of the invention the preset credibility threshold is set to 0.5; its specific value can also be set by an implementer according to a specific implementation scenario, which is not limited herein. The expression of the credibility degree can specifically be, for example:
T(i,k) = norm( C(i,k) / (d(i,k) + ε2) )

Wherein T(i,k) represents the credibility degree of the k-th target extreme point in the i-th frame gray image; C(i,k) represents the coherence degree of the sub-block where the k-th target extreme point in the i-th frame gray image is located; d(i,k) represents the distance between the k-th target extreme point and the coherent point in its sub-block; norm(·) represents the normalization function; ε2 represents the second adjustment parameter, which prevents the denominator from being 0 and is set to 0.01, and its specific value can also be set by an implementer according to a specific implementation scenario, which is not limited herein.
In the process of acquiring the credibility degree of each target extreme point in the gray image, the larger the credibility degree, the smaller the degree to which the target extreme point is influenced by illumination change, and the smaller the error of the target extreme point. The larger the coherence degree of the sub-block where the target extreme point is located, the smaller the influence of illumination change on the sub-block, the smaller the error of the target extreme points in the sub-block, and the larger the credibility degree. Since the coherent point in the sub-block is substantially unaffected by illumination change, the smaller the distance between the target extreme point and the coherent point in the sub-block, the larger the credible parameter, the smaller the error of the target extreme point, and the larger the credibility degree. The credibility degree is normalized to the range [0,1], which is convenient for evaluating and analyzing the target extreme points and screening out the real extreme points.
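The screening of the real extreme points can be sketched as follows. The list-based interface and the max-value normalisation of the credibility degree are assumptions of this sketch:

```python
import numpy as np

def real_extreme_points(points, coherences, distances,
                        eps=0.01, threshold=0.5):
    """Screen the real extreme points out of one frame's target extreme
    points.  `points` are (row, col) tuples, `coherences` the coherence
    degree of each point's sub-block, `distances` each point's distance
    to the coherent point of its sub-block."""
    coherences = np.asarray(coherences, dtype=float)
    distances = np.asarray(distances, dtype=float)
    credible = 1.0 / (distances + eps)        # negative-correlation map
    score = coherences * credible             # coherence x credible parameter
    score = score / (score.max() + 1e-12)     # normalise to [0, 1]
    return [p for p, s in zip(points, score) if s > threshold]
```

A point far from its coherent point, or one sitting in a weakly coherent sub-block, falls below the threshold and is discarded.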
After the real extreme point in each frame of gray level image is obtained, the characteristic of each frame of gray level image can be extracted based on the real extreme point in the follow-up process, so that accurate detection of the bus multimedia display screen is realized.
Step S5: and detecting the multimedia display screen of the bus based on the real extreme point.
The real extreme point obtained through the process is the extreme point existing in the image and is not the extreme point formed by the influence of illumination, so that the bus multimedia display screen can be detected based on the real extreme point, and the accuracy of detecting the bus multimedia display screen is improved.
Preferably, in one embodiment of the present invention, the method for detecting a multimedia display screen of a bus specifically includes:
Different key points are screened out of the real extreme points of each frame of gray image based on the SIFT algorithm, and the feature descriptor of each key point is constructed. The feature descriptors of the key points in all gray images are input into a semantic segmentation network: a large number of gray images of different frames are used as the data set, the category information in the gray images is annotated, with the category of background pixel points marked as 0 and the pixel points of the multimedia display screen marked as 1, and the semantic segmentation network is supervised-trained with a cross entropy loss function. All gray images are then input into the trained semantic segmentation network, and the output result is used as the detection result of the multimedia display screen of the bus, thereby completing the online detection of the multimedia display screen of the bus.
In summary, the embodiment of the invention firstly obtains each frame of gray image of the surface of the multimedia display screen of the bus in the working state, and takes the other gray images of a preset first number of frames nearest in time sequence to each frame of gray image as the reference images of the corresponding gray image; according to the difference of pixel point gray value distribution between each frame of gray image and each corresponding reference image, the associated image of the corresponding gray image is screened out of all the corresponding reference images; each frame of gray image and the corresponding associated image are partitioned to obtain different sub-blocks; according to the difference of pixel point gray value distribution between the sub-blocks at the same position of the gray image and the corresponding associated image, and the distribution of pixel point gray values in the sub-block at the corresponding position in the gray image, the coherence degree of the corresponding sub-block in the gray image is obtained; whether a coherent point exists in the corresponding sub-block is judged based on the coherence degree, and if so, the coherent point in the corresponding sub-block is extracted; local detection is carried out on each frame of gray image to obtain different initial extreme points in the gray image; target extreme points are screened out of all initial extreme points of each frame of gray image, and according to the distance between each target extreme point and the coherent point in its sub-block and the coherence degree of the corresponding sub-block, the real extreme points are screened out of all target extreme points of each frame of gray image; and the multimedia display screen of the bus is detected based on the real extreme points.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.

Claims (10)

1. The method for detecting the multimedia intelligent display screen of the bus based on the image feature analysis is characterized by comprising the following steps of:
Acquiring each frame of gray level image in a preset time period on the surface of a multimedia display screen of the bus in a working state;
Taking other gray images of a preset first number of frames which are nearest to each frame of gray image as reference images of the corresponding gray images in time sequence; according to the difference of pixel point gray value distribution between each frame of gray image and each corresponding reference image, screening out the associated image of each frame of gray image from all the reference images;
Partitioning each frame of gray level image and the corresponding associated image to obtain different sub-blocks; randomly selecting one sub-block from each frame of gray level image as a sub-block to be detected, taking the sub-block at the same position as the sub-block to be detected in the associated image as a reference sub-block, and obtaining the coherence degree of the sub-block to be detected according to the difference of pixel point gray value distribution between the sub-block to be detected and the reference sub-block and the distribution of pixel point gray values in the sub-block to be detected;
Judging whether a coherent point exists in the sub-block to be detected based on the coherence degree, and if so, extracting the coherent point in the sub-block to be detected according to the gradient information of the pixel points in the sub-block to be detected; carrying out local detection on each frame of gray level image, obtaining different initial extreme points in the gray level image, and screening out target extreme points from all initial extreme points of each frame of gray level image, wherein the sub-blocks where the target extreme points are located have coherent points; according to the distance between each target extreme point and the coherent point in the sub-block and the coherence degree of the corresponding sub-block, screening out the real extreme points from all target extreme points of each frame of gray level image;
And detecting the multimedia display screen of the bus based on the real extreme point.
2. The method for detecting the multimedia intelligent display screen of the bus based on the image feature analysis according to claim 1, wherein the step of screening the associated image of each frame of gray level image from all the reference images according to the difference of the pixel point gray level value distribution between each frame of gray level image and each corresponding reference image comprises the following steps:
screening out a preset second number of pixel points with the maximum gradient value from each frame of gray level image to serve as first image key pixel points of the gray level image, and sequencing gray level values of the first image key pixel points according to the sequence from big to small to serve as a first image gray level sequence of the gray level image;
Screening out a preset second number of pixel points with the maximum gradient value from each reference image corresponding to the gray level image to serve as second image key pixel points of the reference image, and sequencing the gray level values of the second image key pixel points according to the sequence from large to small to serve as a second image gray level sequence of the reference image;
obtaining the image difference between the gray image and each reference image according to the difference of gray values at the same position between the gray sequence of the first image and the gray sequence of the second image, the distribution of pixel gray values in the gray image and the distribution of pixel gray values in the corresponding reference image;
And taking the reference image corresponding to the minimum value of the image anisotropy as the associated image of the gray level image.
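The gray-sequence construction in claims 2 and 4 (top-gradient pixels, gray values sorted in descending order) can be sketched as follows. This is an illustrative reading only: the function name and the finite-difference gradient estimate are assumptions, not part of the claims.

```python
import numpy as np

def gray_sequence(img, n):
    """Gray sequence of an image: gray values of the n largest-gradient
    pixels, sorted in descending order (claims 2 and 4)."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                      # gradient magnitude
    idx = np.argsort(grad.ravel())[-n:]          # n largest-gradient pixels
    vals = img.ravel()[idx].astype(float)
    return np.sort(vals)[::-1]                   # descending gray values
```

The same routine serves both the whole-image sequences of claim 2 (with the "preset second number") and the per-sub-block sequences of claim 4 (with the "preset third number").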
3. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 2, wherein the obtaining of the image difference between the gray image and each reference image according to the differences of gray values at the same positions between the first image gray sequence and the second image gray sequence, the distribution of pixel gray values in the gray image, and the distribution of pixel gray values in the corresponding reference image comprises:
The calculation formula of the image difference is as follows (the formula appears as an image in the original document and is not reproduced here); its terms are:
D_{i,j} denotes the image difference between the i-th frame gray image and its j-th reference image; r_{i,j} denotes the correlation between the i-th frame gray image and its j-th reference image; μ_i denotes the mean gray value of all pixel points in the i-th frame gray image; μ'_{i,j} denotes the mean gray value of all pixel points in the j-th reference image of the i-th frame gray image; Q denotes the first set; cov_{i,j} denotes the covariance of pixel gray values between the i-th frame gray image and its j-th reference image; σ_i denotes the standard deviation of gray values of all pixel points in the i-th frame gray image; σ'_{i,j} denotes the standard deviation of gray values of all pixel points in the j-th reference image; g_{i,k} denotes the k-th gray value in the first image gray sequence of the i-th frame gray image; g'_{i,j,k} denotes the k-th gray value in the second image gray sequence of the j-th reference image; n denotes the preset second number; mean(Q) and var(Q) denote the mean and variance of all elements in the first set Q.
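Since the formula of claim 3 survives only as a list of terms, the sketch below assembles those named quantities (image correlation, the "first set" of gray-sequence differences, and its mean and variance) into one plausible combination. The exact combination is an assumption; only the ingredients come from the claim.

```python
import numpy as np

def image_variability(img_a, img_b, seq_a, seq_b):
    """Hypothetical image difference: low correlation between the images
    plus large/scattered differences between their gray sequences give a
    high value. The combination is an assumed reading of claim 3."""
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    r = np.corrcoef(a, b)[0, 1]                  # correlation between the images
    q = np.asarray(seq_a, float) - np.asarray(seq_b, float)  # "first set"
    return (1.0 - r) * (abs(q.mean()) + q.var())
```

Per claim 2, the reference image minimizing this quantity would then be chosen as the associated image of the gray image.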
4. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 1, wherein the obtaining of the coherence degree of the sub-block to be detected according to the differences in pixel gray-value distribution between the sub-block to be detected and the reference sub-block and the distribution of pixel gray values in the sub-block to be detected comprises:
screening out a preset third number of pixel points with the largest gradient values from the sub-block to be detected of the gray image to serve as the first sub-block key pixel points of the sub-block to be detected, and sorting the gray values of the first sub-block key pixel points in descending order to obtain the first sub-block gray sequence of the sub-block to be detected;
screening out a preset third number of pixel points with the largest gradient values from the reference sub-block of the associated image to serve as the second sub-block key pixel points of the reference sub-block, and sorting the gray values of the second sub-block key pixel points in descending order to obtain the second sub-block gray sequence of the reference sub-block;
obtaining the sub-block difference between the sub-block to be detected and the reference sub-block according to the differences of gray values at the same positions between the first sub-block gray sequence and the second sub-block gray sequence, the distribution of pixel gray values of the sub-block to be detected in the gray image, and the distribution of pixel gray values of the reference sub-block in the associated image;
and obtaining the coherence degree of the sub-block to be detected in the gray image according to the sub-block difference between the sub-block to be detected and the reference sub-block, the distribution of pixel gray values in the sub-block to be detected in the gray image, and the gray values of the first sub-block key pixel points.
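The blocking step that pairs each sub-block of the gray image with the sub-block at the same position of its associated image can be sketched as below. The k-by-k grid size is an assumed parameter; the claims do not fix how the images are partitioned.

```python
import numpy as np

def split_blocks(img, k):
    """Partition an image into a k-by-k grid of equal sub-blocks so that
    sub-blocks at the same index of two images occupy the same position."""
    h, w = img.shape
    bh, bw = h // k, w // k
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(k) for c in range(k)]
```

Applying `split_blocks` to a gray image and to its associated image yields the paired (sub-block to be detected, reference sub-block) lists used by claim 4.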
5. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 4, wherein the obtaining of the sub-block difference between the sub-block to be detected and the reference sub-block according to the differences of gray values at the same positions between the first sub-block gray sequence and the second sub-block gray sequence, the distribution of pixel gray values of the sub-block to be detected in the gray image, and the distribution of pixel gray values of the reference sub-block in the associated image comprises:
The calculation formula of the sub-block difference is as follows (the formula appears as an image in the original document and is not reproduced here); its terms are:
D^b_{i,m} denotes the sub-block difference between the m-th sub-block of the i-th frame gray image and the corresponding sub-block of its associated image; μ_{i,m} denotes the mean gray value of all pixel points in the m-th sub-block of the i-th frame gray image; μ'_{i,m} denotes the mean gray value of all pixel points in the m-th sub-block of the associated image of the i-th frame gray image; P denotes the second set; cov_{i,m} denotes the covariance of pixel gray values between the m-th sub-block of the i-th frame gray image and the corresponding sub-block of the associated image; σ_{i,m} and σ'_{i,m} denote the standard deviations of gray values of all pixel points in those two sub-blocks, respectively; g_{i,m,k} denotes the k-th gray value in the first sub-block gray sequence of the m-th sub-block of the i-th frame gray image; g'_{i,m,k} denotes the k-th gray value in the second sub-block gray sequence of the m-th sub-block of the associated image; n' denotes the preset third number; mean(P) and var(P) denote the mean and variance of all elements in the second set P.
6. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 4, wherein the obtaining of the coherence degree of the sub-block to be detected in the gray image according to the sub-block difference between the sub-block to be detected and the reference sub-block, the distribution of pixel gray values in the sub-block to be detected in the gray image, and the gray values of the first sub-block key pixel points comprises:
performing negative-correlation mapping on the sub-block difference between the sub-block to be detected and the reference sub-block to obtain a first coherence parameter of the sub-block to be detected in the gray image;
dividing the mean gray value of all first sub-block key pixel points in the sub-block to be detected by the mean gray value of all pixel points of the gray image to obtain a key feature parameter of the sub-block to be detected;
normalizing the information entropy of the pixel gray values in the sub-block to be detected to obtain a gray disorder parameter of the sub-block to be detected;
taking the product of the gray disorder parameter and the key feature parameter as a second coherence parameter of the sub-block to be detected;
taking a preset weight as the weight of the first coherence parameter, and taking the negative-correlation mapping of the preset weight as the weight of the second coherence parameter;
and normalizing the weighted sum of the first coherence parameter and the second coherence parameter to obtain the coherence degree of the sub-block to be detected in the gray image.
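The steps of claim 6 can be sketched as below. The claim does not specify the negative-correlation mapping or the final normalization, so 1/(1+x) and s/(1+s) are assumed choices; the entropy is normalized by its 8-bit maximum of log2(256) = 8 bits.

```python
import numpy as np

def gray_entropy_norm(block):
    """Information entropy of 8-bit gray values, normalized by log2(256)."""
    hist = np.bincount(block.ravel(), minlength=256) / block.size
    p = hist[hist > 0]
    return float(-(p * np.log2(p)).sum() / 8.0)

def coherence_degree(sub_diff, key_vals, img_mean, block, w=0.5):
    """Coherence degree of a sub-block per claim 6 (mappings are assumed)."""
    p1 = 1.0 / (1.0 + sub_diff)                  # negative-correlation mapping
    key = float(np.mean(key_vals)) / img_mean    # key feature parameter
    p2 = gray_entropy_norm(block) * key          # second coherence parameter
    s = w * p1 + (1.0 - w) * p2                  # weights w and (1 - w)
    return s / (1.0 + s)                         # one choice of normalization
```

A small sub-block difference (high p1) together with bright key pixels and high gray disorder (high p2) drives the coherence degree toward 1.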
7. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 1, wherein the judging whether a coherent point exists in the sub-block to be detected based on the coherence degree, and if so, extracting the coherent point in the sub-block to be detected according to the gradient information of the pixel points in the sub-block to be detected comprises:
if the coherence degree of the sub-block to be detected is greater than a preset coherence threshold, a coherent point exists in the sub-block to be detected, and the pixel point with the largest gradient value in the sub-block to be detected is taken as the coherent point; otherwise, no coherent point exists in the sub-block to be detected.
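The threshold test and max-gradient selection of claim 7 can be sketched as follows; the threshold value 0.6 and the finite-difference gradient are illustrative assumptions (the claim only requires "a preset coherence threshold").

```python
import numpy as np

def coherent_point(block, coherence, threshold=0.6):
    """Return the (row, col) of the sub-block's coherent point, i.e. its
    max-gradient pixel, or None when the coherence degree does not exceed
    the preset threshold (claim 7)."""
    if coherence <= threshold:
        return None                              # no coherent point exists
    gy, gx = np.gradient(block.astype(float))
    grad = np.hypot(gx, gy)
    return np.unravel_index(int(np.argmax(grad)), block.shape)
```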
8. The method for detecting the multimedia intelligent display screen of the bus based on the image feature analysis according to claim 1, wherein the step of locally detecting each frame of gray level image to obtain different initial extreme points in the gray level image comprises the following steps:
and carrying out local detection on each frame of gray level image based on the SIFT algorithm to obtain different initial extreme points in the gray level image.
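The SIFT extremum test that yields the initial extreme points compares each point against its 26 neighbours in a 3x3x3 scale-space cube of the difference-of-Gaussians (DoG) stack. A minimal sketch of that test (the DoG construction itself is omitted) is:

```python
import numpy as np

def initial_extreme_points(dog):
    """SIFT-style scale-space extremum test: a point is an initial extreme
    point if it is strictly larger or strictly smaller than all 26
    neighbours in its 3x3x3 cube. dog has shape (scales, H, W)."""
    pts = []
    s, h, w = dog.shape
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = dog[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2].ravel()
                v = dog[k, i, j]
                others = np.delete(cube, 13)     # the 26 neighbours
                if v > others.max() or v < others.min():
                    pts.append((k, i, j))
    return pts
```

In practice an off-the-shelf SIFT implementation (e.g. OpenCV's) would be used; the loop above only illustrates the local-detection criterion.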
9. The method for detecting the multimedia intelligent display screen of the bus based on image feature analysis according to claim 1, wherein the screening of the real extreme points from all the target extreme points of each frame of gray image according to the distance between each target extreme point and the coherent point in its sub-block and the coherence degree of the corresponding sub-block comprises:
performing negative-correlation mapping on the distance between the target extreme point and the coherent point in its sub-block to obtain a credibility parameter of the target extreme point; normalizing the product of the credibility parameter and the coherence degree of the sub-block where the target extreme point is located to obtain the credibility of the target extreme point;
and in each frame of gray image, taking the target extreme points whose credibility is greater than a preset credibility threshold as real extreme points.
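The credibility screening of claim 9 can be sketched as below. The mapping 1/(1+d) and the threshold 0.5 are assumed choices; since both factors then lie in [0, 1], their product is already normalized.

```python
import math

def credibility(extreme_pt, coherent_pt, coherence):
    """Credibility of a target extreme point: negative-correlation mapping
    of its distance to the sub-block's coherent point, multiplied by the
    sub-block's coherence degree (claim 9; mappings are assumed)."""
    d = math.dist(extreme_pt, coherent_pt)       # Euclidean distance
    trusted = 1.0 / (1.0 + d)                    # credibility parameter
    return trusted * coherence                   # product is within [0, 1]

def real_extreme_points(candidates, threshold=0.5):
    """Keep target extreme points whose credibility exceeds the threshold.
    candidates: iterable of (extreme_pt, coherent_pt, coherence) tuples."""
    return [pt for pt, cp, c in candidates
            if credibility(pt, cp, c) > threshold]
```

A target extreme point coinciding with the coherent point of a fully coherent sub-block gets credibility 1; credibility decays as the distance grows or the sub-block's coherence degree drops.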
10. The method for detecting the multimedia intelligent display screen of the bus based on the image feature analysis according to claim 1, wherein the detecting the multimedia display screen of the bus based on the real extreme point comprises the following steps:
screening out different key points from the real extreme points of each frame of gray image based on the SIFT algorithm, and constructing a feature descriptor for each key point;
inputting the feature descriptors of the key points in all gray images into a semantic segmentation network for training to obtain a trained semantic segmentation network, inputting all gray images into the trained semantic segmentation network, and taking the output as the detection result of the bus multimedia display screen.
CN202410290071.3A 2024-03-14 2024-03-14 Bus multimedia intelligent display screen detection method based on image feature analysis Active CN117893534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410290071.3A CN117893534B (en) 2024-03-14 2024-03-14 Bus multimedia intelligent display screen detection method based on image feature analysis

Publications (2)

Publication Number Publication Date
CN117893534A true CN117893534A (en) 2024-04-16
CN117893534B CN117893534B (en) 2024-05-24

Family

ID=90644367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410290071.3A Active CN117893534B (en) 2024-03-14 2024-03-14 Bus multimedia intelligent display screen detection method based on image feature analysis

Country Status (1)

Country Link
CN (1) CN117893534B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014183544A (en) * 2013-03-21 2014-09-29 Fujitsu Ltd Image processing unit and image processing method
CN116614705A (en) * 2023-07-18 2023-08-18 华洋通信科技股份有限公司 Coal face camera regulation and control system based on multi-mode video feature analysis
CN116703898A (en) * 2023-08-03 2023-09-05 山东优奭趸泵业科技有限公司 Quality detection method for end face of precision mechanical bearing
CN116993726A (en) * 2023-09-26 2023-11-03 山东克莱蒙特新材料科技有限公司 Mineral casting detection method and system
CN117437223A (en) * 2023-12-20 2024-01-23 连兴旺电子(深圳)有限公司 Intelligent defect detection method for high-speed board-to-board connector
CN117615088A (en) * 2024-01-22 2024-02-27 沈阳市锦拓电子工程有限公司 Efficient video data storage method for safety monitoring

Also Published As

Publication number Publication date
CN117893534B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
US9524558B2 (en) Method, system and software module for foreground extraction
CN104318225B (en) Detection method of license plate and device
JP2002288658A (en) Object extracting device and method on the basis of matching of regional feature value of segmented image regions
CN104866616A (en) Method for searching monitor video target
CN103413149B (en) Method for detecting and identifying static target in complicated background
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN110472081B (en) Shoe picture cross-domain retrieval method based on metric learning
CN111260645B (en) Tampered image detection method and system based on block classification deep learning
CN112528939A (en) Quality evaluation method and device for face image
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111753642B (en) Method and device for determining key frame
CN114820625A (en) Automobile top block defect detection method
CN110766075A (en) Tire area image comparison method and device, computer equipment and storage medium
CN112258403A (en) Method for extracting suspected smoke area from dynamic smoke
CN115908590A (en) Data intelligent acquisition method and system based on artificial intelligence
CN113743378B (en) Fire monitoring method and device based on video
US9953238B2 (en) Image processing method and system for extracting distorted circular image elements
CN112287884B (en) Examination abnormal behavior detection method and device and computer readable storage medium
CN117893534B (en) Bus multimedia intelligent display screen detection method based on image feature analysis
CN114519689A (en) Image tampering detection method, device, equipment and computer readable storage medium
US20230386023A1 (en) Method for detecting medical images, electronic device, and storage medium
CN113435444B (en) Immunochromatography detection method, immunochromatography detection device, storage medium and computer equipment
CN115631488A (en) Jetson Nano-based fruit maturity nondestructive testing method and system
CN112949630A (en) Weak supervision target detection method based on frame classification screening
CN107886102B (en) Adaboost classifier training method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant