CN116823673B - High-speed elevator car passenger state visual perception method based on image processing - Google Patents


Info

Publication number
CN116823673B
CN116823673B (application CN202311068758.4A)
Authority
CN
China
Prior art keywords
video
image
index
edge
enhancement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311068758.4A
Other languages
Chinese (zh)
Other versions
CN116823673A (en)
Inventor
张福生
顾月江
徐津
葛阳
高鹏
于青松
张建
金晓伟
张波
季节
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
General Elevator Co ltd
Changshu Institute of Technology
Original Assignee
General Elevator Co ltd
Changshu Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by General Elevator Co Ltd and Changshu Institute of Technology
Priority: CN202311068758.4A
Publication of CN116823673A
Application granted
Publication of CN116823673B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20216 Image averaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a visual perception method for the state of passengers in a high-speed elevator car based on image processing. The method obtains a video enhanced image from each video gray image through guided filtering, and derives a noise evaluation index for each video enhanced image from an isolation and abnormality analysis of noise points. Fitted edges are obtained from the offset behaviour of the pixel points, and the similarity between the fitted edges and the detected edges yields an edge protection evaluation index for each video enhanced image. An enhancement effect index is then computed from the noise evaluation index and the edge protection evaluation index; the optimal regularization parameter is determined from the enhancement effect indices of all the video enhanced images, and the guided filtering corresponding to this optimal parameter produces the optimal video enhanced image from which the passenger state is judged. The invention improves the perception of the passenger state through image processing and avoids the adverse effects caused by abnormal passenger states.

Description

High-speed elevator car passenger state visual perception method based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a high-speed elevator car passenger state visual perception method based on image processing.
Background
Elevator public safety is one of the important factors in high-rise building design. With rapid economic development, society's demand for smart homes and smart living keeps increasing, and the demand for intelligent service facilities in public places is also expanding. As an important carrier of vertical transport in urban high-rise buildings, high-speed elevators with higher efficiency and higher speed have been developed, but safety remains the primary consideration in elevator operation. To give better early warning of the passenger state in a high-speed elevator, intelligent elevator monitoring has become a main concern.
In visually sensing the passenger state, elevator video monitoring mainly detects abnormal posture behaviours of passengers, such as running, beating the door, or fainting, which endanger the safety of the elevator and its passengers. However, a monitoring camera must capture images reliably under different lighting, weather and environmental conditions without failure or data loss over long-term operation, so camera design usually prioritizes stability and reliability over pixel count; this can result in poor video image quality and an inability to acquire the passenger state accurately. To obtain the passenger state more accurately, guided filtering is usually chosen for edge-preserving enhancement of the video image, but the filtering effect differs across lighting scenes, so an adaptive regularization parameter adjustment method is needed to tune the guided filtering and obtain a higher-quality enhanced video image. Existing adaptive methods, however, select the regularization parameter only from the overall edge detail. In an elevator car this lets the enhancement of a large number of irrelevant regions dominate and distorts the evaluation of the enhancement effect in the passenger region, so the regularization parameter is chosen poorly, the enhanced image is not accurate enough, the perception of the passenger state suffers, the detection of abnormal passenger states is unreliable, and the safety of the elevator and its passengers is adversely affected.
Disclosure of Invention
To solve the technical problems in the prior art that the regularization parameter is poorly selected, the resulting enhanced image is not accurate enough, the perception of the passenger state is affected, and the detection of abnormal passenger states is therefore unreliable, the invention provides a visual perception method for the state of passengers in a high-speed elevator car based on image processing, with the following technical scheme:
the invention provides a high-speed elevator car passenger state visual perception method based on image processing, which comprises the following steps:
acquiring a preset number of video gray images, and enhancing each video gray image by adopting guide filtering to obtain a corresponding video enhanced image;
obtaining a noise evaluation index of each video enhanced image according to the abnormality degree of each pixel point in each video enhanced image, the local difference and the overall difference of the offset condition of each pixel point;
fitting pixel points according to the offset condition of the pixel points in each video enhanced image and each video gray image to obtain fitted edges in the video enhanced images and the video gray images; obtaining an edge protection evaluation index of each video enhanced image according to the degree of similarity and the degree of difference between the edges and the fitted edges in each video enhanced image and the corresponding video gray image;
Obtaining an enhancement effect index of each video enhancement image according to the noise evaluation index and the edge protection evaluation index of each video enhancement image; determining optimal regularization parameters of the guide filtering according to enhancement effect indexes of all the video enhancement images, and obtaining the optimal video enhancement images by adopting the guide filtering corresponding to the optimal regularization parameters; and acquiring the passenger state according to the optimal video enhanced image.
Further, the method for obtaining the noise evaluation index comprises the following steps:
obtaining an isolation index of each pixel in a corresponding video enhanced image according to the local difference of the offset condition of each pixel in each video enhanced image and the overall difference of the offset condition of each pixel between each video enhanced image and an adjacent video enhanced image; obtaining an abnormality index of each pixel point in all video enhancement images according to the abnormality degree of the pixel value of each pixel point in all video enhancement images;
acquiring a motion vector of each pixel point between each video enhanced image and the next video enhanced image; normalizing the magnitude of a motion vector of each pixel point in each video enhanced image to obtain the vector weight of each pixel point in the corresponding video enhanced image;
obtaining the noise index of each pixel point in the corresponding video enhanced image according to the abnormality index and the isolation index of that pixel point, wherein the abnormality index and the isolation index are positively correlated with the noise index; and carrying out a weighted summation of the noise indexes of all pixel points in each video enhanced image using the vector weights to obtain the noise evaluation index of the corresponding video enhanced image.
Further, the method for obtaining the isolation index comprises the following steps:
taking a pixel point in any video enhanced image as a target point;
acquiring the angle difference and the size difference of a motion vector between a target point and each other pixel point in a preset first neighborhood range; calculating the product of each angle difference and the corresponding size difference, and taking the accumulated value of all products corresponding to the target point as a local difference index of the target point in the corresponding video enhancement image;
in the adjacent video enhancement images corresponding to the video enhancement images, taking the pixel points with the same positions as the target points as the relative pixel points of the target points; obtaining the average angle difference and the average size difference of motion vectors between the target point and all the relative pixel points, and taking the product of the average angle difference and the average size difference as an integral difference index of the target point in the corresponding video enhancement image;
calculating the product of the local difference index and the overall difference index of the target point to obtain the isolation index of the target point in the corresponding video enhanced image.
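As a non-authoritative illustration of the isolation index just described, the following NumPy sketch computes, for one target point, the local difference index (accumulated products of angle and size differences of the motion vectors against the neighbourhood) and the overall difference index (mean angle difference times mean size difference against adjacent frames), and multiplies them. The function name, the neighbourhood radius and the array layout are our own assumptions:

```python
import numpy as np

def isolation_index(mv, mv_adj, y, x, r=1):
    """Isolation index of pixel (y, x): local offset-difference index times
    the overall difference index against the same position in adjacent frames.
    mv: (H, W, 2) motion-vector field of this frame; mv_adj: list of the
    motion-vector fields of adjacent frames."""
    ang = lambda v: np.arctan2(v[1], v[0])   # direction of a motion vector
    mag = lambda v: np.hypot(v[0], v[1])     # size of a motion vector
    t = mv[y, x]
    h, w = mv.shape[:2]
    # local difference index: accumulate |angle diff| * |size diff| over
    # the neighbourhood within radius r
    local = 0.0
    for yy in range(max(0, y - r), min(h, y + r + 1)):
        for xx in range(max(0, x - r), min(w, x + r + 1)):
            if (yy, xx) == (y, x):
                continue
            n = mv[yy, xx]
            local += abs(ang(t) - ang(n)) * abs(mag(t) - mag(n))
    # overall difference index: average angle diff times average size diff
    # against the same-position pixel in each adjacent frame
    angs = [abs(ang(t) - ang(m[y, x])) for m in mv_adj]
    mags = [abs(mag(t) - mag(m[y, x])) for m in mv_adj]
    overall = float(np.mean(angs)) * float(np.mean(mags))
    return local * overall
```

A pixel whose offset matches both its neighbours and its own history across frames scores zero, matching the intuition that normal pixels are not isolated.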
Further, the method for obtaining the abnormality index includes:
in all the video enhancement images, detecting the abnormal value of the pixel point at the same position through an abnormal detection algorithm;
taking the pixel point with the corresponding abnormal value as a reference pixel point; calculating pixel value differences between the reference pixel point and each other pixel point at the same position, and taking the sum value of all pixel value differences corresponding to the reference pixel point as an abnormal difference value of the reference pixel point; multiplying the pixel value corresponding to the reference pixel point by the abnormal difference value to obtain an abnormal index of the reference pixel point in the corresponding video enhancement image;
and setting the abnormal indexes of all the pixel points without abnormal values as preset abnormal indexes.
Further, the method for acquiring the fitting edge comprises the following steps:
acquiring a motion vector of each pixel point between each video gray level image and the next video gray level image; taking all video enhancement images and all video gray-scale images as images to be processed;
in each image to be processed, carrying out pixel point clustering according to the motion vector of the pixel point and carrying out iterative merging to obtain a cluster of each image to be processed; and performing edge fitting on all the cluster clusters to obtain fitting edges of each image to be processed.
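The patent does not spell out the pixel clustering and iterative-merging procedure; as a hedged stand-in, the sketch below clusters pixels whose motion vectors are non-negligible into 4-connected components and uses each cluster's boundary pixels as its "fitting edge". The magnitude threshold and all names are our own choices:

```python
import numpy as np
from collections import deque

def motion_clusters(mv, min_mag=0.5):
    """Label moving pixels (|motion vector| >= min_mag) as 4-connected
    components; a stand-in for the cluster-and-merge step in the text."""
    moving = np.hypot(mv[..., 0], mv[..., 1]) >= min_mag
    labels = np.zeros(moving.shape, dtype=int)
    h, w = moving.shape
    nxt = 0
    for sy in range(h):
        for sx in range(w):
            if moving[sy, sx] and labels[sy, sx] == 0:
                nxt += 1                      # start a new cluster
                q = deque([(sy, sx)])
                labels[sy, sx] = nxt
                while q:                      # flood fill the component
                    y, x = q.popleft()
                    for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= yy < h and 0 <= xx < w \
                                and moving[yy, xx] and labels[yy, xx] == 0:
                            labels[yy, xx] = nxt
                            q.append((yy, xx))
    return labels

def cluster_edge(labels, k):
    """Boundary pixels of cluster k, used here as its 'fitting edge'."""
    inside = labels == k
    h, w = labels.shape
    edge = []
    for y in range(h):
        for x in range(w):
            if inside[y, x]:
                nb = ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                if any(not (0 <= yy < h and 0 <= xx < w) or not inside[yy, xx]
                       for yy, xx in nb):
                    edge.append((y, x))
    return edge
```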
Further, the method for obtaining the edge protection evaluation index comprises the following steps:
acquiring edges in each image to be processed;
in each image to be processed, obtaining the similarity between each edge and each fitting edge, and obtaining the most similar fitting edge corresponding to each edge according to the similarity; adding the similarity between all edges and the corresponding most similar fitting edges to obtain an edge similarity index of each image to be processed;
in each video enhanced image and the corresponding video gray level image, according to the difference condition between the edge quantity and the edge length of all edges, obtaining an edge difference index of each video enhanced image; obtaining a fitting edge similarity index of each video enhanced image according to the similarity conditions among all fitting edges;
determining the edge protection evaluation index of each video enhanced image according to the edge similarity index, the fitted edge similarity index and the edge difference index of each video enhanced image and the corresponding video gray image; the edge similarity index and the fitted edge similarity index are positively correlated with the edge protection evaluation index, and the edge difference index is negatively correlated with it.
Further, the method for obtaining the edge difference index comprises the following steps:
acquiring the center point of each edge in the video enhanced image and in the corresponding video gray image; forming opposite edge pairs from the edge in the video enhanced image and the edge in the corresponding video gray image whose center points are closest to each other;
acquiring the difference of the number of pixel points between two edges in each opposite edge pair; acquiring the edge quantity difference between each video enhancement image and the corresponding video gray level image; according to the edge quantity difference and the pixel point quantity difference of all opposite edges, obtaining an edge difference index of each video enhanced image, wherein the edge quantity difference and the pixel point quantity difference are in positive correlation with the edge difference index.
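A minimal sketch of the edge difference index described above, pairing edges greedily by nearest center points and combining the edge-count difference with the per-pair pixel-count differences (both positively correlated with the index). The simple additive combination and all names are illustrative assumptions:

```python
import numpy as np

def edge_difference_index(edges_enh, edges_gray):
    """Edge difference index between an enhanced image and its gray image.
    Each edge is a list of (y, x) pixels; edges are paired greedily by
    nearest edge center points, then count differences are combined."""
    def center(e):
        return np.asarray(e, dtype=float).mean(axis=0)
    count_diff = abs(len(edges_enh) - len(edges_gray))   # edge-number difference
    pair_diff = 0.0
    used = set()
    for e in edges_enh:
        c = center(e)
        best, best_d = None, np.inf
        for j, g in enumerate(edges_gray):               # nearest unpaired edge
            if j in used:
                continue
            d = np.linalg.norm(center(g) - c)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pair_diff += abs(len(e) - len(edges_gray[best]))
    # both differences are positively correlated with the index
    return count_diff + pair_diff
```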
Further, the method for obtaining the fitted edge similarity index comprises the following steps:
acquiring a fitting center point of each fitting edge in the video enhancement image and the corresponding video gray image; forming a pair of opposite fitting edges by the fitting edge closest to the fitting center point in the video enhanced image and the fitting edge closest to the fitting center point in the corresponding video gray level image;
the similarity between two fitting edges in each pair of opposite fitting edges is taken as the fitting degree of each pair of opposite fitting edges; and adding all fitting degrees corresponding to each video enhanced image to obtain a fitting edge similarity index of each video enhanced image.
Further, the method for obtaining the enhancement effect index comprises the following steps:
taking the product of the noise evaluation index after the inverse proportion normalization of each video enhanced image and the preset noise effect weight as the noise effect index of each video enhanced image; taking the product of the edge protection evaluation index normalized by each video enhanced image and a preset edge protection effect weight as an edge protection effect index of each video enhanced image;
and taking the sum value of the noise effect index and the edge protection effect index of each video enhanced image as the enhancement effect index of each video enhanced image.
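The combination above can be sketched as follows; the min-max normalization, the inverse-proportional normalization taken as one minus the normalized value, and the equal preset weights are illustrative assumptions:

```python
import numpy as np

def enhancement_effect_index(noise_idx, edge_idx, w_noise=0.5, w_edge=0.5):
    """Combine per-frame noise and edge-protection evaluation indices.
    Noise indices are inverse-proportionally normalized (less noise is
    better); edge indices are normalized directly; weights are presets."""
    noise_idx = np.asarray(noise_idx, dtype=float)
    edge_idx = np.asarray(edge_idx, dtype=float)
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    noise_effect = w_noise * (1.0 - norm(noise_idx))   # inverse normalization
    edge_effect = w_edge * norm(edge_idx)
    return noise_effect + edge_effect                  # enhancement effect index
```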
Further, the method for obtaining the optimal regularization parameters comprises the following steps:
when the enhancement effect indexes of all the video enhancement images are larger than or equal to a preset enhancement threshold, taking regularization parameters in the corresponding guide filtering as optimal regularization parameters;
when the enhancement effect index of the video enhancement image is smaller than a preset enhancement threshold, acquiring the average noise effect index and the average edge protection effect index of all the video enhancement images; when the average noise effect index is smaller than the average edge protection effect index, increasing regularization parameters in the corresponding guide filtering by a preset adjustment amount to obtain new guide filtering; when the average noise effect index is larger than the average edge protection effect index, reducing regularization parameters in the corresponding guide filtering by a preset adjustment amount to obtain new guide filtering;
Re-acquiring enhancement effect indexes of all video enhancement images according to the new guide filtering and determining optimal regularization parameters; when the guiding filtering corresponding to all regularization parameters does not meet the requirement that the enhancement effect index of all video enhancement images is larger than or equal to a preset enhancement threshold value, obtaining enhancement effect indexes and values of all video enhancement images corresponding to each regularization parameter, and taking the regularization parameter corresponding to the maximum sum value as the optimal regularization parameter.
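The adjustment loop described above can be sketched as follows, with the evaluation of a batch of enhanced frames abstracted behind a callback; the threshold, step size and cycle handling are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def find_optimal_eps(evaluate, eps0=0.5, thresh=0.8, delta=0.1, max_iter=20):
    """Adaptive search for the regularization parameter eps.  `evaluate(eps)`
    must return (per-frame effect indices, mean noise-effect, mean
    edge-protection effect); the adjustment rule mirrors the text."""
    tried = {}
    eps = eps0
    for _ in range(max_iter):
        effects, noise_eff, edge_eff = evaluate(eps)
        tried[eps] = float(np.sum(effects))
        if np.all(np.asarray(effects) >= thresh):
            return eps                              # every frame passes
        if noise_eff < edge_eff:
            eps = round(eps + delta, 6)             # denoising too weak: raise
        else:
            eps = round(max(eps - delta, 1e-6), 6)  # edge protection too weak
        if eps in tried:                            # parameter cycles: stop
            break
    # no eps satisfied every frame: keep the one with the largest summed index
    return max(tried, key=tried.get)
```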
The invention has the following beneficial effects:
1. The method obtains a video enhanced image from each video gray image through guided filtering, and comprehensively judges the enhancement effect of each video enhanced image from the two key factors that influence it: the denoising effect and the edge protection effect. A noise evaluation index of each video enhanced image is obtained from an isolation and abnormality analysis of noise points; fitted edges are obtained from an analysis of the pixel-point offsets, and an edge protection evaluation index of each video enhanced image is obtained from the similarity between the fitted edges and the detected edges. The enhancement effect index of each video enhanced image is obtained from the noise evaluation index and the edge protection evaluation index, and reflects how suitable the chosen regularization parameter of the filtering is. The optimal regularization parameter is determined from the enhancement effect indices of all the video enhanced images, and the guided filtering corresponding to it yields the optimal video enhanced image, in which the denoising and edge protection of the passenger region are best and the image quality is highest. Because the enhancement effect is evaluated more comprehensively, the current optimal regularization parameter can be found, the optimal video enhanced image is of higher quality and more accurate, the passenger state can be acquired more accurately, the perception of the passenger state in the high-speed elevator car is improved, the reliability of passenger state detection is ensured, and the adverse effects of abnormal passenger states are largely avoided.
2. In the denoising effect analysis, the isolation characteristic of noise points is exploited: the isolation of each pixel point is assessed from the local and overall degree of difference of its offset in the video enhanced images, giving the isolation index of each pixel point in the corresponding video enhanced image. The abnormality characteristic of noise-point pixel values is also exploited: the abnormality index of each pixel point is obtained from the degree of abnormality of its pixel value across all the video enhanced images, and the noise evaluation index of each video enhanced image is obtained from the isolation and abnormality indexes. By analysing the noise characteristics of offset pixel points, the denoising effect is evaluated on the image regions where motion may occur, which makes the noise evaluation index more reliable and the subsequent judgment of the enhancement effect more accurate.
3. In the edge protection effect analysis, the edges of the pixel regions undergoing motion are fitted from the pixel-point offsets and used as fitting edges. The motion regions reflect the parts that are likely to be passenger regions, so analysing the edge protection of the fitting edges gives priority to the edge preservation of the passenger parts and helps improve the subsequent perception of the passenger state. Fitting edges are obtained in both the video enhanced image and the video gray image, several difference measures between the fitted edges and between the detected edges are analysed, and the edge protection of the passenger region is synthesized into the edge protection evaluation index of each video enhanced image. The subsequent evaluation of the enhancement effect is thus more conducive to identifying the passenger state, and the resulting optimal video enhanced image reflects the passenger region more accurately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for visually sensing the status of a passenger in a high-speed elevator car based on image processing according to an embodiment of the present invention.
Detailed Description
To further explain the technical means adopted by the invention and their effects, the following describes in detail the specific implementation, structure, features and effects of the proposed image-processing-based visual perception method for the state of passengers in a high-speed elevator car, with reference to the accompanying drawing and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the high-speed elevator car passenger state visual perception method based on image processing.
Referring to fig. 1, a flowchart of a method for visually sensing the passenger status of a high-speed elevator car based on image processing according to an embodiment of the present invention is shown, the method includes the following steps:
s1: and acquiring a preset number of video gray images, and enhancing each video gray image by adopting guide filtering to obtain a corresponding video enhanced image.
In the embodiment of the invention, the monitoring equipment in the high-speed elevator is set to acquire 30 frames of video images per second. Since the video images acquired within one second are close to each other, the first 10 video images are selected for analysis, i.e. the preset number is 10, and the video gray images are obtained by graying the video images. Image graying is a technique known to those skilled in the art; the specific graying method, such as the weighted average method, is not limited here.
The invention adopts the guided filtering algorithm to perform edge-preserving filtering on the video gray images, so that the passenger state can finally be analysed from the images more accurately. The image enhancement effect is analysed on the first 10 video gray images to adjust the guided filtering and obtain the optimal image enhancement parameter; this parameter is then used to denoise and enhance all 30 video frames acquired in that second, adaptively obtaining the optimal denoising result for each second.
The guided filtering algorithm is as follows:

$$q_i = a_k I_i + b_k, \quad \forall\, i \in \omega_k$$

where $\omega_k$ is a square window of a given size centered at pixel $k$, $i$ is a pixel point in window $\omega_k$, $q_i$ is the output value of pixel $i$ after guided filtering, $I_i$ is the gray value of pixel $i$ in the guide image, and $a_k$ and $b_k$ are respectively the linear coefficients of the guided linear model when the window $\omega_k$ is centered at pixel $k$. It should be noted that the formula of the guided filtering algorithm is well known to those skilled in the art, so the meaning of the specific formula is not repeated.
The corresponding linear coefficients $a_k$ and $b_k$ are:

$$a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i\, p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k$$

where $\omega_k$ is the square window centered at pixel $k$ (the window size of the guided filtering is set according to the actual situation), $i$ is a pixel point in window $\omega_k$, $|\omega|$ is the total number of pixels in the window, $\mu_k$ and $\sigma_k^2$ are the gray mean and variance of the guide image $I$ in window $\omega_k$, $\bar{p}_k$ is the gray mean of the pixel points of the input image $p$ in window $\omega_k$, $I_i$ and $p_i$ are the gray values of pixel $i$ in the guide image and in the input image $p$ respectively, and $\epsilon$ denotes the regularization parameter. It should be noted that the linear-coefficient formulas of guided filtering are likewise well known to those skilled in the art, so their specific meaning is not described here.
The regularization parameter influences the filtering effect of the guided filtering by adjusting the linear coefficients: when its value is larger, the guided filter behaves more like an averaging filter and the edge-preserving effect is worse; when its value is smaller, the overall denoising effect of the guided filtering on the image is not obvious. The method therefore analyses the denoising effect obtained with an initial regularization parameter and then adjusts the parameter continuously to achieve a better denoising effect, so that images in similar states can be filtered with the same regularization parameter. In the embodiment of the invention the initial regularization parameter is set to 0.5; implementers can adjust this value themselves.
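For illustration, here is a minimal self-guided NumPy implementation of the guided filter defined by the formulas above (box-window means computed with cumulative sums; the default radius is an assumption of ours, while the initial regularization parameter 0.5 follows the embodiment):

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via cumulative sums (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=0.5):
    """q_i = a_k * I_i + b_k with a_k, b_k as in the linear-coefficient
    formulas above (I: guide image, p: input image, eps: regularization)."""
    mu = box_mean(I, r)                    # window mean of the guide image
    p_bar = box_mean(p, r)                 # window mean of the input image
    var = box_mean(I * I, r) - mu ** 2     # window variance of the guide image
    a = (box_mean(I * p, r) - mu * p_bar) / (var + eps)   # coefficient a_k
    b = p_bar - a * mu                                    # coefficient b_k
    # average a, b over all windows covering each pixel, then form the output
    return box_mean(a, r) * I + box_mean(b, r)
```

As the text notes, a large eps drives the output toward the plain averaging filter, while a small eps preserves edges but denoises less.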
And carrying out filtering enhancement on the 10 video gray images according to the guiding filtering corresponding to the initial regularization parameters to obtain video enhancement images corresponding to each video gray image, further comprehensively analyzing the denoising effect and the edge protection effect in each video enhancement image to obtain enhancement effects, and adjusting the regularization parameters.
S2: and obtaining a noise evaluation index of each video enhanced image according to the abnormality degree of each pixel point in each video enhanced image, the local difference and the overall difference of the offset condition of each pixel point.
First, the noise conditions in the video enhanced images are analysed to obtain the denoising effect of each video enhanced image. Because the video images are continuous, the difference of each pixel point across consecutive video enhanced images can be calculated, and the likelihood that a pixel is a noise point is obtained from these pixel differences.
When a passenger appears in an elevator, the state judgment of the passenger is usually human body posture judgment and the like, and the abnormal state of the passenger is often caused by abnormal behaviors of the passenger, so that pixels in a passenger area in a video image of the elevator can change with time to generate certain deviation.
The judgment first relies on the isolation characteristic of noise points. The local difference of the offset condition of each pixel point in each video enhanced image reflects how similar the offset of that pixel is to that of its surrounding neighbours: the more likely a pixel point is a noise point, the more its offset differs from that of the surrounding pixels. The overall difference of the offset condition of each pixel point between each video enhanced image and the adjacent video enhanced images is analysed further: for consecutive video enhanced images, noise points are highly random, so the probability that noise appears at the same pixel position twice is extremely small. The offset difference between each pixel point and the same position in the adjacent video enhanced images therefore indicates the likelihood that the pixel point is noise: when the offsets are similar across adjacent images, the pixel point is probably a normal pixel, and when the difference is large, it is a noise point in the corresponding video enhanced image. The isolation index of each pixel point in the corresponding video enhanced image is obtained from this local and overall analysis.
In the embodiment of the invention, the motion vector of each pixel point between each video enhanced image and the next video enhanced image is obtained by the three-step search algorithm in block matching; the motion vector reflects the offset direction and the offset size of each pixel point in each video enhanced image. The isolation condition of a pixel point is reflected by the difference of its offset condition. To facilitate the subsequent isolation analysis of each pixel point in each video enhanced image, one pixel point in any video enhanced image is taken as a target point.
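As a rough sketch of the three-step search mentioned here (not the patent's exact implementation), the block at a given position in one frame can be matched against the next frame, halving the search step each round; the function name, block size, and SAD cost are illustrative choices:

```python
import numpy as np

def three_step_search(prev, curr, bx, by, block=8, step=4):
    """Three-step search: find the motion vector of the block at (by, bx)
    in `prev` that best matches `curr`, halving the step each round."""
    h, w = prev.shape
    ref = prev[by:by+block, bx:bx+block].astype(float)
    cy, cx = by, bx                      # current best block position
    while step >= 1:
        best = None
        # examine the centre and its 8 neighbours at the current step size
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cand = curr[y:y+block, x:x+block].astype(float)
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best is None or sad < best[0]:
                        best = (sad, y, x)
        _, cy, cx = best
        step //= 2
    return cx - bx, cy - by              # motion vector (dx, dy)
```

The returned vector gives the offset direction and size that the later isolation analysis operates on.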
The angle difference and the size difference of the motion vectors between the target point and each other pixel point in the preset first neighborhood range are calculated as follows: the absolute value of the angle difference between the motion vectors of the target point and each other pixel point is taken as the angle difference, and the absolute value of the magnitude difference between the motion vectors is taken as the size difference. In other embodiments of the invention, the angle difference and the size difference may also be expressed in the form of a ratio, which is not limited herein.
Calculating the product of each angle difference and the corresponding size difference, wherein the product reflects the deviation difference condition of the target point and each other pixel point, taking the accumulated value of all products corresponding to the target point as a local difference index of the target point in the corresponding video enhancement image, reflecting the difference degree of the deviation condition between the target point and the local pixel point through the local difference index, and when the deviation condition of the target point and the local adjacent pixel point is more consistent, indicating that the target point is more likely to be a normal point and the possibility of being a noise point is less.
Because the changes of pixel points in consecutively adjacent video enhanced images are similar, in the adjacent video enhanced images of the corresponding video enhanced image, the pixel point at the same position as the target point is taken as the relative pixel point of the target point. The local difference index can be adjusted according to the degree of difference between the offset conditions of the target point and the relative pixel points: although the local difference of a target point at a special position may be large, possibly because a different enhancement degree enlarges its local difference, when the offset difference between the target point and the relative pixel points is small, the target point is a normal point.
Since each video enhancement image typically has two adjacent video enhancement images, the relative pixels of the target point are also two, only at the first and last video enhancement images, the relative pixels of the target point are one. And calculating the angle difference and the size difference of the motion vector between the target point and each relative pixel point, and when the number of the relative pixel points is multiple, averaging the angle difference and the size difference to obtain the average angle difference and the average size difference of the motion vector between the target point and all the relative pixel points.
In the embodiment of the invention, the angle difference is the absolute value of the angle difference of the motion vector between the target point and each relative pixel point, and the magnitude difference is the absolute value of the magnitude difference of the motion vector between the target point and each relative pixel point. Taking the product of the average angle difference and the average size difference as an integral difference index of the target point in the corresponding video enhanced images, reflecting the integral similarity degree of the target point between the video enhanced images through the integral difference, and indicating that the more similar the target point is, the more likely the target point is a normal point.
The product of the local difference index and the overall difference index of the target point is calculated to obtain the isolation index of the target point in the corresponding video enhanced image. The isolation index reflects the degree of difference of the offset condition of the target point in the corresponding video enhanced image, from which the isolation degree of the target point can be judged. In the embodiment of the invention, for the accuracy of subsequent calculation, the specific expression of the isolation index is as follows:
$$G_i^t=\left(\overline{\theta}_i^t\cdot\overline{d}_i^t\right)\cdot\sum_{j=1}^{n}\theta_{i,j}^t\cdot d_{i,j}^t$$

where $G_i^t$ is the isolation index of the $i$-th pixel point in the video enhanced image $t$; $\overline{\theta}_i^t$ is the average angle difference between the $i$-th pixel point and its relative pixel points; $\overline{d}_i^t$ is the average size difference between the $i$-th pixel point and its relative pixel points; $d_{i,j}^t$ is the size difference between the $i$-th pixel point and the $j$-th other pixel point in the preset first neighborhood range; $\theta_{i,j}^t$ is the angle difference between the $i$-th pixel point and the $j$-th other pixel point in the preset first neighborhood range; and $n$ is the total number of other pixel points in the preset first neighborhood range.
Here $\sum_{j=1}^{n}\theta_{i,j}^t\cdot d_{i,j}^t$ is the local difference index of the $i$-th pixel point, and $\overline{\theta}_i^t\cdot\overline{d}_i^t$ is the overall difference index of the $i$-th pixel point. The larger the local difference index and the overall difference index are, the larger the offset differences between the pixel point and its local pixel points and between the pixel point and its relative pixel points are, the stronger the isolation of the pixel point is, and the more likely the pixel point is a noise point.
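A minimal sketch of this isolation index, assuming each motion vector is already available as an (angle, magnitude) pair; the function and argument names are hypothetical:

```python
import numpy as np

def isolation_index(mv, neighbor_mvs, relative_mvs):
    """mv: (angle, magnitude) of the target point's motion vector.
    neighbor_mvs: motion vectors of the other pixel points in the preset
    first neighborhood range; relative_mvs: motion vectors of the relative
    pixel points at the same position in adjacent enhanced frames."""
    a, m = mv
    # local difference index: sum over neighbors of |angle diff| * |size diff|
    local = sum(abs(a - an) * abs(m - mn) for an, mn in neighbor_mvs)
    # overall difference index: mean |angle diff| * mean |size diff|
    # taken over the relative pixel points
    ang = np.mean([abs(a - an) for an, _ in relative_mvs])
    mag = np.mean([abs(m - mn) for _, mn in relative_mvs])
    overall = ang * mag
    # isolation index = local difference index * overall difference index
    return local * overall
```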
Further, according to the abnormal property of the noise points in the pixel values, the abnormal degree of each pixel point in the continuous video enhanced image is analyzed, the possibility that the pixel points are noise points in each video enhanced image is judged according to the abnormal condition of the pixel values, and an abnormal index is obtained.
The pixel values of the pixel points at the same position across all consecutive video enhanced images are analyzed. In the embodiment of the invention, the LOF (Local Outlier Factor) algorithm is selected as the anomaly detection algorithm: across all video enhanced images, the pixel values at each position are examined by LOF anomaly detection, and the pixel points whose values are detected as abnormal are taken as reference pixel points. It should be noted that LOF anomaly detection is a technical means well known to those skilled in the art and is not described herein.
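For illustration only, a simplified one-dimensional LOF over the pixel values of one position across all enhanced frames might look as follows; a production system would more likely use a library implementation, and `lof_scores` and its tie-breaking details are assumptions, not the patent's code:

```python
import numpy as np

def lof_scores(values, k=2):
    """Simplified 1-D Local Outlier Factor: scores well above 1 mark
    pixel values that are outliers among the frames at one position."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    dist = np.abs(x[:, None] - x[None, :])       # pairwise distances
    np.fill_diagonal(dist, np.inf)               # a point is not its own neighbour
    order = np.argsort(dist, axis=1)[:, :k]      # k nearest neighbours
    kdist = dist[np.arange(n), order[:, -1]]     # k-distance of each point
    lrd = np.empty(n)                            # local reachability density
    for i in range(n):
        reach = np.maximum(kdist[order[i]], dist[i, order[i]])
        lrd[i] = 1.0 / (reach.mean() + 1e-12)
    # LOF: average density of the neighbours relative to the point's own
    return np.array([lrd[order[i]].mean() / lrd[i] for i in range(n)])
```

Positions whose score is clearly above 1 would be flagged as reference pixel points.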
Calculating the pixel value difference between the reference pixel point and each other pixel point at the same position, taking the sum value of all pixel value differences corresponding to the reference pixel point as an abnormal difference value of the reference pixel point, wherein the abnormal difference value reflects the abnormal degree of the reference pixel point, and when the difference between the reference pixel point and the other pixel points is larger, the serious abnormal condition is indicated, and the noise is more obvious.
The pixel value of a reference pixel point is multiplied by its abnormal difference value to obtain the abnormal index of the reference pixel point in the corresponding video enhanced image. The abnormal index reflects the abnormality degree of the reference pixel point through the magnitude of the abnormal value and of the abnormal difference value: the greater the abnormality degree, the more likely the point is a noise point and the more obvious the noise. In the embodiment of the invention, for the accuracy of subsequent calculation, the specific expression of the abnormal index is as follows:
$$Y_a=I_a\cdot\sum_{b=1}^{m}\Delta I_{a,b}$$

where $Y_a$ is the abnormal index of the reference pixel point $a$; $I_a$ is the pixel value of the reference pixel point $a$; $m$ is the total number of other pixel points at the same position as the reference pixel point; and $\Delta I_{a,b}$ is the pixel value difference between the reference pixel point $a$ and the $b$-th pixel point at the same position.
Here $\sum_{b=1}^{m}\Delta I_{a,b}$ is the abnormal difference value of the reference pixel point $a$. When the abnormal difference value is larger and the pixel value of the reference pixel point is larger, the abnormal index is larger, indicating that the reference pixel point is more likely to be a noise point in the corresponding video enhanced image and that the noise degree is more obvious. In the embodiment of the invention, for pixel points at which no abnormal value is detected, the abnormal index is set to a preset abnormal index of 0.1; the implementer can adjust this value according to the specific implementation conditions.
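The abnormal index and the preset default can be sketched directly from the definitions above (names are illustrative):

```python
import numpy as np

def abnormal_index(values, outlier_mask, default=0.1):
    """values: pixel values of one position across all enhanced frames;
    outlier_mask: True where anomaly detection flagged the value.
    Returns one abnormal index per frame for this position."""
    v = np.asarray(values, dtype=float)
    out = np.full(len(v), default)              # preset abnormal index 0.1
    for i in np.flatnonzero(outlier_mask):
        # pixel value times the summed differences to co-located values
        out[i] = v[i] * np.abs(v[i] - v).sum()
    return out
```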
Combining the characteristics of abnormal pixel values and positional isolation of noise points, the noise evaluation index of each video enhanced image is obtained from the abnormal indexes and isolation indexes of all pixel points in the image. The invention mainly aims at identifying the abnormal state of passengers, i.e. identifying passenger posture, so the unchanged background area of the elevator car does not need to be analyzed. The denoising effect of the image is analyzed only for the pixel points of areas possibly formed by passengers, which improves the accuracy of the regularization parameter and makes the acquisition of passenger posture in the image more accurate.
Since the elevator car is a constant background area, the motion vector size of its pixel points is zero and they do not participate in the noise evaluation during enhancement denoising, while the pixel points of passengers in a changing state show a certain change across consecutive video images. Therefore, the motion vector size of each pixel point in each video enhanced image is normalized to obtain the vector weight of the pixel point in the corresponding video enhanced image: the larger the change, the more the noise condition of that pixel point is taken into account.
The noise index of each pixel point is obtained from its abnormal index and isolation index in the corresponding video enhanced image; the abnormal index and the isolation index are both in positive correlation with the noise index, and the noise index reflects the noise possibility of each pixel point. The noise indexes of all pixel points in each video enhanced image are weighted and summed by the vector weights to obtain the noise evaluation index of the corresponding video enhanced image, which reflects the noise possibility of the pixel points that may belong to the passenger area. In the embodiment of the invention, the expression of the noise evaluation index is as follows:
$$P^t=\sum_{i=1}^{N}\frac{v_i^t}{v_{\max}^t+\epsilon}\cdot Y_i^t\cdot G_i^t$$

where $P^t$ is the noise evaluation index of the video enhanced image $t$; $G_i^t$ is the isolation index of the $i$-th pixel point in the video enhanced image $t$; $Y_i^t$ is the abnormal index of the $i$-th pixel point; $v_i^t$ is the motion vector size of the $i$-th pixel point in the video enhanced image $t$; $v_{\max}^t$ is the maximum motion vector size among all pixel points in the video enhanced image $t$; $N$ is the total number of pixel points in the video enhanced image; and $\epsilon$ is an adjustment coefficient, set to 0.001 in the embodiment of the invention to prevent the denominator from making the formula meaningless.
Here $v_i^t/(v_{\max}^t+\epsilon)$ is the vector weight of the $i$-th pixel point in the video enhanced image $t$. In the embodiment of the invention, the motion vector size is normalized by maximum-value normalization, and the product form reflects that the abnormal index and the isolation index both have positive correlation with the noise index. In other embodiments of the invention, the motion vector size may be normalized by other normalization methods, such as standard normalization, and other basic mathematical operations, such as addition, may be used to reflect the positive correlation of the abnormal index and the isolation index with the noise index; the normalization method and the positive correlation characterization method are not limited herein.
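The weighted sum just described can be sketched in a few lines; the function name is illustrative and the inputs are the per-pixel indexes computed earlier:

```python
import numpy as np

def noise_evaluation(abnormal, isolation, mv_size, eps=1e-3):
    """Weighted sum of per-pixel noise indexes; the weight of each pixel
    is its motion-vector size normalized by the frame maximum."""
    abnormal = np.asarray(abnormal, float)
    isolation = np.asarray(isolation, float)
    mv = np.asarray(mv_size, float)
    weights = mv / (mv.max() + eps)   # static background gets weight 0
    noise = abnormal * isolation      # product form: both positively correlated
    return float((weights * noise).sum())
```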
So far, the analysis of the noise condition in each video enhanced image is completed, and the noise evaluation index of each video enhanced image is obtained.
S3: fitting pixel points according to the offset condition of the pixel points in each video enhanced image and each video gray image to obtain fitting edges in the video enhanced images and the video gray images; and obtaining the edge protection evaluation index of each video enhanced image according to the similarity degree between the edges and the fitting edges in each video enhanced image and the corresponding video gray level image and the difference degree between the edges and the fitting edges.
On the other hand, the edge protection effect of each video enhanced image is analyzed, combining the edge protection effect of the whole video enhanced image with that of the possible passenger areas. First, according to S2, the areas in each video enhanced image where pixel point offsets occur, i.e. the pixel point areas with motion vectors, are obtained; objects in motion exist in these areas, which are very likely the passenger areas.
Thus, a motion vector between each video gray-scale image and the next video gray-scale image is first acquired for each pixel point, and all video enhanced images and all video gray-scale images are taken as images to be processed, so that each video enhanced image or each video gray-scale image is analyzed.
In each image to be processed, pixel points are clustered according to their motion vectors, and the clusters are iteratively merged to obtain the cluster set of each image to be processed. In the embodiment of the invention, the DBSCAN clustering algorithm is adopted, with a radius of 3 and a minimum density of 3, which the implementer can adjust. Clustering the pixel points that have motion vectors according to their positions yields initial clusters; when the distance between the center points of two initial clusters is smaller than 8, the two initial clusters are considered close enough to be merged. The initial clusters are thus iteratively merged until no further merging is possible, and the remaining clusters are taken as the clusters of each image to be processed.
In other embodiments of the present invention, other clustering methods such as mean shift clustering may be used to obtain the initial cluster, and the shortest distance between the initial clusters is used to combine, so that the finally obtained cluster can represent a complete area where pixel point changes exist, and the method for obtaining the cluster is not limited.
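Assuming the initial clusters have already been produced (e.g. by DBSCAN), the iterative center-distance merging described above might be sketched like this; `merge_clusters` is a hypothetical name:

```python
import numpy as np

def merge_clusters(clusters, min_center_dist=8):
    """clusters: list of (n_i, 2) arrays of pixel coordinates (initial
    clusters). Repeatedly merge any pair of clusters whose centre points
    are closer than `min_center_dist`, until none remain."""
    clusters = [np.asarray(c, float) for c in clusters]
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        centers = [c.mean(axis=0) for c in clusters]
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if np.linalg.norm(centers[i] - centers[j]) < min_center_dist:
                    clusters[i] = np.vstack([clusters[i], clusters[j]])
                    del clusters[j]          # merge j into i, restart scan
                    merged = True
                    break
            if merged:
                break
    return clusters
```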
And performing edge fitting on all the cluster clusters to obtain a fitting edge of each image to be processed, wherein the fitting edge is the edge of the passenger area. It should be noted that, in the embodiment of the present invention, both the clustering algorithm and the edge fitting method are technical means well known to those skilled in the art, and are not described herein.
After the fitting edge in each image to be processed is obtained, the similarity condition of the edge and the fitting edge in each video enhancement image and the corresponding video gray level image is analyzed, and the edge retention condition of each video enhancement image can be obtained. And obtaining the edge protection evaluation index of each video enhancement image according to the similarity degree between the edges and the fitting edges and the difference degree between the edges and the fitting edges.
And the edge detection is used for acquiring the edge in each image to be processed, the similarity degree of the edge and the fitting edge is analyzed in each image to be processed, the passenger area represented by the fitting edge can be evaluated, and when the edge is more similar to the fitting edge, the represented passenger area is more true, and the edge protection evaluation calculation in the corresponding video enhancement image is more accurate. It should be noted that, the edge detection is a technical means well known to those skilled in the art, and canny edge detection may be selected, which is not described herein.
In each image to be processed, the similarity between each edge and each fitting edge is obtained by a shape similarity algorithm; according to these similarities, the most similar fitting edge for each edge is obtained by a matching algorithm, and the similarity between each edge and its most similar fitting edge is used to judge the accuracy of each fitting edge. It should be noted that shape similarity algorithms and matching algorithms are technical means well known to those skilled in the art and may be computed by methods such as contour matching or point matching, which are not limited herein.
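One way to realize the "most similar fitting edge" matching, assuming a precomputed similarity matrix (in practice something like OpenCV contour matching could supply it), is a simple greedy assignment; this sketch and its names are assumptions:

```python
def match_most_similar(similarity):
    """similarity[i][j]: shape similarity between detected edge i and
    fitting edge j (higher = more similar). Greedily assign each edge to
    its most similar still-unassigned fitting edge."""
    pairs = sorted(
        ((s, i, j) for i, row in enumerate(similarity) for j, s in enumerate(row)),
        reverse=True)
    used_e, used_f, out = set(), set(), {}
    for s, i, j in pairs:
        if i not in used_e and j not in used_f:
            out[i] = (j, s)          # edge i matched to fitting edge j
            used_e.add(i); used_f.add(j)
    return out
```

Summing the matched similarities then gives the edge similarity index of the image.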
The similarities between all edges and their corresponding most similar fitting edges are added to obtain the edge similarity index of each image to be processed; the edge similarity index reflects how truly the fitting edges represent the fitted areas in each image to be processed.
Further, in each video enhancement image and the corresponding video gray level image, the edge retention condition of the filtered image is analyzed, only the edge difference condition is analyzed first, and the edge difference index of each video enhancement image is obtained according to the difference condition between the edge number and the edge length of all edges.
According to the position of each edge in each video enhanced image, the nearest edge in the corresponding video gray image is determined, and the two nearest edges are taken as an opposite edge pair. In the embodiment of the invention, the center point of each edge in the video enhanced image and in the corresponding video gray image is acquired, and the edge in the video enhanced image and the edge in the corresponding video gray image whose center points are closest form an opposite edge pair; each opposite edge pair thus contains two edges, one in the video enhanced image and one in the corresponding video gray image.
The difference in the number of edge pixel points between the two edges of each opposite edge pair is acquired, and the difference in the number of edges between each video enhanced image and the corresponding video gray image is acquired.
The edge difference index of each video enhanced image is obtained from the edge number difference and the pixel number differences of all opposite edge pairs, with both differences in positive correlation with the edge difference index. The retention degree of edges in the video enhanced image is reflected by the difference in the total number of edges and the differences between edge lengths. In the embodiment of the invention, the expression of the edge difference index is as follows:
$$C^t=\left(\Delta E^t+1\right)\cdot\sum_{k=1}^{K}\Delta n_k^t$$

where $C^t$ is the edge difference index of the video enhanced image $t$; $\Delta E^t$ is the difference between the number of edges in the video enhanced image $t$ and in the corresponding video gray image; $\Delta n_k^t$ is the difference in the number of pixel points between the two edges of the $k$-th opposite edge pair; and $K$ is the total number of opposite edge pairs in the video enhanced image.
In other embodiments of the present invention, other basic mathematical operations may be used to reflect that the edge number difference and the pixel number difference are both in positive correlation with the edge difference index, for example, addition, exponentiation, etc., without limitation.
The value of one is added to prevent the edge difference index from being zero when the edge number difference is zero, which would ignore the differences between edge lengths. The larger the differences in edge number and edge length between the video enhanced image and the corresponding video gray image, the worse the edge protection effect of the enhancement and the larger the edge difference index.
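This index reduces to a one-liner; the sketch below (hypothetical names) makes the role of the "+1" explicit:

```python
def edge_difference_index(n_edges_enh, n_edges_gray, pixel_count_diffs):
    """(edge-number difference + 1) times the summed pixel-count
    differences of the opposite edge pairs; the +1 keeps the length
    term alive when the edge counts happen to match."""
    return (abs(n_edges_enh - n_edges_gray) + 1) * sum(pixel_count_diffs)
```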
Further, the influence condition of filtering on the pixel points of the passenger area is reflected through the similarity degree of the fitting edges between the video enhanced image and the corresponding video gray level image, and when the fitting edges are similar, the filtering effect is better, and the analysis on the passenger area is not influenced. And in each video enhanced image and the corresponding video gray level image, obtaining a fitting edge similarity index of each video enhanced image according to the similarity condition among all fitting edges.
Preferably, a fitting center point of each fitting edge in the video enhancement image and the corresponding video gray image is obtained; and forming a pair of opposite fitting edges by the fitting edge closest to the fitting center point in the video enhanced image and the fitting edge closest to the fitting center point in the corresponding video gray image, and completing the matching of the position similarity edges between the fitting edges, wherein one fitting edge is in the video enhanced image, and the other fitting edge is in the video gray image.
And obtaining the similarity between two fitting edges in each pair of opposite fitting edges through a shape similarity algorithm, wherein the similarity is used as the fitting degree of each pair of opposite fitting edges, and the filtered change degree of the passenger area at the corresponding position is reflected through the similarity of the fitting edges at the corresponding position. And adding all fitting degrees corresponding to each video enhanced image to obtain a fitting edge similarity index of each video enhanced image, and reflecting the overall change degree of the passenger area in the video enhanced image according to the fitting edge similarity index.
Finally, the edge protection evaluation index of each video enhanced image is determined from the edge similarity indexes of the video enhanced image and the corresponding video gray image, the fitting edge similarity index, and the edge difference index. The accuracy of the acquired passenger areas, the degree of change of the passenger areas after filtering, and the retention degree of the filtered edges are analyzed, and the edge protection evaluation index of each video enhanced image is determined from these three aspects.
In the embodiment of the invention, for the accuracy of subsequent calculation, the specific expression of the edge protection evaluation index is as follows:
$$B^t=\frac{R^t\cdot R'^t\cdot F^t}{C^t+\epsilon}$$

where $B^t$ is the edge protection evaluation index of the video enhanced image $t$; $R^t$ is the edge similarity index of the video enhanced image $t$; $R'^t$ is the edge similarity index of the corresponding video gray image; $F^t$ is the fitting edge similarity index of the video enhanced image $t$; $C^t$ is the edge difference index of the video enhanced image $t$; and $\epsilon$ is an adjustment coefficient that prevents a zero denominator from making the formula meaningless.
The product form reflects that the edge similarity index and the fitting edge similarity index are in positive correlation with the edge protection evaluation index, and the ratio form reflects that the edge difference index and the edge protection evaluation index are in negative correlation. In other embodiments of the present invention, other basic mathematical operations may be used to reflect that the edge similarity index and the fitted edge similarity index both have positive correlation with the edge protection evaluation index, such as addition, exponentiation, etc., and that the edge difference index has negative correlation with the edge protection evaluation index, such as subtraction, etc., without limitation.
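The product-over-ratio structure described here translates directly into code; the function below is an illustrative sketch:

```python
def edge_protection_index(edge_sim_enh, edge_sim_gray, fit_sim,
                          edge_diff, eps=1e-3):
    """Product of the similarity indexes over (edge difference + eps):
    similarity raises the score, edge differences lower it."""
    return edge_sim_enh * edge_sim_gray * fit_sim / (edge_diff + eps)
```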
And finally, analyzing the edge retention condition in each video enhanced image to obtain the edge protection evaluation index of each video enhanced image.
S4: obtaining an enhancement effect index of each video enhanced image according to the noise evaluation index and the edge evaluation index of each video enhanced image; determining optimal regularization parameters of the guide filtering according to enhancement effect indexes of all the video enhancement images, and obtaining the optimal video enhancement images by adopting the guide filtering corresponding to the optimal regularization parameters; and acquiring the passenger state according to the optimal video enhanced image.
And (3) comprehensively analyzing the denoising effect and the edge protection effect in each video enhanced image through the S2 and the S3, further, obtaining the enhanced effect index of each video enhanced image according to the noise evaluation index and the edge protection evaluation index of each video enhanced image, and judging whether the enhanced degree of each video enhanced image meets the standard or not through the enhanced effect index.
And taking the product of the noise evaluation index after the inverse proportion normalization of each video enhanced image and the preset noise effect weight as the noise effect index of each video enhanced image, and reflecting the denoising degree of each video enhanced image through the noise effect index. And taking the product of the edge protection evaluation index normalized by each video enhanced image and the preset edge protection effect weight as an edge protection effect index of each video enhanced image, and reflecting the edge reservation degree of each video enhanced image through the edge protection effect index.
Taking the sum of the noise effect index and the edge protection effect index of each video enhanced image as the enhancement effect index of each video enhanced image, and reflecting the enhancement degree of each video enhanced image through the enhancement effect index, wherein in the embodiment of the invention, the expression of the enhancement effect index is as follows:
$$Q^t=\alpha\cdot\mathrm{Norm}\left(B^t\right)+\beta\cdot\left(1-\mathrm{Norm}\left(P^t\right)\right)$$

where $Q^t$ is the enhancement effect index of the video enhanced image $t$; $B^t$ is the edge protection evaluation index of the video enhanced image $t$; $P^t$ is the noise evaluation index of the video enhanced image $t$; $\alpha$ is the preset edge protection effect weight; $\beta$ is the preset noise effect weight; and $\mathrm{Norm}$ is the normalization function. It should be noted that normalization is a technical means well known to those skilled in the art; linear normalization or standard normalization may be selected as the normalization function, and the specific normalization method is not limited herein.
Here $\alpha\cdot\mathrm{Norm}(B^t)$ is the edge protection effect index of the video enhanced image $t$, and $\beta\cdot(1-\mathrm{Norm}(P^t))$ is the noise effect index of the video enhanced image $t$. The larger the edge protection effect index and the noise effect index, the better the edges are preserved in the video enhanced image and the less noise it contains, so the better the enhancement effect of the video enhanced image.
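A sketch of this combination over all enhanced frames, using min-max normalization and equal weights (the patent leaves the preset weights unspecified, so $\alpha=\beta=0.5$ here is purely an assumption):

```python
import numpy as np

def enhancement_effect(edge_scores, noise_scores, w_edge=0.5, w_noise=0.5):
    """Min-max normalize both evaluation indexes across all enhanced
    frames, invert the noise score, and weight-sum per frame."""
    e = np.asarray(edge_scores, float)
    p = np.asarray(noise_scores, float)
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    # inverse-proportion normalization of the noise evaluation index
    return w_edge * norm(e) + w_noise * (1.0 - norm(p))
```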
In one embodiment of the present invention, when the enhancement effect index of all the video enhancement images is greater than or equal to the preset enhancement threshold, it is indicated that the enhancement effect of all the video enhancement images is better, the corresponding regularization parameters are selected more appropriately, and the regularization parameters in the corresponding guided filtering are used as the optimal regularization parameters. In the embodiment of the invention, the preset enhancement threshold is 0.7, and an implementer can adjust the enhancement threshold according to implementation conditions.
When the enhancement effect index of the video enhancement image is smaller than the preset enhancement threshold, the enhancement effect of the video enhancement image is poor, regularization parameters need to be adjusted, and the average noise effect index and the average edge protection effect index of all the video enhancement images are obtained.
The regularization parameter is adjusted by comparing the denoising and edge protection effects. When the average noise effect index is smaller than the average edge protection effect index, the denoising effect is relatively poor, and the regularization parameter in the corresponding guided filtering is increased by a preset adjustment amount to obtain a new guided filtering. When the average noise effect index is larger than the average edge protection effect index, the edge protection effect is relatively poor, and the regularization parameter is decreased by a preset adjustment amount to obtain a new guided filtering. The enhancement effect indexes of all video enhanced images are re-acquired with the new guided filtering, the optimal regularization parameter is determined, and the guided filtering is adjusted continuously in search of the situation in which the enhancement effect of all video enhanced images is optimal.

When no regularization parameter yields a guided filtering for which the enhancement effect indexes of all video enhanced images are greater than or equal to the preset enhancement threshold, i.e. no regularization parameter brings the enhancement effect of every video enhanced image up to standard, the situation with the best overall enhancement effect is sought: the sum of the enhancement effect indexes of all video enhanced images is obtained for each regularization parameter, and the regularization parameter corresponding to the maximum sum is taken as the optimal regularization parameter.
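The adjustment-and-fallback loop can be sketched as below; `evaluate` is a hypothetical stub standing in for "run guided filtering with regularization parameter `eps` and score every enhanced frame", and all names and defaults are illustrative:

```python
def tune_eps(evaluate, eps0=0.1, step=0.02, threshold=0.7, max_iter=50):
    """evaluate(eps) -> (avg_noise_effect, avg_edge_effect, effect_indexes).
    Raise eps when denoising lags, lower it when edge protection lags;
    fall back to the eps with the best total enhancement effect seen."""
    best_eps, best_sum, eps = eps0, float("-inf"), eps0
    for _ in range(max_iter):
        avg_noise, avg_edge, effects = evaluate(eps)
        if min(effects) >= threshold:
            return eps                      # every frame meets the bar
        if sum(effects) > best_sum:         # remember best overall effect
            best_sum, best_eps = sum(effects), eps
        # weak denoising -> smooth more (larger eps), else protect edges
        eps = eps + step if avg_noise < avg_edge else max(eps - step, 1e-4)
    return best_eps
```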
In one embodiment of the invention, the number of video images acquired per second is set in step S1. The guided filtering corresponding to the optimal regularization parameter is obtained by analyzing the preset number of video images within one second; all video images in the corresponding second are then enhanced with it, yielding the optimal video enhanced images.
In the embodiment of the invention, the passenger state can be acquired from the optimal video enhanced images through the OpenPose neural network: the optimal video enhanced images are input into the network, whose output indicates whether an abnormal passenger state exists. For network training, the optimal video enhanced images are labeled, with normal behavior labeled 0 and abnormal behavior labeled 1. It should be noted that OpenPose is an existing neural network for human body posture recognition, and the training process of the network is a technical means well known to those skilled in the art and is not repeated herein.
In summary, the invention obtains a video enhancement image from each video gray-scale image through guided filtering and judges the enhancement effect of each video enhancement image comprehensively from its denoising effect and edge protection effect. In the denoising analysis, the isolated character of noise points is considered: an isolation index for each pixel point is obtained from the local and overall differences in the offset of each pixel point in the video enhancement image. Considering that noise points also have abnormal pixel values, an abnormality index for each pixel point is obtained from the degree of abnormality of its pixel value across all video enhancement images, and the noise evaluation index of each video enhancement image is obtained from the isolation and abnormality indices. In the edge protection analysis, the edges of regions undergoing motion are fitted from the pixel offsets to give fitting edges, and the edge protection evaluation index of each video enhancement image is obtained by comprehensively analyzing several differences among the fitting edges, and between the edges, of the video enhancement image and the video gray-scale image.
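The isolation index described above can be sketched from per-pixel motion vectors (e.g. dense optical flow). The neighborhood accumulation of angle-difference times magnitude-difference, multiplied by the disparity against the same pixel in the adjacent frame, follows the description; the exact neighborhood size and the use of `np.roll` are illustrative assumptions.

```python
import numpy as np

def isolation_index(flow, flow_next, radius=1):
    """Per-pixel isolation: local motion-vector disparity within a
    neighborhood, times the disparity at the same position in the
    adjacent frame's flow field. flow arrays have shape (H, W, 2)."""
    mag = np.linalg.norm(flow, axis=2)
    ang = np.arctan2(flow[..., 1], flow[..., 0])
    local = np.zeros(mag.shape)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            sm = np.roll(mag, (dy, dx), axis=(0, 1))
            sa = np.roll(ang, (dy, dx), axis=(0, 1))
            # accumulate angle difference x magnitude difference over the neighborhood
            local += np.abs(ang - sa) * np.abs(mag - sm)
    mag2 = np.linalg.norm(flow_next, axis=2)
    ang2 = np.arctan2(flow_next[..., 1], flow_next[..., 0])
    # inter-frame disparity at the same pixel position
    overall = np.abs(ang - ang2) * np.abs(mag - mag2)
    return local * overall  # isolated (noise-like) pixels score high
```

A pixel whose motion disagrees both with its neighbors and with the adjacent frame scores high, matching the intuition that noise points are isolated.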
The enhancement effect index of each video enhancement image is then obtained from its noise evaluation index and edge protection evaluation index; this index reflects how well the chosen regularization parameter of the filter fits the data. The optimal regularization parameter is determined from the enhancement effect indices of all video enhancement images, and the optimal video enhancement image is obtained with the guided filtering corresponding to that parameter. Region edges in the image are thus better preserved, the passenger state can be acquired more accurately, the perception of passenger state in the high-speed elevator car is improved, and adverse consequences of abnormal passenger states are avoided to a great extent.
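Combining the two evaluation indices into an enhancement effect index (claim 2's weighted sum) can be sketched as below. The patent does not fix the normalization functions, so the exponential inverse-proportional form for the noise index and the max-normalization for the edge index are assumptions.

```python
import numpy as np

def enhancement_indices(noise_evals, edge_evals, w_noise=0.5, w_edge=0.5):
    """Weighted sum of an inverse-normalized noise evaluation index and a
    normalized edge protection evaluation index, per video enhancement image."""
    noise = np.asarray(noise_evals, dtype=float)
    edge = np.asarray(edge_evals, dtype=float)
    noise_effect = w_noise * np.exp(-noise)              # lower noise index -> larger effect
    edge_effect = w_edge * edge / (edge.max() + 1e-12)   # normalize edge index to [0, 1]
    return noise_effect + edge_effect                    # per-image enhancement effect index
```

Any monotonically decreasing map would serve for the inverse-proportional normalization; the key property is that a low noise evaluation index and a high edge protection evaluation index both raise the enhancement effect index.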
It should be noted that the order of the embodiments of the present invention is for description only and does not indicate their relative merits. The processes depicted in the accompanying drawings do not necessarily require the particular or sequential order shown to achieve desirable results; in some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others.

Claims (3)

1. A method for visually perceiving the status of a passenger in a high-speed elevator car based on image processing, the method comprising:
acquiring a preset number of video gray images, and enhancing each video gray image by adopting guide filtering to obtain a corresponding video enhanced image;
obtaining a noise evaluation index of each video enhanced image according to the abnormality degree of each pixel point in each video enhanced image, the local difference and the overall difference of the offset condition of each pixel point;
fitting pixel points according to the offset condition of the pixel points in each video enhanced image and each video gray image to obtain fitting edges in the video enhanced images and the video gray images; obtaining an edge protection evaluation index of each video enhancement image according to the degree of similarity and the degree of difference between the edges and the fitting edges in each video enhancement image and the corresponding video gray level image;
Obtaining an enhancement effect index of each video enhancement image according to the noise evaluation index and the edge protection evaluation index of each video enhancement image; determining optimal regularization parameters of the guide filtering according to enhancement effect indexes of all the video enhancement images, and obtaining the optimal video enhancement images by adopting the guide filtering corresponding to the optimal regularization parameters; acquiring a passenger state according to the optimal video enhanced image;
the method for acquiring the noise evaluation index comprises the following steps:
obtaining an isolation index of each pixel in a corresponding video enhanced image according to the local difference of the offset condition of each pixel in each video enhanced image and the overall difference of the offset condition of each pixel between each video enhanced image and an adjacent video enhanced image; obtaining an abnormality index of each pixel point in all video enhancement images according to the abnormality degree of the pixel value of each pixel point in all video enhancement images;
acquiring a motion vector of each pixel point between each video enhanced image and the next video enhanced image; normalizing the magnitude of a motion vector of each pixel point in each video enhanced image to obtain the vector weight of each pixel point in the corresponding video enhanced image;
According to the abnormal index and the isolated index of each pixel point in the corresponding video enhancement image, obtaining the noise index of each pixel point in the corresponding video enhancement image, wherein the abnormal index and the isolated index are in positive correlation with the noise index; carrying out weighted summation on noise indexes of all pixel points in each video enhanced image through vector weights to obtain noise evaluation indexes of the corresponding video enhanced images;
characterized in that the method for acquiring the isolation index comprises the following steps:
taking a pixel point in any video enhanced image as a target point;
acquiring the angle difference and the size difference of a motion vector between a target point and each other pixel point in a preset first neighborhood range; calculating the product of each angle difference and the corresponding size difference, and taking the accumulated value of all products corresponding to the target point as a local difference index of the target point in the corresponding video enhancement image;
in the adjacent video enhancement images corresponding to the video enhancement images, taking the pixel points with the same positions as the target points as the relative pixel points of the target points; obtaining the average angle difference and the average size difference of motion vectors between the target point and all the relative pixel points, and taking the product of the average angle difference and the average size difference as an integral difference index of the target point in the corresponding video enhancement image;
Calculating the product of the local difference index and the overall difference index of the target point to obtain an isolated index of the target point in the corresponding video enhanced image;
the method for acquiring the abnormality index comprises the following steps:
in all the video enhancement images, detecting the abnormal value of the pixel point at the same position through an abnormal detection algorithm;
taking the pixel point with the abnormal value as a reference pixel point; calculating pixel value differences between the reference pixel point and each other pixel point at the same position, and taking the sum value of all pixel value differences corresponding to the reference pixel point as an abnormal difference value of the reference pixel point; multiplying the pixel value corresponding to the reference pixel point by the abnormal difference value to obtain an abnormal index of the reference pixel point in the corresponding video enhancement image;
setting the abnormal indexes of all pixel points without abnormal values as preset abnormal indexes;
the method for acquiring the fitting edge comprises the following steps:
acquiring a motion vector of each pixel point between each video gray level image and the next video gray level image; taking all video enhancement images and all video gray-scale images as images to be processed;
in each image to be processed, carrying out pixel point clustering according to the motion vector of the pixel point and carrying out iterative merging to obtain a cluster of each image to be processed; performing edge fitting on all the cluster clusters to obtain a fitting edge of each image to be processed;
The method for acquiring the edge protection evaluation index comprises the following steps:
acquiring edges in each image to be processed;
in each image to be processed, obtaining the similarity between each edge and each fitting edge, and obtaining the most similar fitting edge corresponding to each edge according to the similarity; adding the similarity between all edges and the corresponding most similar fitting edges to obtain an edge similarity index of each image to be processed;
in each video enhanced image and the corresponding video gray level image, according to the difference condition between the edge quantity and the edge length of all edges, obtaining an edge difference index of each video enhanced image; obtaining a fitting edge similarity index of each video enhanced image according to the similarity conditions among all fitting edges;
determining an edge protection evaluation index of each video enhancement image according to the edge similarity index, the fitting edge similarity index and the edge difference index of each video enhancement image and the corresponding video gray level image; the edge similarity index and the fitting edge similarity index are in positive correlation with the edge protection evaluation index, and the edge difference index is in negative correlation with the edge protection evaluation index;
the method for acquiring the edge difference index comprises the following steps:
Acquiring a video enhancement image and a center point of each edge in a corresponding video gray level image; forming opposite edge pairs by edges closest to the center point in the video enhancement image and edges closest to the center point in the corresponding video gray level image;
acquiring the difference of the number of pixel points between two edges in each opposite edge pair; acquiring the edge quantity difference between each video enhancement image and the corresponding video gray level image; according to the edge quantity difference and the pixel point quantity difference of all opposite edges, obtaining an edge difference index of each video enhanced image, wherein the edge quantity difference and the pixel point quantity difference are in positive correlation with the edge difference index;
the method for acquiring the fitting edge similarity index comprises the following steps:
acquiring a fitting center point of each fitting edge in the video enhancement image and the corresponding video gray image; forming a pair of opposite fitting edges by the fitting edge closest to the fitting center point in the video enhanced image and the fitting edge closest to the fitting center point in the corresponding video gray level image;
the similarity between two fitting edges in each pair of opposite fitting edges is taken as the fitting degree of each pair of opposite fitting edges; and adding all fitting degrees corresponding to each video enhanced image to obtain a fitting edge similarity index of each video enhanced image.
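Claim 1's opposite-edge pairing and edge difference index can be sketched as follows. Edges are represented as arrays of pixel coordinates; the nearest-center pairing follows the claim, while the simple additive combination of the edge-count difference and the pixel-count differences is an assumed instance of the stated positive correlation.

```python
import numpy as np

def pair_edges_by_center(edges_a, edges_b):
    """Pair each edge of image A with the edge of image B whose center
    point is nearest (the claim's 'opposite edge pairs')."""
    centers_a = [np.mean(e, axis=0) for e in edges_a]
    centers_b = [np.mean(e, axis=0) for e in edges_b]
    return [(i, min(range(len(centers_b)),
                    key=lambda k: np.linalg.norm(ca - centers_b[k])))
            for i, ca in enumerate(centers_a)]

def edge_difference_index(edges_enh, edges_gray):
    """Edge-count difference plus summed pixel-count differences over the
    opposite edge pairs; both terms raise the index (positive correlation)."""
    count_diff = abs(len(edges_enh) - len(edges_gray))
    pixel_diff = sum(abs(len(edges_enh[i]) - len(edges_gray[j]))
                     for i, j in pair_edges_by_center(edges_enh, edges_gray))
    return count_diff + pixel_diff
```

A high index means the enhanced image's edges diverge from the gray-scale image's edges in number or length, i.e. poor edge protection, which is why it enters the edge protection evaluation index with negative correlation.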
2. The image processing-based visual perception method for the passenger state of the high-speed elevator car according to claim 1, wherein the method for obtaining the enhancement effect index comprises the following steps:
taking the product of the noise evaluation index after the inverse proportion normalization of each video enhanced image and the preset noise effect weight as the noise effect index of each video enhanced image; taking the product of the edge protection evaluation index normalized by each video enhanced image and a preset edge protection effect weight as an edge protection effect index of each video enhanced image;
and taking the sum value of the noise effect index and the edge protection effect index of each video enhanced image as the enhancement effect index of each video enhanced image.
3. The image processing-based visual perception method for the passenger state of the high-speed elevator car according to claim 2, wherein the method for acquiring the optimal regularization parameter comprises the following steps:
when the enhancement effect indexes of all the video enhancement images are larger than or equal to a preset enhancement threshold, taking regularization parameters in the corresponding guide filtering as optimal regularization parameters;
when the enhancement effect index of the video enhancement image is smaller than a preset enhancement threshold, acquiring the average noise effect index and the average edge protection effect index of all the video enhancement images; when the average noise effect index is smaller than the average edge protection effect index, increasing regularization parameters in the corresponding guide filtering by a preset adjustment amount to obtain new guide filtering; when the average noise effect index is larger than the average edge protection effect index, reducing regularization parameters in the corresponding guide filtering by a preset adjustment amount to obtain new guide filtering;
Re-acquiring enhancement effect indexes of all video enhancement images according to the new guide filtering and determining optimal regularization parameters; when the guiding filtering corresponding to all regularization parameters does not meet the requirement that the enhancement effect index of all video enhancement images is larger than or equal to a preset enhancement threshold value, obtaining enhancement effect indexes and values of all video enhancement images corresponding to each regularization parameter, and taking the regularization parameter corresponding to the maximum sum value as the optimal regularization parameter.
CN202311068758.4A 2023-08-24 2023-08-24 High-speed elevator car passenger state visual perception method based on image processing Active CN116823673B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311068758.4A CN116823673B (en) 2023-08-24 2023-08-24 High-speed elevator car passenger state visual perception method based on image processing


Publications (2)

Publication Number Publication Date
CN116823673A CN116823673A (en) 2023-09-29
CN116823673B true CN116823673B (en) 2023-11-10

Family

ID=88122367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311068758.4A Active CN116823673B (en) 2023-08-24 2023-08-24 High-speed elevator car passenger state visual perception method based on image processing

Country Status (1)

Country Link
CN (1) CN116823673B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117130373B (en) * 2023-10-26 2024-03-08 超技工业(广东)股份有限公司 Control method of carrier conveying robot in semi-finished product bin

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818547A (en) * 2016-09-14 2018-03-20 北京航空航天大学 The minimizing technology of the spiced salt and Gaussian mixed noise in a kind of sequence towards twilight image
CN109636766A (en) * 2018-11-28 2019-04-16 南京理工大学 Polarization differential and intensity image Multiscale Fusion method based on marginal information enhancing
CN110765964A (en) * 2019-10-30 2020-02-07 常熟理工学院 Method for detecting abnormal behaviors in elevator car based on computer vision
CN111667422A (en) * 2020-05-25 2020-09-15 东华大学 Image/video target detection result enhancement method based on synthesis filter
CN113673307A (en) * 2021-07-05 2021-11-19 浙江工业大学 Light-weight video motion recognition method
CN113743269A (en) * 2021-08-26 2021-12-03 浙江工业大学 Method for identifying video human body posture in light weight mode



Similar Documents

Publication Publication Date Title
US7668338B2 (en) Person tracking method and apparatus using robot
US7366330B2 (en) Method, apparatus, and program for detecting faces
CN109035188B (en) Intelligent image fusion method based on target feature driving
US7957560B2 (en) Unusual action detector and abnormal action detecting method
US7840036B2 (en) Human being detection apparatus, method of detecting human being, and human being detecting program
EP2959454B1 (en) Method, system and software module for foreground extraction
EP1995691B1 (en) Method and apparatus for segmenting a motion area
CN107909027B (en) Rapid human body target detection method with shielding treatment
CN110705376A (en) Abnormal behavior detection method based on generative countermeasure network
CN116823673B (en) High-speed elevator car passenger state visual perception method based on image processing
US20020051578A1 (en) Method and apparatus for object recognition
US20070165951A1 (en) Face detection method, device and program
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
US20160364865A1 (en) Image processing device, image processing method, and program
US20040081359A1 (en) Error propogation and variable-bandwidth mean shift for feature space analysis
CN115601368A (en) Method for detecting defects of sheet metal parts of building material equipment
CN111369458B (en) Infrared dim target background suppression method based on multi-scale rolling guide filtering smoothing
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
KR101441107B1 (en) Method and apparatus for determining abnormal behavior
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN112329764A (en) Infrared dim target detection method based on TV-L1 model
CN109993744B (en) Infrared target detection method under offshore backlight environment
US9584807B2 (en) Method and apparatus for motion estimation in a video system
CN111079509B (en) Abnormal behavior detection method based on self-attention mechanism
CN113780462B (en) Vehicle detection network establishment method based on unmanned aerial vehicle aerial image and application thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant