CN116342891B - Structured teaching monitoring data management system suitable for autism children - Google Patents

Structured teaching monitoring data management system suitable for autism children

Info

Publication number
CN116342891B
CN116342891B (Application CN202310589183.4A)
Authority
CN
China
Prior art keywords
window
monitoring image
pixel point
difference
rehabilitation monitoring
Prior art date
Legal status
Active
Application number
CN202310589183.4A
Other languages
Chinese (zh)
Other versions
CN116342891A (en)
Inventor
张金燕
徐海萍
张丽军
徐瑞
杜晓艳
丁丹丹
王晓芳
申思
吴玉菲
Current Assignee
Jinan Kexun Intelligent Technology Co ltd
Zhengzhou University Third Affiliated Hospital Henan Maternity and Child Health Care Hospital
Original Assignee
Jinan Kexun Intelligent Technology Co ltd
Zhengzhou University Third Affiliated Hospital Henan Maternity and Child Health Care Hospital
Priority date
Filing date
Publication date
Application filed by Jinan Kexun Intelligent Technology Co ltd and Zhengzhou University Third Affiliated Hospital Henan Maternity and Child Health Care Hospital
Priority to CN202310589183.4A
Publication of CN116342891A
Application granted
Publication of CN116342891B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a structured teaching monitoring data management system suitable for autism children. The system comprises a data acquisition module for acquiring rehabilitation monitoring images in the structured teaching of autistic children; a window screening module for screening net difference windows out of the local windows; a pixel point matching module for performing gray-level compensation on the pixel points in the net difference windows to obtain compensation values and matching pixel points in adjacent frames of rehabilitation monitoring images based on the compensation values to obtain matched pixel pairs; and a noise reduction module for obtaining a dynamic region and a background region in the rehabilitation monitoring image from the matched pixel pairs and denoising the two regions separately, so as to obtain a high-quality rehabilitation monitoring image of the structured teaching of autistic children. The invention improves the denoising effect of existing denoising algorithms, so that the resulting high-quality rehabilitation monitoring image reflects dangerous behaviors and facial expressions of autistic children more clearly.

Description

Structured teaching monitoring data management system suitable for autism children
Technical Field
The invention relates to the technical field of image data processing, in particular to a structured teaching monitoring data management system suitable for autism children.
Background
Autistic children are a group of children with unique behavior patterns and cognitive characteristics. Structured teaching is one of the most highly rated training programs for autistic children in Europe and America, and it provides a comprehensive, structured teaching method for their treatment and training. It emphasizes the coordination of skill training with the environment and attaches importance to family participation and to cooperation between families and professionals. Through structured teaching, autistic children can recognize and understand the requirements and changes of the environment, understand causal relationships, strengthen their desire to communicate, and improve their communication skills, so as to achieve the goals of social integration and independent living.
Visual arrangement, environmental arrangement, routines, and schedules in structured teaching allow autistic children to improve their ability to understand and adapt to the environment, to learn the relationship between activities and the environment, the activities performed on a given day or at a given time, and the ordering of activities. Because autistic children are extremely sensitive and attentive to details, emotional changes or external stimuli during structured teaching training can easily trigger stress responses, so the training situation needs to be monitored constantly and attended to promptly, allowing timely adjustments and avoiding abnormal situations as much as possible.
Abnormal-state monitoring of autistic children is a monitoring technology that uses machine vision and artificial intelligence to identify dangerous behaviors and facial expressions of autistic children, so that excessive stress responses during training can be avoided and accidents prevented. The monitoring system needs to accurately identify the behavior, expression and interaction information of the patient during training, which is also the core data for recording the feedback of autistic children during structured training. Because the training of autistic children is continuous, the monitoring system runs almost all the time; as the running time grows, a large amount of image noise appears in the monitoring picture, which greatly interferes with its definition and degrades the quality of the monitoring image. The monitoring picture therefore needs further noise reduction.
A common current method for denoising monitoring images is DNR-3D noise reduction, which can distinguish the moving foreground from the background image. However, because Gaussian white noise is superimposed on almost all pixel points, the moving foreground obtained by DNR-3D noise reduction has low accuracy, and its denoising effect on the monitoring image is poor.
Disclosure of Invention
In order to solve the technical problem of poor noise reduction effect on a monitoring image, the invention aims to provide a structured teaching monitoring data management system suitable for autism children, which comprises the following modules:
the data acquisition module is used for acquiring rehabilitation monitoring images in the structural teaching of the autism children;
the window screening module is used for screening out a net difference window from the local window by comparing the gray level difference of the rehabilitation monitoring image of the adjacent frame with the gray level difference in the local window in the rehabilitation monitoring image;
the pixel point matching module is used for acquiring differential images of adjacent frame rehabilitation monitoring images in the autism children structural teaching; calculating the signal-to-noise ratio of each net difference window of a previous frame of rehabilitation monitoring image in the difference image and the adjacent frame of rehabilitation monitoring image, and taking the signal-to-noise ratio as a local signal-to-noise ratio; obtaining the noise degree of the net difference window according to the local signal-to-noise ratio of the net difference window in the adjacent frame rehabilitation monitoring image and the gray level difference of the adjacent frame rehabilitation monitoring image; determining a compensation value of the pixel point in the net difference window according to the noise degree of the net difference window and the gray value of the pixel point in the net difference window; matching pixel points in the rehabilitation monitoring images of adjacent frames based on the compensation values of the pixel points to obtain at least two matched pixel pairs;
the noise reduction module is used for obtaining a dynamic region and a background region in the rehabilitation monitoring image in the autism children structural teaching according to the optical flow vector corresponding to the matched pixel pair, and respectively reducing noise of the dynamic region and the background region to obtain a high-quality rehabilitation monitoring image in the autism children structural teaching.
Preferably, the filtering the net difference window from the local window by comparing the gray level difference of the rehabilitation monitoring image of the adjacent frame with the gray level difference in the local window in the rehabilitation monitoring image includes:
taking the absolute value of the difference value of the sum of the pixel values of all the pixel points in the local window at the same position in the rehabilitation monitoring image of the adjacent frame as a local difference value; taking the absolute value of the difference value of the sum of the pixel values of all pixel points in the adjacent frame rehabilitation monitoring image as an integral difference value, and taking the ratio of the integral difference value to the number of local windows as a net noise difference value;
and when the local difference value is larger than the net noise difference value, taking the local window corresponding to the local difference value as a net difference window.
Preferably, the calculating the signal-to-noise ratio of each net difference window of the previous frame of rehabilitation monitoring image in the difference image and the adjacent frame of rehabilitation monitoring image as a local signal-to-noise ratio includes:
for any net difference window in the difference image, calculating a gray variance corresponding to the net difference window according to the pixel value of a pixel point in the net difference window in the previous frame of the rehabilitation monitoring image in the adjacent frame of the rehabilitation monitoring image and the gray mean value of the previous frame of the rehabilitation monitoring image in the adjacent frame of the rehabilitation monitoring image, and taking the gray variance as a first variance; calculating a gray variance corresponding to the difference image according to the pixel value of the pixel point in the net difference window in the difference image and the gray mean value of the difference image, and taking the gray variance as a second variance; and taking the logarithmic function value taking 10 as a base and taking the ratio of the first variance and the second variance as a true number as a local signal-to-noise ratio corresponding to the net difference window.
Preferably, the calculation formula of the noise degree is as follows:

$$Z_i = \frac{SNR_i}{\sum_{j=1}^{m} SNR_j}\cdot\left|S_{n+1} - S_{n}\right|$$

wherein $Z_i$ is the noise degree of the i-th net difference window; $SNR_i$ is the local signal-to-noise ratio of the i-th net difference window; $m$ is the number of net difference windows shared by the n-th frame rehabilitation monitoring image and the (n+1)-th frame rehabilitation monitoring image; $S_{n+1}$ is the sum of the pixel values of all pixel points in the (n+1)-th frame rehabilitation monitoring image; and $S_{n}$ is the sum of the pixel values of all pixel points in the n-th frame rehabilitation monitoring image.
Preferably, the determining the compensation value of the pixel point in the net difference window according to the noise degree of the net difference window and the gray value of the pixel point in the net difference window includes:
selecting any pixel point in the net difference window as a target pixel point; taking the ratio of the pixel value of the target pixel point to the sum of the pixel values of the pixel points in the net difference window to which the target pixel point belongs as the compensation weight of the target pixel point; taking the product of the compensation weight of the target pixel point and the noise degree of the net difference window to which the target pixel point belongs as the compensation difference value of the target pixel point; and taking the difference between the gray value of the target pixel point and the compensation difference value as the compensation value of the target pixel point.
Preferably, the compensation value of the central pixel point of the non-net difference window is the original gray value of the central pixel point.
Preferably, the matching the pixel points in the rehabilitation monitoring image of the adjacent frames based on the compensation values of the pixel points to obtain at least two matched pixel pairs includes:
two pixel points in the matched pixel pair belong to rehabilitation monitoring images of different frames;
for the adjacent frame rehabilitation monitoring images, selecting any pixel point in the previous frame rehabilitation monitoring image as a previous frame pixel point, calculating the mean square error of each pixel point in the previous frame pixel point and the next frame rehabilitation monitoring image based on the compensation value of the pixel point, and forming a matched pixel pair by the pixel point of the next frame rehabilitation monitoring image and the previous frame pixel point corresponding to the minimum mean square error.
Preferably, the calculation formula of the mean square error between the previous-frame pixel point and each pixel point in the subsequent frame of rehabilitation monitoring image is as follows:

$$MSE(a,b) = \frac{1}{K}\sum_{v=1}^{K}\bigl(B_{a}(v) - B_{b}(v)\bigr)^{2}$$

wherein $MSE(a,b)$ is the mean square error between the previous-frame pixel point a and any pixel point b in the subsequent frame of rehabilitation monitoring image; $K$ is the size of the local window; $B_{a}(v)$ is the compensation value of the v-th pixel point in the local window to which the previous-frame pixel point a belongs; and $B_{b}(v)$ is the compensation value of the v-th pixel point in the local window to which pixel point b belongs in the subsequent frame of rehabilitation monitoring image.
Preferably, the obtaining the dynamic area and the background area in the rehabilitation monitoring image in the autism children structural teaching according to the optical flow vector corresponding to the matched pixel pair includes:
and acquiring an optical flow vector corresponding to the matched pixel, forming an optical flow field by the optical flow vector, and determining a dynamic region and a background region in a later frame of rehabilitation monitoring image in the adjacent frame of rehabilitation monitoring image corresponding to the matched pixel pair according to the optical flow field.
The embodiment of the invention has at least the following beneficial effects:
aiming at the noise interference caused by long-time operation of the safety monitoring and recording system during the structured training of autistic children, the invention improves the accuracy of acquiring the dynamic region and the background region in the rehabilitation monitoring image by optimizing the pixel point matching precision, so as to improve the denoising effect of the existing noise reduction technology on rehabilitation monitoring images in the structured teaching of autistic children. By screening net difference windows out of the plurality of local windows and subsequently compensating only the pixel points inside the net difference windows, the invention avoids the excessive computation that would be required if all pixel points in all local windows had to be compensated. The dynamic region and the background region in the rehabilitation monitoring image are then obtained according to the optical flow vectors corresponding to the matched pixel pairs, so that the two regions can be denoised separately.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system block diagram of a structured teaching monitoring data management system for autism children according to one embodiment of the present invention.
Detailed Description
In order to further describe the technical means adopted by the invention to achieve the intended purpose and the effects thereof, a structured teaching monitoring data management system for autistic children according to the invention is described in detail below with reference to the accompanying drawings and the preferred embodiments, including its specific implementation, structure, features and effects. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The embodiment of the invention provides a specific implementation of a structured teaching monitoring data management system suitable for autism children, applicable to the teaching management scene of autistic children. In this scene, monitoring images are obtained through a camera, and rehabilitation monitoring images are then obtained from them. The system solves the technical problem of poor noise reduction effect on the monitoring images. The invention improves the accuracy of acquiring the dynamic region and the background region in the rehabilitation monitoring image by optimizing the pixel point matching precision, thereby improving the denoising effect of the existing denoising technology on the rehabilitation monitoring images.
A specific scheme of the structured teaching monitoring data management system for autistic children provided by the invention is described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a system block diagram of a structured teaching monitoring data management system for autism children according to one embodiment of the present invention is shown, the system includes the following modules:
the data acquisition module 10 is used for acquiring rehabilitation monitoring images in the structural teaching of the autism children.
Continuous video is extracted from the monitoring footage of the home or the rehabilitation center and divided into still frame images, which are taken as initial monitoring images. When ambient light is poor and the camera has been running for a long time, noise problems are very likely to occur, and such noise is almost white noise. In order to simplify the calculation and save computation time, gray-scale normalization is performed on the initial monitoring images, and the processed images are used as the rehabilitation monitoring images.
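The following is a minimal sketch of this acquisition step, assuming the footage is an ordinary video file and that OpenCV and NumPy are available; the function name load_rehab_frames and the normalization range [0, 1] are illustrative assumptions and not prescribed by the patent.

```python
import cv2
import numpy as np

def load_rehab_frames(video_path: str) -> list[np.ndarray]:
    """Split the monitoring video into still frames and gray-normalize each one."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        # gray-scale normalization; the patent does not fix a particular range
        rng = gray.max() - gray.min()
        frames.append((gray - gray.min()) / rng if rng > 0 else gray * 0.0)
    cap.release()
    return frames
```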
The window screening module 20 is configured to screen a net difference window from the local window by comparing the gray level difference of the rehabilitation monitoring image of the adjacent frame with the gray level difference in the local window in the rehabilitation monitoring image.
Because in many cases a guardian or rehabilitation therapist cannot be guaranteed to accompany the autistic child at all times, an artificial-intelligence monitoring system plays a large role. Even a relatively high-quality camera, as an ordinary household hardware device, cannot avoid the noise problem, so the image definition depends on the preprocessing system. Guaranteeing definition both during training and in the training record data that is later reviewed is very important for adjusting the rehabilitation training scheme.
Since image noise occurs randomly, the noise in each frame is different. To avoid motion blur in adjacent frames of the monitoring image after 2D noise reduction, monitoring images are conventionally processed with the DNR-3D noise reduction method, which depicts the moving bodies in the scene and calculates the direction and speed of motion from the content relationship between the preceding and following frames. This makes it possible to remove background noise and dynamic noise in the video through temporal noise reduction: by comparing the pixel points that move between adjacent frames, spatial-domain noise reduction is applied to moving pixel points, while stationary pixel points are smoothed with a temporal-domain method that takes a weighted average over adjacent frames. After such processing, the moving foreground image can be distinguished from the static background image. However, white noise such as Gaussian noise is superimposed on almost all pixel points, differing only in depth, so moving pixel points cannot be found in consecutive frames while ignoring the noise; this is especially difficult when the monitored target's body motion is small. Consequently, the accuracy of existing algorithms in determining moving pixel points in a noisy image is unstable, and the processing effect depends entirely on the accuracy of motion estimation. The monitored target in the embodiment of the invention is the autistic child.
To address the problems that existing algorithms for determining moving pixel points in a noisy image have unstable accuracy and that the processing effect depends entirely on the accuracy of motion estimation, the invention considers that the total amount of information between adjacent frames of rehabilitation monitoring images changes very little, and that when motion occurs, the main factor affecting the accuracy of motion estimation is that different noise is superimposed on the adjacent frames. Therefore, if the noise information difference between adjacent frames can be estimated, and the degree of noise interference on the adjacent frames of rehabilitation monitoring images can be made consistent through compensation, then even if the noise still cannot be removed directly from the estimated value, the accuracy of motion estimation can be improved under a consistent noise environment, which helps to improve the noise reduction effect of the subsequent 3D noise reduction.
It is assumed that after long-time operation the monitoring system is continuously disturbed by noise, that adjacent frames of rehabilitation monitoring images contain noise of different degrees, and that consecutive frames change little, so the amount of information in adjacent frames of rehabilitation monitoring images can be assumed to be almost equal even though the monitored target moves. The absolute value of the difference between the sums of the pixel values of all pixel points in any two adjacent frames of rehabilitation monitoring images is taken as the difference in noise information between the adjacent frames, and this absolute value is taken as the overall difference value.
Because the noise is randomly distributed, local information needs to be analyzed, so a 5×5 sliding window is set. In order to allow a complete window to be centered on pixel points at the edges, all input images are padded at their edges: since a 5×5 window is used, two rows and two columns of pixel points are added outward on each of the four sides of the image, and the padded pixel points are given a pixel value of 0.
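A minimal sketch of this window layout is given below, assuming NumPy is available; the helper name local_windows is an illustrative assumption. It zero-pads the image by two rows and columns so that every original pixel becomes the center of one 5×5 window, which is the window count used throughout the description.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

WIN = 5          # window side length, per the description above
PAD = WIN // 2   # two rows/columns of zero padding on each side

def local_windows(img: np.ndarray) -> np.ndarray:
    """Return an (H, W, 5, 5) array: one 5x5 window centred on every original pixel."""
    padded = np.pad(img, PAD, mode="constant", constant_values=0)
    return sliding_window_view(padded, (WIN, WIN))
```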
For the global image, the total noise information difference can be calculated; locally, however, the motion region is unknown, and because the rehabilitation monitoring image is not a completely static scene, the gray differences between local regions cannot be compared directly to obtain the difference in noise information. Even if the noise interference on the (n+1)-th frame rehabilitation monitoring image is higher overall, it is only known that the overall noise information difference comes from the (n+1)-th frame; whether a particular local position carries more or less noise remains uncertain. Since the regions with a high noise degree ultimately need to be compensated, the compensation targets have to be determined.
The windows with a net difference in information quantity are determined via the average noise difference value of the local windows: the net difference windows are screened out by comparing the gray difference between adjacent frames of rehabilitation monitoring images with the gray difference within each local window of the rehabilitation monitoring image. In other words, when the difference of a certain local window between adjacent frames of rehabilitation monitoring images satisfies the screening condition, that local window is considered to produce a net noise difference between the adjacent frames and is regarded as a net difference window. Specifically: the absolute value of the difference between the sums of the pixel values of all pixel points in the local windows at the same position in adjacent frames of rehabilitation monitoring images is taken as the local difference value; the absolute value of the difference between the sums of the pixel values of all pixel points in the adjacent frames of rehabilitation monitoring images is taken as the overall difference value, and the ratio of the overall difference value to the number of local windows is taken as the net noise difference value. When the local difference value is larger than the net noise difference value, the local window corresponding to that local difference value is taken as a net difference window; that is, the screening condition is that the local difference value is larger than the net noise difference value.
The screening condition is formulated as follows:

$$d_i = \left|\sum_{v=1}^{K} g_{n+1}(i,v) - \sum_{v=1}^{K} g_{n}(i,v)\right| > \bar{d} = \frac{\left|S_{n+1}-S_{n}\right|}{H}$$

wherein $d_i$ is the local difference value of the i-th local window between the (n+1)-th frame rehabilitation monitoring image and the n-th frame rehabilitation monitoring image; $\bar{d}$ is the net noise difference value of the (n+1)-th frame and n-th frame rehabilitation monitoring images; $K$ is the size of the local window, i.e. the number of pixel points in the local window; $g_{n+1}(i,v)$ is the pixel value of the v-th pixel point in the i-th local window of the (n+1)-th frame rehabilitation monitoring image; $g_{n}(i,v)$ is the pixel value of the v-th pixel point in the i-th local window of the n-th frame rehabilitation monitoring image; $H$ is the number of local windows; and $\left|S_{n+1}-S_{n}\right|$ is the overall difference value of the (n+1)-th frame and n-th frame rehabilitation monitoring images, with $S_{n}$ and $S_{n+1}$ denoting the sums of the pixel values of all pixel points in the n-th and (n+1)-th frame rehabilitation monitoring images, respectively.
And when the local difference value is larger than the net noise difference value, taking the local window corresponding to the local difference value as a net difference window.
It should be noted that, because the local window slides over the rehabilitation monitoring image, the number of local windows equals the number of pixel points in the rehabilitation monitoring image before the two rows and two columns of padding pixels are added. The net noise difference value is the average noise difference obtained by distributing the overall difference value of adjacent frames of rehabilitation monitoring images evenly over every local window. The local difference value reflects the difference of the gray sums at a given local window position in adjacent frames of rehabilitation monitoring images; when the local difference value is close to 0, the degree of noise interference at that local window position is nearly the same in the two adjacent frames. When the local difference value is larger than the net noise difference value, the noise interference in one of the two adjacent frames of rehabilitation monitoring images is much larger than in the other at that position, and the local window position is considered to produce a net noise difference on the adjacent frames; therefore, by comparing the local difference value with the net noise difference value, all local windows producing a net noise difference between adjacent frames of rehabilitation monitoring images can be screened out.
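The screening condition can be sketched as follows, assuming two consecutive gray-normalized frames and reusing the local_windows() helper sketched after the window-padding paragraph above; the function name net_difference_mask is an illustrative assumption.

```python
import numpy as np

def net_difference_mask(prev: np.ndarray, nxt: np.ndarray) -> np.ndarray:
    """Boolean map: True where the 5x5 window centred on a pixel is a net difference window."""
    w_prev = local_windows(prev)            # (H, W, 5, 5)
    w_next = local_windows(nxt)
    # local difference: |window sum in frame n+1  -  window sum in frame n|
    local_diff = np.abs(w_next.sum(axis=(-1, -2)) - w_prev.sum(axis=(-1, -2)))
    # overall difference of the two frames, spread evenly over all H*W windows
    overall_diff = abs(nxt.sum() - prev.sum())
    net_noise_diff = overall_diff / local_diff.size
    return local_diff > net_noise_diff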
Furthermore, the compensation point can be determined according to the gray value sums of the local windows in the adjacent frames of rehabilitation monitoring images; that is, when the local noise interference at a position differs greatly between the two frames, compensation is needed, and it must be decided in which frame the compensation is applied. When the gray value of the central pixel point of the i-th local window of the n-th frame rehabilitation monitoring image is larger than the gray value of the central pixel point of the i-th local window of the (n+1)-th frame rehabilitation monitoring image, the central pixel point of the i-th local window of the n-th frame is taken as the compensation point to be compensated; when the gray value of the central pixel point of the i-th local window of the (n+1)-th frame rehabilitation monitoring image is larger than that of the i-th local window of the n-th frame, the central pixel point of the i-th local window of the (n+1)-th frame is taken as the compensation point to be compensated. It should be noted that in the embodiment of the invention either only the compensation points may be compensated, or every pixel point in the rehabilitation monitoring image may be compensated.
The pixel point matching module 30 is used for acquiring differential images of the rehabilitation monitoring images of the adjacent frames in the structural teaching of the autism children; calculating the signal-to-noise ratio of each net difference window of a previous frame of rehabilitation monitoring image in the difference image and the adjacent frame of rehabilitation monitoring image, and taking the signal-to-noise ratio as a local signal-to-noise ratio; obtaining the noise degree of the net difference window according to the local signal-to-noise ratio of the net difference window in the adjacent frame rehabilitation monitoring image and the gray level difference of the adjacent frame rehabilitation monitoring image; determining a compensation value of the pixel point in the net difference window according to the noise degree of the net difference window and the gray value of the pixel point in the net difference window; and matching the pixel points in the rehabilitation monitoring images of the adjacent frames based on the compensation values of the pixel points to obtain at least two matched pixel pairs.
The local signal-to-noise ratio is obtained by calculating the signal-to-noise ratio of a local window. The signal-to-noise ratio describes how noisy a noise image is compared with the original image. In the embodiment of the invention, the previous frame of the rehabilitation monitoring images can be used as the original image, with any local window in it taken as the local window of the original image, and the local window at the same position in the next frame of rehabilitation monitoring image taken as the noise image; the signal-to-noise ratio between the local window of the previous frame and that of the next frame is then calculated to obtain the local signal-to-noise ratio at that local window position on the adjacent frames of rehabilitation monitoring images.
For adjacent frames of rehabilitation monitoring images, the previous frame A is subtracted from the next frame B; both images are noise-contaminated, and the subtraction yields a differential image C. The previous frame A is regarded as the information part of the next frame B, and the differential image C is regarded as the extra noise that the next frame B carries relative to the previous frame A. The variances of the previous frame A and of the differential image C are calculated separately, their ratio is taken, and after taking the logarithm the local signal-to-noise ratio in dB is obtained. In the embodiment of the invention, when calculating the local signal-to-noise ratio, the variance of each local window is not computed with the gray mean inside that window but with the global image mean as the standard, i.e. the gray mean of the previous frame of rehabilitation monitoring image or of the differential image.
The method for obtaining the local signal-to-noise ratio comprises the following steps: firstly, obtaining differential images of rehabilitation monitoring images of adjacent frames; further, the signal-to-noise ratio of each net difference window of the previous frame of rehabilitation monitoring image in the difference image and the adjacent frame of rehabilitation monitoring image is calculated and used as the local signal-to-noise ratio. Specific: for any net difference window in the difference image, calculating a gray variance corresponding to the net difference window according to the pixel value of a pixel point in the net difference window in the previous frame of the rehabilitation monitoring image in the adjacent frame of the rehabilitation monitoring image and the gray mean value of the previous frame of the rehabilitation monitoring image in the adjacent frame of the rehabilitation monitoring image, and taking the gray variance as a first variance; calculating a gray variance corresponding to the difference image according to the pixel value of the pixel point in the net difference window in the difference image and the gray mean value of the difference image, and taking the gray variance as a second variance; and taking the logarithmic function value taking 10 as a base and taking the ratio of the first variance and the second variance as a true number as a local signal-to-noise ratio corresponding to the net difference window.
The calculation formula of the local signal-to-noise ratio is as follows:

$$SNR_i = 10\,\lg\frac{\sigma_{1,i}^{2}}{\sigma_{2,i}^{2}},\qquad
\sigma_{1,i}^{2} = \frac{1}{K}\sum_{v=1}^{K}\bigl(g_{n}(i,v)-\mu_{n}\bigr)^{2},\qquad
\sigma_{2,i}^{2} = \frac{1}{K}\sum_{v=1}^{K}\bigl(c(i,v)-\mu_{c}\bigr)^{2}$$

wherein $SNR_i$ is the local signal-to-noise ratio of the i-th net difference window; $\lg$ is the logarithm with base 10; $\sigma_{1,i}^{2}$ is the first variance; $\sigma_{2,i}^{2}$ is the second variance; $K$ is the size of the local window, i.e. the number of pixel points in the local window; $g_{n}(i,v)$ is the pixel value of the v-th pixel point in the i-th net difference window of the previous frame of the two adjacent frames of rehabilitation monitoring images; $\mu_{n}$ is the gray mean value of that previous frame; $c(i,v)$ is the pixel value of the v-th pixel point in the i-th net difference window of the differential image; and $\mu_{c}$ is the gray mean value of the differential image. It should be noted that, since a net difference window is a screened-out local window, the size of the local window is also the size of the net difference window.
The purpose of using the global image mean in the calculation of the local signal-to-noise ratio is to keep the measurement standard consistent across the variances of all local windows; if the internal mean of each window were used instead, the results could not be used to compare the noise differences between local windows. The local signal-to-noise ratio formula is therefore a slight modification of the conventional signal-to-noise ratio formula, namely that the global image mean is used instead of the local window mean; the calculation of the signal-to-noise ratio itself is well known to those skilled in the art and is not described here. It should be noted that the global image here refers to the differential image and to the previous frame of the two adjacent frames of rehabilitation monitoring images.
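A sketch of this local signal-to-noise ratio is given below, reusing the local_windows() helper from the padding sketch above. The previous frame plays the role of the information image and the differential image the role of the noise, and both variances are taken against the corresponding global means as required by the description; the eps guard against division by zero and the function name local_snr are implementation assumptions, not part of the patent.

```python
import numpy as np

def local_snr(prev: np.ndarray, nxt: np.ndarray, mask: np.ndarray,
              eps: float = 1e-12) -> np.ndarray:
    """Local SNR (dB) per window; zero outside the net difference windows in `mask`."""
    diff_img = nxt - prev                                # differential image C
    w_prev = local_windows(prev)                         # (H, W, 5, 5)
    w_diff = local_windows(diff_img)
    var1 = ((w_prev - prev.mean()) ** 2).mean(axis=(-1, -2))      # first variance
    var2 = ((w_diff - diff_img.mean()) ** 2).mean(axis=(-1, -2))  # second variance
    snr = 10.0 * np.log10((var1 + eps) / (var2 + eps))
    return np.where(mask, snr, 0.0)
```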
After the local signal-to-noise ratio of each net difference window is obtained, and given that the total noise difference between adjacent frames of rehabilitation monitoring images is known, i.e. the overall difference value between the adjacent frames is known, the noise degree of each net difference window is determined from the local signal-to-noise ratio of the net difference window and the overall difference value; that is, the noise degree of the net difference window is obtained according to the local signal-to-noise ratio of the net difference window in the adjacent frames of rehabilitation monitoring images and the gray difference of the adjacent frames of rehabilitation monitoring images.
The calculation formula of the noise degree is as follows:

$$Z_i = \frac{SNR_i}{\sum_{j=1}^{m} SNR_j}\cdot\left|S_{n+1} - S_{n}\right|$$

wherein $Z_i$ is the noise degree of the i-th net difference window; $SNR_i$ is the local signal-to-noise ratio of the i-th net difference window; $m$ is the number of net difference windows shared by the n-th frame rehabilitation monitoring image and the (n+1)-th frame rehabilitation monitoring image; $S_{n+1}$ is the sum of the pixel values of all pixel points in the (n+1)-th frame rehabilitation monitoring image; and $S_{n}$ is the sum of the pixel values of all pixel points in the n-th frame rehabilitation monitoring image.
It should be noted that, because the sizes of the rehabilitation monitoring images are the same, the number of local windows in the rehabilitation monitoring images is the same, and because the net difference windows are obtained by analyzing the rehabilitation monitoring images of adjacent frames, the number of net difference windows corresponding to the rehabilitation monitoring images of adjacent frames is the same, and the positions of the net difference windows are the same.
In the calculation of the noise degree, because the variance used in the local signal-to-noise ratio is computed with the gray mean of the global image, the local signal-to-noise ratios of all net difference windows share the same measurement standard and can be normalized uniformly: the first factor normalizes the local signal-to-noise ratio of the net difference window over all net difference windows. The second factor is the overall difference value of the (n+1)-th frame and n-th frame rehabilitation monitoring images, which reflects the total noise difference between the two adjacent frames; the larger the overall difference value, the larger the noise degree, i.e. the overall difference value is directly proportional to the noise degree.
The noise degree of each net difference window is calculated, and the noise degree of every non-net-difference window is set directly to 0. The noise degree of a net difference window is computed because, when the difference between adjacent frames of rehabilitation monitoring images is large, compensating the net difference window reduces the local noise interference; the noise degree of a non-net-difference window is set to 0 because such a window shows no net noise difference, the difference in noise information between the adjacent frames is small there, and on balance compensation is not worthwhile.
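The noise degree of each window can be sketched as below, assuming the mask and per-window SNR from the earlier sketches; the function name noise_degree is an illustrative assumption. Note that the normalization follows the formula as reconstructed above, so a frame pair whose local SNRs sum to zero is simply given zero noise degree.

```python
import numpy as np

def noise_degree(prev: np.ndarray, nxt: np.ndarray, snr: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Noise degree per window; 0 for non-net-difference windows."""
    overall_diff = abs(nxt.sum() - prev.sum())   # overall difference value of the frame pair
    snr_sum = snr[mask].sum()
    if snr_sum == 0:
        return np.zeros_like(snr)
    degree = (snr / snr_sum) * overall_diff      # normalized local SNR x overall difference
    return np.where(mask, degree, 0.0)
```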
The noise difference between every two adjacent frames of rehabilitation monitoring images is eliminated so that better accuracy is obtained when motion estimation of pixel points is performed between them. The noise differences of the pixel points in a net difference window accumulate into the estimated noise degree of that window; therefore, to unify the noise environment between pixel points in adjacent frames of rehabilitation monitoring images, the net difference window containing a compensation point in one of the adjacent frames is compensated. The compensation value of a pixel point is determined according to the noise degree of the net difference window and the gray value of the pixel point in the net difference window. Specifically: any pixel point in the net difference window is selected as the target pixel point, and the ratio of the pixel value of the target pixel point to the sum of the pixel values of the pixel points in the net difference window to which it belongs is taken as the compensation weight of the target pixel point; the product of the compensation weight of the target pixel point and the noise degree of its net difference window is taken as the compensation difference value of the target pixel point; and the difference between the gray value of the target pixel point and the compensation difference value is taken as the compensation value of the target pixel point.
Taking the v-th pixel point in the net difference window as the target pixel point as an example, the calculation formula of the compensation value is as follows:

$$B_v = g_v - w_v\, Z_i,\qquad w_v = \frac{g_v}{\sum_{u=1}^{K} g_u}$$

wherein $B_v$ is the compensation value of the v-th pixel point in the net difference window of the rehabilitation monitoring image; $g_v$ is the gray value of the v-th pixel point in that net difference window; $g_u$ is the gray value of the u-th pixel point in the local window to which the v-th pixel point belongs; $Z_i$ is the noise degree of the local window to which the v-th pixel point belongs; $w_v$ is the compensation weight of the v-th pixel point; $w_v\,Z_i$ is the compensation difference value of the v-th pixel point; and $K$ is the size of the local window.
In the calculation formula of the compensation value, the compensation weight can be viewed as the normalized gray value of the v-th pixel point: the larger the gray value, the larger the corresponding compensation weight. The compensation difference value reflects the amount of noise allocated to the pixel point: the larger the allocated amount, the larger the corresponding noise. Subtracting the compensation difference value from the gray value gives the compensated value; the compensation difference value and the compensation value are inversely related.
The compensation value formula is calculated only for pixel points inside net difference windows. For a non-net-difference window, since the noise degree corresponding to its central pixel point is 0, the compensation value of the central pixel point is simply its original gray value. In this way the compensation results of all pixel points in the two adjacent frames of rehabilitation monitoring images, i.e. the compensation values of all pixel points, are obtained.
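A sketch of the per-pixel compensation is given below, reusing local_windows() from the earlier padding sketch. It treats the window a pixel "belongs to" as the 5×5 window centred on that pixel, which is an implementation assumption since the patent does not fix the assignment; the function name compensation_values is likewise illustrative.

```python
import numpy as np

def compensation_values(frame: np.ndarray, degree: np.ndarray,
                        mask: np.ndarray) -> np.ndarray:
    """Compensation value per pixel; pixels outside net difference windows keep their gray value."""
    wins = local_windows(frame)                        # (H, W, 5, 5)
    win_sums = wins.sum(axis=(-1, -2))
    weight = np.divide(frame, win_sums,                # compensation weight g_v / sum(g_u)
                       out=np.zeros_like(frame), where=win_sums > 0)
    comp_diff = weight * degree                        # compensation difference value
    compensated = frame - comp_diff                    # compensation value
    return np.where(mask, compensated, frame)
```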
Further, based on the compensation values of the pixel points, the pixel points in adjacent frames of rehabilitation monitoring images are matched to obtain at least two matched pixel pairs; the mean square error corresponding to the pixel points is calculated and pixel matching is performed with it. It should be noted that the compensation values calculated by this method are used only for matching pixel points into matched pixel pairs and do not change the pixel values of the original rehabilitation monitoring images. The two pixel points in a matched pixel pair belong to rehabilitation monitoring images of different frames.
The specific acquisition method of the matched pixel pair comprises the following steps: for the adjacent frame rehabilitation monitoring images, selecting any pixel point in the previous frame rehabilitation monitoring image as a previous frame pixel point, calculating the mean square error of each pixel point in the previous frame pixel point and the next frame rehabilitation monitoring image based on the compensation value of the pixel point, and forming a matched pixel pair by the pixel point of the next frame rehabilitation monitoring image and the previous frame pixel point corresponding to the minimum mean square error. If for any previous frame pixel point y, calculating a mean square error between the previous frame pixel point y and each pixel point in the next frame rehabilitation monitoring image to obtain a plurality of mean square errors, and screening out the minimum mean square error from the plurality of mean square errors, wherein the pixel point in the next frame rehabilitation monitoring image corresponding to the minimum mean square error and the previous frame pixel point y form a matched pixel pair.
The calculation formula of the mean square error between a previous-frame pixel point and each pixel point in the subsequent frame of rehabilitation monitoring image is as follows:

$$MSE(a,b) = \frac{1}{K}\sum_{v=1}^{K}\bigl(B_{a}(v) - B_{b}(v)\bigr)^{2}$$

wherein $MSE(a,b)$ is the mean square error between the previous-frame pixel point a and any pixel point b in the subsequent frame of rehabilitation monitoring image; $K$ is the size of the local window; $B_{a}(v)$ is the compensation value of the v-th pixel point in the local window to which the previous-frame pixel point a belongs; and $B_{b}(v)$ is the compensation value of the v-th pixel point in the local window to which pixel point b belongs in the subsequent frame of rehabilitation monitoring image.
It should be noted that the calculation method of the mean square error is a well known technique for those skilled in the art, and will not be described herein.
After the mean square error between a previous-frame pixel point and every pixel point in the next frame of rehabilitation monitoring image has been calculated, the pixel point of the next frame corresponding to the minimum mean square error forms a matched pixel pair with that previous-frame pixel point. After a number of matched pixel pairs have been obtained, the gray differences between pixel points at the same positions in the next frame and the previous frame of rehabilitation monitoring images are obtained, and the standard deviation of these gray differences is computed. When the mean square error corresponding to a matched pixel pair is larger than the standard deviation of the gray differences, that matched pixel pair is deleted; that is, only matched pixel pairs whose mean square error is smaller than or equal to the standard deviation of the gray differences are retained. Matched pixel pairs are deleted based on the standard deviation of the gray differences because the standard deviation represents the overall fluctuation range of the difference between the two images, and deleting pairs in this way improves the matching accuracy.
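The matching step can be sketched as follows, assuming comp_prev and comp_next are the compensation-value maps of the two frames and reusing local_windows() from above. To keep the example tractable the search is limited to a small neighbourhood (radius is an illustrative assumption; the description itself compares against every pixel of the next frame); the subsequent filtering by the standard deviation of the gray differences is not shown.

```python
import numpy as np

def match_pixel(comp_prev: np.ndarray, comp_next: np.ndarray,
                y: int, x: int, radius: int = 8):
    """Return ((y2, x2), mse): the next-frame pixel whose 5x5 patch of compensation values
    has the smallest mean squared error against the patch of previous-frame pixel (y, x)."""
    w_prev = local_windows(comp_prev)[y, x]   # 5x5 patch around the previous-frame pixel
    w_next = local_windows(comp_next)
    h, w = comp_next.shape
    best, best_mse = None, np.inf
    for yy in range(max(0, y - radius), min(h, y + radius + 1)):
        for xx in range(max(0, x - radius), min(w, x + radius + 1)):
            mse = ((w_prev - w_next[yy, xx]) ** 2).mean()
            if mse < best_mse:
                best, best_mse = (yy, xx), mse
    return best, best_mse
```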
The noise reduction module 40 is configured to obtain a dynamic region and a background region in the rehabilitation monitoring image in the autism children structural teaching according to the optical flow vector corresponding to the matched pixel pair, and respectively reduce noise in the dynamic region and the background region to obtain a high-quality rehabilitation monitoring image in the autism children structural teaching.
The optical flow vectors corresponding to the matched pixel pairs are acquired and assembled into an optical flow field, and the dynamic region and the background region in the next frame of the adjacent frames of rehabilitation monitoring images corresponding to the matched pixel pairs are determined according to the optical flow field. The dynamic region is the region formed by the next-frame pixel points of matched pixel pairs whose optical flow vectors are non-zero, and the background region is the region formed by the next-frame pixel points of matched pixel pairs whose optical flow vectors are zero. It should be noted that there are various methods for determining the dynamic region and the background region from the optical flow field, and in other embodiments the practitioner may adjust the acquisition method according to the actual situation.
After the dynamic region is obtained, 3D noise reduction technology is used to denoise the dynamic region and the background region of the next frame of the adjacent frames of rehabilitation monitoring images respectively, obtaining a high-quality rehabilitation monitoring image of the structured teaching of autistic children. The above operations are repeated for all adjacent frames of rehabilitation monitoring images to obtain the dynamic region and background region of each frame of rehabilitation monitoring image, which are then denoised separately. After every frame of rehabilitation monitoring image has been denoised, clear, high-quality rehabilitation monitoring images are obtained. In the embodiment of the invention, the specific 3D noise reduction processes the dynamic region and the background region with a temporal-domain or a spatial-domain noise reduction method respectively; any common algorithm may be chosen for either region.
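A minimal sketch of this final step is given below. The displacement of each matched pixel pair acts as its optical flow vector: pixels with non-zero displacement form the dynamic region and the rest form the background. Gaussian smoothing for the dynamic region and two-frame temporal averaging for the background are only one possible realisation, chosen here because the description leaves the concrete spatial- and temporal-domain algorithms open; the function name denoise_frame and the matches dictionary layout are illustrative assumptions.

```python
import cv2
import numpy as np

def denoise_frame(prev: np.ndarray, nxt: np.ndarray,
                  matches: dict[tuple[int, int], tuple[int, int]]) -> np.ndarray:
    """matches maps (y, x) in the previous frame to its matched (y, x) in the next frame."""
    dynamic = np.zeros_like(nxt, dtype=bool)
    for (y1, x1), (y2, x2) in matches.items():
        if (y1, x1) != (y2, x2):                 # non-zero optical flow vector
            dynamic[y2, x2] = True
    spatial = cv2.GaussianBlur(nxt.astype(np.float32), (5, 5), 0)   # spatial-domain denoising
    temporal = 0.5 * prev + 0.5 * nxt                               # temporal-domain averaging
    return np.where(dynamic, spatial, temporal)
```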
In summary, aiming at the noise interference caused by long-time operation of the safety monitoring and recording system during the structured training of autistic children, the invention improves the accuracy of acquiring the dynamic region and the background region in the rehabilitation monitoring image by optimizing the pixel point matching precision, so as to improve the denoising effect of the existing noise reduction technology on the rehabilitation monitoring images. By screening net difference windows out of the plurality of local windows and subsequently compensating only the pixel points inside the net difference windows, the invention avoids the excessive computation that would be required if all pixel points in all local windows had to be compensated. The dynamic region and the background region in the rehabilitation monitoring image are then obtained according to the optical flow vectors corresponding to the matched pixel pairs, so that the two regions can be denoised separately.
It should be noted that the ordering of the embodiments of the present invention is only for description and does not represent the relative merits of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts the embodiments may be referred to one another, and each embodiment mainly describes its differences from the other embodiments.

Claims (4)

1. The structured teaching monitoring data management system for the autism children is characterized by comprising the following modules:
the data acquisition module is used for acquiring rehabilitation monitoring images in the structural teaching of the autism children;
the window screening module is used for screening out a net difference window from the local window by comparing the gray level difference of the rehabilitation monitoring image of the adjacent frame with the gray level difference in the local window in the rehabilitation monitoring image;
the method for acquiring the net difference window comprises the following steps: taking the absolute value of the difference value of the sum of the pixel values of all the pixel points in the local window at the same position in the rehabilitation monitoring image of the adjacent frame as a local difference value; taking the absolute value of the difference value of the sum of the pixel values of all pixel points in the adjacent frame rehabilitation monitoring image as an integral difference value, and taking the ratio of the integral difference value to the number of local windows as a net noise difference value; when the local difference value is larger than the net noise difference value, taking a local window corresponding to the local difference value as a net difference window;
the pixel point matching module is used for acquiring differential images of adjacent frame rehabilitation monitoring images in the autism children structural teaching; calculating the signal-to-noise ratio of each net difference window of a previous frame of rehabilitation monitoring image in the difference image and the adjacent frame of rehabilitation monitoring image, and taking the signal-to-noise ratio as a local signal-to-noise ratio; obtaining the noise degree of the net difference window according to the local signal-to-noise ratio of the net difference window in the adjacent frame rehabilitation monitoring image and the gray level difference of the adjacent frame rehabilitation monitoring image; determining a compensation value of the pixel point in the net difference window according to the noise degree of the net difference window and the gray value of the pixel point in the net difference window; matching pixel points in the rehabilitation monitoring images of adjacent frames based on the compensation values of the pixel points to obtain at least two matched pixel pairs;
the calculation formula of the noise degree is as follows:
$$Z_i = \frac{1}{SNR_i}\cdot\frac{\left|S_{n+1}-S_n\right|}{m}$$

wherein $Z_i$ is the noise degree of the i-th net difference window; $SNR_i$ is the local signal-to-noise ratio of the i-th net difference window; $m$ is the number of net difference windows corresponding to the n-th frame of rehabilitation monitoring image and the (n+1)-th frame of rehabilitation monitoring image; $S_{n+1}$ is the sum of the pixel values of all pixel points in the (n+1)-th frame of rehabilitation monitoring image; $S_n$ is the sum of the pixel values of all pixel points in the n-th frame of rehabilitation monitoring image;
determining the compensation value of the pixel points in the net difference window according to the noise degree of the net difference window and the gray values of the pixel points in the net difference window, and matching pixel points in the adjacent frames of rehabilitation monitoring images based on the compensation values of the pixel points to obtain at least two matched pixel pairs, comprises: selecting any pixel point in the net difference window as a target pixel point, and taking the ratio of the pixel value of the target pixel point to the sum of the pixel values of the pixel points in the net difference window to which the target pixel point belongs as the compensation weight of the target pixel point; taking the product of the compensation weight of the target pixel point and the noise degree of the net difference window to which the target pixel point belongs as the compensation difference value of the target pixel point; taking the difference between the gray value of the target pixel point and the compensation difference value as the compensation value of the target pixel point; the compensation value of the central pixel point of a local window that is not a net difference window is the original gray value of that central pixel point; the two pixel points of a matched pixel pair belong to rehabilitation monitoring images of different frames; for the adjacent frames of rehabilitation monitoring images, any pixel point in the previous frame of rehabilitation monitoring image is selected as a previous-frame pixel point, the mean square error between the previous-frame pixel point and each pixel point in the later frame of rehabilitation monitoring image is calculated based on the compensation values of the pixel points, and the pixel point of the later frame of rehabilitation monitoring image corresponding to the minimum mean square error forms a matched pixel pair with the previous-frame pixel point;
the noise reduction module is used for obtaining a dynamic region and a background region in the rehabilitation monitoring image in the autism children structural teaching according to the optical flow vector corresponding to the matched pixel pair, and respectively reducing noise of the dynamic region and the background region to obtain a high-quality rehabilitation monitoring image in the autism children structural teaching.
2. The structured teaching monitoring data management system for autism children according to claim 1, wherein calculating, as the local signal-to-noise ratio, the signal-to-noise ratio of each net difference window between the difference image and the previous frame of the adjacent frames of rehabilitation monitoring images comprises:
for any net difference window in the difference image, calculating the gray variance corresponding to the net difference window according to the pixel values of the pixel points inside the net difference window in the previous frame of the adjacent frames of rehabilitation monitoring images and the gray mean value of that previous frame, and taking this gray variance as the first variance; calculating the gray variance corresponding to the difference image according to the pixel values of the pixel points inside the net difference window in the difference image and the gray mean value of the difference image, and taking this gray variance as the second variance; and taking the base-10 logarithm whose true number is the ratio of the first variance to the second variance as the local signal-to-noise ratio corresponding to the net difference window.
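A short sketch of the local signal-to-noise ratio of claim 2, assuming the previous frame and the difference image are available as NumPy arrays and (y, x) is the top-left corner of a net difference window from the screening step; the epsilon guard is an added safeguard and not part of the claim.

```python
import numpy as np

def local_snr(prev_frame, diff_image, y, x, win=8, eps=1e-12):
    """Local signal-to-noise ratio of one net difference window (claim 2).

    First variance:  window pixels of the previous frame measured against
                     the global gray mean of that frame.
    Second variance: the same window in the difference image measured
                     against the global gray mean of the difference image.
    """
    a = prev_frame[y:y + win, x:x + win].astype(np.float64)
    d = diff_image[y:y + win, x:x + win].astype(np.float64)

    first_var = np.mean((a - prev_frame.mean()) ** 2)
    second_var = np.mean((d - diff_image.mean()) ** 2)

    # Base-10 logarithm of the variance ratio, as recited in claim 2.
    return np.log10((first_var + eps) / (second_var + eps))
```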
3. The structured teaching monitoring data management system for autism children according to claim 1, wherein the calculation formula of the mean square error between the previous-frame pixel point and each pixel point in the later frame of rehabilitation monitoring image is:
$$MSE_{ab} = \frac{1}{N}\sum_{v=1}^{N}\left(B_{a,v}-B_{b,v}\right)^{2}$$

wherein $MSE_{ab}$ is the mean square error between the previous-frame pixel point a and any pixel point b in the later frame of rehabilitation monitoring image; $N$ is the size of the local window; $B_{a,v}$ is the compensation value of the v-th pixel point in the local window to which the previous-frame pixel point a belongs; $B_{b,v}$ is the compensation value of the v-th pixel point in the local window to which the pixel point b belongs in the later frame of rehabilitation monitoring image.
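The mean-square-error matching of claim 3 can be sketched as follows; the candidate set, the square neighbourhood extraction and the border handling are simplifying assumptions made only for illustration.

```python
import numpy as np

def match_pixel(comp_prev, comp_next, a, candidates, win=8):
    """Find the later-frame pixel point matching previous-frame pixel a (claim 3).

    comp_prev, comp_next: compensation-value maps of the two frames.
    a:                    (y, x) of the previous-frame pixel point.
    candidates:           iterable of (y, x) candidate pixel points in the later frame.
    """
    def patch(img, p):
        y, x = p
        half = win // 2
        return img[max(y - half, 0):y + half + 1,
                   max(x - half, 0):x + half + 1].astype(np.float64)

    patch_a = patch(comp_prev, a)
    best, best_mse = None, np.inf
    for b in candidates:
        patch_b = patch(comp_next, b)
        if patch_b.shape != patch_a.shape:   # skip truncated border windows
            continue
        mse = np.mean((patch_a - patch_b) ** 2)
        if mse < best_mse:                   # smallest MSE -> matched pixel pair
            best, best_mse = b, mse
    return best, best_mse
```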
4. The structured teaching monitoring data management system for autism children according to claim 1, wherein obtaining the dynamic region and the background region in the rehabilitation monitoring image in the structured teaching of autism children according to the optical flow vectors corresponding to the matched pixel pairs comprises:
and acquiring an optical flow vector corresponding to the matched pixel, forming an optical flow field by the optical flow vector, and determining a dynamic region and a background region in a later frame of rehabilitation monitoring image in the adjacent frame of rehabilitation monitoring image corresponding to the matched pixel pair according to the optical flow field.
CN202310589183.4A 2023-05-24 2023-05-24 Structured teaching monitoring data management system suitable for autism children Active CN116342891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310589183.4A CN116342891B (en) 2023-05-24 2023-05-24 Structured teaching monitoring data management system suitable for autism children


Publications (2)

Publication Number Publication Date
CN116342891A CN116342891A (en) 2023-06-27
CN116342891B true CN116342891B (en) 2023-08-15

Family

ID=86877426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310589183.4A Active CN116342891B (en) 2023-05-24 2023-05-24 Structured teaching monitoring data management system suitable for autism children

Country Status (1)

Country Link
CN (1) CN116342891B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116523801B (en) * 2023-07-03 2023-08-25 贵州医科大学附属医院 Intelligent monitoring method for nursing premature infants


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112311962B (en) * 2019-07-29 2023-11-24 深圳市中兴微电子技术有限公司 Video denoising method and device and computer readable storage medium
US11427193B2 (en) * 2020-01-22 2022-08-30 Nodar Inc. Methods and systems for providing depth maps with confidence estimates

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120095647A (en) * 2011-02-21 2012-08-29 부경대학교 산학협력단 Apparatus and method for home healthcare monitoring
CN102833464A (en) * 2012-07-24 2012-12-19 常州泰宇信息科技有限公司 Method for structurally reconstructing background for intelligent video monitoring
CN111784605A (en) * 2020-06-30 2020-10-16 珠海全志科技股份有限公司 Image denoising method based on region guidance, computer device and computer readable storage medium
CN111833393A (en) * 2020-07-05 2020-10-27 桂林电子科技大学 Binocular stereo matching method based on edge information
CN115908154A (en) * 2022-09-20 2023-04-04 盐城众拓视觉创意有限公司 Video late-stage particle noise removing method based on image processing
CN115760637A (en) * 2022-12-01 2023-03-07 南京哈哈云信息科技有限公司 Elderly physical sign health monitoring method, system and equipment based on endowment robot
CN115797641A (en) * 2023-02-13 2023-03-14 深圳市特安电子有限公司 Electronic equipment gas leakage detection method
CN115880784A (en) * 2023-02-22 2023-03-31 武汉商学院 Scenic spot multi-person action behavior monitoring method based on artificial intelligence
CN116092018A (en) * 2023-04-10 2023-05-09 同方德诚(山东)科技股份公司 Fire-fighting hidden danger monitoring method and system based on intelligent building
CN116153452A (en) * 2023-04-18 2023-05-23 济南科汛智能科技有限公司 Medical electronic medical record storage system based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于大位移区域动态特征的阴燃火检测";袁鹏等;《消防科学与技术》;全文 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant