CN117422627B - AI simulation teaching method and system based on image processing - Google Patents

AI simulation teaching method and system based on image processing Download PDF

Info

Publication number
CN117422627B
CN117422627B CN202311733029.6A CN202311733029A CN117422627B CN 117422627 B CN117422627 B CN 117422627B CN 202311733029 A CN202311733029 A CN 202311733029A CN 117422627 B CN117422627 B CN 117422627B
Authority
CN
China
Prior art keywords
gray
pixel point
current frame
image
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311733029.6A
Other languages
Chinese (zh)
Other versions
CN117422627A (en
Inventor
王亚
赵策
屠静
苏岳
万晶晶
李伟伟
颉彬
周勤民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuoshi Future Beijing technology Co ltd
Original Assignee
Zhuoshi Future Beijing technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuoshi Future Beijing technology Co ltd filed Critical Zhuoshi Future Beijing technology Co ltd
Priority to CN202311733029.6A priority Critical patent/CN117422627B/en
Publication of CN117422627A publication Critical patent/CN117422627A/en
Application granted granted Critical
Publication of CN117422627B publication Critical patent/CN117422627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of image processing, in particular to an AI simulation teaching method and system based on image processing, comprising the following steps: obtaining a frame-by-frame gray image of a teaching video, obtaining motion blur feature quantity of a pixel point according to gray value change conditions of the pixel point at the same position in a multi-frame gray image, obtaining correlation vectors of the pixel point in combination with gradient directions of the pixel point, obtaining dynamic degree of each block according to the correlation vectors of all the pixel points in each block, obtaining motion feature quantity of the pixel point according to the correlation vectors of the pixel point and other pixel points in eight adjacent domains and the dynamic degree of the block where the pixel point is located, dividing the gray image of the current frame into a plurality of categories according to the motion feature quantity, and performing blind deconvolution on the image of each category to obtain the deblurring gray image of the current frame. The method and the device eliminate the influence of motion blur caused by teacher motion change in the teaching video on the definition of the teaching video, and improve the teaching quality.

Description

AI simulation teaching method and system based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to an AI simulation teaching method and system based on image processing.
Background
AI simulation teaching is a teaching method for training skills by simulating real natural phenomena or social phenomena with a computer and students playing a certain role in simulation.
In the AI simulation teaching process, a teaching video is required to be recorded for relevant teaching demonstration, and the motion blur caused by the action change of a teacher exists in the teaching video, so that the influence on the definition of the teaching video exists, and the motion blur of the teaching video is removed usually by blind deconvolution at present.
Blind deconvolution uses a unified blur kernel for a single image. Because the motion states of different parts of a teacher in the teaching video are different, motion blur generated at different positions in a single frame image of the teaching video is different, and when the single frame image is subjected to motion blur removal by utilizing a unified blur check, the motion blur at each position in the single frame image cannot be well eliminated.
Disclosure of Invention
In order to solve the problems, the invention discloses an AI simulation teaching method and system based on image processing.
The AI simulation teaching method based on image processing adopts the following technical scheme:
the embodiment of the invention provides an AI simulation teaching method based on image processing, which comprises the following steps:
acquiring a frame-by-frame gray level image of a teaching video;
acquiring motion blur feature quantity of each pixel point in the current frame gray level image according to gray level value change conditions of the pixel points at the same position in the current frame gray level image and all previous frame gray level images;
obtaining a correlation vector of each pixel point according to the gradient direction and the motion blur characteristic quantity of each pixel point in the gray level image of the current frame; dividing a gray image of a current frame into a plurality of blocks, and acquiring the dynamic degree of each block according to the correlation vector of all pixel points in each block;
acquiring motion characteristic quantity of each pixel point in the gray level image of the current frame according to the correlation vector of each pixel point in the gray level image of the current frame and the rest pixel points in eight adjacent domains and the dynamic degree of the block where the correlation vector is located;
dividing the gray level image of the current frame into a plurality of categories according to the motion characteristic quantity; blind deconvolution is carried out on the images of each category respectively to obtain images with each category with the motion blur eliminated, and the images with all the categories with the motion blur eliminated are combined to obtain the deblurred gray image of the current frame.
Preferably, the step of obtaining the motion blur feature value of each pixel point in the current frame gray image according to the gray value change condition of the pixel point at the same position in the current frame gray image and all previous frame gray images includes the following specific steps:
acquiring the gray change frequency of a pixel point at the same position in the current frame gray image and all previous frame gray images according to the gray value of the pixel point; and acquiring the motion blur characteristic quantity of each pixel point in the gray image of the current frame according to the gray change frequency of each pixel point in the gray image of the current frame and the gray difference of each pixel point and the pixel point at the corresponding position in the gray image of the previous frame.
Preferably, the obtaining the gray level change frequency of the pixel point according to the gray level value of the pixel point at the same position in the gray level image of the current frame and the gray level images of all previous frames includes the following specific steps:
dividing a plurality of gray levels; for each pixel point in the gray level image of the current frame, the gray level value of the pixel point and the gray level values of the pixel points at the same position in all the gray level images of the previous frame are formed into a time sequence gray level sequence of the pixel point according to the time sequence; if the gray level of one gray value in the time sequence is different from the gray level of the previous gray value, the gray level is recorded as one gray level change, and the number of gray level changes in the time sequence is counted and used as the gray level change frequency of the pixel point.
Preferably, the obtaining the motion blur feature value of each pixel point in the current frame gray image according to the gray change frequency of each pixel point in the current frame gray image and the gray difference between each pixel point and the pixel point at the corresponding position in the previous frame gray image includes the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Motion blur feature amount of each pixel, < +.>Pick pass [0, ]>]Is>Representing the number of pixel points contained in each frame of gray level image; />Representing the +.f in the gray level image of the current frame>Gray scale change frequency of each pixel point; />Representing the total frame number of all frame gray images before the current frame gray image; />Representing the +.f in the gray level image of the current frame>Gray level of gray value of each pixel point and the previous gray levelFirst->The absolute value of the difference between the gray levels to which the gray values of the individual pixel points belong; />Representing the maximum function.
Preferably, the obtaining the correlation vector of each pixel point according to the gradient direction and the motion blur feature quantity of each pixel point in the gray level image of the current frame includes the following specific steps:
and forming a correlation vector of the pixel point by the gradient direction of the pixel point and the motion blur feature quantity of the pixel point, wherein the direction of the correlation vector is the gradient direction, and the mode of the correlation vector is the motion blur feature quantity.
Preferably, the obtaining the dynamic degree of each block according to the correlation vector of all the pixel points in each block includes the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Dynamic degree of individual blocks,/->Get pass [1, ]>]Is>Representing the number of blocks in the gray level image of the current frame; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Modulo length of correlation vector of each pixel point; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Included angles between the gradient direction and the horizontal direction of each pixel point; />Representing the +.f in the gray level image of the current frame>The average value of included angles between the gradient direction and the horizontal direction of all pixel points of each block; />Representing the +.f in the gray level image of the current frame>The number of pixels contained in each tile.
Preferably, the step of obtaining the motion feature quantity of each pixel point in the current frame gray level image according to the correlation vector of each pixel point in the current frame gray level image and the rest pixel points in the eight adjacent domains and the dynamic degree of the block where the correlation vector is located includes the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Motion feature quantity of each pixel point; />Representing the +.f in the gray level image of the current frame>The dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Eighth +.>The dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>Eighth +.>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>Modulo length of correlation vector of each pixel point, < +.>Representing the +.f in the gray level image of the current frame>Correlation vector of each pixel point and +.>Eighth +.>Modulo length of the sum of correlation vectors for individual pixels.
Preferably, the dividing the gray image of the current frame into a plurality of categories according to the motion feature quantity includes the following specific steps:
and carrying out mean shift clustering on all the pixel points in the gray level image of the current frame according to the abscissa and ordinate of each pixel point in the gray level image of the current frame and the motion characteristic quantity to obtain a plurality of categories.
Preferably, the dividing the gray image of the current frame into a plurality of blocks includes the following specific steps:
presetting a block side lengthDividing the gray image of the current frame into a plurality of +.>Size partitioning.
The invention also provides an AI simulation teaching system based on image processing, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of any AI simulation teaching method based on image processing when executing the computer program.
The technical scheme of the invention has the beneficial effects that: according to the method, the motion blur characteristic quantity of the pixel point is obtained according to the gray value change condition of the pixel point at the same position in the multi-frame gray image of the teaching video, the correlation vector of the pixel point is obtained in combination with the gradient direction of the pixel point, the motion degree of each block is obtained according to the correlation vector of all the pixel points in each block, the motion characteristic quantity of the pixel point is obtained according to the correlation vector of the pixel point and the rest pixel points in eight adjacent domains and the motion degree of the block where the pixel point is located, the gray image of the current frame is divided into a plurality of categories according to the motion characteristic quantity, the pixel point in the same motion state is divided into the same category as much as possible, blind deconvolution is carried out on the image of each category, and the motion blur of the pixel points in different motion states is removed by adopting different blur kernels, so that the motion blur of each position in the teaching video gray image can be well removed, the influence of the motion blur caused by the motion change of a teacher in the teaching video on the teaching video can be avoided, and the teaching video definition can be improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of the steps of the AI simulation teaching method based on image processing of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the specific implementation, structure, characteristics and effects of the AI simulation teaching method based on image processing according to the invention with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the AI simulation teaching method based on image processing provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of an AI simulation teaching method based on image processing according to an embodiment of the present invention is shown, where the method includes the following steps:
s001, acquiring a frame-by-frame gray level image of the teaching video.
And shooting the teaching video by using a high-definition camera, acquiring frame-by-frame images of the teaching video, wherein each frame of image is an RGB image, and graying each acquired frame of RGB image to acquire each frame of gray image of the teaching video for facilitating subsequent processing.
Thus, each frame of gray level image of the teaching video is obtained.
S002, obtaining motion blur feature quantity of each pixel point in the current frame gray level image according to gray level value change conditions of the pixel points at the same position in the current frame gray level image and all previous frame gray level images.
It should be noted that, each frame of gray level image includes a teacher and a teaching courseware or a teaching blackboard writing, and the teacher moves continuously and dynamically during the course of teaching, for example, the teacher lifts up an arm to write a blackboard writing. The motion dynamic change of the teacher can form motion blur in the gray level image, and under the influence of the motion blur, the teaching courseware or the teaching blackboard writing part is not clear in position, so that the teaching quality is influenced. In this embodiment, the motion dynamic change of the teacher is considered to be a continuous process, so that the possibility that the pixel point at the same position is affected by motion blur is analyzed by combining the gray value changes of the pixel points at the same position in different frames of gray images.
Specifically, the number of gray levels is presetFor example +.>Without limitation, the practitioner can set the gray level number +_ according to the actual implementation>. The gray value range [0,255]Evenly divide into->Grey levels.
For each pixel point in the gray level image of the current frame, the gray level value of the pixel point and the gray level value of the pixel point at the same position in all the gray level images of the previous frame are formed into a time sequence gray level sequence of the pixel point according to time sequence, if the gray level to which one gray level value belongs in the time sequence gray level sequence is different from the gray level to which the previous gray level value belongs, the gray level change is marked as one gray level change, the number of times of gray level change in the time sequence gray level sequence is counted, and the number of times of gray level change of the pixel point is used as the gray level change frequency of the pixel point.
According to the gray change frequency of each pixel point in the gray image of the current frame and the gray difference of each pixel point at the corresponding position in the gray image of the previous frame, acquiring the motion blur feature quantity of each pixel point in the gray image of the current frame:
wherein,representing the +.f in the gray level image of the current frame>Motion blur feature quantity of each pixel point for representing the imageThe possibility that the pixel is subjected to motion blurred image, < >>Pick pass [0, ]>]Is>Representing the number of pixel points contained in each frame of gray level image; />Representing the +.f in the gray level image of the current frame>Gray scale change frequency of each pixel point; />Representing the current frame gray level image and the total frame number of all frame gray level images before the current frame gray level image, namely the length of the time sequence gray level sequence of each pixel point in the current frame gray level image; />Representing the +.f in the gray level image of the current frame>Gray difference between each pixel point and the corresponding pixel point in the gray image of the previous frame, namely +.>Gray level to which gray value of each pixel point belongs and the +.>The absolute value of the difference between the gray levels to which the gray values of the individual pixel points belong; />The function of the maximum value is represented,for at->And 1 selecting the maximum value, avoiding being +.>When equal to 0, results in a denominator of 0.
It should be noted that, each frame of gray level image of the teaching video contains a teacher and a teaching courseware or a teaching blackboard writing, the teacher dynamically changes in the course of teaching, and for the teaching courseware and the teaching blackboard writing, the change frequency is smaller than that of the dynamic change of the teacher, so in the motion blur feature quantity formula,the larger the corresponding pixel is, the more likely the teacher is to be the corresponding pixel. Because motion blur is generated in the process of dynamic change of teacher motion, the gray values of corresponding pixel points in the front and back frames are relatively close at the position where the motion blur exists, the page switching of teaching courseware is nearly instantaneous, the motion blur is not generated in a large probability, the gray value difference of the pixel points before and after switching is relatively large, and therefore, in a motion blur characteristic quantity formula, the gray values of the corresponding pixel points before and after switching are relatively close to each otherThe larger the case, the more>The smaller the corresponding pixel point is, the more likely the corresponding pixel point is when the teacher motion dynamically changes, and at the moment, the larger the motion blur feature quantity of the corresponding pixel point is, the greater the possibility that the pixel point is subjected to motion blur images is.
Thus, the motion blur feature quantity of each pixel point in the gray level image of the current frame is obtained.
S003, obtaining a correlation vector of each pixel point according to the gradient direction and the motion blur characteristic quantity of each pixel point in the gray image of the current frame, dividing the gray image of the current frame into a plurality of blocks, and obtaining the dynamic degree of each block according to the correlation vector of all the pixel points in each block.
In this embodiment, a partition side length is presetFor example +.>The method is not particularly limited, and an operator can set the side length of the block according to the actual implementation condition. Dividing the gray image of the current frame into a plurality of +.>Size partitioning.
It should be noted that, because the teacher motion has consistency, for example, when the arm lifts, the motion directions of all the pixels on the arm are consistent, and the motion blur appears as a smear of the edge of the moving object in the image, so the gradient directions of the pixels at the position where the motion blur exists are basically consistent under the influence of the motion blur, and therefore, the embodiment analyzes the gradient directions of all the pixels in the block and the motion blur feature quantity to obtain the block including the motion pixels.
The gradient direction of each pixel point in the gray level image of the current frame is obtained by using an image gradient algorithm, and it should be noted that the image gradient algorithm is not limited in this embodiment, and an operator can select the image gradient algorithm according to actual implementation conditions, for example, a Sobel operator, a lagrangian operator, and the like.
For each pixel point in the gray level image of the current frame, the gradient direction of the pixel point and the motion blur feature quantity of the pixel point form a correlation vector of the pixel point, wherein the direction of the correlation vector is the gradient direction, and the mode of the correlation vector is the motion blur feature quantity. Acquiring the dynamic degree of each block according to the correlation vector of all pixel points of each block in the gray level image of the current frame:
wherein,representing the +.f in the gray level image of the current frame>The dynamic degree of each block is used for representing the +.>The degree of motion blurred pixels contained in the blocks, < >>Get pass [1, ]>]Is>Representing the number of blocks in the gray level image of the current frame; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Modulo length of correlation vector of each pixel, i.e. the +.f in gray image of current frame>No. 4 of individual blocks>Motion blur feature values of the individual pixel points; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Included angles between the gradient direction and the horizontal direction of each pixel point; />Representing the +.f in the gray level image of the current frame>The average value of included angles between the gradient direction and the horizontal direction of all pixel points of each block; />Representing the +.f in the gray level image of the current frame>The number of pixel points contained in each block; if motion blur exists in the block, the module length of the correlation vector of the pixel points of the motion blur in the block is larger, and the module length of the correlation vector of the other pixel points without the motion blur is smaller>Indicate->The average value of the modular length of the correlation vector of all pixel points in each block is equal to +.>Mean value of modular length of correlation vector of all pixels in each block +.>The larger the->The more motion blurred pixels contained in a segment. The human body motion in the local range shows the same motion trend, so that the gradient direction of the motion-blurred pixel points caused by the motion is basically consistent,indicate->The variance of the included angle between the gradient direction and the horizontal direction of all the pixel points in each block is equal to +.>Variance of included angle between gradient direction and horizontal direction of all pixel points in each blockSmaller, no->The more motion blurred pixels contained in a block, the corresponding +.>The greater the degree of dynamics of the individual tiles.
Thus, the dynamic degree of each block in the gray level image of the current frame is obtained.
S004, according to the correlation vector of each pixel point in the gray level image of the current frame and the rest pixel points in the eight adjacent domains and the dynamic degree of the block where the correlation vector is located, the motion characteristic quantity of each pixel point in the gray level image of the current frame is obtained.
It should be noted that, in the course of the dynamic change of the teacher motion, the motion states of different parts, for example, the swing amplitude and the swing direction of the left arm and the swing direction of the right arm are different, so that the motion blur generated by different parts is different, and it is difficult to deblur by using a unified blur kernel.
Specifically, for each pixel point in the gray level image of the current frame, according to the correlation vector of the pixel point and other pixel points in the eight adjacent domains and the dynamic degree of the block where the pixel point is located, the motion feature quantity of the pixel point is obtained:
wherein,representing the +.f in the gray level image of the current frame>Motion feature quantity of each pixel point; />Representing the +.f in the gray level image of the current frame>The dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Eighth +.>The dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>Eighth +.>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>Modulo length of correlation vector of each pixel point, < +.>Representing the +.f in the gray level image of the current frame>Correlation vector of each pixel point and +.>Eighth +.>Modulo length of the sum of correlation vectors of the individual pixels; when->The eighth pixel point is in the eighth neighborhood of the pixel point>When the motion states of the two pixels are the same, the directions of the correlation vectors of the two pixels are basically consistent, so that the modular length of the sum of the correlation vectors of the two pixels is close to the sum of the modular lengths of the correlation vectors of the two pixels, when the eenth->The eighth pixel point is in the eighth neighborhood of the pixel point>When the motion states of the two pixels are different, the directions of the correlation vectors of the two pixels are different, so that the modulo length of the sum of the correlation vectors of the two pixels is far smaller than the modulo length of the sum of the correlation vectors of the two pixels, thus>The greater the->The eighth pixel point is in the eighth neighborhood of the pixel point>The more likely the motion states of the individual pixels are the same. When->The eighth pixel point is in the eighth neighborhood of the pixel point>When each pixel point is positioned in the same block, < >>When->The eighth pixel point is in the eighth neighborhood of the pixel point>When the pixel points are located in different blocks, < >>Indicate->The eighth pixel point is in the eighth neighborhood of the pixel point>The mean value of the dynamic degree of the two blocks where the pixel points are located. When->The larger is at the same time->When the motion states of each pixel point and the rest pixel points in eight adjacent domains are more consistent, the first pixel point is +.>The pixel points are more likely to be the pixel points with motion blur in the block where the pixel points are located, the image features corresponding to the pixel points are more likely to be motion, and the motion feature quantity is larger. On the contrary, when->Smaller, at the same time->When the motion states of the pixel points are more inconsistent with the motion states of the rest pixel points in the eight adjacent domains, the pixel points are not more consistent with the motion states of the rest pixel points>The pixel points are more likely to be the pixel points without motion blur influence in the block where the pixel points are located, the image characteristics corresponding to the pixel points are more likely to be static, and the motion characteristic quantity is smaller.
Thus, the motion characteristic quantity of each pixel point in the gray level image of the current frame is obtained.
S005, dividing the gray level image of the current frame into a plurality of categories according to the motion characteristic quantity, performing blind deconvolution on the images of each category to obtain images with each category with the motion blur eliminated, and combining the images with all the categories with the motion blur eliminated to obtain the deblurred gray level image of the current frame.
And carrying out mean shift clustering on all the pixel points in the gray level image of the current frame according to the abscissa and ordinate of each pixel point in the gray level image of the current frame and the motion characteristic quantity to obtain a plurality of categories, wherein the motion states of the pixel points in each category are basically consistent, and the generated motion blur degree is basically consistent.
And performing blind deconvolution on the images of each category respectively to eliminate the dynamic blur in the images of each category. It should be noted that blind deconvolution is a well-known technique, and will not be described in detail herein.
And combining all types of images with the motion blur eliminated into a complete image, recording the complete image as a current frame deblurring gray level image, and converting the current frame deblurring gray level image into an RGB image. It should be noted that, the method for converting the gray image into the RGB image is not limited in particular, and the implementation personnel may set the method according to the actual implementation situation, for example, gray layering, and piecewise linear function based on gray level. Converting a gray image into an RGB image using gray layering or a gray level-based piecewise linear function is a well-known technique, and will not be described in detail herein.
In another example, three channels of each frame of RGB image of the acquired teaching video may be respectively regarded as one gray image, the motion blur is eliminated for each gray image by the above method, and the images with the motion blur eliminated by the three channels are combined into an RGB image.
Therefore, the teaching video in AI simulation teaching is used for removing motion blur.
The embodiment of the invention also provides an AI simulation teaching system based on image processing, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes any step of the AI simulation teaching method based on image processing when executing the computer program.
According to the embodiment of the invention, the motion blur characteristic quantity of the pixel point is obtained through the gray value change condition of the pixel point at the same position in the multi-frame gray image of the teaching video, the correlation vector of the pixel point is obtained in combination with the gradient direction of the pixel point, the dynamic degree of each block is obtained according to the correlation vector of all the pixel points in each block, the motion characteristic quantity of the pixel point is obtained according to the correlation vector of the pixel point and the rest pixel points in eight adjacent domains and the dynamic degree of the block where the pixel point is located, the gray image of the current frame is divided into a plurality of categories according to the motion characteristic quantity, the pixel point in the same motion state is divided into the same category as far as possible, blind deconvolution is carried out on the images of each category, and the motion blur of the pixel points in different motion states is removed by adopting different blur kernels, so that the motion blur of each position in the teaching video gray image can be well removed, the influence of the motion blur caused by the motion blur in a teacher motion change in the video on the teaching video can be avoided, and the teaching video teaching quality can be improved.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (6)

1. The AI simulation teaching method based on image processing is characterized by comprising the following steps:
acquiring a frame-by-frame gray level image of a teaching video;
acquiring motion blur feature quantity of each pixel point in the current frame gray level image according to gray level value change conditions of the pixel points at the same position in the current frame gray level image and all previous frame gray level images;
according to the gray value change condition of the pixel points at the same position in the gray image of the current frame and the gray images of all previous frames, the motion blur feature quantity of each pixel point in the gray image of the current frame is obtained, and the method comprises the following specific steps:
acquiring the gray change frequency of a pixel point at the same position in the current frame gray image and all previous frame gray images according to the gray value of the pixel point; acquiring motion blur feature quantity of each pixel point in the gray image of the current frame according to gray change frequency of each pixel point in the gray image of the current frame and gray difference of each pixel point and the pixel point at a corresponding position in the gray image of the previous frame;
obtaining a correlation vector of each pixel point according to the gradient direction and the motion blur characteristic quantity of each pixel point in the gray level image of the current frame;
the method for obtaining the correlation vector of each pixel point according to the gradient direction and the motion blur characteristic quantity of each pixel point in the gray level image of the current frame comprises the following specific steps:
forming a correlation vector of the pixel point by the gradient direction of the pixel point and the motion blur feature quantity of the pixel point, wherein the direction of the correlation vector is the gradient direction, and the mode of the correlation vector is the motion blur feature quantity;
dividing a gray image of a current frame into a plurality of blocks, and acquiring the dynamic degree of each block according to the correlation vector of all pixel points in each block;
the dynamic degree of each block is obtained according to the correlation vector of all pixel points in each block, and the method comprises the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Dynamic degree of individual blocks,/->Get pass [1, ]>]Is>Representing the number of blocks in the gray level image of the current frame; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Modulo length of correlation vector of each pixel point; />Representing the +.f in the gray level image of the current frame>No. 4 of individual blocks>Included angles between the gradient direction and the horizontal direction of each pixel point;representing the +.f in the gray level image of the current frame>The average value of included angles between the gradient direction and the horizontal direction of all pixel points of each block;representing the +.f in the gray level image of the current frame>The number of pixel points contained in each block;
acquiring motion characteristic quantity of each pixel point in the gray level image of the current frame according to the correlation vector of each pixel point in the gray level image of the current frame and the rest pixel points in eight adjacent domains and the dynamic degree of the block where the correlation vector is located;
the method for obtaining the motion characteristic quantity of each pixel point in the gray image of the current frame according to the correlation vector of each pixel point in the gray image of the current frame and the rest pixel points in eight adjacent domains and the dynamic degree of the block where the correlation vector is located comprises the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Motion feature quantity of each pixel point; />Representing the +.f in the gray level image of the current frame>The dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Eighth-neighbor-intra-pixel-dot eighth-neighbor-intra-pixel-dot eighth-neighbor-pixel-dotThe dynamic degree of the block where each pixel point is located; />Representing the +.f in the gray level image of the current frame>Correlation vectors for the individual pixels;representing the +.f in the gray level image of the current frame>Eighth +.>Correlation vectors for the individual pixels; />Representing the +.f in the gray level image of the current frame>Modulo length of correlation vector of each pixel point, < +.>Representing the +.f in the gray level image of the current frame>Correlation vector of each pixel point and +.>Eighth +.>Modulo length of the sum of correlation vectors of the individual pixels;
dividing the gray level image of the current frame into a plurality of categories according to the motion characteristic quantity; blind deconvolution is carried out on the images of each category respectively to obtain images with each category with the motion blur eliminated, and the images with all the categories with the motion blur eliminated are combined to obtain the deblurred gray image of the current frame.
2. The AI simulation teaching method based on image processing according to claim 1, wherein the step of obtaining the gray scale change frequency of the pixel point according to the gray scale value of the pixel point at the same position in the gray scale image of the current frame and the gray scale images of all previous frames comprises the following specific steps:
dividing a plurality of gray levels; for each pixel point in the gray level image of the current frame, the gray level value of the pixel point and the gray level values of the pixel points at the same position in all the gray level images of the previous frame are formed into a time sequence gray level sequence of the pixel point according to the time sequence; if the gray level of one gray value in the time sequence is different from the gray level of the previous gray value, the gray level is recorded as one gray level change, and the number of gray level changes in the time sequence is counted and used as the gray level change frequency of the pixel point.
3. The AI simulation teaching method based on image processing according to claim 2, wherein the step of obtaining the motion blur feature quantity of each pixel point in the gray image of the current frame according to the gray change frequency of each pixel point in the gray image of the current frame and the gray difference between each pixel point and the pixel point at the corresponding position in the gray image of the previous frame comprises the following specific steps:
wherein,representing the +.f in the gray level image of the current frame>Motion blur feature amount of each pixel, < +.>Pick pass [0, ]>]Is>Representing pixels contained in each frame of gray scale imageThe number of dots; />Representing the +.f in the gray level image of the current frame>Gray scale change frequency of each pixel point; />Representing the total frame number of all frame gray images before the current frame gray image; />Representing the +.f in the gray level image of the current frame>Gray level to which gray value of each pixel point belongs and the +.>The absolute value of the difference between the gray levels to which the gray values of the individual pixel points belong; />Representing the maximum function.
4. The AI simulation teaching method based on image processing according to claim 1, wherein the classifying the gray image of the current frame into a plurality of categories according to the motion feature quantity comprises the following specific steps:
and carrying out mean shift clustering on all the pixel points in the gray level image of the current frame according to the abscissa and ordinate of each pixel point in the gray level image of the current frame and the motion characteristic quantity to obtain a plurality of categories.
5. The AI simulation teaching method based on image processing according to claim 1, wherein the dividing the gray image of the current frame into a plurality of blocks comprises the following specific steps:
presetting a block side lengthDividing the gray image of the current frame into a plurality of +.>Size partitioning.
6. AI simulation teaching system based on image processing, comprising a memory, a processor and a computer program stored in said memory and running on said processor, characterized in that said processor implements the steps of the method according to any of claims 1-5 when said computer program is executed.
CN202311733029.6A 2023-12-18 2023-12-18 AI simulation teaching method and system based on image processing Active CN117422627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311733029.6A CN117422627B (en) 2023-12-18 2023-12-18 AI simulation teaching method and system based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311733029.6A CN117422627B (en) 2023-12-18 2023-12-18 AI simulation teaching method and system based on image processing

Publications (2)

Publication Number Publication Date
CN117422627A CN117422627A (en) 2024-01-19
CN117422627B true CN117422627B (en) 2024-02-20

Family

ID=89525134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311733029.6A Active CN117422627B (en) 2023-12-18 2023-12-18 AI simulation teaching method and system based on image processing

Country Status (1)

Country Link
CN (1) CN117422627B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689590B (en) * 2024-01-31 2024-04-16 天津灵境智游科技有限公司 AR object interactive display method based on AI technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335268A (en) * 2018-01-05 2018-07-27 广西师范大学 A method of the coloured image deblurring based on blind deconvolution
CN110490822A (en) * 2019-08-11 2019-11-22 浙江大学 The method and apparatus that image removes motion blur
CN113269682A (en) * 2021-04-21 2021-08-17 青岛海纳云科技控股有限公司 Non-uniform motion blur video restoration method combined with interframe information
CN113643217A (en) * 2021-10-15 2021-11-12 广州市玄武无线科技股份有限公司 Video motion blur removing method and device, terminal equipment and readable storage medium
CN115908154A (en) * 2022-09-20 2023-04-04 盐城众拓视觉创意有限公司 Video late-stage particle noise removing method based on image processing
CN116071657A (en) * 2023-03-07 2023-05-05 青岛旭华建设集团有限公司 Intelligent early warning system for building construction video monitoring big data
CN116188327A (en) * 2023-04-21 2023-05-30 济宁职业技术学院 Image enhancement method for security monitoring video

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10289951B2 (en) * 2016-11-02 2019-05-14 Adobe Inc. Video deblurring using neural networks
CN111539879B (en) * 2020-04-15 2023-04-14 清华大学深圳国际研究生院 Video blind denoising method and device based on deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108335268A (en) * 2018-01-05 2018-07-27 广西师范大学 A method of the coloured image deblurring based on blind deconvolution
CN110490822A (en) * 2019-08-11 2019-11-22 浙江大学 The method and apparatus that image removes motion blur
CN113269682A (en) * 2021-04-21 2021-08-17 青岛海纳云科技控股有限公司 Non-uniform motion blur video restoration method combined with interframe information
CN113643217A (en) * 2021-10-15 2021-11-12 广州市玄武无线科技股份有限公司 Video motion blur removing method and device, terminal equipment and readable storage medium
CN115908154A (en) * 2022-09-20 2023-04-04 盐城众拓视觉创意有限公司 Video late-stage particle noise removing method based on image processing
CN116071657A (en) * 2023-03-07 2023-05-05 青岛旭华建设集团有限公司 Intelligent early warning system for building construction video monitoring big data
CN116188327A (en) * 2023-04-21 2023-05-30 济宁职业技术学院 Image enhancement method for security monitoring video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Attentive deep network for blind motion deblurring on dynamic scenes;Yong Xu et al.;《Computer Vision and Image Understanding》;20210129;第1-12页 *
基于块合成的视频图像去模糊算法的改进;强钰琦 等;《电视技术》;20160731;第40卷(第7期);第128-133页 *
视觉传感器抖动模糊图像复原技术;朱立夫 等;《传感器与微系统》;20180831;第37卷(第8期);第19-21页 *

Also Published As

Publication number Publication date
CN117422627A (en) 2024-01-19

Similar Documents

Publication Publication Date Title
US20210327031A1 (en) Video blind denoising method based on deep learning, computer device and computer-readable storage medium
CN117422627B (en) AI simulation teaching method and system based on image processing
US11741581B2 (en) Training method for image processing model, image processing method, network device, and storage medium
US9019299B2 (en) Filtering method and apparatus for anti-aliasing
CN109325922B (en) Image self-adaptive enhancement method and device and image processing equipment
CN108305230A (en) A kind of blurred picture integrated conduct method and system
CN106204472B (en) Video image deblurring method based on sparse characteristic
CN103310430B (en) To the fuzzy method and apparatus for carrying out deblurring of nonuniform motion
EP2164040A1 (en) System and method for high quality image and video upscaling
JP2006350334A (en) Method and system for compensating perceptible blurring due to movement between current frames and former frame of digital video sequence
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN112785534A (en) Ghost-removing multi-exposure image fusion method in dynamic scene
DE102019121200A1 (en) MOTION-ADAPTIVE RENDERING BY SHADING WITH A VARIABLE RATE
CN115393227A (en) Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning
CN106162131B (en) A kind of real time image processing
CN111986102A (en) Digital pathological image deblurring method
CN111161189A (en) Single image re-enhancement method based on detail compensation network
CN114742774A (en) No-reference image quality evaluation method and system fusing local and global features
CN113627309A (en) Video noise reduction method and device, computer system and readable storage medium
Peng et al. MND-GAN: A Research on Image Deblurring Algorithm Based on Generative Adversarial Network
CN112258434A (en) Detail-preserving multi-exposure image fusion algorithm in static scene
CN113379610A (en) Training method of image processing model, image processing method, medium, and terminal
CN111182094B (en) Video file processing method and device
CN110930322B (en) Defogging method for estimating transmission image by combining image blocking with convolution network
Mourya et al. Techniques for learning to see in the dark: a survey

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant