CN113115037B - Online education method, system, equipment and storage medium - Google Patents
- Publication number
- Publication number: CN113115037B (application number CN202110660037.7A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- video
- value
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
Abstract
The application provides an online education method, system, equipment and storage medium, comprising the following steps: a camera collects the teacher's teaching video stream data in real time; from the second frame onward, the difference matrix H_n of the n-th frame and the previous frame of video image is calculated in real time; the matrix H_n is traversed; the RGB video image information corresponding to the region position information set S1 of the n-th frame image is selected as data subset set S2; the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W is taken as data subset set S3; MPS encoding is performed on data subset set S3, and the obtained structured data is transmitted to the user end. The invention transmits data only for regions where the video image changes significantly, while the original image is retained in regions without significant change, which greatly reduces the amount of transmitted data and significantly improves data transmission efficiency; the difference-matrix calculation over the video frame image regions also greatly accelerates image data selection.
Description
Technical Field
The application relates to the technical field of computer vision, in particular to an online education method, a system, equipment and a storage medium.
Background
At present, with the continuously growing scale of short video and real-time video in online education, higher requirements are placed on data transmission bandwidth. Real-time and short videos allow a user to take relevant courses anytime and anywhere, so the playback fluency of a course video is closely related to the network transmission speed and the amount of transmitted data. Especially during epidemics, online education has seen wider adoption in distance education and plays an important role in remote learning.
However, traditional online education suffers from low video data transmission efficiency, a large volume of transmitted video data and high cost, and cannot meet the needs of a growing number of students; fast, convenient and efficient transmission of remote video information is therefore necessary. Traditional video transmission sends all data and cannot effectively exploit the structural characteristics of the transmitted data. A method that works in real time, runs fast and reduces the amount of invalid data transmitted is therefore urgently needed to improve the user experience.
Disclosure of Invention
In view of the above problems, the present invention has been made to provide an online education method, system, device and storage medium that overcome, or at least partially solve, the problems described above. According to the structural characteristics of a course video, image data is transmitted only for regions with significant change, while the original image is retained for regions without significant change; the amount of transmitted data is thereby greatly reduced and data transmission efficiency is significantly improved. The calculation of a difference matrix over the video frame image regions greatly accelerates image data selection, and the image change degree calculation markedly improves the selection rate of significant regions while greatly reducing image distortion. The online education method comprises the following steps: a camera collects the teacher's teaching video stream data in real time; from the second frame onward, the difference matrix H_n of the n-th frame and the previous frame of video image is calculated in real time as
H_n = a·(R_n − R_{n−1}) + b·(G_n − G_{n−1}) + c·(B_n − B_{n−1}),
where R_n − R_{n−1} denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, G_n − G_{n−1} the corresponding difference for the G channels, and B_n − B_{n−1} the corresponding difference for the B channels; a, b and c are the matrix proportionality coefficients of the R, G and B channels, respectively;
Traverse the matrix H_n and record the region position information set S1 of the positions whose matrix elements are not equal to 0;
Select the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2, and calculate the image change degree of each connected region in data subset set S2; the image change degree W_k of the connected region of the k-th data subset is computed from:
Δg_k, the difference between the mean gray levels of the connected region of the k-th data subset in the n-th and (n−1)-th frame videos; ΔL_k, the difference between the image steepness of that connected region in the two frames; and p_k, the proportion of the k-th data-subset connected region in the whole video image. h is a set threshold: if p_k is not less than h, W_k takes one weighted form, and if p_k is less than h, the other. The image steepness is obtained by extracting, in each image block, the pixel points with the lowest and the highest gray value, T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels in the interval between the lowest-gray-value and the highest-gray-value pixel points in the image block; ḡ is the gray-level mean of the data-subset connected regions of the n-th frame, and L̄ is the mean image steepness of those regions;
Take as data subset set S3 the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
Perform MPS encoding on data subset set S3 and transmit the obtained structured data to the user end, where MPS is a model data storage and transmission format used to express a linear optimization model.
Preferably, the step of obtaining the mean gray level of the connected region includes performing graying processing on the images of the connected region, with the gray-level threshold selected according to the maximum between-class variance (Otsu) method.
Preferably, the step of obtaining the mean gray level of the connected region includes graying the video image by the maximum-value method, taking the maximum of the three component brightness values in the color image as the gray value of the gray-scale map.
Preferably, the method further comprises, before obtaining the gray-level mean of the connected region, preprocessing the video image by filtering and denoising the video image information.
Also disclosed is an online education system including:
the acquisition module, used for collecting the teacher's teaching video stream data in real time via the camera, and for calculating in real time, from the second frame onward, the difference matrix H_n of the n-th frame and the previous frame as
H_n = a·(R_n − R_{n−1}) + b·(G_n − G_{n−1}) + c·(B_n − B_{n−1}),
where R_n − R_{n−1} denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, G_n − G_{n−1} the corresponding difference for the G channels, and B_n − B_{n−1} the corresponding difference for the B channels; a, b and c are the matrix proportionality coefficients of the R, G and B channels, respectively;
a traversing module for traversing the matrix H_n and recording the region position information set S1 of the positions whose matrix elements are not equal to 0;
a region initial selection module, configured to select the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2 and to calculate the image change degree of each connected region in data subset set S2; the image change degree W_k of the connected region of the k-th data subset is computed from:
Δg_k, the difference between the mean gray levels of the connected region of the k-th data subset in the n-th and (n−1)-th frame videos; ΔL_k, the difference between the image steepness of that connected region in the two frames; and p_k, the proportion of the k-th data-subset connected region in the whole video image. h is a set threshold: if p_k is not less than h, W_k takes one weighted form, and if p_k is less than h, the other. The image steepness is obtained by extracting, in each image block, the pixel points with the lowest and the highest gray value, T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels in the interval between the lowest-gray-value and the highest-gray-value pixel points in the image block; ḡ is the gray-level mean of the data-subset connected regions of the n-th frame, and L̄ is the mean image steepness of those regions;
the area selection module is used for taking the RGB video image information of which the variation degree of the nth frame video image is greater than the threshold value W as a data subset set S3;
and the coding transmission module, used for performing MPS encoding on data subset set S3 to obtain an MPS data packet, parsing the MPS data packet according to a preset data structure to obtain structured data suitable for optimization processing, and transmitting the obtained structured data to the user side, where MPS is a model data storage and transmission format used to express a linear optimization model.
Preferably, the system further comprises: a graying module for performing graying processing on the images of the connected regions and selecting the gray-level threshold according to the maximum between-class variance (Otsu) method.
Preferably, the graying module is further configured to perform graying on the video image by using a maximum value method, and use the maximum value of the three-component brightness in the color image as the grayscale value of the grayscale image.
Preferably, the method further comprises the following steps: and the preprocessing module is used for preprocessing the video image and filtering and denoising the video image information before acquiring the gray average value of the connected region.
An apparatus comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing the steps of the online education method as described above.
A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of online education as described above.
The application has the following advantages:
The application relates to an online education method and system which, from the second frame onward, calculates in real time the difference matrix H_n of the n-th frame and the previous frame of video image; traverses the matrix H_n and records the region position information set S1 of the positions whose matrix elements are not equal to 0; selects the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2; takes the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W as data subset set S3; and performs MPS encoding on data subset set S3, transmitting the obtained structured data to the user end. The invention transmits data only for regions where the video image changes significantly, while the original image is retained in regions without significant change, which greatly reduces the amount of transmitted data and significantly improves data transmission efficiency.
Particularly, the image data selection speed is greatly increased by the calculation mode of the difference degree matrix to the video frame image area; the image change degree calculation mode can remarkably improve the selection rate of the salient region and greatly reduce image distortion.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed to be used in the description of the present application will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a flow chart of a method of online education provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
As understood by the technical personnel in the field, as for the background technology, the traditional online education has low video data transmission efficiency, large video data transmission quantity and higher cost, and can not meet the requirement of more students; the fast, convenient and efficient transmission of remote video information is necessary at present. In the traditional video transmission, all data are transmitted, and the structural characteristics of the transmitted data cannot be effectively utilized; therefore, a method capable of real-time, fast and reducing the transmission of invalid data volume is an urgent need, thereby improving the user experience. In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Example 1:
the invention provides an online education method, system, equipment and storage medium; according to the structural characteristics of a course video, only the image data of significantly changing regions is transmitted, while the original image is retained in regions without significant change, greatly reducing the amount of transmitted data and significantly improving data transmission efficiency;
referring to fig. 1, a flowchart of an online education method provided by an embodiment of the present application is shown, including the steps of:
s100, collecting teaching video streaming data of a teacher in real time by a camera;
S200, from the second frame onward, calculating in real time the difference matrix H_n of the n-th frame and the previous frame of video image as
H_n = a·(R_n − R_{n−1}) + b·(G_n − G_{n−1}) + c·(B_n − B_{n−1}),
where R_n − R_{n−1} denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, G_n − G_{n−1} the corresponding difference for the G channels, and B_n − B_{n−1} the corresponding difference for the B channels; a, b and c are the matrix proportionality coefficients of the R, G and B channels, respectively;
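As an illustrative sketch (not part of the claimed method), the per-channel difference of step S200 can be computed with NumPy; the coefficient values a, b, c below are arbitrary placeholders, since the proportionality coefficients are left unspecified here.

```python
import numpy as np

def diff_matrix(frame_n, frame_prev, a=1.0, b=1.0, c=1.0):
    """Weighted sum of per-channel differences between two RGB frames.

    frame_n, frame_prev: H x W x 3 uint8 arrays; a, b, c are the
    R/G/B matrix proportionality coefficients (placeholder values here).
    """
    d = frame_n.astype(np.int32) - frame_prev.astype(np.int32)
    return a * d[..., 0] + b * d[..., 1] + c * d[..., 2]
```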
S300, traversing the matrix H_n and recording the region position information set S1 of the positions whose matrix elements are not equal to 0;
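A minimal sketch of the traversal in step S300, under the assumption that S1 is simply the collection of non-zero coordinates of H_n:

```python
import numpy as np

def region_positions(H_n):
    """Set S1: (row, col) positions where the difference matrix is non-zero."""
    return np.argwhere(H_n != 0)
```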
S400, selecting the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2, and calculating the image change degree of each connected region in data subset set S2; the image change degree W_k of the connected region of the k-th data subset is computed from:
Δg_k, the difference between the mean gray levels of the connected region of the k-th data subset in the n-th and (n−1)-th frame videos; ΔL_k, the difference between the image steepness of that connected region in the two frames; and p_k, the proportion of the k-th data-subset connected region in the whole video image. h is a set threshold: if p_k is not less than h, W_k takes one weighted form, and if p_k is less than h, the other. The image steepness is obtained by extracting, in each image block, the pixel points with the lowest and the highest gray value, T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels in the interval between the lowest-gray-value and the highest-gray-value pixel points in the image block; ḡ is the gray-level mean of the data-subset connected regions of the n-th frame, and L̄ is the mean image steepness of those regions;
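The steepness formula L = (T2 − T1)/q can be sketched as follows; interpreting q as the flattened-index distance between the darkest and brightest pixels is an assumption, since the pixel interval is not defined precisely here.

```python
import numpy as np

def steepness(block):
    """Image steepness L = (T2 - T1) / q for one gray-scale image block."""
    flat = block.ravel()
    i_min, i_max = int(np.argmin(flat)), int(np.argmax(flat))
    t1, t2 = int(flat[i_min]), int(flat[i_max])  # lowest / highest gray values
    q = max(abs(i_max - i_min), 1)               # pixel interval (assumed flattened distance)
    return (t2 - t1) / q
```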
S500, taking as data subset set S3 the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
S600, performing MPS encoding on data subset set S3 and transmitting the obtained structured data to the user end, where MPS is a model data storage and transmission format used to express a linear optimization model.
In an embodiment of the present application, the step of obtaining the mean gray level of the connected region includes performing graying processing on the image of the connected region, with the gray-level threshold selected according to the maximum between-class variance (Otsu) method.
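The maximum between-class variance (Otsu) threshold selection mentioned above can be sketched in plain NumPy (OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag offers an equivalent built-in):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance of a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                    # weight of the "background" class
        if w0 == 0:
            continue
        w1 = total - w0                  # weight of the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```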
In an embodiment of the present application, obtaining the mean gray value of the connected region includes graying the video image by the maximum-value method, taking the maximum of the three component brightness values in the color image as the gray value of the gray-scale image.
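Graying by the maximum-value method is a one-liner over the channel axis:

```python
import numpy as np

def gray_max(rgb):
    """Gray value = max of the R, G, B components at each pixel."""
    return rgb.max(axis=-1)
```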
In an embodiment of the present application, the method further includes, before obtaining the gray-level mean of the connected region, preprocessing the video image by filtering and denoising the video image information.
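No specific filter is named for this preprocessing step; a 3×3 median filter is one common choice for denoising, sketched here (edge padding at the borders is an assumption):

```python
import numpy as np

def median_denoise(gray):
    """3x3 median filter over a 2-D gray image; borders use edge padding."""
    h, w = gray.shape
    p = np.pad(gray, 1, mode="edge")
    # Stack the nine shifted views of the padded image, then take the median.
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0).astype(gray.dtype)
```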
Example 2:
the invention also discloses an online education system, comprising:
the acquisition module, used for collecting the teacher's teaching video stream data in real time via the camera, and for calculating in real time, from the second frame onward, the difference matrix H_n of the n-th frame and the previous frame as
H_n = a·(R_n − R_{n−1}) + b·(G_n − G_{n−1}) + c·(B_n − B_{n−1}),
where R_n − R_{n−1} denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, G_n − G_{n−1} the corresponding difference for the G channels, and B_n − B_{n−1} the corresponding difference for the B channels; a, b and c are the matrix proportionality coefficients of the R, G and B channels, respectively;
a traversing module for traversing the matrix H_n and recording the region position information set S1 of the positions whose matrix elements are not equal to 0;
a region initial selection module, configured to select the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2 and to calculate the image change degree of each connected region in data subset set S2; the image change degree W_k of the connected region of the k-th data subset is computed from:
Δg_k, the difference between the mean gray levels of the connected region of the k-th data subset in the n-th and (n−1)-th frame videos; ΔL_k, the difference between the image steepness of that connected region in the two frames; and p_k, the proportion of the k-th data-subset connected region in the whole video image. h is a set threshold: if p_k is not less than h, W_k takes one weighted form, and if p_k is less than h, the other. The image steepness is obtained by extracting, in each image block, the pixel points with the lowest and the highest gray value, T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels in the interval between the lowest-gray-value and the highest-gray-value pixel points in the image block; ḡ is the gray-level mean of the data-subset connected regions of the n-th frame, and L̄ is the mean image steepness of those regions;
the area selection module is used for taking the RGB video image information of which the variation degree of the nth frame video image is greater than the threshold value W as a data subset set S3;
and the coding transmission module, used for performing MPS encoding on data subset set S3 to obtain an MPS data packet, parsing the MPS data packet according to a preset data structure to obtain structured data suitable for optimization processing, and transmitting the obtained structured data to the user side, where MPS is a model data storage and transmission format used to express a linear optimization model.
In an embodiment of the present application, the system further includes: a graying module for performing graying processing on the images of the connected regions and selecting the gray-level threshold according to the maximum between-class variance (Otsu) method.
In an embodiment of the application, the graying module is further configured to perform graying on the video image by using a maximum value method, and use a maximum value of three-component brightness in the color image as a grayscale value of the grayscale image.
In an embodiment of the present application, the method further includes: and the preprocessing module is used for preprocessing the video image and filtering and denoising the video image information before acquiring the gray average value of the connected region.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
The application provides an online education method, system, device and storage medium which, from the second frame onward, calculates in real time the difference matrix H_n of the n-th frame and the previous frame of video image; traverses the matrix H_n and records the region position information set S1 of the positions whose matrix elements are not equal to 0; selects the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2; takes the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W as data subset set S3; and performs MPS encoding on data subset set S3, transmitting the obtained structured data to the user end. The invention transmits data only for regions where the video image changes significantly, while the original image is retained in regions without significant change, which greatly reduces the amount of transmitted data and significantly improves data transmission efficiency.
Particularly, the image data selection speed is greatly increased by the calculation mode of the difference degree matrix to the video frame image area; the image change degree calculation mode can remarkably improve the selection rate of the salient region and greatly reduce image distortion.
Example 3:
referring to fig. 2, a computer device of an online education method of the present application is shown, which may specifically include the following:
the computer device 12 described above is embodied in the form of a general purpose computing device, and the components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a memory 28, and a bus 18 that couples various system components including the memory 28 and the processing unit 16.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as random access memory 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (commonly referred to as "hard drives"). Although not shown in FIG. 2, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 42, with the program modules 42 configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules 42, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, camera, etc.), with one or more devices that enable an operator to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through the I/O interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN)), a Wide Area Network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As shown in FIG. 2, the network adapter 20 communicates with the other modules of the computer device 12 via the bus 18. It should be appreciated that although not shown in FIG. 2, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units 16, external disk drive arrays, RAID systems, tape drives, and data backup storage systems 34, etc.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the memory 28, for example, to implement an online education method provided by the embodiment of the present application.
That is, when executing the program, the processing unit 16 implements an online education method comprising the following steps: a camera collects the teacher's teaching video stream data in real time; from the second frame onward, the difference matrix H_n of the n-th frame and the previous frame is calculated in real time as
H_n = a·(R_n − R_{n−1}) + b·(G_n − G_{n−1}) + c·(B_n − B_{n−1}),
where R_n − R_{n−1} denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, G_n − G_{n−1} the corresponding difference for the G channels, and B_n − B_{n−1} the corresponding difference for the B channels; a, b and c are the matrix proportionality coefficients of the R, G and B channels, respectively;
Traverse the matrix H_n and record the region position information set S1 of the positions whose matrix elements are not equal to 0;
Select the RGB video image information corresponding to the region position information set S1 of the n-th frame image as data subset set S2, and calculate the image change degree of each connected region in data subset set S2; the image change degree W_k of the connected region of the k-th data subset is computed from:
Δg_k, the difference between the mean gray levels of the connected region of the k-th data subset in the n-th and (n−1)-th frame videos; ΔL_k, the difference between the image steepness of that connected region in the two frames; and p_k, the proportion of the k-th data-subset connected region in the whole video image. h is a set threshold: if p_k is not less than h, W_k takes one weighted form, and if p_k is less than h, the other. The image steepness is obtained by extracting, in each image block, the pixel points with the lowest and the highest gray value, T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels in the interval between the lowest-gray-value and the highest-gray-value pixel points in the image block; ḡ is the gray-level mean of the data-subset connected regions of the n-th frame, and L̄ is the mean image steepness of those regions;
taking, as the data subset set S3, the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
MPS encoding is performed on the data subset set S3, and the resulting structured data is transmitted to the user end; MPS (Mathematical Programming System) is a model-data storage format and transmission format for expressing linear optimization models.
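The selection of S3 reduces to a threshold filter over the per-region change degrees; a trivial sketch (the parameter name `w_threshold` is illustrative):

```python
def select_regions(change_degrees, w_threshold):
    """Set S3: indices of connected regions whose change degree Wk exceeds
    the global threshold W; only these regions are encoded and transmitted."""
    return [k for k, wk in enumerate(change_degrees) if wk > w_threshold]
```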
In an embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an online education method as provided in all embodiments of the present application.
That is, the program, when executed by the processor, implements an online education method comprising the steps of: a camera collects the teacher's teaching video stream data in real time; starting from the second frame, the difference matrix Hn between the n-th frame and the previous frame is calculated in real time:
Hn = kR·ΔRn + kG·ΔGn + kB·ΔBn
wherein ΔRn denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, ΔGn denotes the difference between the G-value matrices of the G channels of the n-th and (n−1)-th frame RGB video images, ΔBn denotes the difference between the B-value matrices of the B channels of the n-th and (n−1)-th frame RGB video images, and kR, kG and kB are the matrix proportionality coefficients of the R channel, G channel and B channel respectively;
traversing the matrix Hn, and recording the set S1 of area position information of the elements in the matrix that are not equal to 0;
selecting the RGB video image information corresponding to the region position information set S1 of the n-th frame image as the data subset set S2, and calculating the image change degree of each connected region in the data subset set S2; the image change degree Wk of the k-th data-subset connected region is:
Wk = pk·(ΔGk/Gn + ΔLk/Ln) if pk ≥ h; Wk = 0 if pk < h
wherein ΔGk denotes the difference between the gray-level means of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, ΔLk denotes the difference between the image steepness of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, pk denotes the proportion of the k-th data-subset connected region in the whole video image, and h is a set threshold. The image steepness is obtained by extracting, in each image block, the pixel with the lowest gray value and the pixel with the highest gray value, whose gray values are T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels between the lowest-gray-value pixel and the highest-gray-value pixel in the image block. Gn is the gray-level mean of the n data-subset connected regions; Ln is the mean image steepness of the n data-subset connected regions;
taking, as the data subset set S3, the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
MPS encoding is performed on the data subset set S3, and the resulting structured data is transmitted to the user end; MPS (Mathematical Programming System) is a model-data storage format and transmission format for expressing linear optimization models.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the operator's computer, partly on the operator's computer, as a stand-alone software package, partly on the operator's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the operator's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). The embodiments in the present specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to each other.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method, system, device and storage medium for online education provided by the present application are described in detail above, and the principle and implementation of the present application are explained herein by applying specific examples, and the description of the above examples is only used to help understand the method and core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. An online education method, comprising the steps of: a camera collects the teacher's teaching video stream data in real time; starting from the second frame, the difference matrix Hn between the n-th frame and the previous frame is calculated in real time:
Hn = kR·ΔRn + kG·ΔGn + kB·ΔBn
wherein ΔRn denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, ΔGn denotes the difference between the G-value matrices of the G channels of the n-th and (n−1)-th frame RGB video images, ΔBn denotes the difference between the B-value matrices of the B channels of the n-th and (n−1)-th frame RGB video images, and kR, kG and kB are the matrix proportionality coefficients of the R channel, G channel and B channel respectively;
traversing the matrix Hn, and recording the set S1 of area position information of the elements in the matrix that are not equal to 0;
selecting the RGB video image information corresponding to the region position information set S1 of the n-th frame image as the data subset set S2, and calculating the image change degree of each connected region in the data subset set S2; the image change degree Wk of the k-th data-subset connected region is:
Wk = pk·(ΔGk/Gn + ΔLk/Ln) if pk ≥ h; Wk = 0 if pk < h
wherein ΔGk denotes the difference between the gray-level means of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, ΔLk denotes the difference between the image steepness of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, pk denotes the proportion of the k-th data-subset connected region in the whole video image, and h is a set threshold. The image steepness is obtained by extracting, in each image block, the pixel with the lowest gray value and the pixel with the highest gray value, whose gray values are T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels between the lowest-gray-value pixel and the highest-gray-value pixel in the image block. Gn is the average of the gray levels of all connected regions of the n-th frame; Ln is the mean image steepness of all connected regions of the n-th frame;
taking, as the data subset set S3, the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
MPS encoding is performed on the data subset set S3, and the resulting structured data is transmitted to the user end; MPS (Mathematical Programming System) is a model-data storage format and transmission format for expressing linear optimization models.
2. The online education method according to claim 1, wherein obtaining the gray-level mean of a connected region comprises graying the connected-region image and selecting the gray threshold according to the maximum between-class variance method (Otsu).
3. The online education method according to claim 1, wherein obtaining the gray-level mean of a connected region comprises graying the video image by the maximum-value method, taking the maximum of the three component brightness values in the color image as the gray value of the gray image.
4. The online education method according to claim 2, wherein, before obtaining the gray-level mean of a connected region, the video image is preprocessed to denoise and filter the video image information.
5. An online education system, comprising an acquisition module: a camera collects the teacher's teaching video stream data in real time; starting from the second frame, the difference matrix Hn between the n-th frame and the previous frame is calculated in real time:
Hn = kR·ΔRn + kG·ΔGn + kB·ΔBn
wherein ΔRn denotes the difference between the R-value matrices of the R channels of the n-th and (n−1)-th frame RGB video images, ΔGn denotes the difference between the G-value matrices of the G channels of the n-th and (n−1)-th frame RGB video images, ΔBn denotes the difference between the B-value matrices of the B channels of the n-th and (n−1)-th frame RGB video images, and kR, kG and kB are the matrix proportionality coefficients of the R channel, G channel and B channel respectively;
a traversing module: traversing the matrix Hn, and recording the set S1 of area position information of the elements in the matrix that are not equal to 0;
a region primary selection module: selecting the RGB video image information corresponding to the region position information set S1 of the n-th frame image as the data subset set S2, and calculating the image change degree of each connected region in the data subset set S2; the image change degree Wk of the k-th data-subset connected region is:
Wk = pk·(ΔGk/Gn + ΔLk/Ln) if pk ≥ h; Wk = 0 if pk < h
wherein ΔGk denotes the difference between the gray-level means of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, ΔLk denotes the difference between the image steepness of the k-th data-subset connected region in the n-th and (n−1)-th frame videos, pk denotes the proportion of the k-th data-subset connected region in the whole video image, and h is a set threshold. The image steepness is obtained by extracting, in each image block, the pixel with the lowest gray value and the pixel with the highest gray value, whose gray values are T1 and T2 respectively; the steepness is L = (T2 − T1)/q, where q is the number of pixels between the lowest-gray-value pixel and the highest-gray-value pixel in the image block. Gn is the average of the gray levels of all connected regions of the n-th frame; Ln is the mean image steepness of all connected regions of the n-th frame;
a region selection module: taking, as the data subset set S3, the RGB video image information whose change degree in the n-th frame video image is greater than the threshold W;
an encoding and transmission module: performing MPS encoding on the data subset set S3 to obtain an MPS data packet, parsing the MPS data packet according to a preset data structure to obtain structured data suitable for optimization processing, and transmitting the obtained structured data to the user end; MPS (Mathematical Programming System) is a model-data storage format and transmission format for expressing linear optimization models.
6. The system according to claim 5, further comprising a graying module for graying the connected-region image and selecting the gray threshold according to the maximum between-class variance method (Otsu).
7. The system according to claim 5, wherein the graying module is configured to gray the video image by the maximum-value method, taking the maximum of the three component brightness values in the color image as the gray value of the gray image.
8. The system according to claim 5, further comprising a preprocessing module configured to preprocess the video image before obtaining the gray-level mean of a connected region, filtering and denoising the video image information.
9. An apparatus comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing the method of any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
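Claims 2/6 and 3/7 above refer to two standard graying operations; a minimal sketch of both, assuming 8-bit images: maximum-value graying (gray = max of the three channels) and Otsu's maximum between-class variance threshold selection.

```python
import numpy as np

def gray_max(rgb):
    """Maximum-value graying: the gray value of each pixel is the maximum
    of its three component brightness values (claims 3 and 7)."""
    return rgb.max(axis=-1)

def otsu_threshold(gray):
    """Otsu's maximum between-class variance method (claims 2 and 6):
    pick the 8-bit threshold t that maximises w0*w1*(mu0 - mu1)^2,
    where bin t is counted in the background class."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = 0.0       # background pixel count
    sum0 = 0.0     # background gray-level sum
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal image whose pixels are all 0 or 255, the threshold falls between the two modes, separating them cleanly.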
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110660037.7A CN113115037B (en) | 2021-06-15 | 2021-06-15 | Online education method, system, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113115037A CN113115037A (en) | 2021-07-13 |
CN113115037B true CN113115037B (en) | 2021-09-14 |
Family
ID=76723492
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110660037.7A Active CN113115037B (en) | 2021-06-15 | 2021-06-15 | Online education method, system, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113115037B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113660495A (en) * | 2021-08-11 | 2021-11-16 | 易谷网络科技股份有限公司 | Real-time video stream compression method and device, electronic equipment and storage medium |
CN114155254B (en) * | 2021-12-09 | 2022-11-08 | 成都智元汇信息技术股份有限公司 | Image cutting method based on image correction, electronic device and medium |
CN114140542B (en) * | 2021-12-09 | 2022-11-22 | 成都智元汇信息技术股份有限公司 | Picture cutting method based on color compensation, electronic equipment and medium |
CN115119016A (en) * | 2022-06-29 | 2022-09-27 | 王雨佳 | Information data encryption algorithm |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1946144A (en) * | 2006-11-01 | 2007-04-11 | 李博航 | Real time video image transmission technology |
CN101184216A (en) * | 2007-12-07 | 2008-05-21 | 广东纺织职业技术学院 | Intelligent domestic gateway presentation video control method and system thereof |
CN101321287A (en) * | 2008-07-08 | 2008-12-10 | 浙江大学 | Video encoding method based on movement object detection |
CN103400154A (en) * | 2013-08-09 | 2013-11-20 | 电子科技大学 | Human body movement recognition method based on surveillance isometric mapping |
CN104394418A (en) * | 2014-09-23 | 2015-03-04 | 清华大学 | Method and device for coding video data and method and device for decoding video data |
CN105306960A (en) * | 2015-10-18 | 2016-02-03 | 北京航空航天大学 | Dynamic adaptive stream system for transmitting high-quality online course videos |
CN105787597A (en) * | 2016-01-20 | 2016-07-20 | 北京优弈数据科技有限公司 | Data optimizing processing system |
US9578324B1 (en) * | 2014-06-27 | 2017-02-21 | Google Inc. | Video coding using statistical-based spatially differentiated partitioning |
CN107147906A (en) * | 2017-06-12 | 2017-09-08 | 中国矿业大学 | A kind of virtual perspective synthetic video quality without referring to evaluation method |
CN108021347A (en) * | 2017-12-29 | 2018-05-11 | 航天科工智慧产业发展有限公司 | A kind of method of Android terminal Screen sharing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6647061B1 (en) * | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
JP5821610B2 (en) * | 2011-12-20 | 2015-11-24 | 富士通株式会社 | Information processing apparatus, information processing method, and program |
US10448012B2 (en) * | 2016-11-22 | 2019-10-15 | Pixvana, Inc. | System and method for data reduction based on scene content |
JP7208356B2 (en) * | 2018-09-26 | 2023-01-18 | コーヒレント・ロジックス・インコーポレーテッド | Generating Arbitrary World Views |
Non-Patent Citations (1)
Title |
---|
Saliency detection algorithm based on distinguishable boundaries and weighted contrast optimization; Jiang Qingzhu et al.; Acta Electronica Sinica (《电子学报》); 2017-01-15 (No. 01); pp. 150-159 *
Also Published As
Publication number | Publication date |
---|---|
CN113115037A (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113115037B (en) | Online education method, system, equipment and storage medium | |
CN109886210B (en) | Traffic image recognition method and device, computer equipment and medium | |
CN113052868B (en) | Method and device for training matting model and image matting | |
CN111738041A (en) | Video segmentation method, device, equipment and medium | |
CN109194878B (en) | Video image anti-shake method, device, equipment and storage medium | |
CN111986117A (en) | System and method for correcting arithmetic operation | |
CN113344826A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111382647B (en) | Picture processing method, device, equipment and storage medium | |
CN113516697A (en) | Image registration method and device, electronic equipment and computer-readable storage medium | |
CN111815748B (en) | Animation processing method and device, storage medium and electronic equipment | |
CN111861204A (en) | Course mobile learning evaluation system and method based on intelligent platform | |
CN114723652A (en) | Cell density determination method, cell density determination device, electronic apparatus, and storage medium | |
CN115333879B (en) | Remote conference method and system | |
CN113315995B (en) | Method and device for improving video quality, readable storage medium and electronic equipment | |
CN112507243B (en) | Content pushing method and device based on expressions | |
WO2023284236A1 (en) | Blind image denoising method and apparatus, electronic device, and storage medium | |
CN115601820A (en) | Face fake image detection method, device, terminal and storage medium | |
CN112990198B (en) | Detection and identification method and system for water meter reading and storage medium | |
CN108419095A (en) | A kind of streaming media transcoding method, apparatus, computer equipment and readable medium | |
CN113762260A (en) | Method, device and equipment for processing layout picture and storage medium | |
CN112584117B (en) | White balance adjusting method, device, equipment and storage medium | |
CN113117341B (en) | Picture processing method and device, computer readable storage medium and electronic equipment | |
CN117173609A (en) | Multi-scale feature and channel attention-based reference-free screen video quality evaluation method and device | |
US20230334626A1 (en) | Techniques for denoising videos | |
KR20100049406A (en) | Remote education server, method and computer readable media storing program for method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||