CN111125432A - Rapid matching system and video matching method for safety production training - Google Patents
- Publication number: CN111125432A (application number CN201911358722.3A)
- Authority
- CN
- China
- Prior art keywords
- training
- user
- training video
- optional
- video
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06F16/735: Information retrieval of video data; querying; filtering based on additional data, e.g. user or group profiles
- G06F16/75: Information retrieval of video data; clustering; classification
- G06F18/24: Pattern recognition; analysing; classification techniques
- G06N20/00: Machine learning
- G06Q50/02: ICT specially adapted for specific business sectors; agriculture, fishing, forestry, mining
- G06Q50/2057: Education administration or guidance; career enhancement or continuing education service
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a rapid matching system for safety production training and a video matching method. The system comprises a user side and a cloud side in communication connection with each other. The cloud side comprises an ID generation unit, a storage unit and a control unit, the ID generation unit and the storage unit each being connected to the control unit. The ID generation unit acquires initial user information from the user side and generates a user ID and password. The storage unit stores the identity information of each user ID of the system, a behavior log of each user ID, the training videos and the training curriculum schedule, the behavior log recording the duration for which the user ID has watched each training video. The control unit monitors whether a user's learning duration meets the requirement and matches training videos to the user by a machine learning method, and the user side plays the matched videos. The system can thus supervise each user's training duration and match training videos to different users according to how long each user has studied each training video.
Description
Technical Field
The invention relates to the field of computers, in particular to a rapid matching system for safety production training and a video matching method.
Background
Responsibility for coal mine safety production is weightier than Mount Tai. The State Council's "Decision on Further Strengthening Safety Production Training Work" (October 2012) states that training that is not in place is itself a major potential safety hazard. It requires 100% safety training for all personnel, and calls on every enterprise to comprehensively strengthen its safety training infrastructure, firmly promoting standardized training content, diversified training modes, informatized management, modernized methods and routine daily supervision, so as to deliver full-coverage, multi-means, high-quality safety training. During training, the requirements for personnel differ by post and work content, and how to intelligently supervise and advise on the training status of personnel in each post is a problem to be solved.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a rapid matching system for safety production training and a video matching method.
In order to achieve the above purpose, the invention provides a rapid matching system for safety production training, which comprises a user side and a cloud side, wherein the user side is in communication connection with the cloud side, the cloud side comprises an ID generation unit, a storage unit and a control unit, and the ID generation unit and the storage unit are respectively connected with the control unit;
the ID generation unit acquires user initial information from a user side and generates a user ID and a password;
the storage unit stores the identity information of each user ID of the system, a behavior log of each user ID, the training videos, the training curriculum schedules and the training examination results, wherein the behavior log records the duration for which the user ID has watched each training video;
the control unit monitors whether the learning duration of the user meets the requirement and matches training videos to the user by a machine learning method, and the user side plays the training videos.
The rapid matching system for safety production training can monitor a user's training duration and can match training videos to different users according to how long each user has studied each training video.
Preferred embodiments of the present application: the system further comprises a certificate generation unit, which generates a certificate according to the level of the current user ID, the duration for which the user ID has watched each training video, the training examination result and the certificate generation requirement; the control unit monitors whether the certificate is within its validity period according to the certificate's validity duration. A certificate can thus be generated automatically for users who need one, and its validity period monitored intelligently.
Preferred embodiments of the present application: the machine learning method comprises the following steps:
deep learning is performed on the duration for which the user ID has watched each training video, the training examination result and the training curriculum corresponding to the level of the user ID, so as to obtain the courses that have not been learned and/or need to be strengthened, and these courses are pushed to the user ID.
Preferred embodiments of the present application: the training videos comprise mandatory training videos and optional training videos, and the newly added training videos are matched into the mandatory training videos or the optional training videos through a machine learning algorithm.
The application also provides a video matching method based on machine learning, and the training video matching method comprises the following steps:
establishing a mandatory training video database and an optional training video database, wherein each mandatory sample in the mandatory training video database comprises a mandatory characteristic variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional characteristic variable and an optional training video;
extracting a necessary sample set from a necessary training video database, and extracting an optional sample set from an optional training video database;
constructing a first classifier function by using a necessary sample set obtained by sampling, and constructing a second classifier function by using an optional sample set obtained by sampling;
extracting characteristic variables of the training video to be matched and inputting the characteristic variables serving as input variables into the first classifier function and the second classifier function to obtain the similarity with a necessary sample set and the similarity with an optional sample set;
if the similarity with the necessary sample set reaches the necessary specified proportion value, the training video to be matched is matched into the necessary training video database; if the similarity with the optional sample set reaches the optional specified proportion value, it is matched into the optional training video database; and if both the necessary and the optional specified proportion values are reached, it is matched into the necessary training video database.
The video matching method is simple and can quickly match a training video to be matched into either the necessary or the optional training video database.
The preferred scheme of the method is as follows: the classifier function compares the videos through the sound frequency spectrum and the image recognition degree, and the method specifically comprises the following steps:
acquiring a keyword A in a sample set; acquiring a frequency spectrum of the keyword A;
acquiring a keyword B of a training video to be matched, and acquiring a frequency spectrum of the keyword B;
calculating the similarity C of the two frequency spectrums as C = a × (difference of central frequency points) + b × (spectrum width difference), wherein the difference of central frequency points is the difference between the central frequency point of the spectrum of keyword A and that of the spectrum of keyword B, the spectrum width difference is the difference between the spectrum widths of the two spectra, 0.5 < a < 1, 0 < b < 0.5 and a + b = 1;
if the similarity C is larger than the threshold value, acquiring information on the object shot in the picture of the training video to be matched and the reference object information in the sample set, and calculating the similarity D of the two objects;
When the classifier function is a first classifier function, the keyword A is a necessary feature variable; when the classifier function is a second classifier function, the keyword A is an optional feature variable; and the keyword B is a characteristic variable of the training video to be matched.
The invention has the beneficial effects that: it supervises the training duration of users participating in safety production training; it matches training videos to different users according to how long each user has studied each training video; it automatically generates certificates for users who need them and intelligently monitors their validity periods; and it automatically matches and classifies training videos.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a functional block diagram of a rapid matching system for safety production training.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
As shown in fig. 1, the invention provides a rapid matching system for safety production training, which comprises a user side and a cloud side, wherein the user side is in communication connection with the cloud side, the cloud side comprises an ID generation unit, a storage unit and a control unit, and the ID generation unit and the storage unit are respectively connected with the control unit.
The ID generation unit acquires user initial information from a user side and generates a user ID and a password. The user ID has at least one level, each level corresponding to a different training curriculum.
The storage unit stores the identity information of each user ID of the system, a behavior log of each user ID, the training videos and the training curriculum schedule, wherein the behavior log records the duration for which the user ID has watched each training video.
The control unit monitors whether the user's learning time meets the requirement according to the total time the user has spent watching each training video in the behavior log, matches training videos to the user by a machine learning method, and has the user side play them. The machine learning method comprises the following steps:
The user can take a training examination at the user side. The examination result, including the score for each part of the examination content and the distribution of wrong answers, is stored in the storage unit. Deep learning is performed on the duration for which the user ID has watched each training video, the training examination result and the training curriculum corresponding to the level of the user ID, so as to obtain the courses that have not been learned and/or need to be strengthened, and these courses are pushed to the user ID.
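As a concrete illustration, the course-gap step above can be approximated with a simple rule-based stand-in for the deep learning model the patent specifies; the thresholds, field layout and function name below are hypothetical, not taken from the patent:

```python
def courses_to_push(watch_log, curriculum, exam_scores,
                    min_fraction=0.8, pass_score=60):
    """Return courses that are unlearned or need strengthening.

    watch_log:   {course: minutes watched} from the behavior log
    curriculum:  {course: required minutes} for the user ID's level
    exam_scores: {course: score on that part of the training examination}
    A course is pushed if it was watched for less than min_fraction of the
    required duration, or its examination score is below pass_score.
    (Rule-based stand-in; the patent itself uses deep learning here.)
    """
    push = []
    for course, required in curriculum.items():
        watched = watch_log.get(course, 0)
        score = exam_scores.get(course)
        if watched < min_fraction * required:
            push.append(course)          # not learned (enough)
        elif score is not None and score < pass_score:
            push.append(course)          # learned, but needs strengthening
    return push
```

For example, a user who watched only 10 of 60 required minutes of a course would have that course pushed again, as would one who watched it in full but failed that part of the examination.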
The rapid matching system further comprises a certificate generation unit, which generates a certificate according to the level of the current user ID, the duration for which the user ID has watched each training video, the training examination result and the certificate generation requirement. If the training examination result is unqualified, no certificate is generated. The control unit monitors whether the certificate is within its validity period according to the certificate's validity duration.
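A minimal sketch of the validity-period check, assuming each certificate record stores an issue date and a validity duration in days (the record layout and the 30-day warning window are illustrative assumptions; the patent only says the control unit monitors whether the certificate is in its validity period):

```python
from datetime import date, timedelta

def certificate_status(issue_date, validity_days, today=None):
    """Return 'valid', 'expiring' (within 30 days of expiry) or 'expired'."""
    today = today or date.today()
    expiry = issue_date + timedelta(days=validity_days)
    if today > expiry:
        return "expired"
    if (expiry - today).days <= 30:
        return "expiring"   # prompt the holder to renew / retrain
    return "valid"
```

The control unit would run this periodically over all stored certificates and push the matched refresher training when a certificate enters the "expiring" state.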
For example, if the level is the coal mine special operation level, special operation personnel must be trained according to the specified training outline before taking the qualification examination, with initial training of not less than ninety class hours. When the special operation certificate approaches expiry it must be renewed, and the holder should complete not less than twenty-four class hours of special training before the validity period expires. The control unit monitors whether the training duration of a user at the special operation level meets these requirements, pushes matched training videos to the user, and monitors whether the special operation certificate is within its validity period.
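The hour requirements in this example can be checked mechanically against the behavior log. The sketch below uses the ninety-hour initial and twenty-four-hour refresher figures quoted above and, as a simplifying assumption, treats one class hour as sixty minutes of logged viewing:

```python
INITIAL_HOURS_REQUIRED = 90    # initial training, per the quoted outline
REFRESHER_HOURS_REQUIRED = 24  # before the certificate's validity expires

def training_compliant(behavior_log, is_initial):
    """behavior_log: {training_video: minutes watched} for one user ID.

    Returns True if total logged class hours meet the special-operation
    requirement (initial vs refresher training).
    """
    total_hours = sum(behavior_log.values()) / 60.0   # 1 class hour = 60 min (assumption)
    required = INITIAL_HOURS_REQUIRED if is_initial else REFRESHER_HOURS_REQUIRED
    return total_hours >= required
```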
The training videos at the cloud end can be updated at any time, the training videos comprise necessary training videos and optional training videos, and the newly added training videos are matched into the necessary training videos or the optional training videos through a machine learning algorithm.
The invention also provides a video matching method based on machine learning, and the training video matching method comprises the following steps:
Establishing a mandatory training video database and an optional training video database, wherein each mandatory sample in the mandatory training video database comprises a mandatory characteristic variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional characteristic variable and an optional training video.
And extracting a necessary sample set from the necessary training video database, and extracting an optional sample set from the optional training video database.
And constructing a first classifier function by using the sampled necessary sample set, and a second classifier function by using the sampled optional sample set.
And extracting characteristic variables of the training video to be matched and inputting the characteristic variables serving as input variables into the first classifier function and the second classifier function to obtain the similarity with the necessary sample set and the similarity with the optional sample set.
The classifier function compares the videos through the sound frequency spectrum and the image recognition degree, and the method specifically comprises the following steps:
A keyword A in the sample set is acquired, namely a word with a high probability of repetition, which can be a single word or a set of several words; the frequency spectrum of keyword A is then acquired.
And acquiring a keyword B of the training video to be matched and acquiring a frequency spectrum of the keyword B.
The similarity C of the two frequency spectrums is calculated as C = a × (difference of central frequency points) + b × (spectrum width difference), where the difference of central frequency points is the difference between the central frequency point of the spectrum of keyword A and that of the spectrum of keyword B, the spectrum width difference is the difference between the spectrum widths of the two spectra, 0.5 < a < 1, 0 < b < 0.5 and a + b = 1.
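A sketch of this comparison under stated assumptions: each spectrum is a list of (frequency, magnitude) pairs, the central frequency point is taken as the magnitude-weighted mean frequency, and the spectrum width as the magnitude-weighted spread. The patent does not define these estimators, so they are illustrative. Note that, computed this way, C is 0 for identical spectra and grows with dissimilarity, so it behaves as a distance even though the text calls it a similarity:

```python
def center_and_width(spectrum):
    """spectrum: list of (frequency_hz, magnitude) pairs.

    Central frequency point = magnitude-weighted mean frequency;
    spectrum width = magnitude-weighted standard deviation (assumption).
    """
    total = sum(m for _, m in spectrum)
    center = sum(f * m for f, m in spectrum) / total
    width = (sum(m * (f - center) ** 2 for f, m in spectrum) / total) ** 0.5
    return center, width

def similarity_c(spec_a, spec_b, a=0.7, b=0.3):
    """C = a*|center difference| + b*|width difference|,
    with 0.5 < a < 1, 0 < b < 0.5, a + b = 1 as stated in the text."""
    ca, wa = center_and_width(spec_a)
    cb, wb = center_and_width(spec_b)
    return a * abs(ca - cb) + b * abs(wa - wb)
```

Shifting a spectrum rigidly by 10 Hz moves its center by 10 Hz and leaves its width unchanged, so C = 0.7 × 10 = 7.0 in that case.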
If the similarity C is greater than a set threshold value, information on the object shot in the picture of the training video to be matched and the reference object information in the sample set are acquired, and the similarity D of the two objects is calculated. Any existing algorithm, such as an image comparison algorithm, can be used for similarity D.
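One common existing choice for the similarity-D step is grayscale histogram intersection. The sketch below works on flat lists of 8-bit pixel values; the concrete algorithm is an assumption, since the text only calls for "an existing image comparison algorithm":

```python
def gray_histogram(pixels, bins=16):
    """Normalized histogram of 8-bit grayscale pixel values (0..255)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def similarity_d(pixels_a, pixels_b, bins=16):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    ha = gray_histogram(pixels_a, bins)
    hb = gray_histogram(pixels_b, bins)
    return sum(min(x, y) for x, y in zip(ha, hb))
```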
Here, when the classifier function is the first classifier function, the sample set is a necessary sample set, and the keyword a is a necessary feature variable; when the classifier function is a second classifier function, the sample set is an optional sample set, and the keyword A is an optional feature variable; and the keyword B is a characteristic variable of the training video to be matched.
Preferably, the first classifier function and the second classifier function are both random forest model functions.
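To make the random forest preference concrete, here is a deliberately tiny forest of one-feature decision stumps built with only the standard library. A production system would use an established implementation (e.g. scikit-learn's RandomForestClassifier), and the feature extraction that turns a video's keywords and spectra into numeric vectors is assumed to have already happened; everything below is an illustrative sketch, not the patent's implementation:

```python
import random

def _train_stump(X, y, feat):
    """Pick the threshold/polarity on one feature minimizing training error."""
    best = None
    for t in sorted({x[feat] for x in X}):
        for pol in (1, -1):
            err = sum((pol * (x[feat] - t) >= 0) != bool(yi)
                      for x, yi in zip(X, y))
            if best is None or err < best[0]:
                best = (err, t, pol)
    return (feat, best[1], best[2])

def train_forest(X, y, n_trees=15, seed=0):
    """Random forest in miniature: bootstrap samples + random feature per stump."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]          # bootstrap sample
        bx, by = [X[i] for i in idx], [y[i] for i in idx]
        forest.append(_train_stump(bx, by, rng.randrange(len(X[0]))))
    return forest

def predict(forest, x):
    """Majority vote of the stumps; returns 0 or 1."""
    votes = sum(pol * (x[feat] - t) >= 0 for feat, t, pol in forest)
    return int(votes * 2 >= len(forest))
```

With label 1 for "belongs to the mandatory database", the first classifier function is a forest trained on the mandatory sample set against negatives, and the second is trained analogously on the optional sample set.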
If the similarity with the necessary sample set reaches the necessary specified proportion value, the training video to be matched is matched into the necessary training video database; if the similarity with the optional sample set reaches the optional specified proportion value, it is matched into the optional training video database; and if both the necessary and the optional specified proportion values are reached, it is matched into the necessary training video database.
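The routing rule above can be written down directly. The threshold values are placeholders for the "specified proportion values", and the tie-breaking target reflects the reading that mandatory classification takes precedence when both thresholds are reached:

```python
def route_video(sim_mandatory, sim_optional,
                mandatory_threshold=0.8, optional_threshold=0.8):
    """Return which database the video to be matched belongs to, or None."""
    hits_mandatory = sim_mandatory >= mandatory_threshold
    hits_optional = sim_optional >= optional_threshold
    if hits_mandatory:            # includes the case where both are reached
        return "mandatory"
    if hits_optional:
        return "optional"
    return None                   # unmatched; could be left for manual review
```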
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Claims (8)
1. A rapid matching system for safety production training is characterized by comprising a user side and a cloud side, wherein the user side is in communication connection with the cloud side, the cloud side comprises an ID generation unit, a storage unit and a control unit, and the ID generation unit and the storage unit are respectively connected with the control unit;
the ID generation unit acquires user initial information from a user side and generates a user ID and a password;
the storage unit stores the identity information of each user ID of the system, a behavior log of each user ID, the training videos, the training curriculum schedules and the training examination results, wherein the behavior log records the duration for which the user ID has watched each training video;
the control unit monitors whether the learning duration of the user meets the requirement and matches training videos to the user by a machine learning method, and the user side plays the training videos.
2. The system for rapid matching of safety production training as claimed in claim 1, further comprising a certificate generation unit for generating a certificate according to the level of the current user ID, the duration of each training video viewed by the user ID, the training examination result and the certificate generation requirement.
3. The rapid matching system for safety production training as claimed in claim 2, wherein the control unit monitors whether the certificate is within the validity period according to the validity period of the certificate.
4. The rapid matching system for safety production training as claimed in claim 1, wherein the machine learning method is:
deep learning is performed on the duration for which the user ID has watched each training video, the training examination result and the training curriculum corresponding to the level of the user ID, so as to obtain the courses that have not been learned and/or need to be strengthened, and these courses are pushed to the user ID.
5. The rapid matching system for safety production training as claimed in claim 1, wherein the training videos include a mandatory training video and an optional training video, and the newly added training video is matched as the mandatory training video or the optional training video through a machine learning algorithm.
6. A video matching method based on machine learning is characterized in that the training video matching method comprises the following steps:
establishing a mandatory training video database and an optional training video database, wherein each mandatory sample in the mandatory training video database comprises a mandatory characteristic variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional characteristic variable and an optional training video;
extracting a necessary sample set from a necessary training video database, and extracting an optional sample set from an optional training video database;
constructing a first classifier function by using a necessary sample set obtained by sampling, and constructing a second classifier function by using an optional sample set obtained by sampling;
extracting characteristic variables of the training video to be matched and inputting the characteristic variables serving as input variables into the first classifier function and the second classifier function to obtain the similarity with a necessary sample set and the similarity with an optional sample set;
if the similarity with the necessary sample set reaches the necessary specified proportion value, the training video to be matched is matched into the necessary training video database; if the similarity with the optional sample set reaches the optional specified proportion value, it is matched into the optional training video database; and if both the necessary and the optional specified proportion values are reached, it is matched into the necessary training video database.
7. The video matching method of claim 6, wherein the classifier function compares videos through the sound frequency spectrum and the image recognition degree, specifically comprising the steps of:
acquiring a keyword A in a sample set; acquiring a frequency spectrum of the keyword A;
acquiring a keyword B of a training video to be matched, and acquiring a frequency spectrum of the keyword B;
calculating the similarity C of the two frequency spectrums as C = a × (difference of central frequency points) + b × (spectrum width difference), wherein the difference of central frequency points is the difference between the central frequency point of the spectrum of keyword A and that of the spectrum of keyword B, the spectrum width difference is the difference between the spectrum widths of the two spectra, 0.5 < a < 1, 0 < b < 0.5 and a + b = 1;
if the similarity C is larger than the threshold value, acquiring the information of the shot object in the training video picture to be matched, acquiring the reference object information in the sample set, and calculating the similarity D of the two objects;
When the classifier function is a first classifier function, the keyword A is a necessary feature variable; when the classifier function is a second classifier function, the keyword A is an optional feature variable; and the keyword B is a characteristic variable of the training video to be matched.
8. The video matching method of claim 6, wherein the first classifier function and the second classifier function are both random forest model functions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911358722.3A CN111125432B (en) | 2019-12-25 | 2019-12-25 | Video matching method and training rapid matching system based on same |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111125432A true CN111125432A (en) | 2020-05-08 |
CN111125432B CN111125432B (en) | 2023-07-11 |
Family
ID=70502352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911358722.3A Active CN111125432B (en) | 2019-12-25 | 2019-12-25 | Video matching method and training rapid matching system based on same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111125432B (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001312295A (en) * | 2000-04-27 | 2001-11-09 | Sharp Corp | Device, method, and system for voice recognition and program recording medium |
JP2010128633A (en) * | 2008-11-26 | 2010-06-10 | Denso It Laboratory Inc | Information presentation device, method and program |
US20100280827A1 (en) * | 2009-04-30 | 2010-11-04 | Microsoft Corporation | Noise robust speech classifier ensemble |
JP2010267062A (en) * | 2009-05-14 | 2010-11-25 | Ntt Data Corp | Document management system, document management method, and computer program |
CN102200967A (en) * | 2011-03-30 | 2011-09-28 | 中国人民解放军军事医学科学院放射与辐射医学研究所 | Method and system for processing text based on DNA sequences |
US20140114663A1 (en) * | 2012-10-19 | 2014-04-24 | Industrial Technology Research Institute | Guided speaker adaptive speech synthesis system and method and computer program product |
CN104679779A (en) * | 2013-11-29 | 2015-06-03 | 华为技术有限公司 | Method and device for classifying videos |
CN105466710A (en) * | 2015-11-23 | 2016-04-06 | 电子科技大学 | Local mean value decomposition end point effect improvement method based on frequency domain similarity |
CN106935248A (en) * | 2017-02-14 | 2017-07-07 | 广州孩教圈信息科技股份有限公司 | A kind of voice similarity detection method and device |
US20170201779A1 (en) * | 2013-09-26 | 2017-07-13 | Mark W. Publicover | Computerized method and system for providing customized entertainment content |
CN106991628A (en) * | 2017-03-31 | 2017-07-28 | 河北天英软件科技有限公司 | A kind of the online training method of examination and system |
CN107169898A (en) * | 2017-05-12 | 2017-09-15 | 广州市和佳电子科技有限公司 | It is a kind of based on intelligent glasses be terminal training system and implementation method |
CN107818528A (en) * | 2017-09-28 | 2018-03-20 | 珠海市领创智能物联网研究院有限公司 | A kind of staffs training system based on Internet of Things |
US20180197209A1 (en) * | 2006-07-31 | 2018-07-12 | Mark W. Publicover | Advertising and fulfillment system |
CN108648123A (en) * | 2018-07-13 | 2018-10-12 | 江苏开放大学(江苏城市职业学院) | A method of its management network teaching process of the network teaching platform and utilization based on big data |
CN108845578A (en) * | 2018-07-19 | 2018-11-20 | 上海工程技术大学 | A kind of removable video monitoring system of voice-control |
CN109785859A (en) * | 2019-01-31 | 2019-05-21 | 平安科技(深圳)有限公司 | The method, apparatus and computer equipment of management music based on speech analysis |
CN109934747A (en) * | 2019-03-18 | 2019-06-25 | 重庆替比网络科技有限公司 | A kind of record for vocational training, archives supervisory systems |
CN110188266A (en) * | 2019-05-10 | 2019-08-30 | 广州职赢未来信息科技有限公司 | Curriculum information method for pushing, system, readable storage medium storing program for executing and terminal device |
CN110599810A (en) * | 2019-08-12 | 2019-12-20 | 苏州凌瑞智能技术有限公司 | Customized intelligent manufacturing training system and method based on intelligent service platform |
Non-Patent Citations (2)
Title |
---|
TAIGA YOSHIDA: "Towards semantic and affective content-based video recommendation", 2013 IEEE International Conference on Multimedia and Expo Workshops *
SHUANG ZHIWEI: "Voice conversion analysis and similarity improvement", Journal of Tsinghua University *
Also Published As
Publication number | Publication date |
---|---|
CN111125432B (en) | 2023-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Larsson et al. | Triumph of the underdogs? Comparing Twitter use by political actors during two Norwegian election campaigns | |
US11349680B2 (en) | Method and apparatus for pushing information based on artificial intelligence | |
CN109862397A (en) | A kind of video analysis method, apparatus, equipment and storage medium | |
US9626651B2 (en) | Automated social network introductions for e-meetings | |
CN106354810A (en) | Internet teaching system using big data technology | |
CN107492375A (en) | A kind of classroom behavior analysis method and system | |
CN108924651B (en) | Teaching video intelligent playing system based on training operation recognition | |
CN110689903B (en) | Method, device, equipment and medium for evaluating intelligent sound box | |
CN107580009A (en) | A kind of verifying video content method and system | |
Soares et al. | An optimization model for temporal video lecture segmentation using word2vec and acoustic features | |
CN112529748A (en) | Intelligent education platform based on time node mark feedback learning state | |
CN111125432A (en) | Rapid matching system and video matching method for safety production training | |
CN106372083B (en) | A kind of method and system that controversial news clue is found automatically | |
Martinez-Lucas et al. | Analyzing the effect of affective priming on emotional annotations | |
CN111918073B (en) | Live broadcast room management method and device | |
Jansen et al. | Amara: A sustainable, global solution for accessibility, powered by communities of volunteers | |
CN116681556A (en) | Intelligent remote teaching cloud platform system and method based on education big data | |
Salal et al. | Using of Data Mining techniques to predict of student's performance in industrial institute of Al-Diwaniyah, Iraq | |
Bayer et al. | NEPS technical report: Generated school type variable T723080_G1 in starting cohorts 3 and 4 | |
Deusser et al. | Buzz in social media: Detection of short-lived viral phenomena | |
Apon et al. | Real time action recognition from video footage | |
CN114511424A (en) | Classroom quality management method based on big data and intelligent education | |
EP2902924A1 (en) | Method for automatically selecting a real-time video stream among a plurality of available real-time video streams, and associated system | |
Sintawati et al. | Application of the C4.5 Algorithm on the Effect of Watching Youtube Videos On the Development of Early Childhood Creativity | |
CN109033199A (en) | Patrol video synchronous playing and intelligent comparison method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||