CN111125432B - Video matching method and training rapid matching system based on same - Google Patents


Info

Publication number
CN111125432B
Authority
CN
China
Prior art keywords
training
user
video
training video
keyword
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911358722.3A
Other languages
Chinese (zh)
Other versions
CN111125432A (en)
Inventor
许刚
侯明玉
黎建
魏余万
张安坤
袁建
肖强
冷梅
谭兴彬
刘天明
冯齐敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Qihui'an Technology Co ltd
Shihao Coal Mine Of Chongqing Energy Investment Chongqing New Energy Co ltd
Original Assignee
Chongqing Qihui'an Technology Co ltd
Shihao Coal Mine Of Chongqing Energy Investment Chongqing New Energy Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Qihui'an Technology Co ltd and Shihao Coal Mine Of Chongqing Energy Investment Chongqing New Energy Co ltd
Priority to CN201911358722.3A
Publication of CN111125432A
Application granted
Publication of CN111125432B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • G06Q50/2057Career enhancement or continuing education service
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Marketing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Agronomy & Crop Science (AREA)
  • Animal Husbandry (AREA)
  • Computational Linguistics (AREA)
  • Mining & Mineral Resources (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention provides a rapid matching system for safety production training and a video matching method. The system comprises a user side and a cloud, the user side being in communication connection with the cloud. The cloud comprises an ID generating unit, a storage unit, and a control unit, the ID generating unit and the storage unit each being connected to the control unit. The ID generating unit acquires initial user information from the user side and generates a user ID and a password. The storage unit stores the identity information of each user ID in the system, a behavior log of each user ID, the training videos, and the training curriculum schedules, the behavior log including the duration for which the user ID watched each training video. The control unit monitors whether a user's learning duration meets the requirement, matches training videos to the user by a machine learning method, and plays the training videos at the user side. The rapid matching system for safety production training can monitor users' training durations and can also match training videos to different users according to the duration for which each user has studied each training video.

Description

Video matching method and training rapid matching system based on same
Technical Field
The invention relates to the field of computers, in particular to a rapid matching system for safety production training and a video matching method.
Background
Coal mine work safety is a responsibility heavier than Mount Tai. The Decision of the State Council Work Safety Committee on Further Strengthening Work Safety Training (Work Safety Committee [2012] No. 10) states that "inadequate training is a major hidden safety hazard" and requires 100% safety training of all staff, emphasizing that every enterprise should comprehensively strengthen its safety-training infrastructure and solidly advance the standardization of training content, the diversification of training modes, the informatization of management, the modernization of methods, and the routinization of supervision, striving for full-coverage, multi-means, high-quality safety training. Because the training requirements for personnel differ across posts and work content, how to systematically and intelligently supervise the training status of personnel in various posts and recommend training to them is a problem to be solved.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide a rapid matching system for safety production training and a video matching method.
In order to achieve the above purpose, the invention provides a rapid matching system for safety production training, which comprises a user side and a cloud, the user side being in communication connection with the cloud; the cloud comprises an ID generating unit, a storage unit, and a control unit, the ID generating unit and the storage unit each being connected to the control unit;
the ID generating unit acquires initial user information from the user side and generates a user ID and a password;
the storage unit stores the identity information of each user ID in the system, a behavior log of each user ID, the training videos, the training curriculum schedules, and the training examination results, the behavior log including the duration for which the user ID watched each training video;
the control unit monitors whether a user's learning duration meets the requirement, matches training videos to the user by a machine learning method, and plays the training videos at the user side.
The rapid matching system for safety production training can monitor users' training durations and can also match training videos to different users according to the duration for which each user has studied each training video.
In a preferred embodiment of the present application, the system further comprises a certificate generation unit, which generates a certificate according to the level of the current user ID, the duration for which the user ID watched each training video, the training examination results, and the certificate generation requirements; the control unit monitors whether the certificate is within its validity period according to the certificate's validity duration. The system can thus automatically generate certificates for users who need them and intelligently monitor the certificates' validity periods.
In a preferred embodiment of the present application, the machine learning method comprises the following step:
the duration for which the user ID watched each training video, the training examination results, and the training curriculum schedule corresponding to the class to which the user ID belongs are subjected to deep learning, so as to obtain the courses that have not yet been learned and/or need reinforcement learning, and those courses are pushed to the user ID.
In a preferred embodiment of the present application, the training videos include mandatory training videos and optional training videos, and a newly added training video is matched into the mandatory training videos or the optional training videos by a machine learning algorithm.
The application also provides a video matching method based on machine learning, which comprises the following steps:
establishing a mandatory training video database and an optional training video database, wherein each mandatory sample in the mandatory training video database comprises a mandatory feature variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional feature variable and an optional training video;
extracting a mandatory sample set from the mandatory training video database, and extracting an optional sample set from the optional training video database;
constructing a first classifier function using the sampled mandatory sample set, and constructing a second classifier function using the sampled optional sample set;
extracting the feature variable of a training video to be matched and inputting it into the first classifier function and the second classifier function as the input variable, so as to obtain its similarity to the mandatory sample set and its similarity to the optional sample set;
if the similarity to the mandatory sample set reaches the mandatory specified proportion value, the training video to be matched is matched into the mandatory training video database; if the similarity to the optional sample set reaches the optional specified proportion value, it is matched into the optional training video database; and if both the mandatory and the optional specified proportion values are reached, it is matched into the mandatory training video database.
This video matching method is simple, and a training video to be matched can be quickly matched into the mandatory training video database or the optional training video database.
In a preferred embodiment of the method, the classifier function compares videos by sound spectrum and image recognition degree, with the following specific steps:
obtaining a keyword A in the sample set, and acquiring the spectrum of keyword A;
acquiring a keyword B of the training video to be matched, and acquiring the spectrum of keyword B;
calculating the similarity of the two spectra as C = a × (spectrum difference) + b × (center-frequency-point difference), where the center-frequency-point difference is the difference between the center frequency points of the spectrum of keyword A and the spectrum of keyword B, the spectrum difference is the difference between the spectrum of keyword A and the spectrum of keyword B, 0.5 < a < 1, 0 < b < 0.5, and a + b = 1;
if the similarity C is larger than a threshold, acquiring information on the objects captured in the pictures of the training video to be matched, acquiring information on the reference objects in the sample set, and calculating the similarity D of the two;
obtaining the overall similarity E = C × D.
When the classifier function is the first classifier function, keyword A is the mandatory feature variable; when the classifier function is the second classifier function, keyword A is the optional feature variable; keyword B is the feature variable of the training video to be matched.
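To make the computation above concrete, the following is a minimal Python sketch under stated assumptions: the magnitude spectra of keywords A and B are assumed to be already extracted on a common frequency grid, and the mapping of the two differences onto [0, 1] terms, the helper names, and the default weights a = 0.7, b = 0.3 are illustrative choices, not the patent's prescribed implementation.

```python
import numpy as np

def center_frequency(spectrum: np.ndarray, freqs: np.ndarray) -> float:
    """Energy-weighted center frequency point of a magnitude spectrum."""
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def keyword_similarity(spec_a: np.ndarray, spec_b: np.ndarray,
                       freqs: np.ndarray, a: float = 0.7, b: float = 0.3) -> float:
    """C = a * (spectrum term) + b * (center-frequency-point term),
    with 0.5 < a < 1, 0 < b < 0.5 and a + b = 1.
    Both spectra are assumed to lie on the same frequency grid `freqs`;
    mapping each difference to a [0, 1] similarity term is an assumption."""
    assert 0.5 < a < 1 and 0 < b < 0.5 and abs(a + b - 1.0) < 1e-9
    sa = spec_a / np.linalg.norm(spec_a)
    sb = spec_b / np.linalg.norm(spec_b)
    spec_term = 1.0 - 0.5 * np.linalg.norm(sa - sb)  # 1 means identical spectra
    fc_diff = abs(center_frequency(sa, freqs) - center_frequency(sb, freqs))
    fc_term = 1.0 - fc_diff / freqs[-1]              # 1 means same center point
    return a * spec_term + b * fc_term

def overall_similarity(c: float, d: float) -> float:
    """Overall similarity E = C * D, D being the image similarity."""
    return c * d
```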
The beneficial effects of the invention are as follows: the invention supervises the training duration of users taking part in safety production training, matches training videos to different users according to the duration for which each user has studied each training video, automatically generates certificates for users who need them, intelligently monitors the validity period of those certificates, and also automatically matches and classifies the training videos.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic block diagram of the rapid matching system for safety production training.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, unless otherwise expressly specified and limited, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be mechanical or electrical, direct or through an intermediary, or an internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
As shown in FIG. 1, the invention provides a rapid matching system for safety production training, comprising a user side and a cloud, the user side being in communication connection with the cloud; the cloud comprises an ID generating unit, a storage unit, and a control unit, the ID generating unit and the storage unit each being connected to the control unit.
The ID generating unit acquires initial user information from the user side and generates the user ID and password. A user ID has at least one level, and each level corresponds to a different training curriculum schedule.
The storage unit stores the identity information of each user ID in the system, a behavior log of each user ID, the training videos, and the training curriculum schedules, the behavior log including the duration for which the user ID watched each training video.
The control unit monitors whether a user's learning duration meets the requirement according to the sum of the durations for which that user watched the training videos in the behavior log, matches training videos to the user by a machine learning method, and plays the training videos at the user side. The machine learning method comprises the following step:
the user can take a training examination at the user side, and the training examination results are stored in the storage unit; the results include the score for each part of the examination, the distribution of wrong answers, and the like. The duration for which the user ID watched each training video, the training examination results, and the training curriculum schedule corresponding to the class to which the user ID belongs are subjected to deep learning, so as to obtain the courses that have not yet been learned and/or need reinforcement learning, and those courses are pushed to the user ID.
The training quick matching system further includes a certificate generation unit that generates a certificate according to the level of the current user ID, the duration for which the user ID watched each training video, the training examination results, and the certificate generation requirements. If the training examination result is not qualified, the certificate is not allowed to be generated. The control unit monitors whether the certificate is within its validity period according to the certificate's validity duration.
For example, if the level is the coal-mine special operation level, coal-mine special operation personnel should be trained according to the specified training outline before taking the qualification examination, and the initial training time shall be not less than ninety hours. The special operation certificate must be renewed when its validity period expires, and the certificate holder shall take part in at least twenty-four hours of special training before expiry. The control unit monitors whether the training duration of a user at the special operation level meets the requirement, can push matched training videos to that user, and can monitor whether the special operation certificate is within its validity period.
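A minimal sketch of the two checks just described, assuming the ninety-hour initial-training requirement and a day-based representation of the validity period; all names and constants are illustrative.

```python
from datetime import date, timedelta
from typing import Optional

INITIAL_TRAINING_HOURS = 90  # initial training: not less than ninety hours
RENEWAL_TRAINING_HOURS = 24  # special training before certificate renewal

def may_generate_certificate(trained_hours: float, exam_qualified: bool) -> bool:
    """Certificate generation gate: the examination must be qualified and
    the training-hour requirement for the level must be met."""
    return exam_qualified and trained_hours >= INITIAL_TRAINING_HOURS

def certificate_within_validity(issue_date: date, validity_days: int,
                                today: Optional[date] = None) -> bool:
    """Control-unit check that a certificate is within its validity period."""
    today = today or date.today()
    return today <= issue_date + timedelta(days=validity_days)
```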
The training videos in the cloud can be updated at any time. The training videos include mandatory training videos and optional training videos, and a newly added training video is matched into the mandatory training videos or the optional training videos by a machine learning algorithm.
The invention also provides a video matching method based on machine learning, which comprises the following steps:
A mandatory training video database and an optional training video database are established; each mandatory sample in the mandatory training video database comprises a mandatory feature variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional feature variable and an optional training video.
A mandatory sample set is extracted from the mandatory training video database, and an optional sample set is extracted from the optional training video database.
A first classifier function is constructed using the sampled mandatory sample set, and a second classifier function is constructed using the sampled optional sample set.
The feature variable of a training video to be matched is extracted and input into the first classifier function and the second classifier function as the input variable, so as to obtain its similarity to the mandatory sample set and its similarity to the optional sample set.
The classifier function compares videos by sound spectrum and image recognition degree, specifically as follows:
A keyword A is obtained in the sample set, that is, a word with a high probability of repetition; keyword A may be a single word or a set of several words. The spectrum of keyword A is then acquired.
A keyword B of the training video to be matched is obtained, and the spectrum of keyword B is acquired.
The similarity of the two spectra is calculated as C = a × (spectrum difference) + b × (center-frequency-point difference), where the center-frequency-point difference is the difference between the center frequency points of the spectrum of keyword A and the spectrum of keyword B, the spectrum difference is the difference between the spectrum of keyword A and the spectrum of keyword B, 0.5 < a < 1, 0 < b < 0.5, and a + b = 1.
If the similarity C is larger than a set threshold, information on the objects captured in the pictures of the training video to be matched is acquired, information on the reference objects in the sample set is acquired, and the similarity D of the two is calculated using an existing algorithm, for example an image comparison algorithm.
The overall similarity E = C × D is then obtained.
Here, when the classifier function is the first classifier function, the sample set is the mandatory sample set and keyword A is the mandatory feature variable; when the classifier function is the second classifier function, the sample set is the optional sample set and keyword A is the optional feature variable. Keyword B is the feature variable of the training video to be matched.
Preferably, the first classifier function and the second classifier function are both random forest model functions.
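The patent names random forest model functions but no library; the sketch below uses scikit-learn as an assumed implementation, with placeholder feature vectors standing in for the spectrum and image features described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per sampled video, columns standing in
# for spectrum statistics and image descriptors (hypothetical).
X_mandatory = rng.random((100, 8))   # features of the mandatory sample set
X_negative = rng.random((100, 8))    # features of non-mandatory videos
X = np.vstack([X_mandatory, X_negative])
y = np.array([1] * 100 + [0] * 100)  # 1 = belongs to the mandatory set

first_classifier = RandomForestClassifier(n_estimators=100, random_state=0)
first_classifier.fit(X, y)

# Similarity of a new training video to the mandatory sample set, read off
# as the predicted class-1 probability.
new_video_features = rng.random((1, 8))
similarity_to_mandatory = first_classifier.predict_proba(new_video_features)[0, 1]
```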
If the similarity between the training video to be matched and the mandatory sample set reaches the mandatory specified proportion value, the training video to be matched is matched into the mandatory training video database; if the similarity to the optional sample set reaches the optional specified proportion value, it is matched into the optional training video database; and if both the mandatory and the optional specified proportion values are reached, it is matched into the mandatory training video database.
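Finally, a minimal sketch of this routing rule; the threshold values are illustrative, and routing a video that clears both thresholds to the mandatory database follows the reading above.

```python
def assign_database(sim_mandatory: float, sim_optional: float,
                    mandatory_threshold: float = 0.8,
                    optional_threshold: float = 0.8) -> str:
    """Route a new training video using the two classifier similarities."""
    hits_mandatory = sim_mandatory >= mandatory_threshold
    hits_optional = sim_optional >= optional_threshold
    if hits_mandatory:        # mandatory wins, also when both thresholds are met
        return "mandatory"
    if hits_optional:
        return "optional"
    return "unmatched"        # left for manual classification (assumption)
```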
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (7)

1. A video matching method based on machine learning, characterized by comprising the following steps:
establishing a mandatory training video database and an optional training video database, wherein each mandatory sample in the mandatory training video database comprises a mandatory feature variable and a mandatory training video, and each optional sample in the optional training video database comprises an optional feature variable and an optional training video;
extracting a mandatory sample set from the mandatory training video database, and extracting an optional sample set from the optional training video database;
constructing a first classifier function using the sampled mandatory sample set, and constructing a second classifier function using the sampled optional sample set;
extracting the feature variable of a training video to be matched and inputting it into the first classifier function and the second classifier function as the input variable, so as to obtain its similarity to the mandatory sample set and its similarity to the optional sample set;
wherein the classifier function compares videos by sound spectrum and image recognition degree, with the following specific steps:
obtaining a keyword A in the sample set, and acquiring the spectrum of keyword A;
acquiring a keyword B of the training video to be matched, and acquiring the spectrum of keyword B;
calculating the similarity of the two spectra as C = a × (spectrum difference) + b × (center-frequency-point difference), where the center-frequency-point difference is the difference between the center frequency points of the spectrum of keyword A and the spectrum of keyword B, the spectrum difference is the difference between the spectrum of keyword A and the spectrum of keyword B, 0.5 < a < 1, 0 < b < 0.5, and a + b = 1;
if the similarity C is larger than a threshold, acquiring information on the objects captured in the pictures of the training video to be matched, acquiring information on the reference objects in the sample set, and calculating the similarity D of the two;
acquiring the overall similarity E = C × D;
wherein, when the classifier function is the first classifier function, keyword A is the mandatory feature variable; when the classifier function is the second classifier function, keyword A is the optional feature variable; and keyword B is the feature variable of the training video to be matched;
if the similarity between the training video to be matched and the mandatory sample set reaches the mandatory specified proportion value, matching the training video to be matched into the mandatory training video database; if the similarity to the optional sample set reaches the optional specified proportion value, matching it into the optional training video database; and if both the mandatory and the optional specified proportion values are reached, matching it into the mandatory training video database.
2. The machine learning based video matching method of claim 1, wherein the first classifier function and the second classifier function are both random forest model functions.
3. A training rapid matching system based on the machine-learning-based video matching method of claim 1, characterized by comprising a user side and a cloud, the user side being in communication connection with the cloud, wherein the cloud comprises an ID generating unit, a storage unit, and a control unit, and the ID generating unit and the storage unit are respectively connected to the control unit;
the ID generating unit acquires initial user information from the user side and generates a user ID and a password;
the storage unit stores the identity information of each user ID in the system, a behavior log of each user ID, the training videos, the training curriculum schedules, and the training examination results, the behavior log including the duration for which the user ID watched each training video;
the control unit monitors whether a user's learning duration meets the requirement, matches training videos to the user by a machine learning method, and plays the training videos at the user side.
4. The training quick matching system according to claim 3, further comprising a certificate generation unit that generates a certificate according to the level of the current user ID, the duration for which the user ID watched each training video, the training examination results, and the certificate generation requirements.
5. The training quick matching system of claim 4, wherein the control unit monitors whether a certificate is within its validity period based on the certificate's validity duration.
6. The training quick matching system of claim 3, wherein the machine learning method is:
performing deep learning on the duration for which the user ID watched each training video, the training examination results, and the training curriculum schedule corresponding to the class to which the user ID belongs, so as to obtain the courses that have not yet been learned and/or need reinforcement learning, and pushing those courses to the user ID.
7. The training quick matching system of claim 3, wherein the training videos comprise mandatory training videos and optional training videos, a newly added training video being matched into the mandatory training videos or the optional training videos by a machine learning algorithm.
CN201911358722.3A 2019-12-25 2019-12-25 Video matching method and training rapid matching system based on same Active CN111125432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911358722.3A CN111125432B (en) 2019-12-25 2019-12-25 Video matching method and training rapid matching system based on same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911358722.3A CN111125432B (en) 2019-12-25 2019-12-25 Video matching method and training rapid matching system based on same

Publications (2)

Publication Number Publication Date
CN111125432A CN111125432A (en) 2020-05-08
CN111125432B 2023-07-11

Family

ID=70502352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911358722.3A Active CN111125432B (en) 2019-12-25 2019-12-25 Video matching method and training rapid matching system based on same

Country Status (1)

Country Link
CN (1) CN111125432B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001312295A (en) * 2000-04-27 2001-11-09 Sharp Corp Device, method, and system for voice recognition and program recording medium
JP2010128633A (en) * 2008-11-26 2010-06-10 Denso It Laboratory Inc Information presentation device, method and program
CN102200967A (en) * 2011-03-30 2011-09-28 中国人民解放军军事医学科学院放射与辐射医学研究所 Method and system for processing text based on DNA sequences
CN105466710A (en) * 2015-11-23 2016-04-06 电子科技大学 Local mean value decomposition end point effect improvement method based on frequency domain similarity
CN106935248A (en) * 2017-02-14 2017-07-07 广州孩教圈信息科技股份有限公司 A kind of voice similarity detection method and device
CN106991628A (en) * 2017-03-31 2017-07-28 河北天英软件科技有限公司 A kind of the online training method of examination and system
CN108845578A (en) * 2018-07-19 2018-11-20 上海工程技术大学 A kind of removable video monitoring system of voice-control
CN109785859A (en) * 2019-01-31 2019-05-21 平安科技(深圳)有限公司 The method, apparatus and computer equipment of management music based on speech analysis

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080082417A1 (en) * 2006-07-31 2008-04-03 Publicover Mark W Advertising and fulfillment system
US8412525B2 (en) * 2009-04-30 2013-04-02 Microsoft Corporation Noise robust speech classifier ensemble
JP5288617B2 (en) * 2009-05-14 2013-09-11 株式会社エヌ・ティ・ティ・データ Document management system, document management method, and computer program
TWI471854B (en) * 2012-10-19 2015-02-01 Ind Tech Res Inst Guided speaker adaptive speech synthesis system and method and computer program product
US10546326B2 (en) * 2013-09-26 2020-01-28 Mark W. Publicover Providing targeted content based on a user's preferences
CN104679779B (en) * 2013-11-29 2019-02-01 华为技术有限公司 The method and apparatus of visual classification
CN107169898A (en) * 2017-05-12 2017-09-15 广州市和佳电子科技有限公司 It is a kind of based on intelligent glasses be terminal training system and implementation method
CN107818528A (en) * 2017-09-28 2018-03-20 珠海市领创智能物联网研究院有限公司 A kind of staffs training system based on Internet of Things
CN108648123A (en) * 2018-07-13 2018-10-12 江苏开放大学(江苏城市职业学院) A method of its management network teaching process of the network teaching platform and utilization based on big data
CN109934747A (en) * 2019-03-18 2019-06-25 重庆替比网络科技有限公司 A kind of record for vocational training, archives supervisory systems
CN110188266A (en) * 2019-05-10 2019-08-30 广州职赢未来信息科技有限公司 Curriculum information method for pushing, system, readable storage medium storing program for executing and terminal device
CN110599810A (en) * 2019-08-12 2019-12-20 苏州凌瑞智能技术有限公司 Customized intelligent manufacturing training system and method based on intelligent service platform

Also Published As

Publication number Publication date
CN111125432A (en) 2020-05-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant