CN107977443B - Intelligent teaching method and system based on voice analysis - Google Patents

Intelligent teaching method and system based on voice analysis

Info

Publication number
CN107977443B
CN107977443B (application CN201711302148.0A)
Authority
CN
China
Prior art keywords
data
courseware
keywords
voice data
voice
Prior art date
Legal status
Active
Application number
CN201711302148.0A
Other languages
Chinese (zh)
Other versions
CN107977443A (en)
Inventor
吴静 (Wu Jing)
Current Assignee
Shanghai Boran Zhongchuang Digital Technology Co.,Ltd.
Original Assignee
Shanghai Boran Zhongchuang Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Boran Zhongchuang Digital Technology Co ltd
Priority to CN201711302148.0A
Publication of CN107977443A
Application granted
Publication of CN107977443B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification

Abstract

The invention discloses an intelligent teaching method based on voice analysis. The method comprises: recording a user's voice data into a system and storing it as first voice data; the user sending courseware data to the system in advance, and the system storing the received courseware data; the user setting keywords in the courseware data, the system acquiring the keywords, marking them in the stored courseware data, and finally overwriting the original courseware data with the modified courseware data; acquiring all voice data and storing it as second voice data; comparing the second voice data with the first voice data and judging whether the second voice data contains voice data whose voiceprint is consistent with that of the first voice data; extracting the voice data with a consistent voiceprint as third voice data; judging whether the third voice data contains a keyword; and extracting and outputting the courseware data that includes the keyword.

Description

Intelligent teaching method and system based on voice analysis
Technical Field
The invention relates to the field of intelligent teaching, in particular to an intelligent teaching method and system based on voice analysis.
Background
Electronic courseware is increasingly used in modern teaching in place of traditional paper courseware. Unlike paper courseware, electronic courseware is not only viewed by the teacher in class but is also projected onto a screen for the students to see. It can carry everything the teacher would write or draw on the board, as well as dynamic scenes that are hard to express in words, so a class can be taught without a blackboard. Teaching with electronic courseware increases classroom capacity and lets students learn in a rich environment of sound, light and imagery, which enhances the learning effect.
Teachers usually teach with PPT courseware, and the slides are displayed in the order in which they were made. As long as the teacher follows that order in class this works well, but when the teacher rearranges the teaching content, the corresponding PPT content is hard to find: time is wasted browsing through the slides, or the slides are simply skipped, which easily affects teaching quality.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the disadvantages described in the background art, embodiments of the present invention provide an intelligent teaching method and system based on speech analysis that can effectively solve the problems described above.
The technical scheme is as follows: an intelligent teaching method based on voice analysis comprises the following steps:
recording voice data of a user into a system, and storing the voice data of the user as first voice data;
the user sends courseware data to the system in advance, and the system stores the received courseware data;
setting keywords in the courseware data by a user, acquiring the keywords by the system, marking the keywords in the saved courseware data, and finally overwriting the original courseware data with the modified courseware data;
acquiring all voice data and storing the voice data as second voice data;
comparing the second voice data with the first voice data, and judging whether voice data consistent with the voiceprint of the first voice data exists in the second voice data;
if so, extracting the voice data with consistent voiceprints as third voice data;
judging whether the third voice data has the keyword or not;
if yes, extracting courseware data including the keywords according to the keywords in the third voice data and outputting the data.
As a preferred mode of the present invention, acquiring all voice data includes:
setting a preset range, and acquiring all voice data in the preset range.
As a preferred embodiment of the present invention, the extracting and outputting courseware data including the keyword includes:
if a plurality of courseware data items include the keywords, counting the number of times the keywords appear in each courseware data item, and preferentially extracting the courseware data in which the keywords appear most often.
As a preferred embodiment of the present invention, the extracting and outputting courseware data including the keyword includes:
if there is more than one keyword, preferentially extracting the courseware data that includes all of the keywords;
and if no courseware data includes all of the keywords, extracting the courseware data that includes the most keywords.
As a preferred embodiment of the present invention, the extracting and outputting courseware data including the keyword includes:
and displaying the courseware data including the keywords through a display.
An intelligent teaching system based on voice analysis, comprising:
the voice recording module is configured to acquire the user's voice data and all voice data within a preset range by using a microphone;
a first storage module configured to store voice data;
the receiving module is configured to receive courseware data sent by a user;
a second storage module configured to store courseware data;
the processing module is configured to acquire the keywords and has operation permissions over all of the voice data and the courseware data, the operations comprising modifying courseware data, judging whether the voiceprints of the first voice data and the second voice data are consistent, extracting voice data and courseware data, and judging whether keywords exist in the voice data;
and the output module is configured to output the courseware data including the keywords.
As a preferred mode of the present invention, the voice recording module is further configured to acquire all voice data within a preset range by using a microphone.
As a preferred aspect of the present invention, the processing module includes:
and the counting module is configured to count the times of occurrence of the keywords in the courseware data.
As a preferred mode of the present invention, the processing module further includes:
and the judging module is configured to judge whether keywords exist in the courseware data.
As a preferred mode of the present invention, the output module is further configured to display the courseware data including the keyword through a display.
The invention realizes the following beneficial effects:
the intelligent teaching system provided by the invention is suitable for being applied to school office; the teaching system provided by the invention realizes the intelligent teaching function based on the voice data of the user, and the user inputs the voice data of the user into the system through the voice input module in advance; the processing module extracts third voice data of the user by comparing the second voice data with the voiceprint of the first voice data; extracting courseware data comprising the keywords through the keywords preset by the user and the keywords appearing in the third voice data, and outputting the courseware data through a display; when a plurality of keywords appear in the third voice data, preferentially extracting courseware data comprising all the keywords, if the courseware data comprising all the keywords do not exist, extracting courseware data comprising the most keywords and outputting the data; when a plurality of courseware data comprise keywords appearing in the third voice data, the times of the keywords appearing in the courseware data are calculated, and then courseware data with the largest times of the keywords appearing are extracted preferentially and output.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic flow chart of an intelligent teaching method based on speech analysis according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of an intelligent teaching method based on speech analysis according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart of an intelligent teaching method based on voice analysis according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an intelligent teaching system based on speech analysis according to a fourth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
Example one
As shown in fig. 1, this embodiment provides an intelligent teaching method based on speech analysis, which includes:
s101: recording voice data of a user into a system, and storing the voice data of the user as first voice data;
s102: the user sends courseware data to the system in advance, and the system stores the received courseware data;
s103: setting keywords in the courseware data by a user, acquiring the keywords by the system, marking the keywords in the saved courseware data, and finally overwriting the original courseware data with the modified courseware data;
s104: acquiring all voice data and storing the voice data as second voice data;
s105: comparing the second voice data with the first voice data, and judging whether voice data consistent with the voiceprint of the first voice data exists in the second voice data;
s106: if so, extracting the voice data with consistent voiceprints as third voice data;
s107: judging whether the third voice data has the keyword or not;
s108: if yes, extracting courseware data including the keywords according to the keywords in the third voice data and outputting the data.
Acquiring all voice data comprises:
setting a preset range, and acquiring all voice data in the preset range.
The step of extracting courseware data including the keywords and outputting the data comprises the following steps:
and displaying the courseware data including the keywords through a display.
Specifically, the relation between the teaching system provided by the invention and teachers is one-to-many, that is, the teaching system can be used by a plurality of teachers. In step S101, each user records his or her voice data, spoken in a normal voice, into the system through the voice input module one by one according to the prompts. During recording it must be ensured that the voice data of that user, and only that user, is present within the preset range of the voice input module. The voice input module acquires the voice data while the user speaks normally and determines whether the acquired data contains the voice of exactly one user; if so, the recording is complete, and if the voices of multiple users are detected, the user is prompted to record again. After recording is complete, the processing module stores the user's voice data as the first voice data in the first storage module.
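For illustration only, the enrollment flow of step S101 can be sketched as follows. This is a minimal sketch, not the patented implementation: the helpers count_speakers and extract_voiceprint are assumed stand-ins for a speaker-diarization or voiceprint component, which the patent does not name.

```python
# Sketch of step S101: enroll a user's voice only if the recording contains
# exactly one speaker, then store the voiceprint as the "first voice data".
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class VoiceStore:
    """First storage module: maps user id -> enrolled voiceprint."""
    first_voice_data: Dict[str, list] = field(default_factory=dict)


def enroll_user(user_id: str,
                audio: bytes,
                count_speakers: Callable[[bytes], int],
                extract_voiceprint: Callable[[bytes], list],
                store: VoiceStore) -> bool:
    """Return True if enrollment succeeded, False if the user must re-record."""
    # The recording must contain the enrolling user's voice and no one else's.
    if count_speakers(audio) != 1:
        return False  # voices of several users detected: prompt to record again
    # Save the voiceprint as this user's "first voice data".
    store.first_voice_data[user_id] = extract_voiceprint(audio)
    return True


# Toy usage with trivial stand-ins so the sketch runs end to end.
store = VoiceStore()
ok = enroll_user("teacher_01", b"\x00\x01",
                 count_speakers=lambda a: 1,
                 extract_voiceprint=lambda a: [float(len(a))],
                 store=store)
print(ok, store.first_voice_data)
```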
In step S102, the user connects to the system with an intelligent device, such as a smart phone, smart watch, notebook computer or desktop computer. After connecting, the user sends courseware data to the system through the intelligent device. The receiving module receives the courseware data and forwards it to the processing module, which analyzes whether the received courseware data is teaching data; if so, the processing module stores the courseware data in the second storage module, and if not, it deletes the courseware data.
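A possible shape of step S102 is sketched below. The acceptance test (checking the file extension) is only a placeholder assumption, since the patent does not specify how "teaching data" is recognized.

```python
# Sketch of step S102: the receiving module forwards courseware to the
# processing module, which stores it only if it looks like teaching data.
from pathlib import Path
from typing import Dict

ALLOWED_SUFFIXES = {".ppt", ".pptx", ".pdf", ".doc", ".docx"}  # assumption

second_storage: Dict[str, bytes] = {}  # second storage module (in memory)


def receive_courseware(filename: str, payload: bytes) -> bool:
    """Store the courseware if accepted; return whether it was kept."""
    if Path(filename).suffix.lower() not in ALLOWED_SUFFIXES:
        return False  # not teaching data: discard
    second_storage[filename] = payload
    return True


print(receive_courseware("lesson3.pptx", b"..."))  # True, stored
print(receive_courseware("holiday.jpg", b"..."))   # False, discarded
```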
In step S103, to make it easy for the user to find the needed courseware data at any time, the user connects to the system with the intelligent device and then uses it to set keywords in the selected courseware data. After the setting is complete, the processing module obtains the keywords set by the user, marks them in the saved courseware data, and replaces the courseware data selected by the user in the second storage module with the courseware data marked with the keywords.
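Step S103 might look like the following sketch, which treats courseware as plain text and marks keyword occurrences before overwriting the stored copy; the [[...]] markup is an illustrative assumption.

```python
# Sketch of step S103: mark the user-chosen keywords in the stored courseware
# text and overwrite the original copy with the marked one.
from typing import Dict, List

second_storage: Dict[str, str] = {
    "lesson3.pptx": "Newton's second law: F = ma. Examples of friction ...",
}


def mark_keywords(name: str, keywords: List[str],
                  storage: Dict[str, str]) -> None:
    """Wrap each keyword occurrence in [[...]] and save over the original."""
    text = storage[name]
    for kw in keywords:
        text = text.replace(kw, f"[[{kw}]]")
    storage[name] = text  # modified courseware overwrites the original


mark_keywords("lesson3.pptx", ["friction", "second law"], second_storage)
print(second_storage["lesson3.pptx"])
```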
In step S104, when the system is in use, the voice input module acquires all of the voice data within a preset range in real time and synchronously sends the acquired voice data to the processing module, and the processing module stores the voice data as the second voice data in the first storage module, where the preset range at least covers the classroom in which the system is installed.
In step S105, the processing module extracts the second voice data and the first voice data, compares their voiceprints, and determines whether the second voice data contains voice data whose voiceprint is consistent with that of the first voice data. The first storage module holds the first voice data of a plurality of users, so the processing module compares each item of first voice data with all of the second voice data one by one, making repeated determinations: if a determination is negative, the processing module moves on to the next one, and if a determination is positive, it stops.
In step S106, when the processing module determines that the voiceprints are consistent, it extracts the part of the second voice data whose voiceprint is consistent with that of the first voice data as the third voice data; that is, this step obtains the voice data spoken by the user in class.
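Steps S105 and S106 can be pictured with the following sketch. The similarity function and the 0.8 threshold are assumptions standing in for an unspecified voiceprint-comparison model.

```python
# Sketch of steps S105-S106: compare the captured (second) voice data against
# each enrolled (first) voiceprint and keep the matching segments as the
# third voice data.
from typing import Callable, Dict, List, Optional, Tuple

MATCH_THRESHOLD = 0.8  # assumption


def find_user_speech(second_voice_segments: List[list],
                     first_voice_data: Dict[str, list],
                     similarity: Callable[[list, list], float]
                     ) -> Optional[Tuple[str, List[list]]]:
    """Return (user_id, third_voice_data) for the first matching user, else None."""
    for user_id, enrolled_print in first_voice_data.items():
        matching = [seg for seg in second_voice_segments
                    if similarity(enrolled_print, seg) >= MATCH_THRESHOLD]
        if matching:                  # voiceprints judged consistent
            return user_id, matching  # stop after the first positive judgment
    return None                       # no enrolled voice found


# Toy usage: "similarity" only compares the first element of each vector.
result = find_user_speech(
    second_voice_segments=[[1.0, 0.2], [5.0, 0.9]],
    first_voice_data={"teacher_01": [1.0, 0.1]},
    similarity=lambda a, b: 1.0 if abs(a[0] - b[0]) < 0.5 else 0.0,
)
print(result)
```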
In step S107, the processing module determines whether a keyword exists in the third speech data in real time.
In step S108, if the processing module determines that the third voice data contains a keyword, it extracts the courseware data that includes the keyword and displays that courseware data through the display.
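A compact sketch of steps S107 and S108 follows; transcribe is an assumed speech-to-text helper, and printing stands in for the display output.

```python
# Sketch of steps S107-S108: transcribe the third voice data, look for the
# preset keywords, and display a courseware item containing a spoken keyword.
from typing import Callable, Dict, List, Optional


def show_matching_courseware(third_voice_data: bytes,
                             keywords: List[str],
                             courseware: Dict[str, str],
                             transcribe: Callable[[bytes], str]
                             ) -> Optional[str]:
    """Return the name of the courseware put on the display, if any."""
    text = transcribe(third_voice_data)
    spoken = [kw for kw in keywords if kw in text]
    if not spoken:
        return None                      # no keyword heard: nothing to do
    for name, body in courseware.items():
        if any(kw in body for kw in spoken):
            print(f"Displaying {name}")  # stand-in for the display output
            return name
    return None


show_matching_courseware(
    b"...", ["friction"],
    {"lesson3.pptx": "Examples of friction ..."},
    transcribe=lambda audio: "now let us talk about friction",
)
```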
Example two
As shown in fig. 2, extracting and outputting courseware data including the keywords includes:
if a plurality of courseware data items include the keywords, counting the number of times the keywords appear in each courseware data item, and preferentially extracting the courseware data in which the keywords appear most often.
Specifically, the user divides the teaching data into a plurality of courseware data items, each with its own keywords, and different courseware data items may include the same keywords. To improve the extraction accuracy of the invention, the processing module filters the courseware data before outputting it: the processing module extracts all of the courseware data, and the judging module determines, item by item, whether each courseware data item includes the keyword. After this determination is finished, the processing module extracts all courseware data that includes the keyword, and the counting module counts, item by item, the number of times the keyword appears in each courseware data item and sends the statistical result to the processing module. The statistical result comprises the number of occurrences of the keyword and the corresponding courseware data, and the processing module preferentially extracts and outputs the courseware data in which the keyword appears most often.
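The selection rule of this embodiment (prefer the courseware in which the keyword appears most often) can be expressed as a short sketch; courseware is again treated as plain text for illustration.

```python
# Sketch of the second embodiment: among courseware items containing the
# keyword, pick the one with the highest occurrence count.
from typing import Dict, Optional


def pick_by_keyword_count(keyword: str,
                          courseware: Dict[str, str]) -> Optional[str]:
    counts = {name: text.count(keyword)
              for name, text in courseware.items()
              if keyword in text}       # judging: keep only items with the keyword
    if not counts:
        return None
    return max(counts, key=counts.get)  # counting result: most occurrences wins


print(pick_by_keyword_count("friction", {
    "lesson2.pptx": "friction is mentioned once here",
    "lesson3.pptx": "friction friction friction",
}))  # -> lesson3.pptx
```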
Example three
As shown in fig. 3, extracting and outputting courseware data including the keywords includes:
if there is more than one keyword, preferentially extracting the courseware data that includes all of the keywords;
and if no courseware data includes all of the keywords, extracting the courseware data that includes the most keywords.
Specifically, the user may set multiple keywords, and correspondingly the third voice data may contain several different keywords. Combining the technical features of this embodiment with those of the second embodiment further improves the accuracy of courseware extraction. When the third voice data contains several different keywords, the processing module extracts all of the courseware data and the judging module determines, item by item, whether each courseware data item includes the keywords. If so, the processing module determines whether any courseware data includes all of the keywords; if such courseware data exists, the processing module preferentially extracts it and outputs the data, and if not, the processing module extracts and outputs the courseware data that includes the most keywords.
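The multi-keyword rule of this embodiment (prefer courseware containing all keywords, otherwise the one containing the most) can likewise be sketched:

```python
# Sketch of the third embodiment: rank courseware by how many of the spoken
# keywords it covers; an item covering all keywords automatically ranks first.
from typing import Dict, List, Optional


def pick_by_keyword_coverage(keywords: List[str],
                             courseware: Dict[str, str]) -> Optional[str]:
    coverage = {name: sum(kw in text for kw in keywords)
                for name, text in courseware.items()}
    coverage = {name: c for name, c in coverage.items() if c > 0}
    if not coverage:
        return None
    # If some item contains every keyword, it has the highest coverage count,
    # so max() returns it; otherwise the item with the most keywords wins.
    return max(coverage, key=coverage.get)


print(pick_by_keyword_coverage(
    ["friction", "gravity"],
    {"lesson3.pptx": "friction only", "lesson4.pptx": "friction and gravity"},
))  # -> lesson4.pptx
```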
Example four
As shown in fig. 4, this embodiment provides an intelligent teaching system based on voice analysis, which includes:
a voice input module 401 configured to acquire the user's voice data and all voice data within a preset range by using a microphone;
a first storage module 402 configured to store voice data;
a receiving module 403 configured to receive courseware data sent by a user;
a second storage module 404 configured to store courseware data;
a processing module 405 configured to acquire the keywords, where the processing module has operation permissions over all of the voice data and the courseware data, the operations comprising modifying courseware data, determining whether the voiceprints of the first voice data and the second voice data are consistent, extracting voice data and courseware data, and determining whether the voice data contains the keywords;
and an output module 406 configured to output the courseware data including the keywords.
The voice input module 401 is further configured to acquire all voice data within the preset range by using the microphone.
The processing module 405 includes:
a counting module 407 configured to count the number of occurrences of the keyword in the courseware data.
The processing module 405 further comprises:
a judging module 408 configured to determine whether keywords are present in the courseware data.
The output module 406 is further configured to display the courseware data including the keywords via a display.
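To make the module relationships concrete, a structural sketch follows. Class and method names are illustrative assumptions; the patent defines responsibilities, not code.

```python
# Structural sketch of modules 401-408 of this embodiment and one possible
# wiring between them.
from typing import Dict, List


class VoiceInputModule:            # 401: microphone capture within a preset range
    def capture(self) -> bytes:
        return b""                 # placeholder for recorded audio


class FirstStorageModule:          # 402: stores voice data
    def __init__(self) -> None:
        self.voice_data: Dict[str, bytes] = {}


class SecondStorageModule:         # 404: stores courseware data
    def __init__(self) -> None:
        self.courseware: Dict[str, str] = {}


class ReceivingModule:             # 403: receives courseware sent by the user
    def receive(self, name: str, payload: str, store: SecondStorageModule) -> None:
        store.courseware[name] = payload


class CountingModule:              # 407: counts keyword occurrences
    def count(self, keyword: str, text: str) -> int:
        return text.count(keyword)


class JudgingModule:               # 408: does the courseware contain the keyword?
    def contains(self, keyword: str, text: str) -> bool:
        return keyword in text


class OutputModule:                # 406: puts the matching courseware on the display
    def show(self, name: str) -> None:
        print(f"Displaying {name}")


class ProcessingModule:            # 405: keywords, extraction, orchestration
    def __init__(self, counting: CountingModule, judging: JudgingModule,
                 output: OutputModule) -> None:
        self.counting, self.judging, self.output = counting, judging, output

    def handle_keywords(self, keywords: List[str],
                        store: SecondStorageModule) -> None:
        # Prefer the courseware in which the keywords appear most often.
        hits = {name: sum(self.counting.count(kw, text) for kw in keywords)
                for name, text in store.courseware.items()
                if any(self.judging.contains(kw, text) for kw in keywords)}
        if hits:
            self.output.show(max(hits, key=hits.get))


store = SecondStorageModule()
ReceivingModule().receive("lesson3.pptx", "friction friction", store)
ProcessingModule(CountingModule(), JudgingModule(), OutputModule()).handle_keywords(
    ["friction"], store)
```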
It should be understood that, in the fourth embodiment, the specific implementation process of each module described above may correspond to the description of the above method embodiments (the first to the third embodiments), and is not described in detail here.
The system provided in the fourth embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the system is divided into different functional modules to complete all or part of the functions described above.
The above embodiments are merely illustrative of the technical ideas and features of the present invention, and are intended to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.

Claims (7)

1. An intelligent teaching method based on voice analysis, used for finding the corresponding PPT content when teaching does not follow the PPT presentation order, characterized by comprising the following steps:
recording voice data of a user into a system, and storing the voice data of the user as first voice data;
the user sends courseware data to the system in advance, and the system stores the received courseware data;
setting keywords in the courseware data by a user, acquiring the keywords by the system, marking the keywords in the saved courseware data, and finally overwriting the original courseware data with the modified courseware data;
acquiring all voice data and storing the voice data as second voice data, wherein the acquiring all voice data comprises: setting a preset range, and acquiring all voice data in the preset range;
comparing the second voice data with the first voice data, and judging whether voice data consistent with the voiceprint of the first voice data exists in the second voice data;
if so, extracting the voice data with consistent voiceprints as third voice data;
judging whether the third voice data has the keyword or not;
if yes, extracting courseware data including the keywords according to the keywords in the third voice data and outputting the data;
the step of extracting courseware data including the keywords and outputting the data comprises the following steps:
if a plurality of courseware data items include the keywords, counting the number of times the keywords appear in each courseware data item, and preferentially extracting the courseware data in which the keywords appear most often;
if there is more than one keyword, preferentially extracting the courseware data that includes all of the keywords;
and if no courseware data includes all of the keywords, extracting the courseware data that includes the most keywords.
2. The intelligent teaching method based on voice analysis according to claim 1, wherein: the step of extracting courseware data including the keywords and outputting the data comprises the following steps:
and displaying the courseware data including the keywords through a display.
3. An intelligent teaching system based on voice analysis for use in the method of claim 1, characterized in that the system comprises:
the voice recording module is configured to acquire the user's voice data and all voice data within a preset range by using a microphone;
a first storage module configured to store voice data;
the receiving module is configured to receive courseware data sent by a user;
a second storage module configured to store courseware data;
the processing module is configured to acquire the keywords and has operation permissions over all of the voice data and the courseware data, the operations comprising modifying courseware data, judging whether the voiceprints of the first voice data and the second voice data are consistent, extracting voice data and courseware data, and judging whether keywords exist in the voice data;
and the output module is configured to output the courseware data including the keywords.
4. The intelligent teaching system based on voice analysis of claim 3, wherein: the voice recording module is further configured to acquire all voice data in a preset range by using the microphone.
5. The intelligent teaching system based on voice analysis of claim 3, wherein: the processing module comprises:
and the counting module is configured to count the times of occurrence of the keywords in the courseware data.
6. The intelligent teaching system based on voice analysis of claim 3, wherein: the processing module further comprises:
and the judging module is configured to judge whether keywords exist in the courseware data.
7. The intelligent teaching system based on voice analysis of claim 3, wherein: the output module is further configured to display the courseware data including the keywords via the display.
CN201711302148.0A 2017-12-10 2017-12-10 Intelligent teaching method and system based on voice analysis Active CN107977443B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711302148.0A CN107977443B (en) 2017-12-10 2017-12-10 Intelligent teaching method and system based on voice analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711302148.0A CN107977443B (en) 2017-12-10 2017-12-10 Intelligent teaching method and system based on voice analysis

Publications (2)

Publication Number Publication Date
CN107977443A CN107977443A (en) 2018-05-01
CN107977443B (en) 2021-10-22

Family

ID=62009783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711302148.0A Active CN107977443B (en) 2017-12-10 2017-12-10 Intelligent teaching method and system based on voice analysis

Country Status (1)

Country Link
CN (1) CN107977443B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109189766B (en) * 2018-10-25 2021-11-12 重庆鲁班机器人技术研究院有限公司 Teaching scheme acquisition method and device and electronic equipment
CN110880316A (en) * 2019-10-16 2020-03-13 苏宁云计算有限公司 Audio output method and system
CN111128195A (en) * 2019-11-29 2020-05-08 合肥讯飞读写科技有限公司 Voiceprint control method of intelligent demonstrator, intelligent demonstrator and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104952009A (en) * 2015-04-23 2015-09-30 阔地教育科技有限公司 Resource management method, system and server and interactive teaching terminal
CN106056996A (en) * 2016-08-23 2016-10-26 深圳市时尚德源文化传播有限公司 Multimedia interaction teaching system and method
CN106297458A (en) * 2016-10-25 2017-01-04 合肥东上多媒体科技有限公司 A kind of intelligent multimedia instruction management platform
CN106487410A (en) * 2016-12-13 2017-03-08 北京奇虎科技有限公司 A kind of authority control method of message interruption-free and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8037041B2 (en) * 2005-04-11 2011-10-11 Alden Byird Investments, Llc System for dynamic keyword aggregation, search query generation and submission to third-party information search utilities
US7567658B1 (en) * 2005-06-22 2009-07-28 Intellicall, Inc. Method to verify designation of pay telephone with an interexchange carrier
US8060494B2 (en) * 2007-12-07 2011-11-15 Microsoft Corporation Indexing and searching audio using text indexers
CN102339193A (en) * 2010-07-21 2012-02-01 Tcl集团股份有限公司 Voice control conference speed method and system
CN102455843B (en) * 2010-10-21 2016-06-01 浪潮乐金数字移动通信有限公司 The method of controlling operation thereof of a kind of PPT file and device
CN103956166A (en) * 2014-05-27 2014-07-30 华东理工大学 Multimedia courseware retrieval system based on voice keyword recognition
CN106453859B (en) * 2016-09-23 2019-11-15 维沃移动通信有限公司 A kind of sound control method and mobile terminal

Also Published As

Publication number Publication date
CN107977443A (en) 2018-05-01

Similar Documents

Publication Publication Date Title
US10114809B2 (en) Method and apparatus for phonetically annotating text
CN107977443B (en) Intelligent teaching method and system based on voice analysis
CN105224665A (en) A kind of wrong topic management method and system
CN109243215B (en) Interaction method based on intelligent device, intelligent device and system
CN110600033B (en) Learning condition evaluation method and device, storage medium and electronic equipment
CN110085068A (en) A kind of study coach method and device based on image recognition
CN108305618B (en) Voice acquisition and search method, intelligent pen, search terminal and storage medium
CN113537801B (en) Blackboard writing processing method, blackboard writing processing device, terminal and storage medium
CN111415537A (en) Symbol-labeling-based word listening system for primary and secondary school students
CN109637536B (en) Method and device for automatically identifying semantic accuracy
CN107748744A (en) A kind of method for building up and device for sketching the contours frame knowledge base
CN109391833A (en) A kind of sound control method and smart television of smart television
CN110210299A (en) Voice training data creation method, device, equipment and readable storage medium storing program for executing
CN106375594A (en) Method and device for adjusting equipment, and electronic equipment
CN108595406A (en) A kind of based reminding method of User Status, device, electronic equipment and storage medium
JP2016085284A (en) Program, apparatus and method for estimating evaluation level with respect to learning item on the basis of person's remark
CN111859893B (en) Image-text typesetting method, device, equipment and medium
CN112165627A (en) Information processing method, device, storage medium, terminal and system
CN112116181B (en) Classroom quality model training method, classroom quality evaluation method and classroom quality evaluation device
CN110111795B (en) Voice processing method and terminal equipment
CN111090977A (en) Intelligent writing system and intelligent writing method
CN105225554A (en) A kind of detection method of state of listening to the teacher and device
CN115273840A (en) Voice interaction device and voice interaction method
CN115168534A (en) Intelligent retrieval method and device
CN114972716A (en) Lesson content recording method, related device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210928

Address after: 200000 floor 1, No. 223, Jianghan Road, Minhang District, Shanghai

Applicant after: Shanghai Boran Zhongchuang Digital Technology Co.,Ltd.

Address before: Room 5-1603, central Dijing, Xianfu street, Taicang City, Suzhou City, Jiangsu Province

Applicant before: Wu Jing

GR01 Patent grant