CN115936002A - Conference identification method based on algorithm, terminal and storage medium - Google Patents

Conference identification method based on algorithm, terminal and storage medium

Info

Publication number
CN115936002A
Authority
CN
China
Prior art keywords
conference
language
algorithm
speaker
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211337154.0A
Other languages
Chinese (zh)
Inventor
吴莹 (Wu Ying)
周胜杰 (Zhou Shengjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Konka Electronic Technology Co Ltd
Original Assignee
Shenzhen Konka Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Konka Electronic Technology Co Ltd filed Critical Shenzhen Konka Electronic Technology Co Ltd
Priority to CN202211337154.0A priority Critical patent/CN115936002A/en
Publication of CN115936002A publication Critical patent/CN115936002A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an algorithm-based conference identification method, a terminal and a storage medium. The method comprises: creating a conference, acquiring a list of participants and the corresponding information data, and preprocessing the participant list and the corresponding information data; identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, determining the speaker's identity information, and dynamically synchronizing the speaker's speech content to the screen and the conference system in real time; and recording the text corresponding to the speech content, automatically recognizing word senses with a learning model based on an NLP algorithm, determining the speaker's language habits and cultural background from the identity information, and outputting a corresponding conference summary accordingly. By combining AI speech recognition with NLP algorithm models and the like, the invention improves both the efficiency and the accuracy of the conference summary.

Description

Conference identification method based on algorithm, terminal and storage medium
Technical Field
The invention relates to the technical field of terminals, in particular to a conference identification method based on an algorithm, a terminal and a storage medium.
Background
To push forward projects and business development, people often bring the relevant parties together in meetings, which carries a time cost for every participant. A conference summary is not a simple record: the content and conclusions of the meeting are conveyed and executed in the form of an official document. In the prior art, producing a conference summary requires a dedicated person to spend a large amount of time on shorthand and organization, and the recording must be reviewed again and again, so the recording work is time-consuming and tedious and the efficiency of producing the conference summary is low.
Existing approaches to recording a conference summary rely on a single mode such as audio or video recording, or simply translate the speech directly into a text record to form the summary and output it, with at best simple classification applied, so that only a meeting transcript is produced. Because natural language is ambiguous, speech in large language systems such as Chinese and English is, under current translation approaches, frequently rendered as homophones or near-homophones, producing ambiguity or text that does not match the speaker's language habits and cultural background, so the accuracy of the conference summary record is low.
Thus, the prior art has yet to be improved.
Disclosure of Invention
The invention aims to solve the technical problem that conventional approaches to producing a conference summary are low in both efficiency and accuracy.
The technical solution adopted by the invention to solve this problem is as follows:
In a first aspect, the present invention provides an algorithm-based conference identification method, comprising:
creating a conference, acquiring a list of participants and the corresponding information data, and preprocessing the participant list and the corresponding information data;
identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, determining the speaker's identity information, and dynamically synchronizing the speaker's speech content to the screen and the conference system in real time;
recording the text corresponding to the speaker's speech content, automatically recognizing word senses with a learning model based on an NLP algorithm, determining the speaker's language habits and cultural background from the identity information, and outputting a corresponding conference summary according to those language habits and that cultural background.
In one implementation, recording the text corresponding to the speaker's speech content and automatically recognizing word senses with the learning model based on the NLP algorithm comprises:
recording the text corresponding to the speaker's speech content;
performing context-aware semantic recognition with the learning model based on the NLP algorithm;
taking frequently occurring words as core words, and splitting the text into paragraphs according to the core words;
and predicting further content from the existing text, and supplementing the text with the predicted content.
In one implementation, determining the speaker's language habits and cultural background according to the speaker's identity information and outputting the corresponding conference summary according to those language habits and that cultural background comprises:
determining the speaker's language habits from the identity information, and performing syntactic analysis and automatic correction based on the syntactic and grammatical structure analysis associated with those habits;
and analyzing, processing and summarizing subjective, emotionally colored text according to the cultural background, and generating a standard conference summary file format.
In one implementation, after the corresponding conference summary is output according to the speaker's language habits and cultural background, the method further comprises:
acquiring modification annotations made to the conference summary by all participants through human-computer interaction;
and correcting the conference summary according to the modification annotations, and outputting the corrected conference summary along an output path.
In one implementation, acquiring the participant list and the corresponding information data and preprocessing them comprises:
performing identity recognition and information-data collection on the participants and acquiring the person attribute parameters corresponding to each participant, to obtain the participant list and the corresponding information data;
and uploading the participant list and the corresponding information data to a database on a server.
In one implementation, the person attribute parameters include: one or a combination of face information, voiceprint information and language information;
wherein the language information includes: nationality, language, and commonly used languages.
In one implementation, identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, and determining the speaker's identity information comprises:
picking up the speaker's voice information;
and determining the sound pickup source through AI speech recognition, matching the speaker through face recognition, looking up the corresponding data in the preprocessed data, and determining the speaker's identity information.
In one implementation, dynamically synchronizing the speaker's speech content to the screen and the conference system in real time comprises:
outputting the speaker's speech content in the language version corresponding to the identity information, displaying the output content on the large screen of the conference system, and synchronizing it to the speaker's terminal screen;
and outputting the speaker's speech content in the language versions corresponding to the other participants, and synchronizing those language versions to the other participants' terminal screens.
In a second aspect, the present invention further provides a terminal, comprising: a processor and a memory storing an algorithm-based conference identification program which, when executed by the processor, implements the operations of the algorithm-based conference identification method according to the first aspect.
In a third aspect, the present invention further provides a storage medium, which is a computer-readable storage medium storing an algorithm-based conference identification program which, when executed by a processor, implements the operations of the algorithm-based conference identification method according to the first aspect.
The invention adopts the above technical solution and has the following effects:
The invention provides users with an algorithm-based conference identification method. By combining AI speech recognition with NLP algorithm models and the like, the method determines the speaker's language habits and cultural background from the speaker's identity information and outputs the corresponding conference summary accordingly, thereby improving both the efficiency and the accuracy of the conference summary. Because the conference is recorded and summarized by the algorithm rather than by subjective judgment, the record is more objective; office automation is achieved, users receive more intelligent and standardized content, and conferences can be held and recorded more conveniently and efficiently.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an algorithm-based conference identification method in one implementation of the invention.
FIG. 2 is a schematic diagram of a conference summary in one implementation of the invention.
Fig. 3 is a functional schematic of a terminal in one implementation of the invention.
The implementation, functional features and advantages of the invention will be further described below with reference to the accompanying drawings and embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to illustrate the invention, not to limit it.
Exemplary method
Existing conference summary techniques are limited to recording modes such as audio and video. Not only is the efficiency low, but the speech is translated directly into a text record to form the summary and output it, with only simple classification applied, so that only a meeting transcript is produced. Because natural language is ambiguous, speech in large language systems such as Chinese and English is, under current translation approaches, frequently rendered as homophones or near-homophones, producing ambiguity or text that does not match the speaker's language habits and cultural background, so the accuracy of the conference summary record is low.
To address these technical problems, an embodiment of the present invention provides an algorithm-based conference identification method that combines AI speech recognition with NLP algorithm models and the like to eliminate speech-recognition ambiguity, classify content and split it into paragraphs, automatically complete text, raise early warnings, correct grammatical and syntactic errors, and so on, and that automatically generates a conference summary after performing sentiment analysis on the real-time comments and annotations. The embodiment records the conference by machine and summarizes and sets it out by the algorithm, without subjective judgment in recording the conference content, thereby achieving office automation, providing users with more intelligent and standardized content, and making it more convenient and efficient to hold and record conferences.
As shown in fig. 1, an embodiment of the present invention provides an algorithm-based conference identification method, including the following steps:
Step S100: a conference is created, the participant list and the corresponding information data are acquired, and the participant list and the corresponding information data are preprocessed.
In this embodiment, the conference identification method based on the algorithm is applied to a terminal, where the terminal includes but is not limited to: computers, mobile terminals, and the like.
In this embodiment, the terminal is a conference-system terminal to which the client of each participant is connected over a network. After the terminal creates a conference, it collects the participant list and data and preprocesses them. When the participants join the conference and the video conference starts, the terminal determines the sound pickup source through AI speech recognition and matches the face, judges the identity of the speaker and confirms which participant the voice comes from, and dynamically synchronizes the content in real time, outputting it to the screen and the conference system. The conference system records the text output for the corresponding participant and, with a learning model based on an NLP algorithm, automatically recognizes parts of speech through syntax and semantics; after word segmentation it outputs a conference summary that matches the participant's language habits and takes the participant's cultural background into account. This reduces the content-recognition error rate and improves the accuracy and relevance of the content.
In this embodiment, participants are also supported in marking, in real time and either manually or by voice, the passages they question while reviewing the content online, and the conference system supports manual annotation to modify the content; after the participants confirm, the final conference summary is output to everyone by mail and/or SMS and/or link, achieving office automation.
Specifically, in an implementation manner of the present embodiment, the step S100 includes the following steps:
Step S101: performing identity recognition and information-data collection on the participants and acquiring the person attribute parameters corresponding to each participant, to obtain the participant list and the corresponding information data;
Step S102: uploading the participant list and the corresponding information data to a database on a server.
In this embodiment, before the conference takes place, the conference system on the terminal performs identity recognition and information-data collection on the participants to acquire parameters such as person attributes, where the person attribute parameters include: face information (e.g. avatar, gender), voiceprint information (e.g. voice curve, gender, age) and language information (e.g. nationality, native language, commonly used languages).
In one implementation of this embodiment, the identity and information data of the participants may be acquired by the terminal remotely controlling the corresponding clients and starting their camera and voice modules to collect the corresponding information data, thereby obtaining the participant list and the corresponding information data; in another implementation of this embodiment, each client locally starts its camera and voice modules to collect the corresponding information data and then transmits the collected data to the terminal, which likewise obtains the participant list and the corresponding information data.
In this embodiment, after collecting the participants' information data, the terminal obtains the participant list and the corresponding information data and transmits them to a database on a server; the server analyzes and organizes the uploaded participant list and corresponding information data and, after checking and calibration, creates a corresponding database on the server, associates the checked and calibrated participant list with the corresponding information data, and stores them in that database.
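For illustration only, the person attribute parameters and the preprocessing described above might be organized as in the following Python sketch. The field names, the preprocess helper and the JSON payload are assumptions made for this example and are not data structures disclosed by the embodiment.

from dataclasses import dataclass, asdict, field
import json

@dataclass
class Participant:
    # Person attribute parameters collected before the conference (illustrative field names)
    name: str
    face_embedding: list          # e.g. a vector produced by a face-recognition model
    voiceprint: list              # e.g. a speaker-embedding vector
    nationality: str
    native_language: str
    common_languages: list = field(default_factory=list)

def preprocess(participants):
    """Normalize the participant records and serialize them for upload to the server database."""
    cleaned = []
    for p in participants:
        p.name = p.name.strip()
        if not p.common_languages:            # fall back to the native language
            p.common_languages = [p.native_language]
        cleaned.append(asdict(p))
    return json.dumps(cleaned, ensure_ascii=False)

# Example: one participant record staged for upload to the server-side database
payload = preprocess([Participant("Participant A", [0.12, 0.98], [0.44, 0.31],
                                  "CN", "zh", ["zh", "en"])])

In a real deployment the serialized payload would be written into the database created on the server for the conference.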
As shown in fig. 1, in an implementation manner of the embodiment of the present invention, the algorithm-based conference identification method further includes the following steps:
Step S200: identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, determining the speaker's identity information, and dynamically synchronizing the speaker's speech content to the screen and the conference system in real time.
In this embodiment, when the conference starts, the terminal reads the data in the database on the server to obtain the data corresponding to the actual participants; at the same time, the terminal and each client start their local camera and voice modules to recognize the speaker and determine the speaker's identity information.
Specifically, in one implementation manner of the present embodiment, the step S200 includes the following steps:
Step S201: picking up the speaker's voice information;
Step S202: determining the sound pickup source through AI speech recognition, matching the speaker through face recognition, looking up the corresponding data in the preprocessed data, and determining the speaker's identity information.
In this embodiment, when a speaker is to be recognized, the sound pickup source is determined through AI (Artificial Intelligence) speech recognition; the speaker may be a local speaker (i.e. the conference host) or a speaker at a client. As to pickup sources, the audio and video signal sources of the conference include the audio and video signals of the local speaker and the audio and video signals of client-side speakers. Taking the audio signal as an example, for the terminal of the local speaker, the conference terminal (i.e. the terminal) detects through its voice module whether a voice signal is being input and, if so, captures the audio input source of the speaker (i.e. the local speaker); for the client of a remote speaker, the conference terminal (i.e. the terminal) receives the audio signal sent by the corresponding client, decodes it, and uses the decoded information as the corresponding audio input source.
During speaker matching, if the audio/video signal is that of the local speaker, matching and recognition are performed directly on that signal. The matching procedure is as follows: the corresponding data record is looked up in the preprocessed data (i.e. the data read from the server) according to the audio and video signals captured in real time, matching the avatar, age, gender, voice and language; if the match succeeds, the speaker's identity information is determined; if the match fails, the speaker is reminded or warned.
If the audio and video signal is that of a speaker at a client, the speaker's identity information can be verified either locally or remotely. In one approach, after the remote client captures the audio and video signal, it recognizes the speaker's identity locally from that signal (the matching procedure is as described above) and then sends the recognized identity information to the conference terminal (i.e. the terminal). In the other approach, the remote client sends the captured audio and video signals to the conference terminal (i.e. the terminal), which then recognizes the speaker's identity from those signals.
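For illustration, the matching of a live face embedding and voiceprint against the preprocessed records might look like the following sketch. The cosine-similarity metric, the equal weighting and the 0.8 threshold are assumptions for the example; the embodiment does not prescribe a particular matching algorithm.

import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_speaker(face_vec, voice_vec, database, threshold=0.8):
    """Match a live face embedding and voiceprint against the preprocessed participant records."""
    best, best_score = None, threshold
    for record in database:                   # records read back from the server database
        score = 0.5 * cosine(face_vec, record["face_embedding"]) \
              + 0.5 * cosine(voice_vec, record["voiceprint"])
        if score > best_score:
            best, best_score = record, score
    return best

A return value of None would correspond to the failed-match case above, in which the speaker is reminded or warned.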
Specifically, in an implementation manner of this embodiment, the step S200 further includes the following steps:
Step S203: outputting the speaker's speech content in the language version corresponding to the identity information, displaying the output content on the large screen of the conference system, and synchronizing it to the speaker's terminal screen;
Step S204: outputting the speaker's speech content in the language versions corresponding to the other participants, and synchronizing those language versions to the other participants' terminal screens.
In this embodiment, after the speaker's identity information is determined, the conference terminal (i.e. the terminal) synchronizes and translates in real time the conference content spoken by each participant, that is, it outputs the speaker's conference content.
For example, when the camera and voice modules dynamically detect that speaker A (say, a Chinese speaker) is speaking, the content is output as a Chinese version, displayed on the large screen of the conference system and synchronized to speaker A's terminal screen, while the clients of the other participants behave as follows (an illustrative dispatch sketch follows the list):
Participant B (e.g. an American): the real-time conference content received is the English version;
Participant C (e.g. a French participant): the real-time conference content received is the French version;
Participant D (e.g. a Russian participant): the real-time conference content received is the Russian version;
Participant N (e.g. from country N): the real-time conference content received is that participant's preset commonly used language version.
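As a minimal sketch of this per-participant dispatch (assuming each participant record carries the preset commonly used languages collected in step S100), the translate function below is a hypothetical placeholder for whatever machine-translation backend the conference system actually integrates:

def translate(text, target_language):
    # Hypothetical stand-in for the machine-translation backend the conference system would call
    return "[" + target_language + "] " + text

def broadcast_speech(text, speaker, participants):
    """Render the speaker's content in each participant's preferred language version."""
    screens = {speaker["name"]: translate(text, speaker["common_languages"][0])}
    for p in participants:
        if p["name"] != speaker["name"]:
            screens[p["name"]] = translate(text, p["common_languages"][0])
    return screens            # each entry would be pushed to the matching terminal screen

# Example: speaker A (Chinese), participant B (English), participant C (French)
screens = broadcast_speech(
    "The schedule is confirmed.",
    {"name": "A", "common_languages": ["zh"]},
    [{"name": "B", "common_languages": ["en"]}, {"name": "C", "common_languages": ["fr"]}],
)

Each entry of the returned mapping would then be displayed on the terminal screen of the corresponding participant.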
In this embodiment, the identity information of a speaker in the conference, such as age, gender, voice and language, can be determined through AI speech recognition and face recognition, so that, in combination with NLP algorithm models and the like, the system can provide users with more intelligent and standardized service content and improve the users' conference experience.
As shown in fig. 1, in an implementation manner of the embodiment of the present invention, the algorithm-based conference identification method further includes the following steps:
Step S300: recording the text corresponding to the speaker's speech content, automatically recognizing word senses with a learning model based on an NLP algorithm, determining the speaker's language habits and cultural background from the identity information, and outputting a corresponding conference summary according to those language habits and that cultural background.
In this embodiment, the conference system (i.e. the terminal) first records the text output for the corresponding participant (i.e. the text of the speech content) and then, with a learning model based on a Natural Language Processing (NLP) algorithm, automatically recognizes parts of speech through syntax and semantics; after word segmentation it outputs a conference summary that matches the participant's language habits and takes the participant's cultural background into account.
Specifically, in one implementation manner of the present embodiment, the step S300 includes the following steps:
Step S301: recording the text corresponding to the speaker's speech content;
Step S302: performing context-aware semantic recognition with the learning model based on the NLP algorithm;
Step S303: taking frequently occurring words as core words, and splitting the text into paragraphs according to the core words;
Step S304: predicting further content from the existing text, and supplementing the text with the predicted content.
In this embodiment, while the conference system (i.e. the terminal) performs language processing, the following operations are carried out:
First, semantic recognition is performed in the context of the speaker's language, eliminating speech-recognition ambiguity in the speech content; for example, near-homophones and homophones are accurately matched to the proper nouns and/or common nouns of the relevant field.
Then, high-frequency words are determined from word counts in the speech content, converted into core words and used to classify the content, so that a given piece of text is associated with one or more categories and segmented; words that appear frequently or carry implicit emphasis, such as "first of all" and "finally", are converted into core words and highlighted.
Finally, the text corresponding to the speech content is automatically completed, i.e. preprocessed: the next word is predicted from the known text, for example by expanding abbreviations to their full forms, removing filler words from incomplete phrases, and comparing the result against the dictionary output.
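A minimal, standard-library Python sketch of the core-word extraction, paragraph splitting and dictionary-based completion described above follows. The filler-word list, the abbreviation dictionary and the top-N frequency cut-off are assumptions for the example; the embodiment leaves the concrete NLP learning model unspecified.

import re
from collections import Counter

FILLERS = {"um", "uh", "er", "well"}                        # assumed filler/mood words
ABBREVIATIONS = {"NLP": "Natural Language Processing"}      # assumed expansion dictionary

def core_words(text, top_n=3):
    # Treat the most frequent non-filler words as core words
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text) if w.lower() not in FILLERS]
    return [w for w, _ in Counter(words).most_common(top_n)]

def split_paragraphs(sentences, cores):
    # Start a new paragraph whenever a sentence switches to a different core word
    paragraphs, current, current_core = [], [], None
    for s in sentences:
        hit = next((c for c in cores if c in s.lower()), current_core)
        if current and hit != current_core:
            paragraphs.append(" ".join(current))
            current = []
        current_core = hit
        current.append(s)
    if current:
        paragraphs.append(" ".join(current))
    return paragraphs

def complete_text(sentence):
    # Expand known abbreviations and drop filler words by dictionary comparison
    words = [ABBREVIATIONS.get(w, w) for w in sentence.split() if w.lower() not in FILLERS]
    return " ".join(words)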
Specifically, in an implementation manner of this embodiment, the step S300 further includes the following steps:
Step S305: determining the speaker's language habits from the speaker's identity information, and performing syntactic analysis and automatic correction based on the syntactic and grammatical structure analysis associated with those habits;
Step S306: analyzing, processing and summarizing subjective, emotionally colored text according to the cultural background, and generating a standard conference summary file format.
In this embodiment, while the conference system (i.e. the terminal) performs language processing, the following operations may also be carried out:
First, words are automatically segmented according to the speaker's language, grammatical and semantic errors are marked and flagged as early warnings, and errors are automatically corrected after syntactic analysis and understanding based on rule-based syntax and the structural analysis of the current statistical grammar. Second, sentiment analysis is performed on the real-time comments and annotations: the subjective, emotionally colored text is analyzed and, after processing and summarization, a standard conference summary file format is produced automatically. As shown in fig. 2, the standard conference summary file format includes: basic meeting information (e.g. time, place, host, participants, document number), annotation information, and the speech content.
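For illustration, the sentiment-analysis and summary-formatting step might be sketched as follows. The keyword-based polarity score stands in for a trained sentiment model, and the section layout merely mirrors the basic-information / annotation / speech-content structure of fig. 2; both are assumptions for the example.

from datetime import datetime

POSITIVE = {"agree", "good", "support", "approve"}     # toy sentiment lexicon (an assumption,
NEGATIVE = {"disagree", "concern", "risk", "delay"}    # standing in for a trained model)

def sentiment(text):
    # Crude polarity score for a subjective, emotionally colored comment
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def build_summary(meeting_info, comments, speech_paragraphs):
    """Assemble the layout of fig. 2: basic information, annotation information, speech content."""
    lines = ["CONFERENCE SUMMARY", "Generated: " + datetime.now().strftime("%Y-%m-%d %H:%M")]
    lines += [k + ": " + str(v) for k, v in meeting_info.items()]
    lines.append("--- Annotation information ---")
    for author, comment in comments:
        polarity = "+" if sentiment(comment) >= 0 else "-"
        lines.append("[" + author + "] (" + polarity + ") " + comment)
    lines.append("--- Speech content ---")
    lines += speech_paragraphs
    return "\n".join(lines)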
In this embodiment, by combining AI speech recognition with NLP algorithm models and the like, speech-recognition ambiguity is eliminated, content is classified and split into paragraphs, text is automatically completed, early warnings are raised, grammatical and syntactic errors are corrected, and the conference summary is generated automatically after sentiment analysis of the real-time comments and annotations. Because the conference is recorded by machine and summarized by the algorithm rather than by subjective judgment, the record is more objective; office automation is achieved, users receive more intelligent and standardized content, conferences can be held and recorded more conveniently and efficiently, and the user experience is improved.
In an implementation manner of the embodiment of the present invention, the algorithm-based conference identification method further includes the following steps:
Step S400: acquiring the modification annotations made to the conference summary by all participants through human-computer interaction;
Step S500: correcting the conference summary according to the modification annotations, and outputting the corrected conference summary along an output path.
In this embodiment, participants are supported in marking, in real time and either manually or by voice, the passages they question while reviewing the summary online, and the conference system supports manual annotation to modify the content; after the participants confirm, the final version of the conference summary is output and mailed to everyone, achieving office automation.
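A sketch of this final correction-and-output step is given below. Representing each modification annotation as a pair of original and corrected text, and writing the confirmed summary to a plain file path, are simplifying assumptions; the embodiment's mail/SMS/link delivery is not shown.

from pathlib import Path

def apply_annotations(draft, annotations):
    # Each modification annotation is assumed to be a (marked_text, corrected_text) pair
    for marked, corrected in annotations:
        draft = draft.replace(marked, corrected)
    return draft

def output_summary(draft, annotations, output_path="conference_summary.txt"):
    """Correct the summary per the confirmed annotations and write it to the output path."""
    final = apply_annotations(draft, annotations)
    Path(output_path).write_text(final, encoding="utf-8")
    return output_path        # the path (or a link to it) would then be mailed to all participants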
This embodiment provides users with an algorithm-based conference identification method in which the conference is recorded by machine and summarized and set out by the algorithm, without subjective judgment in recording the conference content; it achieves office automation, provides real-time multi-language translation and sharing among the parties involved, makes holding and recording conferences more convenient and efficient, and improves the user experience.
Through the above technical solution, this embodiment achieves the following technical effects:
This embodiment provides users with an algorithm-based conference identification method that, by combining AI speech recognition with NLP algorithm models and the like, eliminates speech-recognition ambiguity, classifies content and splits it into paragraphs, automatically completes text, raises early warnings, corrects syntactic errors and the like, and automatically generates a conference summary after sentiment analysis of the real-time comments and annotations. The embodiment records and summarizes objectively through the algorithm, without subjective judgment in recording the conference content, thereby achieving office automation, providing users with more intelligent and standardized content, and making it more convenient and efficient to hold and record conferences.
Exemplary device
Based on the above embodiments, the present invention further provides a terminal comprising a processor, a memory, an interface, a display screen and a communication module connected by a system bus, where the processor provides computing and control capability; the memory comprises a storage medium and an internal memory; the storage medium stores an operating system and a computer program; the internal memory provides the environment in which the operating system and the computer program in the storage medium run; the interface is used to connect external devices such as mobile terminals and computers; the display screen displays the corresponding information; and the communication module communicates with a cloud server or a mobile terminal.
The computer program is operable, when executed by the processor, to perform operations of an algorithm-based conference identification method.
Those skilled in the art will understand that the block diagram shown in fig. 3 shows only part of the structure relevant to the inventive solution and does not limit the terminals to which the solution is applied; a particular terminal may include more or fewer components than shown in fig. 3, combine certain components, or arrange the components differently.
In one embodiment, a terminal is provided, comprising: a processor and a memory storing an algorithm-based conference identification program which, when executed by the processor, implements the operations of the algorithm-based conference identification method described above.
In one embodiment, a storage medium is provided, wherein the storage medium stores an algorithm-based conference identification program which, when executed by a processor, implements the operations of the algorithm-based conference identification method described above.
Those skilled in the art will understand that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory.
In summary, the present invention provides an algorithm-based conference identification method, a terminal and a storage medium, the method comprising: creating a conference, acquiring the participant list and corresponding information data, and preprocessing the participant list and the corresponding information data; identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, determining the speaker's identity information, and dynamically synchronizing the speaker's speech content to the screen and the conference system in real time; and recording the text corresponding to the speech content, automatically recognizing word senses with a learning model based on an NLP algorithm, determining the speaker's language habits and cultural background from the identity information, and outputting the corresponding conference summary accordingly. By combining AI speech recognition with NLP algorithm models and the like, the invention improves both the efficiency and the accuracy of the conference summary.
It should be understood that the invention is not limited to the examples described above; those of ordinary skill in the art may make improvements or changes in light of the above description, and all such improvements and changes fall within the scope of protection of the appended claims.

Claims (10)

1. An algorithm-based conference identification method, comprising:
creating a conference, acquiring a list of participants and the corresponding information data, and preprocessing the participant list and the corresponding information data;
identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, determining the speaker's identity information, and dynamically synchronizing the speaker's speech content to the screen and the conference system in real time;
and recording the text corresponding to the speaker's speech content, automatically recognizing word senses with a learning model based on an NLP algorithm, determining the speaker's language habits and cultural background from the identity information, and outputting a corresponding conference summary according to those language habits and that cultural background.
2. The algorithm-based conference identification method of claim 1, wherein recording the text corresponding to the speaker's speech content and automatically recognizing word senses with the learning model based on the NLP algorithm comprises:
recording the text corresponding to the speaker's speech content;
performing context-aware semantic recognition with the learning model based on the NLP algorithm;
taking frequently occurring words as core words, and splitting the text into paragraphs according to the core words;
and predicting further content from the existing text, and supplementing the text with the predicted content.
3. The algorithm-based conference identification method of claim 1, wherein determining the speaker's language habits and cultural background according to the speaker's identity information and outputting the corresponding conference summary according to those language habits and that cultural background comprises:
determining the speaker's language habits from the identity information, and performing syntactic analysis and automatic correction based on the syntactic and grammatical structure analysis associated with those habits;
and analyzing, processing and summarizing subjective, emotionally colored text according to the cultural background, and generating a standard conference summary file format.
4. The algorithm-based conference identification method of claim 1, wherein after the corresponding conference summary is output according to the speaker's language habits and cultural background, the method further comprises:
acquiring modification annotations made to the conference summary by all participants through human-computer interaction;
and correcting the conference summary according to the modification annotations, and outputting the corrected conference summary along an output path.
5. The algorithm-based conference identification method of claim 1, wherein acquiring the participant list and the corresponding information data and preprocessing them comprises:
performing identity recognition and information-data collection on the participants and acquiring the person attribute parameters corresponding to each participant, to obtain the participant list and the corresponding information data;
and uploading the participant list and the corresponding information data to a database on a server.
6. The algorithm-based conference identification method of claim 5, wherein the person attribute parameters comprise: one or a combination of face information, voiceprint information and language information;
wherein the language information includes: nationality, language, and commonly used languages.
7. The algorithm-based conference identification method of claim 1, wherein identifying a speaker who joins the conference by means of AI speech recognition, face recognition and the preprocessed data, and determining the speaker's identity information comprises:
picking up the speaker's voice information;
and determining the sound pickup source through AI speech recognition, matching the speaker through face recognition, looking up the corresponding data in the preprocessed data, and determining the speaker's identity information.
8. The algorithm-based conference identification method of claim 1, wherein dynamically synchronizing the speaker's speech content to the screen and the conference system in real time comprises:
outputting the speaker's speech content in the language version corresponding to the identity information, displaying the output content on the large screen of the conference system, and synchronizing it to the speaker's terminal screen;
and outputting the speaker's speech content in the language versions corresponding to the other participants, and synchronizing those language versions to the other participants' terminal screens.
9. A terminal, comprising: a processor and a memory storing an algorithm-based conference identification program which, when executed by the processor, implements the operations of the algorithm-based conference identification method of any one of claims 1-8.
10. A storage medium, wherein the storage medium is a computer-readable storage medium storing an algorithm-based conference identification program which, when executed by a processor, implements the operations of the algorithm-based conference identification method of any one of claims 1 to 8.
CN202211337154.0A 2022-10-28 2022-10-28 Conference identification method based on algorithm, terminal and storage medium Pending CN115936002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211337154.0A CN115936002A (en) 2022-10-28 2022-10-28 Conference identification method based on algorithm, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211337154.0A CN115936002A (en) 2022-10-28 2022-10-28 Conference identification method based on algorithm, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115936002A true CN115936002A (en) 2023-04-07

Family

ID=86651747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211337154.0A Pending CN115936002A (en) 2022-10-28 2022-10-28 Conference identification method based on algorithm, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN115936002A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117316163A (en) * 2023-10-08 2023-12-29 江门市麦德利电子科技有限公司 Paperless office conference equipment and paperless office conference method


Similar Documents

Publication Publication Date Title
US11417343B2 (en) Automatic speaker identification in calls using multiple speaker-identification parameters
CN108986826A (en) Automatically generate method, electronic device and the readable storage medium storing program for executing of minutes
JP6233798B2 (en) Apparatus and method for converting data
WO2005027092A1 (en) Document creation/reading method, document creation/reading device, document creation/reading robot, and document creation/reading program
US10650813B2 (en) Analysis of content written on a board
CN111063355A (en) Conference record generation method and recording terminal
CN112699645A (en) Corpus labeling method, apparatus and device
CN115936002A (en) Conference identification method based on algorithm, terminal and storage medium
CN111062221A (en) Data processing method, data processing device, electronic equipment and storage medium
US20140297255A1 (en) System and method for speech to speech translation using cores of a natural liquid architecture system
CN111126084A (en) Data processing method and device, electronic equipment and storage medium
US20230326369A1 (en) Method and apparatus for generating sign language video, computer device, and storage medium
CN110992958B (en) Content recording method, content recording apparatus, electronic device, and storage medium
CN111161710A (en) Simultaneous interpretation method and device, electronic equipment and storage medium
CN114528851B (en) Reply sentence determination method, reply sentence determination device, electronic equipment and storage medium
CN115623134A (en) Conference audio processing method, device, equipment and storage medium
CN113763925B (en) Speech recognition method, device, computer equipment and storage medium
CN111556096B (en) Information pushing method, device, medium and electronic equipment
CN115424618A (en) Electronic medical record voice interaction equipment based on machine learning
CN115171673A (en) Role portrait based communication auxiliary method and device and storage medium
US11017073B2 (en) Information processing apparatus, information processing system, and method of processing information
CN114297409A (en) Model training method, information extraction method and device, electronic device and medium
CN114239610A (en) Multi-language speech recognition and translation method and related system
CN113312928A (en) Text translation method and device, electronic equipment and storage medium
CN113221514A (en) Text processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination