CN113630306A - Information processing method, information processing device, electronic equipment and storage medium - Google Patents

Information processing method, information processing device, electronic equipment and storage medium

Info

Publication number
CN113630306A
CN113630306A
Authority
CN
China
Prior art keywords
terminal
information
preset
word segmentation
voice information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110857187.7A
Other languages
Chinese (zh)
Inventor
龚存晨
魏文长
李求会
于猛
杨子闻
张凯
赵忻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110857187.7A
Publication of CN113630306A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/21 Monitoring or handling of messages
    • H04L 51/212 Monitoring or handling of messages using filtering or selective blocking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/52 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

Abstract

The present disclosure relates to an information processing method, an information processing apparatus, an electronic device, and a storage medium. The method includes: when voice information sent by a first terminal in a preset room is received, converting the voice information into text information; performing word segmentation processing on the text information to obtain a word segmentation set, where the word segmentation set includes one or more vocabularies; and if a target vocabulary meeting a preset prompt condition exists in the word segmentation set, generating prompt information for the first terminal. In this way, the high management cost and low efficiency of manual review can be avoided, the voice information sent by users can be monitored in real time, and disclosure of user privacy can be avoided.

Description

Information processing method, information processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to an information processing method and apparatus, an electronic device, and a storage medium.
Background
Current voice chat programs allow users to enter the same room over a network through their respective terminal devices and communicate by voice within the room. To maintain order in the room, an administrator supervises the participants' speech. However, such manual management generally suffers from high cost and low efficiency.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an information processing method, apparatus, electronic device, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an information processing method, the method including:
when voice information sent by a first terminal in a preset room is received, converting the voice information into text information;
performing word segmentation processing on the text information to obtain a word segmentation set, wherein the word segmentation set comprises one or more vocabularies;
and if a target vocabulary meeting a preset prompt condition exists in the word segmentation set, generating prompt information for the first terminal.
Optionally, the method further comprises:
extracting entity words from the word segmentation set, and matching the entity words in the word segmentation set with vocabularies in a preset vocabulary set respectively, wherein the vocabularies in the preset vocabulary set all meet the preset prompt condition;
and if an entity word in the word segmentation set matches a vocabulary in the preset vocabulary set, determining that a target vocabulary meeting the preset prompt condition exists in the word segmentation set.
Optionally, the method further comprises:
acquiring a target position of the target vocabulary in the voice information;
shielding the voice data corresponding to the target position in the voice information to obtain processed voice information;
and sending the processed voice information to a second terminal.
Optionally, the method further comprises:
acquiring the number of times that the prompt information has been generated for the first terminal;
and if the number of times is greater than a preset threshold, shielding the voice information sent by the first terminal, so that other terminals in the preset room cannot receive the voice information sent by the first terminal during the shielding period.
Optionally, the shielding the voice information sent by the first terminal includes:
acquiring the violation level of the first terminal, wherein the violation level is positively correlated with the number of times that the prompt information has been generated;
determining a shielding duration for the voice information sent by the first terminal according to the violation level of the first terminal, and shielding the voice information sent by the first terminal according to the shielding duration, wherein the shielding duration is positively correlated with the violation level.
Optionally, the method further comprises:
if no target vocabulary meeting the preset prompt condition is detected in the word segmentation set, receiving complaint information for the first terminal sent by other terminals in the preset room;
and sending the complaint information to a manual auditing platform, so that a manual auditor can evaluate the voice information sent by the first terminal based on the complaint information.
According to a second aspect of the embodiments of the present disclosure, there is provided an information processing apparatus, the apparatus including:
the conversion unit is configured to convert voice information into text information when the voice information sent by a first terminal in a preset room is received;
the word segmentation unit is configured to perform word segmentation on the text information to obtain a word segmentation set, and the word segmentation set comprises one or more vocabularies;
the information generating unit is configured to generate prompt information aiming at the first terminal when target words meeting preset prompt conditions exist in the word segmentation set.
Optionally, the apparatus further comprises:
an entity word extracting unit, configured to extract entity words from the word segmentation set and match the entity words in the word segmentation set with vocabularies in a preset vocabulary set respectively, wherein the vocabularies in the preset vocabulary set all meet the preset prompt condition;
and a target vocabulary determining unit, configured to determine that a target vocabulary meeting the preset prompt condition exists in the word segmentation set when an entity word in the word segmentation set matches a vocabulary in the preset vocabulary set.
Optionally, the apparatus further comprises:
a position acquisition unit configured to acquire a target position where the target vocabulary is located in the voice information;
the first shielding unit is configured to shield voice data corresponding to a target position in the voice information to obtain processed voice information;
a sending unit configured to send the processed voice information to a second terminal.
Optionally, the apparatus further comprises:
a number acquiring unit, configured to acquire the number of times that the prompt information has been generated for the first terminal;
and a second shielding unit, configured to shield the voice information sent by the first terminal if the number of times is greater than a preset threshold, so that other terminals in the preset room cannot receive the voice information sent by the first terminal during the shielding period.
Optionally, the second shielding unit includes:
a level acquisition module, configured to acquire the violation level of the first terminal, wherein the violation level is positively correlated with the number of times that the prompt information has been generated;
and a shielding module, configured to determine a shielding duration for the voice information sent by the first terminal according to the violation level of the first terminal and shield the voice information sent by the first terminal according to the shielding duration, wherein the shielding duration is positively correlated with the violation level.
Optionally, the apparatus further comprises:
a complaint information receiving unit, configured to receive complaint information for the first terminal sent by other terminals in the preset room if no target vocabulary meeting the preset prompt condition is detected in the word segmentation set;
and a complaint information sending unit, configured to send the complaint information to a manual auditing platform, so that a manual auditor can evaluate the voice information sent by the first terminal based on the complaint information.
According to a third aspect of the embodiments of the present disclosure, there is provided a server, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the information processing methods described above.
In a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided, in which instructions, when executed by a processor of a mobile terminal, enable the mobile terminal to perform one of the above-mentioned information processing methods.
According to a fifth aspect of embodiments of the present disclosure, there is provided an application program/computer program product which, when run on a computer, causes the computer to perform the steps of the information processing method described in any one of the above embodiments.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the information processing method, the information processing device, the electronic equipment and the storage medium, when the voice information sent by the first terminal in the preset room is received, the voice information is converted into the text information, word segmentation processing is carried out on the text information, and if target words meeting preset prompt conditions exist in an obtained word segmentation set, prompt information of the first terminal is generated. Therefore, the problems of high management cost and low efficiency caused by manual examination can be solved, the voice information sent by the user can be monitored in real time, and privacy disclosure of the user can be avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an information processing method according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating an information processing apparatus according to an example embodiment;
FIG. 3 is a block diagram illustrating a server in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an information processing method according to an exemplary embodiment. The method may be applied to a server communicatively connected to a plurality of terminals in a preset room and, as shown in Fig. 1, may include the following steps.
in step S110, when receiving the voice message sent by the first terminal in the preset room, the voice message is converted into text message.
In the embodiments provided by the present disclosure, when multiple users play games, watch movies, or chat by voice or video over a network through their terminals, they usually do so in the same room; that is, voice information sent by one user through a terminal is received by the other users in the same room. However, a user who is speaking may make improper remarks out of emotional excitement or for other reasons, which leads to uncivil behavior on the network. To purify the network environment, the parties concerned often monitor such behavior manually online, and in rooms with many participants the voice information sent by users is usually monitored by a network administrator. However, this approach has high labor cost and low efficiency, and because it is limited by the dialect expressions of different regions, manual review often cannot respond in time.
Therefore, in the embodiments of the present disclosure, the voice information sent by the user is reviewed through online real-time recognition. Specifically, when receiving the voice information sent by the first terminal, the server converts the voice information into text information, so that the words in the text can be analyzed and processed.
It should be noted that, when receiving the voice information sent by the terminal, the server may first detect the language used by the user, for example whether it is Chinese or a foreign language and, if Chinese, whether Mandarin or a local dialect is used, so as to better recognize the voice information. A minimal sketch of this conversion step is given below.
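The embodiments do not prescribe a particular speech recognition engine for step S110. As an illustrative sketch only, assuming the open-source Python SpeechRecognition package, a WAV recording received from the first terminal, and Chinese as the detected language, the conversion could look like the following; the function name and parameters are hypothetical.

    import speech_recognition as sr  # assumed third-party package, not named in this disclosure

    def voice_to_text(wav_path, language="zh-CN"):
        """Convert received voice information into text information (step S110)."""
        recognizer = sr.Recognizer()
        with sr.AudioFile(wav_path) as source:
            audio = recognizer.record(source)  # read the whole utterance
        try:
            # The language tag stands in for the Chinese / dialect detection described above.
            return recognizer.recognize_google(audio, language=language)
        except sr.UnknownValueError:
            return ""  # nothing recognizable; no text to segment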
In step S120, word segmentation processing is performed on the text information to obtain a word segmentation set, where the word segmentation set includes one or more vocabularies.
After the voice information is converted into text information, word segmentation processing needs to be performed on it, and a word segmentation set including one or more vocabularies is obtained. For example, if the text information is "The Summer Palace is one of the four major gardens of China", word segmentation may yield a set containing the vocabularies "Summer Palace", "China", "four major gardens", and "one of". Of course, these vocabularies can be further decomposed according to different word segmentation rules.
In step S130, if a target vocabulary meeting the preset prompt condition exists in the word segmentation set, prompt information for the first terminal is generated.
In the embodiments of the present disclosure, whether a target vocabulary meeting the preset prompt condition exists in the voice information can be identified by presetting a vocabulary set whose vocabularies all meet the preset prompt condition, and matching each vocabulary in the word segmentation set with the vocabularies in the preset vocabulary set respectively. It can be understood that all vocabularies in the preset vocabulary set meet the preset prompt condition. In addition, in practical applications, a vocabulary meets the preset prompt condition if it is a sensitive vocabulary.
The preset vocabulary set in this embodiment can be obtained from prior statistics and continuously optimized by incorporating vocabularies reported by users and the like. Once a target vocabulary meeting the preset prompt condition exists in the word segmentation set, the generated prompt information is sent to the user of the first terminal in time, so that the user pays attention to his or her own words, thereby achieving the purpose of real-time monitoring.
Thus, the method may further comprise the steps of:
in step S131, the entity words in the segmentation set are extracted, and the entity words in the segmentation set are respectively matched with the words in the preset word set, where the words in the preset word set all conform to the preset prompt condition.
In step S132, if the entity words in the segmentation set have words matching the words in the preset word set, it is determined that the target words meeting the preset prompt condition exist in the segmentation set.
It should be noted that, in the embodiment of the present disclosure, before the words in the segmentation set are respectively matched with the words in the preset word set, the words in the segmentation set may also be preprocessed. Because the target words meeting the preset prompt conditions are all entity words, other words such as virtual words in the participle set can be filtered, the entity words in the participle set are reserved, and then the entity words are respectively matched with the words in the preset word set, so that the matching efficiency can be improved.
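The disclosure does not name a specific segmenter or part-of-speech tagger. The sketch below assumes the open-source jieba segmenter and a hypothetical preset vocabulary set, and illustrates steps S131-S132: only entity (noun-like) words are kept, and they are matched against the preset set.

    import jieba.posseg as pseg  # assumed third-party segmenter with part-of-speech tags

    # Hypothetical preset vocabulary set; in practice it is built from statistics and user reports.
    PRESET_VOCABULARY_SET = {"sensitive_word_a", "sensitive_word_b"}

    def find_target_vocabulary(text_information):
        segments = pseg.lcut(text_information)  # word segmentation (step S120)
        # Keep entity (noun-like) words and drop function words (step S131 preprocessing).
        entity_words = [seg.word for seg in segments if seg.flag.startswith("n")]
        # Match entity words against the preset vocabulary set (step S132).
        return [word for word in entity_words if word in PRESET_VOCABULARY_SET]

    # A non-empty result means a target vocabulary meeting the preset prompt condition exists,
    # so prompt information for the first terminal is generated (step S130).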
According to the information processing method provided by the embodiments of the present disclosure, when voice information sent by the first terminal in the preset room is received, the voice information is converted into text information and subjected to word segmentation processing; if a target vocabulary meeting the preset prompt condition exists in the obtained word segmentation set, prompt information for the first terminal is generated. In this way, the high management cost and low efficiency of manual review can be avoided, the voice information sent by users can be monitored in real time, and disclosure of user privacy can be avoided.
When a target vocabulary meeting the preset prompt condition appears in the voice information sent by a user, in order to handle that target vocabulary precisely without disrupting the normal chat between the user and other users, in a further embodiment provided by the present disclosure, in combination with the above embodiments, the method may further include the following steps:
in step S133, a target position where the target vocabulary is located in the speech information is acquired.
In step S134, the voice data corresponding to the target position in the voice information is masked, so as to obtain processed voice information.
In step S135, the processed voice information is transmitted to the second terminal.
When several users chat by voice in the same room, for example while playing a game, a user who becomes emotional is likely to use uncivil words, such as profanity, during the chat, and such uncivil words are usually only one or a few words. By detecting the positions of these words and then shielding them, the embodiments of the present disclosure can prevent the uncivil words from being heard by other users, which helps purify the network environment, while the rest of what the user says can still be expressed normally, because only the one or several target vocabularies meeting the preset prompt condition are shielded.
For example, while a first user plays a game in the same room with other users, the voice information the user sends at a certain moment is a long complaint about the match that ends with a swear word. Through sensitive-word matching, the embodiments of the present disclosure find that the last vocabulary of the utterance is a target vocabulary meeting the preset prompt condition, so only that last vocabulary is shielded, which avoids the uncivil expression without affecting the expression of the user's overall meaning.
It should be noted that, when the target vocabulary meeting the preset prompt condition in the voice information is shielded, the embodiments of the present disclosure may turn the target vocabulary into a "beep" sound by adjusting the frequency of the sound, so that other users do not hear the uncivil word while the expression of the rest of the speaking user's content is unaffected. In this embodiment, sending the processed voice information to the second terminal prevents the second user from receiving the uncivil language. A sketch of this audio shielding step is given below.
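The disclosure does not specify how the target position is represented or how the shielding is applied to the audio. The following sketch assumes mono float samples (in the range [-1, 1]) held in a NumPy array and word-level timestamps obtained from the speech recognizer; both are assumptions, and the beep frequency is arbitrary.

    import numpy as np

    def shield_target_position(samples, sample_rate, start_s, end_s, beep_hz=1000.0):
        """Replace the audio between start_s and end_s with a beep tone (step S134).

        `samples` is assumed to be mono float32 audio; (start_s, end_s) are the
        assumed word timestamps for the target vocabulary (step S133).
        """
        start = max(0, int(start_s * sample_rate))
        end = min(len(samples), int(end_s * sample_rate))
        t = np.arange(end - start) / sample_rate
        processed = samples.copy()
        # Overwrite the target vocabulary with a low-amplitude sine "beep".
        processed[start:end] = (0.3 * np.sin(2.0 * np.pi * beep_hz * t)).astype(samples.dtype)
        return processed  # processed voice information to send to the second terminal (step S135)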
In another embodiment provided by the present disclosure, in combination with the above embodiment, the method may further include the following steps:
in step S140, the number of times of generation of the guidance information for the first terminal is acquired.
In step S150, if the generation number is greater than the preset threshold, the voice message sent by the first terminal is masked, so that other terminals in the preset room cannot receive the voice message sent by the first terminal during the masking period.
In the embodiments of the present disclosure, the voice information sent by the user is monitored. If the number of times that prompt information has been generated for the user is greater than the preset threshold, it indicates that the user's current state of mind is not suitable for publishing voice information online, and the user's voice information can be shielded to avoid further disturbing other users, so that other terminals in the room cannot receive the voice information sent by the first terminal during the shielding period.
Specifically, the violation level of the first terminal can be acquired, where the violation level is positively correlated with the number of times that the prompt information has been generated; the shielding duration for the voice information sent by the first terminal is determined according to the violation level of the first terminal, and the voice information sent by the first terminal is shielded according to the shielding duration, where the shielding duration is positively correlated with the violation level.
For example, if three pieces of prompt information have been generated within a period of time for voice information sent by the user through the first terminal, the violation level may be set to low, and subsequent voice information is shielded for 5 minutes. If the user accumulates 5 pieces of prompt information within a period of time, the violation level may be set to medium, and subsequent voice information is shielded for 15 minutes. If the user accumulates more than 5 pieces of prompt information within a period of time, the violation level may be set to high, and subsequent voice information is shielded for 24 hours. The specific settings may be chosen according to the circumstances, and the embodiments of the present disclosure are not limited thereto. That is, the more prompt information a user triggers within a period of time, the greater the penalty, so as to purify the network environment and prompt the user to mind his or her expression. A sketch of this mapping is given below.
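As a sketch of the penalty escalation described above, using the example thresholds and durations (which are illustrative assumptions and would be configurable in practice):

    from datetime import timedelta

    def violation_level_and_shielding(prompt_count):
        """Map the number of generated prompts to a violation level and shielding duration.

        Thresholds follow the example above: 3 prompts -> low / 5 min,
        5 prompts -> medium / 15 min, more than 5 -> high / 24 h.
        """
        if prompt_count > 5:
            return "high", timedelta(hours=24)
        if prompt_count >= 5:
            return "medium", timedelta(minutes=15)
        if prompt_count >= 3:
            return "low", timedelta(minutes=5)
        return None, timedelta(0)  # below the threshold: no shielding yet

    # Example: a first terminal that has accumulated 5 prompts is shielded for 15 minutes.
    level, duration = violation_level_and_shielding(5)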
In another embodiment provided by the present disclosure, in combination with the above embodiment, the method may further include the following steps:
in step S160, if it is not detected that the target vocabulary meeting the preset prompting condition exists in the participle set, the complaint information sent by other terminals in the preset room and addressed to the first terminal is received.
In step S170, the complaint information is sent to the manual review platform, so that the manual review staff can evaluate the voice information sent by the first terminal based on the complaint information.
Limited by the size and update speed of the preset vocabulary set, dialects or foreign languages used by a user may not be recognized in time. In that case, once complaint information for the first terminal sent by other terminals in the preset room is received, the complaint information can be forwarded to the manual auditing platform in time, and a manual auditor evaluates the voice information sent by the first terminal based on the complaint information, thereby preventing a large number of target vocabularies meeting the preset prompt condition from spreading and affecting other users.
Fig. 2 is a block diagram illustrating an information processing apparatus according to an example embodiment. Referring to fig. 2, the apparatus includes a conversion unit 10, a segmentation unit 20, and an information generation unit 30.
The conversion unit is configured to convert voice information into text information when the voice information sent by a first terminal in a preset room is received;
the word segmentation unit is configured to perform word segmentation on the text information to obtain a word segmentation set, and the word segmentation set comprises one or more vocabularies;
the information generating unit is configured to generate prompt information aiming at the first terminal when target words meeting preset prompt conditions exist in the word segmentation set.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
an entity word extracting unit, configured to extract entity words from the word segmentation set and match the entity words in the word segmentation set with vocabularies in a preset vocabulary set respectively, wherein the vocabularies in the preset vocabulary set all meet the preset prompt condition;
and a target vocabulary determining unit, configured to determine that a target vocabulary meeting the preset prompt condition exists in the word segmentation set when an entity word in the word segmentation set matches a vocabulary in the preset vocabulary set.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
a position acquisition unit configured to acquire a target position where the target vocabulary is located in the voice information;
the first shielding unit is configured to shield voice data corresponding to a target position in the voice information to obtain processed voice information;
a sending unit configured to send the processed voice information to a second terminal.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
a number acquiring unit, configured to acquire the number of times that the prompt information has been generated for the first terminal;
and a second shielding unit, configured to shield the voice information sent by the first terminal if the number of times is greater than a preset threshold, so that other terminals in the preset room cannot receive the voice information sent by the first terminal during the shielding period.
In yet another embodiment provided by the present disclosure, the second shielding unit includes:
a level acquisition module, configured to acquire the violation level of the first terminal, wherein the violation level is positively correlated with the number of times that the prompt information has been generated;
and a shielding module, configured to determine a shielding duration for the voice information sent by the first terminal according to the violation level of the first terminal and shield the voice information sent by the first terminal according to the shielding duration, wherein the shielding duration is positively correlated with the violation level.
In yet another embodiment provided by the present disclosure, the apparatus further comprises:
a complaint information receiving unit, configured to receive complaint information for the first terminal sent by other terminals in the preset room if no target vocabulary meeting the preset prompt condition is detected in the word segmentation set;
and a complaint information sending unit, configured to send the complaint information to a manual auditing platform, so that a manual auditor can evaluate the voice information sent by the first terminal based on the complaint information.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
When receiving voice information sent by a first terminal in a preset room, the information processing apparatus provided by the embodiments of the present disclosure converts the voice information into text information and performs word segmentation processing on it; if a target vocabulary meeting the preset prompt condition exists in the obtained word segmentation set, prompt information for the first terminal is generated. In this way, the high management cost and low efficiency of manual review can be avoided, the voice information sent by users can be monitored in real time, and disclosure of user privacy can be avoided.
Fig. 3 is a block diagram illustrating an apparatus 1900 for information processing according to an example embodiment. For example, the apparatus 1900 may be provided as a server. Referring to fig. 3, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the information processing method described above.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 1932 including instructions executable by the processing component 1922 of the device 1900 to perform the above-described method, is also provided. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The disclosed embodiments also provide a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the above-mentioned information processing method.
According to yet another embodiment of the present disclosure, there is also provided an application program/computer program product including instructions which, when run on a computer, cause the computer to perform the steps of the information processing method described in any one of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the disclosure are, in whole or in part, generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber, DSL (Digital Subscriber Line)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a DVD (Digital Versatile Disk)), or a semiconductor medium (e.g., an SSD (Solid State Disk)), etc.
The memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An information processing method, characterized in that the method comprises:
when receiving voice information sent by a first terminal in a preset voice chat room, converting the voice information into text information;
performing word segmentation processing on the text information to obtain a word segmentation set, wherein the word segmentation set comprises one or more vocabularies;
and if a target vocabulary meeting a preset prompt condition exists in the word segmentation set, generating prompt information for the first terminal.
2. The method of claim 1, further comprising:
extracting entity words from the word segmentation set, and matching the entity words in the word segmentation set with vocabularies in a preset vocabulary set respectively, wherein the vocabularies in the preset vocabulary set all meet the preset prompt condition;
and if an entity word in the word segmentation set matches a vocabulary in the preset vocabulary set, determining that a target vocabulary meeting the preset prompt condition exists in the word segmentation set.
3. The method of claim 2, further comprising:
acquiring a target position of the target vocabulary in the voice information;
shielding the voice data corresponding to the target position in the voice information to obtain processed voice information;
and sending the processed voice information to a second terminal.
4. The method of claim 1, further comprising:
acquiring the number of times that the prompt information has been generated for the first terminal;
and if the number of times is greater than a preset threshold, shielding the voice information sent by the first terminal, so that other terminals in the preset room cannot receive the voice information sent by the first terminal during the shielding period.
5. The method of claim 4, wherein the shielding the voice information sent by the first terminal comprises:
acquiring the violation level of the first terminal, wherein the violation level is positively correlated with the number of times that the prompt information has been generated;
determining a shielding duration for the voice information sent by the first terminal according to the violation level of the first terminal, and shielding the voice information sent by the first terminal according to the shielding duration, wherein the shielding duration is positively correlated with the violation level.
6. The method according to any one of claims 1 to 5, further comprising:
if no target vocabulary meeting the preset prompt condition is detected in the word segmentation set, receiving complaint information for the first terminal sent by other terminals in the preset room;
and sending the complaint information to a manual auditing platform, so that a manual auditor can evaluate the voice information sent by the first terminal based on the complaint information.
7. An information processing apparatus characterized in that the apparatus comprises:
the conversion unit is configured to convert voice information into text information when the voice information sent by a first terminal in a preset room is received;
the word segmentation unit is configured to perform word segmentation on the text information to obtain a word segmentation set, and the word segmentation set comprises one or more vocabularies;
the information generating unit is configured to generate prompt information aiming at the first terminal when target words meeting preset prompt conditions exist in the word segmentation set.
8. A server, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the information processing method of any one of claims 1 to 6.
9. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, enable the mobile terminal to perform the information processing method of any one of claims 1 to 6.
10. An application program/computer program product which, when run on a computer, causes the computer to carry out the steps of the information processing method according to any one of claims 1 to 6.
CN202110857187.7A 2021-07-28 2021-07-28 Information processing method, information processing device, electronic equipment and storage medium Pending CN113630306A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857187.7A CN113630306A (en) 2021-07-28 2021-07-28 Information processing method, information processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110857187.7A CN113630306A (en) 2021-07-28 2021-07-28 Information processing method, information processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113630306A true CN113630306A (en) 2021-11-09

Family

ID=78381379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857187.7A Pending CN113630306A (en) 2021-07-28 2021-07-28 Information processing method, information processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113630306A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342153A (en) * 2023-05-31 2023-06-27 北京拓普丰联信息科技股份有限公司 Prompting method, device, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106549954A (en) * 2016-11-01 2017-03-29 乐视控股(北京)有限公司 Method of speech processing and device
CN109302338A (en) * 2018-08-31 2019-02-01 南昌努比亚技术有限公司 Intelligent indicating risk method, mobile terminal and computer readable storage medium
CN110597961A (en) * 2019-09-18 2019-12-20 腾讯科技(深圳)有限公司 Text category labeling method and device, electronic equipment and storage medium
US20210158811A1 (en) * 2019-11-26 2021-05-27 Vui, Inc. Multi-modal conversational agent platform
CN113127746A (en) * 2021-05-13 2021-07-16 心动网络股份有限公司 Information pushing method based on user chat content analysis and related equipment thereof

Similar Documents

Publication Publication Date Title
CN107818798B (en) Customer service quality evaluation method, device, equipment and storage medium
JP6393730B2 (en) Voice identification method and apparatus
CN110446115B (en) Live broadcast interaction method and device, electronic equipment and storage medium
US20190088262A1 (en) Method and apparatus for pushing information
CN107393541B (en) Information verification method and device
US20200312315A1 (en) Acoustic environment aware stream selection for multi-stream speech recognition
JP6099556B2 (en) Voice identification method and apparatus
US8121845B2 (en) Speech screening
WO2020181824A1 (en) Voiceprint recognition method, apparatus and device, and computer-readable storage medium
CN108159702B (en) Multi-player voice game processing method and device
US8521525B2 (en) Communication control apparatus, communication control method, and non-transitory computer-readable medium storing a communication control program for converting sound data into text data
CN110544469B (en) Training method and device of voice recognition model, storage medium and electronic device
US8078455B2 (en) Apparatus, method, and medium for distinguishing vocal sound from other sounds
CN111107380B (en) Method, apparatus and computer storage medium for managing audio data
CN109089172B (en) Bullet screen display method and device and electronic equipment
CN109003600B (en) Message processing method and device
CN113630306A (en) Information processing method, information processing device, electronic equipment and storage medium
CN111312286A (en) Age identification method, age identification device, age identification equipment and computer readable storage medium
CN106899486A (en) A kind of message display method and device
WO2020024415A1 (en) Voiceprint recognition processing method and apparatus, electronic device and storage medium
CN113055751B (en) Data processing method, device, electronic equipment and storage medium
CN113707183A (en) Audio processing method and device in video
CN109634554B (en) Method and device for outputting information
CN111179936A (en) Call recording monitoring method
CN113852835A (en) Live broadcast audio processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211109