CN115881136A - Voice information processing method based on events under scene and related device - Google Patents

Voice information processing method based on events under scene and related device

Info

Publication number
CN115881136A
Authority
CN
China
Prior art keywords
user
information
electronic device
prompt
word
Prior art date
Legal status
Granted
Application number
CN202310044598.3A
Other languages
Chinese (zh)
Other versions
CN115881136B (en)
Inventor
Wang Yi (王一)
Current Assignee
Shenzhen Renma Interactive Technology Co Ltd
Original Assignee
Shenzhen Renma Interactive Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Renma Interactive Technology Co Ltd
Priority to CN202310044598.3A
Publication of CN115881136A
Application granted
Publication of CN115881136B
Status: Active

Landscapes

  • Alarm Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present application provide a method and related device for processing voice information based on events in a scene. Environmental sound information is acquired when a monitoring condition is satisfied, and the role the user plays in an event is determined from target words in that information, so that the user can be guided in a targeted manner according to his or her identity. When the user is the event initiator, the user is promptly warned to stop the harmful behavior; when the user is the event recipient, the user's view of the event is obtained through dialogue and corresponding suggestions are provided. The user's wishes are thus fully respected, the user accepts the suggestions more readily, and the complexity of data processing during event judgment is reduced.

Description

Voice information processing method based on events under scene and related device
Technical Field
The application belongs to the technical field of Internet data processing, and in particular relates to a method and related device for processing voice information based on events in a scene.
Background
There is an urgent need for a solution that can accurately identify events in a scene and, when such an event occurs, guide the user to resolve it correctly.
Disclosure of Invention
The embodiment of the application provides a voice information processing method and a related device based on events in a scene, so as to improve the accuracy and timeliness of identifying the events in the scene and guide a user in a targeted manner.
In a first aspect, an embodiment of the present application provides a method for processing voice information based on an event in a scene, which is applied to a human-computer conversation server in a security protection system, where the security protection system includes the human-computer conversation server and a first electronic device, and the method includes:
acquiring environmental sound information from the first electronic equipment under the condition that monitoring conditions are met, wherein the condition that the monitoring conditions are met comprises the step of acquiring a monitoring instruction or meeting monitoring time;
determining whether at least one target word exists in the environmental sound information, wherein the word type of the target word belongs to the threat profanity word type or the distress word type;
if the at least one target word exists, determining whether sound information of the user exists in sound information corresponding to the at least one target word;
if the voice information of the user exists, determining the identity of the user according to the word type of the target word, wherein the identity comprises an event initiator or an event receiver;
under the condition that the identity of the user is the event initiator, sending first prompt information to the first electronic device, wherein the first prompt information is used for prompting the user to stop the behavior of injuring others;
when the identity of the user is the event receiver, storing environmental sound information containing the target words, and sending first inquiry information to the first electronic device in a secure environment, wherein the secure environment is an environment without an event in a scene, and the first inquiry information is used for inquiring whether the user is damaged;
acquiring first reply information from the first electronic equipment;
in the case that the first reply information indicates that the user is damaged, sending second inquiry information to the first electronic device, the second inquiry information being used for inquiring whether the user needs help;
acquiring second reply information from the first electronic equipment;
in the case that the second reply information indicates that the user needs help, sending second prompt information to the first electronic device, wherein the second prompt information is used for prompting the user that voice evidence is saved and prompting the user for measures which can be taken;
in the case that the second reply information indicates that the user does not need help, sending third inquiry information to the first electronic device, wherein the third inquiry information is used for inquiring the user about a solution;
acquiring third reply information from the first electronic equipment, wherein the third reply information is used for indicating a solution mode of the user;
determining attitudes of the user according to the solution modes, wherein the attitudes comprise active resolution attitudes or passive resolution attitudes;
in the case that the attitude is a negative resolution attitude, sending third prompt information to the first electronic equipment, wherein the third prompt information is used for encouraging the user to actively resolve and prompting measures which can be taken by the user;
determining whether the user's solution is feasible according to the third reply information under the condition that the attitude is an active solution attitude;
and if the solution is not feasible, sending fourth prompt information to the first electronic equipment, wherein the fourth prompt information is used for prompting the user that the solution is not feasible and prompting measures which can be taken by the user.
In a second aspect, an embodiment of the present application provides a speech information processing apparatus based on an event under a scene, which is applied to a human-computer conversation server in a security protection system, where the security protection system includes the human-computer conversation server and a first electronic device, and the apparatus includes:
the first acquisition unit is used for acquiring environmental sound information from the first electronic equipment under the condition that monitoring conditions are met, wherein the meeting of the monitoring conditions comprises the acquisition of a monitoring instruction or the meeting of monitoring time;
the first determination unit is used for determining whether at least one target word exists in the environmental sound information, and the word type of the target word belongs to the threat profanity word type or the distress word type;
the second determining unit is used for determining whether the sound information of the user exists in the sound information corresponding to the at least one target word if the at least one target word exists;
a third determining unit, configured to determine, if there is voice information of the user, an identity of the user according to a word type of the target word, where the identity includes an event initiator or an event acceptor;
a first sending unit, configured to send, to the first electronic device, first prompt information when the identity of the user is the event initiator, where the first prompt information is used to prompt the user to stop a behavior of injuring another person;
a second sending unit, configured to, if the identity of the user is the event recipient, save environmental sound information including the target word, and send first query information to the first electronic device in a secure environment, where the secure environment is an environment where no event exists in a scene, and the first query information is used to query whether the user is damaged;
a second acquisition unit configured to acquire first reply information from the first electronic device;
a third sending unit, configured to send, to the first electronic device, second query information for querying whether the user needs help, in a case where the first reply information indicates that the user is damaged;
a third acquisition unit configured to acquire second reply information from the first electronic device;
a fourth sending unit, configured to send, to the first electronic device, second prompt information in a case where the second reply information indicates that the user needs help, where the second prompt information is used to prompt the user that voice evidence is saved and prompt measures that the user can take;
a fifth sending unit, configured to send, in a case where the second reply information indicates that the user does not need help, third query information to the first electronic device, where the third query information is used to query the user for a solution;
a fourth acquiring unit configured to acquire third reply information from the first electronic device, the third reply information indicating a solution of the user;
a fourth determining unit, configured to determine an attitude of the user according to the solution, where the attitude includes an active solution attitude or a passive solution attitude;
a sixth sending unit, configured to send, to the first electronic device, third prompt information when the attitude is a negative resolution attitude, where the third prompt information is used to encourage the user to actively resolve and prompt the user about measures that can be taken;
a fifth determining unit, configured to determine whether a solution manner of the user is feasible according to the third reply information if the attitude is an active solution attitude;
a seventh sending unit, configured to send fourth prompt information to the first electronic device if the solution is not feasible, where the fourth prompt information is used to prompt the user that the solution is not feasible and prompt the user about measures that can be taken.
In a third aspect, an embodiment of the present application provides a server, including a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer storage medium having stored thereon a computer program/instructions for execution by a processor to perform the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
In the embodiment of the present application, in a case where the monitoring condition is satisfied, the human-computer conversation server acquires environmental sound information from the first electronic device and determines whether at least one target word exists in the environmental sound information. If at least one target word exists, the server determines whether sound information of the user exists in the sound information corresponding to the at least one target word; if so, it determines the identity of the user according to the word type of the target word, the identity being either the event initiator or the event recipient. In a case where the identity of the user is the event initiator, first prompt information is sent to the first electronic device to prompt the user to stop the behavior of harming others. In a case where the identity of the user is the event recipient, the environmental sound information containing the target word is stored, and first query information asking whether the user has been harmed is sent to the first electronic device in a secure environment, the secure environment being an environment in which no event is present in the scene. First reply information is then acquired from the first electronic device. If the first reply information indicates that the user has been harmed, second query information asking whether the user needs help is sent to the first electronic device, and second reply information is acquired. If the second reply information indicates that the user needs help, second prompt information is sent to prompt the user that the voice evidence has been saved and to suggest measures the user can take. If the second reply information indicates that the user does not need help, third query information is sent, and third reply information indicating the user's proposed solution is acquired. The user's attitude, either an active or a passive resolution attitude, is then determined from the solution. If the attitude is a passive resolution attitude, third prompt information is sent to encourage the user to resolve the event actively and to suggest measures the user can take. If the attitude is an active resolution attitude, whether the user's solution is feasible is determined according to the third reply information; if it is not feasible, fourth prompt information is sent to prompt the user that the solution is not feasible and to suggest measures the user can take.
In this way, whether the user is involved in an event can be judged accurately and in a timely manner from the voice information, and the complexity of data processing is reduced. The role the user plays in the event can also be determined, so that the user is guided in a targeted manner according to his or her identity; when the user is the event recipient, the user's view of the event is obtained through dialogue and corresponding suggestions are provided. The user's wishes are thus fully respected, the user accepts the suggestions more readily, and the event is resolved in a more timely and efficient manner.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a safety protection system provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a server according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for processing voice information based on an event under a scene according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a human-machine interaction page provided by an embodiment of the present application;
FIG. 5 is a block diagram illustrating functional units of an apparatus for processing speech information based on an event in a scene according to an embodiment of the present application;
fig. 6 is a block diagram of functional units of another speech information processing device based on events in a scene according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In view of the foregoing problems, embodiments of the present application provide a method and a related apparatus for processing voice information based on events in a scene, and the following describes embodiments of the present application in detail with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a security protection system according to an embodiment of the present disclosure. The security protection system 10 comprises a human-computer conversation server 101 and a first electronic device 102, the human-computer conversation server 101 being in communication connection with the first electronic device 102; there may be a plurality of first electronic devices 102, each in communication connection with the human-computer conversation server 101. A plurality of chat robots are deployed in the human-computer conversation server 101 for exchanging information with the first electronic device 102. The first electronic device 102 is provided with a corresponding program, can be adapted to the chat robots, and interactively executes their instructions; a chat robot can analyze voice information from the first electronic device 102, determine the user's semantics, determine a reply sentence according to those semantics, and output the reply sentence through the first electronic device 102, thereby realizing human-computer interaction with the user. Optionally, the security protection system 10 may further include a plurality of second electronic devices 103, each in communication connection with both the human-computer conversation server 101 and the first electronic device 102; that is, one first electronic device 102 may correspond to at least one second electronic device 103.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a server according to an embodiment of the present disclosure. As shown, the server 110 may be the human-machine conversation server 101 according to the present embodiment, and the server 110 includes a processor 120, a memory 130, a communication interface 140, and one or more programs 131, wherein the one or more programs 131 are stored in the memory 130 and configured to be executed by the processor 120, and the one or more programs 131 include instructions for executing any one of the steps of the method embodiments described below. In a specific implementation, the processor 120 is configured to perform any one of the steps performed by the server in the method embodiments described below, and when performing data transmission such as sending, optionally invokes the communication interface 140 to complete the corresponding operation.
The electronic device in the embodiments of the present application may be the first electronic device 102 or the second electronic device 103. The electronic device may be any device with communication capability, including various handheld devices with a wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like.
Referring to fig. 3, fig. 3 is a schematic flowchart of a voice information processing method based on an event under a scene according to an embodiment of the present application, where the voice information processing method based on the event under the scene is applied to a human-machine conversation server in a security system, where the security system includes the human-machine conversation server and a first electronic device, and the method includes the following steps.
S201, acquiring the environmental sound information from the first electronic equipment under the condition that the monitoring condition is met.
Satisfying the monitoring condition includes acquiring a monitoring instruction or reaching the monitoring time. The monitoring instruction may be acquired when the user clicks a control on the first electronic device to turn on the monitoring function. Alternatively, the security protection system further comprises a second electronic device, which is in communication connection with the first electronic device and the human-computer conversation server respectively, and the monitoring instruction is sent to the first electronic device through the second electronic device so that the first electronic device satisfies the monitoring condition. The monitoring time may be a time that the user sets independently on the first electronic device, or a time that the user sets independently on the second electronic device. Satisfying the monitoring condition may also mean that a target word wakes up the monitoring function; that is, the monitoring function of the first electronic device is normally in a dormant state, but is woken up after a corresponding wake-up word is received, for example when a word of the threatening or insulting word type is detected. It should be noted that user authorization is obtained in advance before the monitoring function is turned on.
S202, determining whether at least one target word exists in the environmental sound information.
The word type of the target word is either the threatening-or-insulting word type or the distress word type. A threatening or insulting word is a word used to express a negative emotion. When determining whether a given word is a threatening or insulting word, its official (dictionary) definition is checked first; if the official definition does not classify it as such, it is further determined whether the word has an extended meaning in internet slang, and if that extended meaning carries a negative emotion, the word is classified as a threatening or insulting word. Distress words may include distress words set by the user; that is, distress words include both general words whose meaning is a call for help and special distress words set by the user. Threatening or insulting words and distress words may each exist in the environmental sound alone, and may also occur simultaneously.
S203, if the at least one target word exists, determining whether the sound information corresponding to the at least one target word contains the sound information of the user.
The sound information may indicate the voiceprint characteristics of the user; that is, the sound information can indicate both the content of an utterance and the speaker who produced it. The user's sound characteristics can be acquired in advance; the sound characteristics contained in the sound information corresponding to the target word in the acquired environmental sound information are then determined, and by matching these against the user's sound characteristics it is determined whether the target word was uttered by the user, or, when a plurality of target words exist, which target word was uttered by the user.
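One common way to match sound characteristics, sketched here under the assumption that the voiceprints are represented as embedding vectors, is cosine similarity against a pre-enrolled user embedding. The description does not specify the matching technique, so this is illustrative only; a production system would use a trained speaker-embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_user_voice(segment_embedding, user_embedding, threshold=0.8):
    """Decide whether the sound segment carrying a target word was uttered
    by the enrolled user. The 0.8 threshold is an illustrative assumption."""
    return cosine_similarity(segment_embedding, user_embedding) >= threshold
```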
S204, if the voice information of the user exists, determining the identity of the user according to the word type of the target word.
The identity is either the event initiator or the event recipient. When sound information of the user exists, the target word corresponding to the user is determined from the user's sound information, and the word type of that target word is then determined, thereby determining the identity of the user. If the target word corresponding to the user is a threatening or insulting word, the identity of the user is the event initiator; if it is a distress word, the identity of the user is the event recipient. Harm in the present solution includes both physical and psychological injury.
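The identity rule above reduces to a small mapping from word type to role. The tie-break when a user has uttered words of both types is not settled by the description, so the initiator-first ordering below is an assumption.

```python
def determine_identity(user_word_types: set):
    """Map the word types uttered by the user to a role.
    Threatening/insulting words mark the event initiator; distress words
    mark the event recipient. If both types appear, the initiator rule is
    applied first here (an illustrative assumption)."""
    if "threat_insult" in user_word_types:
        return "initiator"
    if "distress" in user_word_types:
        return "recipient"
    return None
```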
S205, sending a first prompt message to the first electronic device under the condition that the identity of the user is the event initiator.
The first prompt information is used to prompt the user to stop the behavior of harming others; that is, when the user's identity is that of a person harming others, the user's behavior needs to be stopped in time. The content of the first prompt information may be of a warning type or a dissuading type, and different types of first prompt information can be issued according to the user's personality. For example, if the historical human-computer conversation records with the user indicate that the user is receptive to advice, the first prompt information may be of the dissuading type; if the records indicate that the user is not receptive to advice, it may be of the warning type. Outputting different types of first prompt information for different personalities guides the user in a targeted manner, which increases the probability that the user stops harming others after receiving the first prompt information and enhances the timeliness and effectiveness of the guidance.
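The personality-based choice between a warning-type and a dissuading-type prompt can be sketched as a simple heuristic over the historical conversation records. The acceptance-rate scoring and the default for users with no history are illustrative assumptions, not part of the description.

```python
def choose_prompt_style(history: list) -> str:
    """history: one boolean per past human-computer conversation,
    True if the user accepted the advice given.
    Users who usually accept advice get the gentler, dissuading prompt;
    others (and users with no history) get the warning prompt."""
    if not history:
        return "warning"
    acceptance_rate = sum(history) / len(history)
    return "dissuading" if acceptance_rate >= 0.5 else "warning"
```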
S206, under the condition that the identity of the user is the event receiver, storing the environmental sound information containing the target words, and sending first inquiry information to the first electronic equipment under a safe environment.
The secure environment is an environment in which no event is present in the scene, and the first query information is used to query whether the user has been harmed. When the user is determined to be the event recipient, the environmental sound information containing the target word is first stored; the stored environmental sound information may span from a period before the target word was captured until the environmental sound contains only the user's own sound information. Determining whether the user is in a secure environment includes: after a preset period following acquisition of the target word, analyzing the current environmental sound information and determining that the current environment is quiet, or determining that the user is in a secure environment if the current environmental sound information contains only the user's sound information. That is, when the first query information is sent, the user should be in an environment where his or her state of mind is steady and there are no other threats nearby; in other words, only after the user's emotions have stabilized for some time following the harm and the other party involved in the event has left is the user asked whether he or she has been harmed. A more accurate and objective answer can then be obtained, which facilitates correct guidance of the user afterwards.
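The two secure-environment tests described above (quiet environment, or only the user's own voice present) can be sketched as follows. The loudness measure, threshold value, and parameter names are assumptions for illustration.

```python
def is_secure_environment(rms_level: float,
                          speaker_ids: set,
                          user_id: str,
                          quiet_threshold: float = 0.05) -> bool:
    """Return True if the environment counts as secure after the preset delay.
    rms_level: current ambient loudness (hypothetical normalized RMS).
    speaker_ids: identities of speakers detected in the current sound."""
    if rms_level < quiet_threshold:      # test 1: the environment is quiet
        return True
    return speaker_ids == {user_id}      # test 2: only the user is speaking
```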
S207, acquiring first reply information from the first electronic device.
And S208, sending second inquiry information to the first electronic equipment under the condition that the first reply information indicates that the user is damaged.
The second query information is used to query whether the user needs help. After acquiring the first reply information, the human-computer conversation server performs semantic analysis on it to determine the user's semantics, and determines whether the user has been harmed by combining the user's semantics with the first query information.
S209, acquiring second reply information from the first electronic device.
S210, sending a second prompt message to the first electronic device when the second reply message indicates that the user needs help.
As shown for example in fig. 4, the second prompt information is used to prompt the user that the voice evidence has been saved and to prompt the user about measures that can be taken. Telling the user that the voice evidence has been saved gives the user a greater sense of security, so the user more readily accepts the measures suggested in the second prompt information.
S211, in a case that the second reply information indicates that the user does not need help, sending third query information to the first electronic device.
The third query information is used to query the user for a solution; that is, it asks the user what he or she intends to do.
S212, acquiring third reply information from the first electronic device.
S213, determining the attitude of the user according to the solution mode.
Wherein the third reply information is used to indicate the user's solution, and the attitude comprises an active resolution attitude or a passive resolution attitude. An active resolution attitude means that, feasible or not, the user wants to resolve the event rather than silently endure it; a passive resolution attitude means that the user is not prepared to resolve the event.
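Under strong simplifying assumptions, the attitude determination described above could look like the keyword sketch below; the cue lists and function name are invented for illustration and are no substitute for the semantic analysis the method actually relies on.

```python
# Hypothetical sketch: map the user's third reply to an active or passive
# resolution attitude via keyword cues. All cue words are invented examples.

PASSIVE_CUES = ["endure", "ignore it", "nothing", "give up", "let it go"]
ACTIVE_CUES = ["tell", "report", "talk to", "stop them"]

def classify_attitude(reply):
    text = reply.lower()
    if any(cue in text for cue in ACTIVE_CUES):
        return "active"
    if any(cue in text for cue in PASSIVE_CUES):
        return "passive"
    # default to passive so the user still receives encouragement
    return "passive"
```

A reply such as "I will tell my teacher" would classify as active, while "I'll just endure it" would classify as passive.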
S214, sending third prompt information to the first electronic device when the attitude is a passive resolution attitude.
S215, when the attitude is an active resolution attitude, determining whether the user's solution is feasible according to the third reply information.
S216, if not feasible, sending fourth prompt information to the first electronic device.
Wherein the third prompt information is used to encourage the user to resolve the event actively and to inform the user of measures that can be taken. The fourth prompt information is used to inform the user that the solution is not feasible and to suggest measures that can be taken. When the user is prepared to silently endure the event, the user is encouraged to resolve it actively and offered feasible measures; when the user's solution is feasible, the current event is left to the user to resolve autonomously without intervention; and when the user's solution is not feasible, the user is informed in time and offered feasible alternatives to choose from. In this way the user's will is fully respected while the user is still guided and encouraged to take correct measures to resolve the problem. Determining whether the solution is feasible includes: performing semantic analysis on the third reply information, extracting keywords, determining the user's intention from the keywords, and matching the intention against the measures that can be taken; if the matching succeeds, the solution is determined to be feasible; if it fails, the user is asked the reason for choosing that solution, the logic of the user's answer is analyzed, and if the logic is sound the solution is determined to be feasible.
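The feasibility matching just described (keyword extraction, then matching the intention against catalogued measures) can be sketched as follows; the measure catalogue, the keyword cues, and the naive tokenizer are all illustrative assumptions.

```python
# Illustrative sketch: reduce the user's reply to keywords and check whether
# they overlap any catalogued feasible measure. Catalogue entries and cue
# words are invented; a real system would use proper semantic analysis.

FEASIBLE_MEASURES = {
    "tell_guardian": {"tell", "parent", "teacher", "guardian"},
    "report": {"report", "police", "school"},
}

def extract_keywords(reply):
    return set(reply.lower().replace(",", " ").split())

def is_solution_feasible(reply):
    keywords = extract_keywords(reply)
    # feasible if the intent keywords overlap any catalogued measure
    return any(keywords & cues for cues in FEASIBLE_MEASURES.values())
```

On a failed match, the method above falls back to asking the user their reasoning rather than rejecting the solution outright.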
In this example, when a monitoring condition is met, the man-machine conversation server acquires environmental sound information from the first electronic device and determines whether at least one target word exists in it. If so, it determines whether the user's sound information exists in the sound information corresponding to the at least one target word; if so, it determines the user's identity according to the word type of the target word, the identity being either event initiator or event acceptor. If the user is the event initiator, the server sends first prompt information to the first electronic device prompting the user to stop harming others. If the user is the event acceptor, the server stores the environmental sound information containing the target word and, in a secure environment (an environment in which no event exists in the scene), sends first query information asking whether the user has been harmed. It then acquires first reply information from the first electronic device and, when that reply indicates the user has been harmed, sends second query information asking whether the user needs help. After acquiring second reply information, if the user needs help, the server sends second prompt information informing the user that the voice evidence has been saved and prompting available measures; if the user does not need help, it sends third query information asking the user for a solution. It then acquires third reply information indicating the user's solution and determines the user's attitude, which is either an active resolution attitude or a passive resolution attitude. If the attitude is passive, the server sends third prompt information encouraging the user to resolve the event actively and prompting available measures; if the attitude is active, it determines from the third reply information whether the user's solution is feasible, and if not, sends fourth prompt information telling the user the solution is not feasible and prompting available measures.
Therefore, whether the user is involved in an event in the scene can be judged accurately and promptly from the voice information, which reduces the complexity of data processing. The method can also determine the role the user plays in the scene and guide the user in a targeted manner according to the user's identity: when the user is the event acceptor, the user's view of the event is obtained through dialogue with the user and corresponding suggestions are provided, so that the user's will is fully respected, the user accepts the suggestions more readily, and the event can be resolved more promptly and efficiently while protecting the user's mental health.
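The branching of steps S207 to S216 can be condensed into a small decision function, shown below as an illustrative sketch; the message labels and the boolean inputs (which in the method come from semantic analysis of the user's replies) are assumptions.

```python
# Condensed sketch of the query flow S207-S216: given what has been learned
# from the user's replies so far, return which message the server would send
# next. Labels mirror the step descriptions; they are not the patent's API.

def next_message(harmed, needs_help=None, attitude=None, feasible=None):
    if not harmed:
        return "end"                # user reports no harm; no further guidance
    if needs_help is None:
        return "second_query"       # ask whether help is needed
    if needs_help:
        return "second_prompt"      # evidence saved + possible measures
    if attitude is None:
        return "third_query"        # ask for the user's solution
    if attitude == "passive":
        return "third_prompt"       # encourage active resolution
    return "end" if feasible else "fourth_prompt"
```

This makes explicit that a feasible, actively held solution ends the dialogue without intervention, as described above.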
In a possible example, the security protection system further includes a second electronic device, where the second electronic device is an associated device of the first electronic device, and after the first prompt message is sent to the first electronic device, the method further includes: determining the number of times of sending the first prompt message to the first electronic device within a preset time period; and sending notification information to the second electronic device when the number of times is greater than a first preset number of times, wherein the notification information is used for indicating that the user is performing a behavior of injuring others.
The preset time period may run from the moment the user is first determined to have said the target word to the most recent time the user was prompted. The first preset number of times may be one or more. The notification information may also include the environmental sound information containing the target word.
Therefore, in this example, when the user is the event initiator, the user's behavior can be intervened in and guided promptly and effectively, improving the intelligence of event handling.
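A minimal sketch of this notification rule, assuming plain timestamps in seconds and invented values for the preset window and the first preset number:

```python
# Illustrative sketch: count how many first-prompt sends fall inside the
# preset time period and notify the associated (second) device once the
# count exceeds the first preset number. Threshold values are assumptions.

PRESET_WINDOW_S = 600      # assumed 10-minute window
FIRST_PRESET_TIMES = 2     # assumed threshold

def should_notify(prompt_times, now):
    """prompt_times: timestamps (seconds) at which first prompt info was sent."""
    recent = [t for t in prompt_times if now - t <= PRESET_WINDOW_S]
    return len(recent) > FIRST_PRESET_TIMES
```

When this returns true, the notification sent to the second electronic device could also carry the stored environmental sound information, as noted above.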
In one possible example, the method further comprises: when the user's sound information does not exist in the sound information corresponding to the at least one target word, acquiring reference sound information corresponding to each of the at least one target word; determining whether first target sound information exists in the reference sound information, where the first target sound information is historically acquired sound information, other than the user's own, whose number of occurrences is greater than a second preset number of times; if the first target sound information exists, determining the word type of the target word corresponding to the first target sound information; when the word type is the threat profanity word type, determining the user's identity as the event initiator; and when the word type is the distress word type, determining the user's identity as the event acceptor.
The historically acquired environmental sound information may be environmental sound information acquired recently, for example within the past week or month. After the environmental sound information is acquired, sound-feature analysis is performed on the sound information other than the user's, the pieces of sound information that have appeared most often or for the longest time in the recent period are identified, and these are determined to be the first target sound information. If the acquired environmental sound information includes a target word but contains no sound information of the user, only that of the user's friends, it can be determined that the user is currently together with those friends.
Therefore, in this example, the user's identity can be determined even if the user does not speak, improving the accuracy of identity recognition.
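The fallback identity determination above might be sketched as follows; the word-type table, the frequency threshold, and the role labels are illustrative assumptions, with speaker identifiers standing in for voiceprint matches.

```python
# Illustrative sketch: when the user is silent, derive the user's role from
# the word type of the target word spoken by a historically frequent
# ("first target") voice. Word lists and thresholds are invented.

WORD_TYPES = {"idiot": "threat_profanity", "help": "distress"}
SECOND_PRESET_TIMES = 3  # assumed occurrence threshold for familiar voices

def identify_user_role(target_word, speaker_id, history_counts):
    """history_counts: {speaker_id: recent occurrences}, excluding the user."""
    if history_counts.get(speaker_id, 0) <= SECOND_PRESET_TIMES:
        return None  # speaker is not a first-target (historically frequent) voice
    word_type = WORD_TYPES.get(target_word)
    if word_type == "threat_profanity":
        return "event_initiator"
    if word_type == "distress":
        return "event_acceptor"
    return None
```

An unfamiliar speaker yields no determination, so the method falls back to its other checks.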
In one possible example, the at least one target word includes a threatening or insulting word and the user's identity is the event acceptor, and after the second query information is sent to the first electronic device, the method further comprises: determining the sound information corresponding to the target words whose word type belongs to the threat profanity word type as second target sound information; and determining that the user has been harmed again if the second target sound information reappears in the environmental sound information and the at least one target word is present in the environmental sound information.
If the user is the event acceptor and the environmental sound information contains target words of the threat profanity type, the sound features of the person carrying out the event can be determined from those target words. The next time a target word is detected in the environmental sound while the second target sound information is also present, it can be determined directly that the user has been harmed, without querying the user. Even if the target word was not spoken by the person corresponding to the second target sound information, that person's presence indicates the event may involve several people, so it can still be determined that the user is currently being harmed. When the user is harmed again, the second query information may be sent to the user again.
It can be seen that, in this example, determining whether the user has been harmed again from the second target sound information improves the efficiency and accuracy of identifying the event.
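A rough sketch of the re-harm check, under the assumption that diarized utterances are available as `(speaker, text)` pairs; note that, as described above, the target word and the perpetrator's voice are checked independently, so the word need not have been spoken by that voice.

```python
# Illustrative sketch: the user is deemed harmed again when a new ambient
# recording contains both any target word and the known perpetrator's voice
# (the "second target sound"). Data shapes are assumptions.

def user_harmed_again(utterances, target_words, second_target_speaker):
    """utterances: list of (speaker_id, text) from the new ambient recording."""
    has_target_word = any(
        w in text.lower() for _, text in utterances for w in target_words)
    perpetrator_present = any(
        spk == second_target_speaker for spk, _ in utterances)
    # checked independently: the word may come from an accomplice
    return has_target_word and perpetrator_present
```

This is why no further query to the user is needed in this branch: both signals together are treated as sufficient evidence.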
In one possible example, after determining that the sound information corresponding to the target words whose word type belongs to the threat profanity word type is the second target sound information, the method further comprises: sending fourth query information to the first electronic device, the fourth query information being used to ask the user for sensitive words; acquiring fourth reply information from the first electronic device, the fourth reply information comprising the sensitive words; and determining the sensitive words to be target words. After determining that the user has been harmed again, the method further comprises: sending warning information to the first electronic device, the warning information being used to warn the event initiator to stop harming others.
Because some special words may cause the user psychological harm even if they are not threatening words, the user may be asked whether there are sensitive words; a sensitive word may, for example, be a name. In particular, when the sensitive word is a name, it may be bound to the second target sound information, and the user is determined to have been harmed again only when the sound information corresponding to the sensitive word appearing in the environmental sound information is the second target sound information. This avoids misjudgments caused by normal conversation. When it is determined that the user has been harmed again, a warning may be issued through the first electronic device's speaker so that the perpetrator is deterred, thereby protecting the user.
Therefore, in this example, adding the user's sensitive words as target words improves the accuracy and flexibility of identifying the event.
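The binding of a sensitive word to the perpetrator's voiceprint could be sketched as below; the data shapes and names are illustrative assumptions.

```python
# Illustrative sketch: a user-supplied sensitive word (for example a name)
# is bound to the perpetrator's voiceprint and only counts as a target word
# when spoken by that voice, avoiding false alarms from normal conversation.

def sensitive_word_triggered(utterances, sensitive_word, bound_speaker):
    """utterances: list of (speaker_id, text) from the ambient recording."""
    return any(
        spk == bound_speaker and sensitive_word.lower() in text.lower()
        for spk, text in utterances)
```

A friend using the same name in ordinary conversation would not trigger the check, matching the misjudgment-avoidance goal above.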
Referring to fig. 5, fig. 5 is a block diagram illustrating functional units of a speech information processing apparatus based on events under scenes according to an embodiment of the present application. The apparatus 40 for processing voice information based on an event under a scene is applied to a man-machine conversation server in a security protection system, the security protection system comprising the man-machine conversation server and a first electronic device, the apparatus 40 for processing voice information based on an event under a scene comprises: a first obtaining unit 401, configured to obtain environmental sound information from the first electronic device when a monitoring condition is met, where the meeting of the monitoring condition includes obtaining a monitoring instruction or meeting of a monitoring time; a first determining unit 402, configured to determine whether at least one target word exists in the environmental sound information, where a word type of the target word belongs to the threat profanity word type or the distress word type; a second determining unit 403, configured to determine whether there is sound information of a user in sound information corresponding to the at least one target word if there is the at least one target word; a third determining unit 404, configured to determine an identity of the user according to a word type of the target word if there is voice information of the user, where the identity includes an event initiator or an event recipient; a first sending unit 405, configured to send, to the first electronic device, first prompt information when the identity of the user is the event initiator, where the first prompt information is used to prompt the user to stop a behavior of injuring another person; a second sending unit 406, configured to, if the identity of the user is the event recipient, save environmental sound information including the target word, and send first query information to the first 
electronic device in a secure environment, where the secure environment is an environment in which no event exists in the scene, and the first query information is used to query whether the user has been harmed; a second obtaining unit 407, configured to obtain first reply information from the first electronic device; a third sending unit 408, configured to send, to the first electronic device, second query information for querying whether the user needs help, in a case where the first reply information indicates that the user has been harmed; a third acquiring unit 409, configured to acquire second reply information from the first electronic device; a fourth sending unit 410, configured to send, to the first electronic device, a second prompt message in a case where the second reply message indicates that the user needs help, where the second prompt message is used to prompt the user that the voice evidence has been saved and prompt the user about measures that can be taken; a fifth sending unit 411, configured to send third query information to the first electronic device in a case where the second reply information indicates that the user does not need help, the third query information being used to query the user for a solution; a fourth obtaining unit 412, configured to obtain third reply information from the first electronic device, where the third reply information is used to indicate a solution of the user; a fourth determining unit 413, configured to determine an attitude of the user according to the solution, where the attitude includes an active resolution attitude or a passive resolution attitude; a sixth sending unit 414, configured to send, to the first electronic device, third prompt information if the attitude is a passive resolution attitude, where the third prompt information is used to encourage the user to actively resolve the event and prompt the user about measures that can be taken; a fifth determining unit 415, configured to determine whether a solution manner of the user is 
feasible according to the third reply information in a case where the attitude is an active solution attitude; a seventh sending unit 416, configured to send fourth prompting information to the first electronic device if the solution is not feasible, where the fourth prompting information is used to prompt the user that the solution is not feasible and prompt the user about measures that can be taken.
In a possible example, the security protection system further includes a second electronic device, where the second electronic device is an associated device of the first electronic device, and after the first prompt information is sent to the first electronic device, the first sending unit 405 is specifically configured to: determining the number of times of sending the first prompt message to the first electronic device within a preset time period; and sending notification information to the second electronic device when the number of times is greater than a first preset number of times, wherein the notification information is used for indicating that the user is performing a behavior of injuring others.
In one possible example, the speech information processing device 40 based on the under-scene event is further configured to: under the condition that the sound information of the user does not exist in the sound information corresponding to the at least one target word, acquiring reference sound information corresponding to each target word in the at least one target word; determining whether first target sound information exists in the reference sound information, wherein the first target sound information is sound information which is acquired in a history mode, except for the sound information of the user, and the occurrence frequency of the sound information is greater than a second preset frequency; if the first target sound information exists, determining the word type of a target word corresponding to the first target sound information; when the word type is the threat profanity word type, determining the identity of the user as the event originator; and when the word type is the help-seeking word type, determining the identity of the user as the event receiver.
In one possible example, the at least one target term includes a threat insulting term, and the user identity is an event acceptor, and after the sending of the second query information to the first electronic device, the scene-based event speech information processing apparatus 40 is further configured to: determining sound information corresponding to the target words of which the word types belong to the insulting threat category words as second target sound information; determining that the user is again harmed if the second target sound information reappears in the ambient sound information and the at least one target word is present in the ambient sound information.
In one possible example, after the determining that the sound information corresponding to the target word whose word type belongs to the insulting threat word class is the second target sound information, the scene-based event speech information processing apparatus 40 is further configured to: sending fourth query information to the first electronic device, wherein the fourth query information is used for querying the user for sensitive words; acquiring fourth reply information from the first electronic equipment, wherein the fourth reply information comprises the sensitive words; determining that the sensitive word is the target word; after the determining that the user has suffered harm again, the method further comprises: and sending warning information to the first electronic equipment, wherein the warning information is used for warning the event initiator to stop the behavior of injuring others.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
In the case of using an integrated unit, please refer to fig. 6, where fig. 6 is a block diagram illustrating functional units of another speech information processing apparatus based on an event under a scene according to an embodiment of the present application. In fig. 6, the speech information processing apparatus 500 based on the event under the scene includes: a processing module 512 and a communication module 511. The processing module 512 is used for controlling and managing actions of the voice information processing apparatus 500 based on the event under the scene, for example, the steps of the first acquiring unit 401, the first determining unit 402, the second determining unit 403, the third determining unit 404, the first transmitting unit 405, the second transmitting unit 406, the second acquiring unit 407, the third transmitting unit 408, the third acquiring unit 409, the fourth transmitting unit 410, the fifth transmitting unit 411, the fourth acquiring unit 412, the fourth determining unit 413, the sixth transmitting unit 414, the fifth determining unit 415, and the seventh transmitting unit 416 are executed, and/or other processes for executing the techniques described herein are executed. The communication module 511 is used for interaction between the voice information processing apparatus 500 and other devices based on the event under the scene. As shown in fig. 6, the scene event based speech information processing apparatus 500 may further include a storage module 513, and the storage module 513 is configured to store program codes and data of the scene event based speech information processing apparatus 500.
The processing module 512 may be a processor or a controller, for example a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination of computing devices, for example a combination of one or more microprocessors, or of a DSP and a microprocessor. The communication module 511 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 513 may be a memory.
All relevant contents of each scene related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again. The apparatus 500 for processing speech information based on an event under a scene may perform the method for processing speech information based on an event under a scene shown in fig. 3.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation. It is understood that, in order to realize the above functions, the electronic device includes corresponding hardware structures and software modules. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Embodiments of the present application further provide a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that for simplicity of description, the above-mentioned embodiments of the method are described as a series of acts, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will recognize that the embodiments described in this specification are preferred embodiments and that acts or modules referred to are not necessarily required for this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
While the present application is disclosed above, the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions without departing from the spirit and scope of the present application, and all kinds of changes and modifications can be made, including different combinations of functions, implementation steps, software and hardware, which are described above, all within the protection scope of the present application.

Claims (8)

1. A voice information processing method based on events under scenes is characterized by being applied to a man-machine conversation server in a security protection system, wherein the security protection system comprises the man-machine conversation server and a first electronic device, and the method comprises the following steps:
acquiring environmental sound information from the first electronic equipment under the condition that monitoring conditions are met, wherein the meeting of the monitoring conditions comprises the acquisition of a monitoring instruction or the meeting of monitoring time;
determining whether at least one target word exists in the environmental sound information, wherein the word type of the target word belongs to the threat profanity word type or the distress word type;
if the at least one target word exists, determining whether the sound information of the user exists in the sound information corresponding to the at least one target word;
if the voice information of the user exists, determining the identity of the user according to the word type of the target word, wherein the identity comprises an event initiator or an event receiver;
sending first prompt information to the first electronic device when the identity of the user is the event initiator, wherein the first prompt information is used for prompting the user to stop harming others;
when the identity of the user is the event recipient, storing the environmental sound information containing the target word, and sending first query information to the first electronic device in a safe environment, wherein the safe environment is an environment in which no event is occurring in the scene, and the first query information is used for asking whether the user has been harmed;
acquiring first reply information from the first electronic device;
when the first reply information indicates that the user has been harmed, sending second query information to the first electronic device, the second query information being used for asking whether the user needs help;
acquiring second reply information from the first electronic device;
when the second reply information indicates that the user needs help, sending second prompt information to the first electronic device, wherein the second prompt information is used for prompting the user that voice evidence has been saved and prompting the user about measures that can be taken;
when the second reply information indicates that the user does not need help, sending third query information to the first electronic device, wherein the third query information is used for asking the user for a solution;
acquiring third reply information from the first electronic device, wherein the third reply information is used for indicating the user's solution;
determining an attitude of the user according to the solution, wherein the attitude comprises an active resolution attitude or a passive resolution attitude;
when the attitude is the passive resolution attitude, sending third prompt information to the first electronic device, wherein the third prompt information is used for encouraging the user to resolve the matter actively and prompting measures that the user can take;
when the attitude is the active resolution attitude, determining whether the user's solution is feasible according to the third reply information; and
if the solution is not feasible, sending fourth prompt information to the first electronic device, wherein the fourth prompt information is used for prompting the user that the solution is not feasible and prompting measures that the user can take.
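Read end to end, claim 1 describes a branching dialogue flow: classify target words by type, infer the speaker's role, and route a first prompt or first query accordingly. The following is an illustrative Python sketch of that flow; the word lists, message strings, and function names are hypothetical examples and are not taken from the patent.

```python
# Hypothetical examples of the two claimed word types (not from the patent).
THREAT_WORDS = {"hand it over", "you'd better"}   # threatening/insulting type
HELP_WORDS = {"help", "stop it"}                  # help-seeking type

def find_target_words(transcript):
    """Return (word, word_type) pairs found in the ambient-sound transcript."""
    hits = [(w, "threat") for w in THREAT_WORDS if w in transcript]
    hits += [(w, "help") for w in HELP_WORDS if w in transcript]
    return hits

def identity_from_word(word_type, spoken_by_user):
    """Claim 1: a user who utters a threatening word is the event initiator;
    a user who utters a help-seeking word is the event recipient."""
    if not spoken_by_user:
        return None
    return "initiator" if word_type == "threat" else "recipient"

def first_message(identity):
    """First message routed to the first electronic device for each identity."""
    if identity == "initiator":
        return "first prompt: stop harming others"
    if identity == "recipient":
        # Per the claim, this is sent later, once the scene is a safe environment.
        return "first query: have you been harmed?"
    return None
```

The later query/reply rounds (second through fourth) follow the same request-response pattern against the first electronic device.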
2. The method of claim 1, wherein the security protection system further comprises a second electronic device, the second electronic device being a device associated with the first electronic device, and wherein after sending the first prompt information to the first electronic device, the method further comprises:
determining the number of times the first prompt information has been sent to the first electronic device within a preset time period; and
sending notification information to the second electronic device when the number of times is greater than a first preset number, wherein the notification information is used for indicating that the user is harming others.
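The threshold check of claim 2 amounts to counting first-prompt sends inside a sliding time window and escalating to the associated device once the count exceeds a preset number. A minimal sketch follows; the threshold and window values are assumed, since the claim does not specify them.

```python
FIRST_PRESET_COUNT = 3   # hypothetical "first preset number" of prompts
WINDOW_SECONDS = 600     # hypothetical "preset time period"

class PromptCounter:
    """Count first-prompt sends within a sliding window; True means the
    second (associated) electronic device should be notified."""
    def __init__(self):
        self.timestamps = []

    def record_prompt(self, now):
        self.timestamps.append(now)
        # Drop prompts that fell outside the preset time period.
        self.timestamps = [t for t in self.timestamps if now - t <= WINDOW_SECONDS]
        return len(self.timestamps) > FIRST_PRESET_COUNT
```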
3. The method of claim 2, further comprising:
when the sound information of the user does not exist in the sound information corresponding to the at least one target word, acquiring reference sound information corresponding to each of the at least one target word;
determining whether first target sound information exists in the reference sound information, wherein the first target sound information is historically acquired sound information, other than the sound information of the user, whose number of occurrences is greater than a second preset number;
if the first target sound information exists, determining the word type of the target word corresponding to the first target sound information;
when the word type is the threatening or insulting word type, determining the identity of the user as the event initiator; and
when the word type is the help-seeking word type, determining the identity of the user as the event recipient.
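The fallback in claim 3, looking for a historically recurring non-user voice among the voices that uttered the target words, then inferring identity from that voice's word type, can be sketched as follows. The voice identifiers and the preset occurrence count are hypothetical; the identity mapping mirrors the claim wording literally.

```python
from collections import Counter

SECOND_PRESET_COUNT = 2   # hypothetical "second preset number" of occurrences

def find_recurring_voice(reference_voice_ids, history, user_voice_id):
    """Among the voices that uttered the target words, return the first voice
    other than the user's whose historical occurrence count exceeds the
    threshold (the claim's 'first target sound information'), else None."""
    counts = Counter(history)
    for voice_id in reference_voice_ids:
        if voice_id != user_voice_id and counts[voice_id] > SECOND_PRESET_COUNT:
            return voice_id
    return None

def identity_from_word_type(word_type):
    """Mirrors the claim: threatening/insulting word -> event initiator,
    help-seeking word -> event recipient."""
    return "initiator" if word_type == "threat" else "recipient"
```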
4. The method of claim 3, wherein the at least one target word includes a word of the threatening or insulting word type and the identity of the user is the event recipient, and wherein after sending the second query information to the first electronic device, the method further comprises:
determining sound information corresponding to the target word whose word type belongs to the threatening or insulting word type as second target sound information; and
determining that the user has been harmed again if the second target sound information reappears in the environmental sound information and the at least one target word is present in the environmental sound information.
5. The method of claim 4, wherein after determining the sound information corresponding to the target word whose word type belongs to the threatening or insulting word type as the second target sound information, the method further comprises:
sending fourth query information to the first electronic device, wherein the fourth query information is used for asking the user for sensitive words;
acquiring fourth reply information from the first electronic device, wherein the fourth reply information comprises the sensitive words; and
determining the sensitive words as target words;
and wherein after determining that the user has been harmed again, the method further comprises:
sending warning information to the first electronic device, wherein the warning information is used for warning the event initiator to stop harming others.
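Claims 4 and 5 together describe tracking the aggressor's voice ("second target sound information") and folding user-supplied sensitive words into the target-word set, so that renewed harm is detected when that voice reappears alongside any target word. A rough sketch, with class and method names invented for illustration:

```python
class ReHarmMonitor:
    """Sketch of claims 4-5: remember the aggressor's voice and the target
    words (including user-supplied sensitive words), and flag renewed harm."""
    def __init__(self, target_words):
        self.target_words = set(target_words)
        self.aggressor_voice = None   # the claim's "second target sound information"

    def register_aggressor(self, voice_id):
        self.aggressor_voice = voice_id

    def add_sensitive_words(self, words):
        # Fourth query/reply: user-supplied sensitive words become target words.
        self.target_words.update(words)

    def harmed_again(self, voice_id, transcript):
        # Renewed harm requires both: the aggressor's voice reappears AND at
        # least one target word is present in the environmental sound.
        if voice_id != self.aggressor_voice:
            return False
        return any(w in transcript for w in self.target_words)
```

On a positive result, the server would send the warning information of claim 5 to the first electronic device.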
6. A voice information processing apparatus based on events in a scene, applied to a man-machine conversation server in a security protection system, the security protection system comprising the man-machine conversation server and a first electronic device, the apparatus comprising:
a first acquiring unit, configured to acquire environmental sound information from the first electronic device when a monitoring condition is met, wherein meeting the monitoring condition comprises acquiring a monitoring instruction or reaching a monitoring time;
a first determining unit, configured to determine whether at least one target word exists in the environmental sound information, wherein a word type of the target word belongs to a threatening or insulting word type or a help-seeking word type;
a second determining unit, configured to determine, if the at least one target word exists, whether sound information of a user exists in the sound information corresponding to the at least one target word;
a third determining unit, configured to determine, if the sound information of the user exists, an identity of the user according to the word type of the target word, wherein the identity comprises an event initiator or an event recipient;
a first sending unit, configured to send first prompt information to the first electronic device when the identity of the user is the event initiator, wherein the first prompt information is used for prompting the user to stop harming others;
a second sending unit, configured to, when the identity of the user is the event recipient, store the environmental sound information containing the target word and send first query information to the first electronic device in a safe environment, wherein the safe environment is an environment in which no event is occurring in the scene, and the first query information is used for asking whether the user has been harmed;
a second acquiring unit, configured to acquire first reply information from the first electronic device;
a third sending unit, configured to send, when the first reply information indicates that the user has been harmed, second query information to the first electronic device, the second query information being used for asking whether the user needs help;
a third acquiring unit, configured to acquire second reply information from the first electronic device;
a fourth sending unit, configured to send, when the second reply information indicates that the user needs help, second prompt information to the first electronic device, wherein the second prompt information is used for prompting the user that voice evidence has been saved and prompting measures that the user can take;
a fifth sending unit, configured to send, when the second reply information indicates that the user does not need help, third query information to the first electronic device, wherein the third query information is used for asking the user for a solution;
a fourth acquiring unit, configured to acquire third reply information from the first electronic device, the third reply information indicating the user's solution;
a fourth determining unit, configured to determine an attitude of the user according to the solution, wherein the attitude comprises an active resolution attitude or a passive resolution attitude;
a sixth sending unit, configured to send, when the attitude is the passive resolution attitude, third prompt information to the first electronic device, wherein the third prompt information is used for encouraging the user to resolve the matter actively and prompting measures that the user can take;
a fifth determining unit, configured to determine, when the attitude is the active resolution attitude, whether the user's solution is feasible according to the third reply information; and
a seventh sending unit, configured to send, if the solution is not feasible, fourth prompt information to the first electronic device, wherein the fourth prompt information is used for prompting the user that the solution is not feasible and prompting measures that the user can take.
7. A server, comprising a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs including instructions for performing the steps in the method of any one of claims 1-5.
8. A computer-readable storage medium having a computer program/instructions stored thereon, wherein the computer program/instructions, when executed by a processor, carry out the steps of the method of any one of claims 1-5.
CN202310044598.3A 2023-01-30 2023-01-30 Voice information processing method and related device based on conflict event in campus scene Active CN115881136B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310044598.3A CN115881136B (en) 2023-01-30 2023-01-30 Voice information processing method and related device based on conflict event in campus scene


Publications (2)

Publication Number Publication Date
CN115881136A true CN115881136A (en) 2023-03-31
CN115881136B CN115881136B (en) 2023-08-18

Family

ID=85758482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310044598.3A Active CN115881136B (en) 2023-01-30 2023-01-30 Voice information processing method and related device based on conflict event in campus scene

Country Status (1)

Country Link
CN (1) CN115881136B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017133099A1 (en) * 2016-02-04 2017-08-10 中兴通讯股份有限公司 Information processing method, device, wearable device and storage medium
CN108053535A (en) * 2017-12-26 2018-05-18 重庆硕德信息技术有限公司 Intelligent monitor system
CN111402546A (en) * 2020-03-25 2020-07-10 上海闻泰信息技术有限公司 Child protection method and system based on smart band
WO2020158101A1 (en) * 2019-01-31 2020-08-06 株式会社日立システムズ Harmful act detection system and method
CN113205661A (en) * 2021-04-30 2021-08-03 广东艾檬电子科技有限公司 Anti-cheating implementation method and system, intelligent wearable device and storage medium
CN115273845A (en) * 2022-07-27 2022-11-01 西安西古光通信有限公司 Visual anti-cheating system, system operation method, device and medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant