CN113411455A - Remote monitoring method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113411455A
CN113411455A (application CN202110691922.1A)
Authority
CN
China
Prior art keywords
user
information
words
target
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110691922.1A
Other languages
Chinese (zh)
Inventor
徐筱莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Puhui Enterprise Management Co Ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd filed Critical Ping An Puhui Enterprise Management Co Ltd
Priority to CN202110691922.1A priority Critical patent/CN113411455A/en
Publication of CN113411455A publication Critical patent/CN113411455A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • H04M3/2281Call monitoring, e.g. for law enforcement purposes; Call tracing; Detection or prevention of malicious calls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3343Query execution using phonetics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00Speaker identification or verification techniques
    • G10L17/22Interactive procedures; Man-machine interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72433User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Technology Law (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)
  • Alarm Systems (AREA)

Abstract

The application relates to the technical field of cloud monitoring and discloses a remote monitoring method, a remote monitoring device, computer equipment and a storage medium. The method includes: responding to a remote monitoring request of a first user, and receiving a voice segment, sent by a mobile terminal, recorded during the offline (face-to-face) communication between the first user and a second user; removing the first user's speech from the voice segment according to the first voiceprint information of the first user to obtain a target voice segment; extracting target voice information from the target voice segment and converting it into text information; when a word matching a preset sensitive word is detected among the words of the text information, extracting the target sensitive word; and, when the severity level corresponding to the target sensitive word is the target severity level, sending an alarm request to the alarm center so that a quick and effective response can be made, thereby ensuring the timeliness of preventive alarming and improving the monitoring effect.

Description

Remote monitoring method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of cloud monitoring technologies, and in particular, to a remote monitoring method and apparatus, a computer device, and a storage medium.
Background
In the debt-collection business, during face-to-face communication between business personnel and clients, the business personnel may face a client's emotional outbursts, verbal threats, or even violent behavior that endangers their personal safety. Face-to-face communication between business personnel and clients therefore needs to be monitored effectively, which also facilitates subsequent evidence collection.
In the existing monitoring approach, the business personnel actively carry a recording device or camera to record the face-to-face communication process. However, this only allows responsibility to be traced afterwards from the recorded audio and video; it enables no quick, effective response or handling and cannot stop a client's threat in time, so the timeliness of preventive alarming is low and the monitoring effect is poor.
Disclosure of Invention
The main purpose of the application is to provide a remote monitoring method and device, a computer device and a storage medium, so that the face-to-face communication process of a user is monitored in real time and, when the user is detected to be under personal threat, a quick and effective response can be made, ensuring the timeliness of preventive alarming and improving the monitoring effect.
To achieve the above object, the application provides a remote monitoring method applied to a monitoring platform, where the monitoring platform establishes a connection with a mobile terminal with a recording function carried by a user. The remote monitoring method includes:
responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment, sent by the mobile terminal, recorded during the offline communication process between the first user and a second user; wherein the second user is the monitored user;
querying first voiceprint information of the first user from a voiceprint information base, and removing the first user's speech from the voice segment according to the first voiceprint information to obtain a target voice segment;
extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words;
judging whether a word matched with a preset sensitive word exists in the plurality of words or not;
when detecting that the words matched with the preset sensitive words exist in the words, extracting the words matched with the preset sensitive words from the words to obtain target sensitive words;
inquiring the corresponding severity grade of the target sensitive word from a preset sensitive word comparison table;
judging whether the severity grade is a target severity grade or not;
if so, acquiring a voice fragment containing the target sensitive word, determining identity information of the second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
Preferably, the step of judging whether a word matching a preset sensitive word exists in the plurality of words includes:
converting the words into word vectors respectively by using a preset text word vector model to obtain feature word vectors;
respectively calculating cosine distances between the feature word vectors and word vectors corresponding to preset sensitive words;
judging whether a feature word vector with the cosine distance larger than a preset similarity threshold exists or not;
if yes, determining that the words matched with the preset sensitive words exist in the words.
Preferably, the step of packing the voice segment containing the target sensitive word and the identity information of the second user to obtain the alarm information includes:
acquiring a position area code of a base station which is currently in communication connection with the mobile terminal;
and determining the position information of the first user according to the position area code, and packaging the voice fragment containing the target sensitive word, the position information and the identity information of the second user to obtain alarm information.
Further, after the step of determining the location information of the first user according to the location area code, the method further includes:
inquiring whether the area where the first user is located has monitoring equipment or not according to the position information;
if yes, controlling the monitoring equipment to shoot an offline communication process of the first user and the second user to obtain a monitoring video;
and receiving the monitoring video sent by the monitoring equipment, compressing the monitoring video and sending the compressed monitoring video to the alarm center.
Preferably, the step of obtaining the speech segment containing the target sensitive word includes:
acquiring a time node of the target sensitive word in the voice fragment;
and taking the time node as a center, acquiring voice fragments in a preset time period before and after the time node as the voice fragments containing the target sensitive words.
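The windowing step above can be sketched in a few lines of Python. This is only an illustration: the 10-second padding stands in for the patent's unspecified "preset time period", and the clamping to the recording's bounds is an assumption about edge handling.

```python
def clip_window(hit_time, clip_start, clip_end, padding=10.0):
    """Return a (start, end) window of `padding` seconds on each side of the
    time node at which the target sensitive word occurs, clamped to the
    bounds of the recording. All times are in seconds."""
    return (max(clip_start, hit_time - padding),
            min(clip_end, hit_time + padding))
```

For a sensitive word detected 15 seconds into a 60-second recording, `clip_window(15.0, 0.0, 60.0)` yields the clip from 5 to 25 seconds.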
Preferably, the step of determining the identity information of the second user includes:
extracting voiceprint information of a second user from the voice fragment to obtain second voiceprint information;
and inquiring the identity information corresponding to the second voiceprint information from a voiceprint information base to obtain the identity information of the second user.
Preferably, the step of extracting the target voice information from the target voice segment includes:
and carrying out noise reduction and/or echo cancellation processing on the target voice fragment, and extracting target voice information from the target voice fragment subjected to noise reduction and/or echo cancellation processing.
The present application further provides a remote monitoring device, including:
the control module is used for responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment, sent by the mobile terminal, recorded during the offline communication process between the first user and a second user; wherein the second user is the monitored user;
the first query module is used for querying first voiceprint information of the first user from a voiceprint information base and removing the first user's speech from the voice segment according to the first voiceprint information to obtain a target voice segment;
the word segmentation module is used for extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting words of the text information to obtain a plurality of words;
the first judgment module is used for judging whether a word matched with a preset sensitive word exists in the words;
the extracting module is used for extracting words matched with the preset sensitive words from the words to obtain target sensitive words when detecting that the words matched with the preset sensitive words exist in the words;
the second query module is used for querying the severity level corresponding to the target sensitive word from a preset sensitive word comparison table;
the second judgment module is used for judging whether the severity grade is a target severity grade or not;
and the sending module is used for, when the severity level is the target severity level, acquiring the voice segment containing the target sensitive word, determining the identity information of the second user, packing the voice segment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
The present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
The remote monitoring method, device, computer equipment and storage medium first respond to a remote monitoring request initiated by a first user at a mobile terminal, control the mobile terminal to start a recording mode, and receive a voice segment, sent by the mobile terminal, recorded during the offline communication between the first user and a second user. The first voiceprint information of the first user is then queried, and the first user's speech is removed from the voice segment according to the first voiceprint information to obtain a target voice segment, so that only the target voice segment needs to be analyzed subsequently, reducing the data processing amount. Target voice information is extracted from the target voice segment, converted into text information, and segmented into a plurality of words. A sensitive word detection algorithm checks each word to judge whether any word matches a preset sensitive word; when such a word is detected, it is extracted to obtain the target sensitive word. The severity level corresponding to the target sensitive word is queried from a preset sensitive word comparison table and compared with the target severity level. If it reaches the target severity level, the voice segment containing the target sensitive word is obtained, the identity information of the second user is determined, the two are packed into alarm information, and an alarm request carrying the alarm information is sent to the alarm center. The offline communication process of the first user and the second user is thus monitored in real time; when analysis shows that the voice segment contains a sensitive word whose severity level reaches the target severity level, the first user may be under personal threat, and the monitoring platform sends the alarm request to the alarm center for a quick and effective response, ensuring the timeliness of preventive alarming and improving the monitoring effect.
Drawings
Fig. 1 is a schematic flow chart of a remote monitoring method according to an embodiment of the present application;
FIG. 2 is a block diagram of a remote monitoring device according to an embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The application provides a remote monitoring method applied to a monitoring platform composed of one or more servers. The monitoring platform establishes a communication connection for data transmission with a recording-capable mobile terminal carried by the user. The method addresses the problem that, when business personnel record the face-to-face communication process by actively carrying a recording device or camera, responsibility can only be traced afterwards from the recorded audio and video: no quick, effective response or handling is possible, the client's threat cannot be stopped in time, and the timeliness of preventive alarming and the monitoring effect are therefore poor. In an embodiment, as shown in fig. 1, the remote monitoring method includes:
s1, responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving the voice segment, sent by the mobile terminal, recorded during the offline communication process between the first user and a second user; wherein the second user is the monitored user;
s2, querying first voiceprint information of the first user from a voiceprint information base, and removing the first user's speech from the voice segment according to the first voiceprint information to obtain a target voice segment;
s3, extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words;
s4, judging whether a word matched with a preset sensitive word exists in the words or not;
s5, when detecting that the words matched with the preset sensitive words exist in the words, extracting the words matched with the preset sensitive words from the words to obtain target sensitive words;
s6, inquiring the corresponding severity grade of the target sensitive word from a preset sensitive word comparison table;
s7, judging whether the severity level is a target severity level;
and S8, if yes, acquiring a voice fragment containing the target sensitive word, determining identity information of the second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
As described in step S1, when the first user logs in to the account of the remote monitoring APP on the mobile terminal and sends a remote monitoring request to the monitoring platform through the application program, the monitoring platform receives the remote monitoring request. The remote monitoring request may include the name, account number, and identification number of the first user, and an authorization code authorizing the recording function of the mobile terminal. The mobile terminal may be a smart phone, smart watch, tablet computer, wristband, or similar terminal.
After receiving the remote monitoring request sent by the first user, the monitoring platform can directly control the mobile terminal to start the recording mode through a control instruction and the authorization code, so that the mobile terminal records the offline communication between the first user and the second user in real time, generates voice segments, and sends them to the monitoring platform. Specifically, during recording the mobile terminal can start a short recording mode at preset time intervals; the platform analyzes each received short segment and, when abnormal behavior of the second user is detected in a short segment, switches the mobile terminal from the short recording mode to the long recording mode so as to record the offline communication process of the first user and the second user continuously. The abnormal behavior includes the second user's speech rate increasing, the sound exceeding a preset decibel level, and the like.
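The loudness check that triggers the switch to the long recording mode could be sketched as below. This is a minimal illustration, not the patent's implementation: the 16-bit PCM assumption, the reference shift, and the 70 dB threshold are all illustrative values.

```python
import math

def should_switch_to_long_recording(samples, db_threshold=70.0):
    """Decide whether a short clip is loud enough to switch to the long
    recording mode. `samples` is a list of 16-bit PCM amplitudes; the
    threshold and the dBFS-to-SPL-like shift of +96 are assumed values."""
    if not samples:
        return False
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return False
    # Convert the RMS amplitude to decibels relative to 16-bit full scale,
    # then shift to a rough sound-level figure for comparison.
    db = 20 * math.log10(rms / 32768.0) + 96.0
    return db > db_threshold
```

A near-full-scale clip trips the switch, while a quiet one does not; a production system would likewise track speech rate, which is omitted here.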
Furthermore, after receiving the voice segments recorded during the offline communication of the first user and the second user sent by the mobile terminal, the monitoring platform analyzes and processes the voice information. The complex voice processing is thus completed by the monitoring platform rather than by the mobile terminal, reducing the data processing pressure on the mobile terminal.
As described in step S2, in this step the first voiceprint information of the first user is queried in the voiceprint information base and used to remove the first user's speech from the voice segment, yielding a target voice segment containing only the second user. Only the target voice segment is processed subsequently, which reduces the data processing amount and improves processing efficiency.
The voiceprint information base stores the voiceprint information of each user together with the corresponding identity information. The voiceprint information can be obtained in two ways: one is collecting it through application software, which requires the user's active cooperation; the other is daily voice data collected by other voice-collection systems, such as a telephone system, which needs no active cooperation from the user. For voiceprint information collected by application software, qualifying voiceprint information is registered directly in the voiceprint information base, and the user is prompted to record again when the voiceprint information does not qualify. For daily voice data obtained from other systems, if the corresponding user has not registered a voiceprint, voiceprint features are extracted from several pieces of the user's daily voice data, the similarities between the features are compared, and the most representative feature (the one most similar to the user's other voiceprint feature files) is selected as the user's voiceprint information and added to the voiceprint information base.
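Selecting the "most representative" voiceprint feature can be read as picking the vector with the highest total cosine similarity to the user's other feature vectors. The sketch below assumes features are plain fixed-length vectors; real systems use dedicated speaker-embedding models, so this is only an illustration of the selection rule.

```python
def most_representative(features):
    """Return the feature vector whose summed cosine similarity to all the
    other vectors is highest. `features` is a non-empty list of
    equal-length numeric vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best, best_score = None, float("-inf")
    for i, f in enumerate(features):
        # Total similarity of f to every other feature file of the user.
        score = sum(cos(f, g) for j, g in enumerate(features) if j != i)
        if score > best_score:
            best, best_score = f, score
    return best
```

With three features where two nearly coincide, the one lying "between" the others wins, matching the intuition of representativeness.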
As described in step S3, in this step voice information is extracted from the target voice segment to obtain the target voice information, the target voice information is converted into text information, and the text information is segmented to obtain a plurality of words. After the text is segmented into a word set, the lexicons are traversed to identify stop words and near-synonyms in the word set; the stop words are deleted and the near-synonyms are deduplicated. For example, if the target voice information is "I have no money in the account now", the output words are: "I", "now", "account", "no", "money".
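The segmentation-plus-filtering step can be illustrated with the example sentence above. The patent implies a Chinese word segmenter; the toy version below uses whitespace tokenisation with an assumed stop-word list, purely to show the filter-and-deduplicate logic.

```python
# Illustrative stop-word lexicon; a real deployment would load a full
# stop-word file for the target language.
STOP_WORDS = {"the", "a", "an", "have", "in"}

def segment_and_filter(text):
    """Tokenise on whitespace, drop stop words, and deduplicate repeated
    tokens, preserving first-seen order."""
    words, seen = [], set()
    for w in text.lower().split():
        if w in STOP_WORDS or w in seen:
            continue
        seen.add(w)
        words.append(w)
    return words
```

Running it on the sentence from the text, `segment_and_filter("I have no money in the account now")` returns `["i", "no", "money", "account", "now"]`.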
As described in step S4, a sensitive word list may be pre-constructed in which the preset sensitive words are stored. When determining whether any of the plurality of words matches a preset sensitive word in the list, a sensitive word detection algorithm may be used to check each word and determine from the results whether the text information contains a word that hits a preset sensitive word; if not, the offline communication process of the first user and the second user is normal. The sensitive words include words carrying a personal threat, such as "hit you" and "kill".
Sensitive word detection algorithms include, but are not limited to, the DFA algorithm, the AC automaton, and the KMP (Knuth-Morris-Pratt) algorithm. In this embodiment, sensitive word detection is performed on the plurality of words with an AC automaton. The AC (Aho-Corasick) automaton is a dictionary-matching algorithm that searches an input text (i.e., the recognized text) for matches among the preset keywords. It converts character comparison into state transitions using a finite automaton, so its time complexity is linear; the algorithm is efficient and speeds up sensitive-word matching.
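A minimal Aho-Corasick automaton of the kind described above can be written as follows. This is a textbook sketch, not the patent's code: it builds a trie over the keywords, adds failure links with a breadth-first pass, then scans the text once in linear time.

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton for multi-keyword matching."""

    def __init__(self, keywords):
        self.goto = [{}]   # per-state character transitions
        self.fail = [0]    # failure links
        self.out = [[]]    # keywords ending at each state
        # Build the trie.
        for kw in keywords:
            state = 0
            for ch in kw:
                if ch not in self.goto[state]:
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append([])
                    self.goto[state][ch] = len(self.goto) - 1
                state = self.goto[state][ch]
            self.out[state].append(kw)
        # Add failure links breadth-first; root children keep fail = 0.
        queue = deque(self.goto[0].values())
        while queue:
            s = queue.popleft()
            for ch, t in self.goto[s].items():
                queue.append(t)
                f = self.fail[s]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[t] = self.goto[f].get(ch, 0)
                # Inherit outputs reachable through the failure link.
                self.out[t] = self.out[t] + self.out[self.fail[t]]

    def search(self, text):
        """Return every keyword occurrence found in `text`."""
        hits, state = [], 0
        for ch in text:
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            hits += self.out[state]
        return hits
```

Because outputs are inherited along failure links, overlapping keywords such as "she" and "he" are both reported from a single scan, which is the property that makes the pass linear in the text length.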
As described in step S5, when it is detected that there is a word matching the preset sensitive word in the words, the word matching the preset sensitive word is determined, so as to obtain the target sensitive word, where the target sensitive word is a word in the target voice segment that has a personal threat.
As described in step S6, a comparison table of sensitive words and severity levels may also be pre-constructed to look up the severity level of each sensitive word. When the severity level of the target sensitive word is needed, it is queried from this preset comparison table. For example, the table may define three severity levels A, B and C, with A > B > C: the severity level of "kill" is A and that of "hit you" is B, i.e., "kill" is more severe than "hit you" and poses a greater threat to the person of the first user.
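The comparison-table lookup of steps S6 and S7 amounts to a dictionary keyed by sensitive word. The words and levels below are the illustrative ones from the text; an operator would configure the real table.

```python
# Severity comparison table: A > B > C, with A the most severe.
SENSITIVE_WORD_SEVERITY = {
    "kill": "A",
    "hit you": "B",
}
TARGET_SEVERITY = "A"

def should_alarm(target_word):
    """True when the target sensitive word's severity level equals the
    target severity level (here the highest level, A)."""
    return SENSITIVE_WORD_SEVERITY.get(target_word) == TARGET_SEVERITY
```

`should_alarm("kill")` triggers the alarm path of step S8, while `should_alarm("hit you")` does not.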
As described above in step S7, the target severity level can be customized, typically to be the highest severity level, such as level a.
As described in step S8, when the severity level is determined to be the target severity level, the voice segment containing the target sensitive word is obtained, the identity information of the second user is determined, the voice segment and the identity information are packed and compressed into alarm information, and an alarm request carrying the alarm information is sent to the alarm center, so that the threat to the first user is stopped in time, the timeliness of preventive alarming is improved, and the monitoring effect is improved.
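Packing the clip and the identity information into alarm information might look like the sketch below. The JSON wire format and all field names are assumptions for illustration; the patent fixes no serialization.

```python
import base64
import json

def build_alarm_request(audio_clip, second_user_identity, location=None):
    """Pack the audio clip containing the target sensitive word together
    with the monitored (second) user's identity, and optionally the first
    user's location, into an alarm payload string."""
    payload = {
        # Raw audio bytes are base64-encoded so they survive JSON transport.
        "audio_clip": base64.b64encode(audio_clip).decode("ascii"),
        "second_user": second_user_identity,
    }
    if location is not None:
        payload["location"] = location
    return json.dumps(payload)
```

The resulting string would be the body of the alarm request sent to the alarm center; compression (mentioned in the text) is omitted here for brevity.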
The remote monitoring method first responds to a remote monitoring request initiated by a first user at a mobile terminal, controls the mobile terminal to start a recording mode, and receives a voice segment, sent by the mobile terminal, recorded during the offline communication between the first user and a second user. The first voiceprint information of the first user is then queried, and the first user's speech is removed from the voice segment according to the first voiceprint information to obtain a target voice segment, so that only the target voice segment needs to be analyzed subsequently, reducing the data processing amount. Target voice information is extracted from the target voice segment, converted into text information, and segmented into a plurality of words. A sensitive word detection algorithm checks each word to judge whether any word matches a preset sensitive word; when such a word is detected, it is extracted to obtain the target sensitive word. The severity level corresponding to the target sensitive word is queried from a preset sensitive word comparison table and compared with the target severity level. If it reaches the target severity level, the voice segment containing the target sensitive word is obtained, the identity information of the second user is determined, the two are packed into alarm information, and an alarm request carrying the alarm information is sent to the alarm center. The offline communication process of the first user and the second user is thus monitored in real time; when analysis shows that the voice segment contains a sensitive word whose severity level reaches the target severity level, the first user may be under personal threat, and the monitoring platform sends the alarm request to the alarm center for a quick and effective response, ensuring the timeliness of preventive alarming and improving the monitoring effect.
In an embodiment, in step S3, the step of extracting the target speech information from the target speech segment may specifically include:
S31, performing noise reduction and/or echo cancellation on the target voice segment, and extracting the target voice information from the target voice segment after the noise reduction and/or echo cancellation processing.
In this embodiment, noise reduction and/or echo cancellation may be performed on the target voice segment, and the target voice information is then extracted from the processed segment, so that interfering information is removed. Noise reduction removes the ambient noise picked up by the voice-capture device when the voice information was collected; echo cancellation removes audio played by other applications from the captured voice information of the user, so that neither interferes with the subsequent text conversion.
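The patent leaves the noise-reduction technique unspecified; as a purely illustrative stand-in (not the claimed method), a naive moving-average smoother over raw samples shows where such a preprocessing step would sit in the pipeline:

```python
def moving_average_denoise(samples, window=3):
    """Naive noise reduction: smooth the waveform with a moving average.

    A production system would use spectral subtraction or a trained
    model; this sketch only illustrates the preprocessing stage.
    """
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# A toy alternating "noisy" waveform; smoothing reduces its swings.
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smoothed = moving_average_denoise(noisy)
```

The smoothed signal keeps the same length as the input, which matters because later steps (speech-to-text, time-node lookup) assume the timeline is unchanged.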
In an embodiment, in step S4, the step of determining whether there is a word that matches the preset sensitive word in the words may specifically include:
S41, converting the words into word vectors respectively by using a preset text word vector model to obtain feature word vectors;
S42, respectively calculating the cosine similarity between each feature word vector and the word vector corresponding to each preset sensitive word, and judging whether any feature word vector has a cosine similarity greater than a preset similarity threshold;
and S43, if yes, determining that a word matching a preset sensitive word exists among the words.
As described in step S41, the words can be converted into word vectors using a Word2Vec word-vector model trained in advance, yielding the word vector corresponding to each word. Word2Vec is an unsupervised model that learns semantic knowledge from large amounts of text: after training, each word in the corpus is represented as a vector (a word vector), and the relationship between two words can be judged by computing the distance between their word vectors. A dictionary stores the word vectors corresponding to the preset sensitive words, for example a word vector W1 for "kill" and a word vector W2 for "hit". This way of extracting the sensitive words among the words to be recognized is simple to implement, has a small computation load and low complexity, can extract the sensitive words quickly, and improves the efficiency of analyzing the words to be recognized.
As described in step S42, the cosine similarity between each feature word vector and the word vectors corresponding to the one or more preset sensitive words is calculated, and it is judged whether any feature word vector has a cosine similarity greater than the preset similarity threshold; if not, no sensitive word exists among the words. Preferably, the preset similarity threshold is 0.9.
As described in step S43, when a feature word vector with a cosine similarity greater than the preset similarity threshold exists, a sensitive word exists among the words. That feature word vector is selected as the target feature word vector, and the word corresponding to it is taken as the target sensitive word, so that sensitive words are selected precisely in vector form.
In an embodiment, in step S8, the step of packing the voice fragment containing the target sensitive word and the identity information of the second user to obtain the alarm information may specifically include:
S81, acquiring the location area code of the base station with which the mobile terminal currently has a communication connection;
S82, determining the location information of the first user according to the location area code, and packing the voice segment containing the target sensitive word, the location information, and the identity information of the second user to obtain the alarm information.
As described in step S81 above, the location area code may be derived from the location area of the base station to which the mobile terminal is connected. The location area code encodes the actual location information: it converts the actual location information from text form into a code, which is stored in the visitor location register (VLR) of the MSC, while the actual location information itself, between 1 and 20 characters long, is stored in the MSC database. Location area codes correspond one-to-one to the actual location information.
Once the correspondence between location area codes and actual location information has been established, a user can query the current actual location of another user's mobile terminal. For example, if the location area code of the first user's mobile terminal A is 0001, querying that code might return the actual location information "zoo".
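A minimal sketch of the one-to-one lookup described above, with a hypothetical code table (in a real network the codes live in the VLR and the location text in the MSC database, not in application memory):

```python
# Hypothetical mapping: location area code -> actual location information
# (text form, 1-20 characters, per the description above).
LAC_TO_LOCATION = {"0001": "zoo", "0002": "central station"}

def resolve_location(location_area_code):
    """Return the actual location text for a code, or None if unknown."""
    return LAC_TO_LOCATION.get(location_area_code)
```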
As described in step S82, the location information corresponding to the location area code is determined, which gives the location information of the first user. When the alarm information is generated, this location information is packed together with the voice segment containing the target sensitive word and the identity information of the second user, so that the location is sent to the alarm center and the alarm center can handle the threat in time.
In an embodiment, after the step of determining the location information of the first user according to the location area code in step S82, the method may further include:
inquiring whether the area where the first user is located has monitoring equipment or not according to the position information;
if yes, controlling the monitoring equipment to shoot an offline communication process of the first user and the second user to obtain a monitoring video;
and receiving the monitoring video sent by the monitoring equipment, compressing the monitoring video and sending the compressed monitoring video to the alarm center.
In this embodiment, it can be determined, according to the queried location information, whether there is a monitoring device in the area where the first user is located. If so, the monitoring device is controlled to film the offline communication between the first user and the second user to obtain a monitoring video, which is packed into the alarm information and sent to the alarm center so that the alarm center can judge and respond in time.
In an embodiment, in step S8, the step of obtaining the voice segment containing the target sensitive word may specifically include:
C81, acquiring the time node of the target sensitive word in the voice segment;
and C82, taking the time node as the center, acquiring the speech within a preset period before and after the time node as the voice segment containing the target sensitive word.
In this embodiment, the time node at which the target sensitive word occurs in the voice segment may be obtained first, and, taking that time node as the center, the speech within a preset period before and after it is taken as the voice segment containing the target sensitive word. For example, if the time node of the target sensitive word is 3:00, the speech corresponding to the communication from 2:45 to 3:15 is taken as the voice segment containing the target sensitive word, so that the surrounding conversation can be located in time to understand the full story of the event.
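Steps C81-C82 amount to clamping a symmetric window around the hit time. A sketch in seconds, with the 15-minute margin taken from the 2:45-3:15 example around a 3:00 hit (the margin is configurable, not fixed by the text):

```python
def window_around(hit_time, margin, segment_start, segment_end):
    """Clamp [hit_time - margin, hit_time + margin] to the recording bounds.

    All arguments are in seconds. Returns the (start, end) of the
    voice segment containing the target sensitive word.
    """
    start = max(segment_start, hit_time - margin)
    end = min(segment_end, hit_time + margin)
    return start, end
```

Clamping to the recording bounds matters when the sensitive word occurs near the start or end of the recording, where a full window is not available.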
In an embodiment, in step S8, the step of determining the identity information of the second user may specifically include:
D81, extracting the voiceprint information of the second user from the voice segment to obtain second voiceprint information;
and D82, querying the identity information corresponding to the second voiceprint information from the voiceprint information base to obtain the identity information of the second user.
In this embodiment, when the second user is identified, the voiceprint information of the second user can be directly extracted from the voice fragment to obtain the second voiceprint information, and the identity information corresponding to the second voiceprint information is queried from the voiceprint information base to obtain the identity information of the second user. The voiceprint information base stores the identity information of each user and the voiceprint information of each user, so that the identity information of the second user can be determined quickly and accurately according to the voiceprint information of the second user.
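Steps D81-D82 can be sketched as a nearest-voiceprint lookup. The two-dimensional embeddings, identities, and matching threshold below are hypothetical; a real system would compare proper voiceprint features:

```python
import math

def _cosine(a, b):
    """Cosine similarity of two vectors (0.0 on zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical voiceprint information base: identity -> stored embedding.
VOICEPRINT_DB = {
    "user_A": [1.0, 0.0],
    "user_B": [0.0, 1.0],
}

def identify_speaker(second_voiceprint, threshold=0.8):
    """Return the identity whose stored voiceprint best matches, or None."""
    best_id, best_sim = None, threshold
    for identity, ref in VOICEPRINT_DB.items():
        sim = _cosine(second_voiceprint, ref)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id
```

Returning `None` when no stored voiceprint clears the threshold lets the caller fall back to reporting the second user as unidentified rather than guessing.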
Referring to fig. 2, an embodiment of the present application further provides a remote monitoring apparatus, including:
the control module 1 is used for responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment recorded in the online communication process between the first user and a second user sent by the mobile terminal; wherein the second user is a monitored user;
the first query module 2 is configured to query first voiceprint information of the first user from a voiceprint information base, and cut out a voice fragment containing the first user from the voice fragment according to the first voiceprint information to obtain a target voice fragment;
the word segmentation module 3 is configured to extract target voice information from the target voice fragment, convert the target voice information into text information, and segment words of the text information to obtain a plurality of words;
the first judgment module 4 is used for judging whether a word matched with a preset sensitive word exists in the plurality of words;
the extracting module 5 is configured to, when it is detected that a word matched with the preset sensitive word exists in the plurality of words, extract a word matched with the preset sensitive word from the plurality of words to obtain a target sensitive word;
the second query module 6 is configured to query the severity level corresponding to the target sensitive word from a preset sensitive word comparison table;
a second judging module 7, configured to judge whether the severity level is a target severity level;
and the sending module 8 is configured to, if the severity level is the target severity level, acquire the voice segment containing the target sensitive word, determine the identity information of the second user, pack the voice segment containing the target sensitive word and the identity information of the second user to obtain the alarm information, and send an alarm request carrying the alarm information to an alarm center.
When a first user logs in to an account of the remote monitoring APP on a mobile terminal and sends a remote monitoring request to the monitoring platform through the application, the monitoring platform receives the remote monitoring request sent by the first user. The remote monitoring request may include the first user's name, account number, and identification number, together with an authorization code authorizing use of the mobile terminal's recording function. The mobile terminal may be a smartphone, smart watch, tablet computer, wristband, or similar terminal.
After receiving the remote monitoring request sent by the first user, the monitoring platform can directly control the mobile terminal to start the recording mode through a control instruction and the authorization code, so that the mobile terminal records the offline communication content of the first user and the second user in real time, generates voice segments, and sends them to the monitoring platform. Specifically, during recording the mobile terminal can start a short recording at preset time intervals, and each received voice segment is analyzed; when abnormal behavior of the second user is detected in a short segment, the mobile terminal is switched from the short recording mode to a long recording mode so as to record the offline communication between the first user and the second user continuously. Abnormal behavior includes, for example, the second user's speech rate increasing or the sound level exceeding a preset decibel threshold.
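The short-to-long recording switch described above reduces to a per-segment decision. A sketch with illustrative thresholds (the patent names only the two signals, increased speech rate and a decibel level above a preset value, not their magnitudes):

```python
def should_switch_to_long_recording(speech_rate_wps, baseline_rate_wps,
                                    decibels, preset_decibels=70.0):
    """Heuristic for 'abnormal behavior' of the second user.

    speech_rate_wps: measured words per second in the short segment.
    baseline_rate_wps: the second user's normal speech rate.
    decibels: measured sound level; preset_decibels is the threshold.
    All numeric values here are illustrative assumptions.
    """
    return speech_rate_wps > baseline_rate_wps or decibels > preset_decibels
```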
Furthermore, after the monitoring platform receives the voice segments recorded during the offline communication between the first user and the second user, sent by the mobile terminal, it analyzes and processes the voice information itself; the complex speech processing is thus done by the monitoring platform rather than the mobile terminal, reducing the data-processing load on the mobile terminal.
In addition, the first voiceprint information of the first user can be queried in the voiceprint information base and used to cut the first user's speech out of the voice segment, leaving a target voice segment containing only the second user. Only this target segment is then processed, which reduces the amount of data to handle and improves processing efficiency.
The voiceprint information base stores the voiceprint information of each user together with the corresponding identity information. The voiceprint information can be obtained in two ways. One is to collect it through the application software, which requires the user's active cooperation; the other is to use daily voice data collected by other voice-collection systems, such as the telephone, which requires no active cooperation. For voiceprint information collected through the application software, samples that meet the quality requirements are registered directly in the voiceprint information base, while for samples that do not, the user is prompted to record again. For daily voice data obtained from other systems, if the corresponding user has not registered a voiceprint, voiceprint features are extracted from several of the user's daily voice recordings, the similarities between these features are compared, and the most representative feature (the one most similar to the user's other voiceprint feature files) is selected as the user's voiceprint information and added to the base.
The device can extract voice information from the target voice segment to obtain the target voice information, convert it into text information, and segment the text into a plurality of words. After the target voice information is split into a word set, the word banks are traversed to identify stop words and near-synonyms in the set; the stop words are deleted and the near-synonyms are de-duplicated. For example, if the target voice information is "I have no money in the account now", the word segmentation outputs: "I", "now", "account", "no", "money".
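A minimal sketch of the segmentation pipeline just described (tokenize, drop stop words, de-duplicate). Whitespace splitting and the tiny stop-word list stand in for a real Chinese segmenter (e.g. jieba) and the word banks:

```python
STOP_WORDS = {"have", "the", "in", "a", "an"}  # illustrative stop-word list

def segment(text):
    """Tokenize, drop stop words, and de-duplicate while keeping order."""
    seen, words = set(), []
    for token in text.lower().split():
        token = token.strip(".,!?")
        if token in STOP_WORDS or token in seen:
            continue
        seen.add(token)
        words.append(token)
    return words
```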
Further, a sensitive word list can be constructed in advance, storing the preset sensitive words. To judge whether any of the words matches a preset sensitive word in the list, a sensitive-word detection algorithm can be applied to each word, and the detection result indicates whether the text information hits any preset sensitive word; if not, the offline communication between the first user and the second user is normal. Sensitive words are words carrying a personal threat, such as "hit you" or "kill".
Sensitive-word detection algorithms include, but are not limited to, the DFA algorithm, the AC automaton, and the KMP (Knuth-Morris-Pratt) algorithm. In this embodiment, an AC automaton performs sensitive-word detection on the words. The AC (Aho-Corasick) automaton is a dictionary-matching algorithm that searches the input text (the recognition text) for matches against the preset keywords. By using a finite automaton, it cleverly turns character comparison into state transitions; its time complexity is linear and the algorithm is highly efficient, which improves the efficiency of sensitive-word matching.
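A compact, self-contained Aho-Corasick automaton of the kind the embodiment describes, matching all preset sensitive words in one linear pass over the text (the patterns here are illustrative English stand-ins for the Chinese sensitive words):

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton: match many patterns in one pass."""

    def __init__(self, patterns):
        self.goto = [{}]       # state -> {char: next state}
        self.fail = [0]        # failure links
        self.out = [set()]     # patterns that end at each state
        for pat in patterns:   # build the trie
            state = 0
            for ch in pat:
                if ch not in self.goto[state]:
                    self.goto.append({})
                    self.fail.append(0)
                    self.out.append(set())
                    self.goto[state][ch] = len(self.goto) - 1
                state = self.goto[state][ch]
            self.out[state].add(pat)
        # BFS to set failure links and merge output sets.
        queue = deque(self.goto[0].values())
        while queue:
            state = queue.popleft()
            for ch, nxt in self.goto[state].items():
                queue.append(nxt)
                f = self.fail[state]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                self.out[nxt] |= self.out[self.fail[nxt]]

    def search(self, text):
        """Return (start_index, pattern) for every match in text."""
        state, hits = 0, []
        for i, ch in enumerate(text):
            while state and ch not in self.goto[state]:
                state = self.fail[state]
            state = self.goto[state].get(ch, 0)
            for pat in self.out[state]:
                hits.append((i - len(pat) + 1, pat))
        return hits
```

Because transitions are resolved via failure links, the scan visits each character a bounded number of times, which is the linear-time property the text credits to the AC automaton.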
When the words matched with the preset sensitive words exist in the plurality of words, the words matched with the preset sensitive words are determined, and the target sensitive words are obtained, wherein the target sensitive words are words with personal threats in the target voice fragments.
Furthermore, a comparison table of sensitive words and severity levels can be constructed in advance for querying the severity level of each sensitive word. When the severity level of the target sensitive word is needed, it is queried from this preset sensitive-word comparison table. The severity levels may, for example, be divided into A, B, and C, with A > B > C: if the severity level corresponding to "kill" is A and that corresponding to "hit you" is B, then "kill" carries the higher severity level and poses the greater personal threat to the first user.
The target severity level can be customized and is generally the highest severity level, such as level A.
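The severity lookup and target-level check reduce to a table query; the words and levels below are illustrative:

```python
# Hypothetical sensitive-word comparison table; A is the most severe level.
SEVERITY_TABLE = {"kill": "A", "hit you": "B", "watch out": "C"}
TARGET_SEVERITY = "A"  # customizable; generally the highest level

def needs_alarm(target_sensitive_word):
    """True when the word's severity level equals the target severity level."""
    return SEVERITY_TABLE.get(target_sensitive_word) == TARGET_SEVERITY
```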
When the severity level is determined to be the target severity level, the voice segment containing the target sensitive word is acquired, the identity information of the second user is determined, the two are packed and compressed to obtain the alarm information, and an alarm request carrying the alarm information is sent to the alarm center, so that the threat to the user can be stopped in time, improving the timeliness of the preventive alarm and the monitoring effect.
As described above, it can be understood that each component of the remote monitoring apparatus provided in the present application may implement the function of any one of the above remote monitoring methods, and the detailed structure is not described again.
Referring to fig. 3, a computer device, which may be a server and whose internal structure may be as shown in fig. 3, is also provided in an embodiment of the present application. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computation and control capability. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the computer device is used to communicate with external terminals over a network connection. The computer program, when executed by the processor, implements a remote monitoring method.
The processor executes the remote monitoring method, and the method comprises the following steps:
responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment recorded in the online communication process between the first user and a second user, which is sent by the mobile terminal; wherein the second user is a monitored user;
inquiring first voiceprint information of the first user from a voiceprint information base, and cutting off a voice fragment containing the first user in the voice fragment according to the first voiceprint information to obtain a target voice fragment;
extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words;
judging whether a word matched with a preset sensitive word exists in the plurality of words or not;
when detecting that the words matched with the preset sensitive words exist in the words, extracting the words matched with the preset sensitive words from the words to obtain target sensitive words;
inquiring the corresponding severity grade of the target sensitive word from a preset sensitive word comparison table;
judging whether the severity grade is a target severity grade or not;
if so, acquiring a voice fragment containing the target sensitive word, determining identity information of the second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
An embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing a remote monitoring method, including the steps of:
responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment recorded in the online communication process between the first user and a second user, which is sent by the mobile terminal; wherein the second user is a monitored user;
inquiring first voiceprint information of the first user from a voiceprint information base, and cutting off a voice fragment containing the first user in the voice fragment according to the first voiceprint information to obtain a target voice fragment;
extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words;
judging whether a word matched with a preset sensitive word exists in the plurality of words or not;
when detecting that the words matched with the preset sensitive words exist in the words, extracting the words matched with the preset sensitive words from the words to obtain target sensitive words;
inquiring the corresponding severity grade of the target sensitive word from a preset sensitive word comparison table;
judging whether the severity grade is a target severity grade or not;
if so, acquiring a voice fragment containing the target sensitive word, determining identity information of the second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium provided herein and used in the examples may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
To sum up, the most beneficial effect of this application lies in:
the remote monitoring method, the remote monitoring device, the computer equipment and the storage medium firstly respond to a remote monitoring request initiated by a first user at a mobile terminal, control the mobile terminal to start a recording mode, and receive a voice fragment which is sent by the mobile terminal and is recorded in the online communication process of the first user and a second user; then, inquiring first voiceprint information of the first user, cutting off a voice segment containing the first user in the voice segment according to the first voiceprint information to obtain a target voice segment, and analyzing and processing only the target voice segment in the future to reduce the data processing amount; extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words; respectively detecting the words by adopting a sensitive word detection algorithm, and judging whether words matched with preset sensitive words exist in the words; when detecting that a word matched with a preset sensitive word exists in the words, extracting the word matched with the preset sensitive word from the words to obtain a target sensitive word; inquiring the severity level corresponding to the target sensitive word from a preset sensitive word comparison table; judging whether the severity level is a target severity level; if yes, obtaining a voice fragment containing a target sensitive word, determining identity information of a second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and finally sending an alarm request carrying the alarm information to an alarm center, so that the offline ditch passing process of the first user and the second user is monitored in real time, and when the situation that the voice fragment contains 
the sensitive word and the severity level corresponding to the sensitive word reaches the target severity level is analyzed, the first user possibly has a situation of personal threat, and at the moment, the monitoring platform sends the alarm request to the alarm center to quickly and effectively react, thereby ensuring timeliness of prevention alarm and improving monitoring effect.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A remote monitoring method is applied to a monitoring platform, the monitoring platform is connected with a mobile terminal which is carried by a user and has a recording function, and the remote monitoring method is characterized by comprising the following steps:
responding to a remote monitoring request initiated by a first user at a mobile terminal, controlling the mobile terminal to start a recording mode, and receiving a voice segment recorded in the online communication process between the first user and a second user, which is sent by the mobile terminal; wherein the second user is a monitored user;
inquiring first voiceprint information of the first user from a voiceprint information base, and cutting off a voice fragment containing the first user in the voice fragment according to the first voiceprint information to obtain a target voice fragment;
extracting target voice information from the target voice fragment, converting the target voice information into text information, and segmenting the text information to obtain a plurality of words;
judging whether a word matched with a preset sensitive word exists in the plurality of words or not;
when detecting that the words matched with the preset sensitive words exist in the words, extracting the words matched with the preset sensitive words from the words to obtain target sensitive words;
inquiring the corresponding severity grade of the target sensitive word from a preset sensitive word comparison table;
judging whether the severity grade is a target severity grade or not;
if so, acquiring a voice fragment containing the target sensitive word, determining identity information of the second user, packaging the voice fragment containing the target sensitive word and the identity information of the second user to obtain alarm information, and sending an alarm request carrying the alarm information to an alarm center.
2. The method of claim 1, wherein the step of determining whether there is a word in the plurality of words that matches a predetermined sensitive word comprises:
converting the words into word vectors respectively by using a preset text word vector model to obtain feature word vectors;
respectively calculating the cosine similarity between each feature word vector and the word vector corresponding to each preset sensitive word;
judging whether a feature word vector with a cosine similarity greater than a preset similarity threshold exists;
if yes, determining that the words matched with the preset sensitive words exist in the words.
3. The method of claim 1, wherein the step of packing the voice segment containing the target sensitive word and the identity information of the second user to obtain an alarm message comprises:
acquiring a position area code of a base station which is currently in communication connection with the mobile terminal;
and determining the position information of the first user according to the position area code, and packaging the voice fragment containing the target sensitive word, the position information and the identity information of the second user to obtain alarm information.
4. The method of claim 3, further comprising, after the step of determining the location information of the first user from the location area code:
querying, based on the location information, whether monitoring equipment is present in the area where the first user is located;
if so, controlling the monitoring equipment to film the offline communication between the first user and the second user to obtain a surveillance video; and
receiving the surveillance video sent by the monitoring equipment, compressing it, and sending the compressed video to the alarm center.
5. The method of claim 1, wherein the step of acquiring the voice segment containing the target sensitive word comprises:
acquiring the time node at which the target sensitive word occurs in the voice segment; and
taking the time node as the center, extracting the audio within a preset time period before and after the time node as the voice segment containing the target sensitive word.
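The time-window extraction in claim 5 can be sketched as follows, assuming the recording is a sequence of audio samples at a known sample rate; the default 5-second window is an invented example of the "preset time period":

```python
def clip_around(samples, sample_rate, time_node_s, window_s=5.0):
    """Claim 5: take the audio within a preset window before and after
    the time node at which the target sensitive word occurs."""
    start = max(0, int((time_node_s - window_s) * sample_rate))
    end = min(len(samples), int((time_node_s + window_s) * sample_rate))
    return samples[start:end]
```

Clamping `start` and `end` keeps the clip valid when the sensitive word occurs near the beginning or end of the recording.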
6. The method of claim 1, wherein the step of determining the identity information of the second user comprises:
extracting voiceprint information of the second user from the voice segment to obtain second voiceprint information; and
querying a voiceprint database for the identity information corresponding to the second voiceprint information to obtain the identity information of the second user.
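A hedged sketch of the voiceprint lookup in claim 6, assuming voiceprints are fixed-length embeddings and identity is resolved to the nearest enrolled voiceprint; the database contents and distance choice (Euclidean) are invented for illustration:

```python
import math

# Hypothetical voiceprint database: identity -> enrolled embedding.
VOICEPRINT_DB = {
    "second-user-007": [0.9, 0.1, 0.0],
    "second-user-042": [0.1, 0.8, 0.3],
}

def identify_speaker(voiceprint, db=VOICEPRINT_DB):
    """Claim 6: return the identity whose enrolled voiceprint is nearest
    (smallest Euclidean distance) to the extracted second voiceprint."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(db, key=lambda identity: distance(db[identity], voiceprint))
```

A production system would also apply a rejection threshold so that unenrolled speakers are not mapped to the nearest identity.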
7. The method of claim 1, wherein the step of extracting the target voice information from the target voice segment comprises:
performing noise reduction and/or echo cancellation on the target voice segment, and extracting the target voice information from the processed segment.
8. A remote monitoring apparatus, comprising:
a control module configured to, in response to a remote monitoring request initiated by a first user at a mobile terminal, control the mobile terminal to enter a recording mode and receive, from the mobile terminal, a voice segment recorded during an online communication between the first user and a second user, wherein the second user is a monitored user;
a first query module configured to query first voiceprint information of the first user from a voiceprint database and, based on the first voiceprint information, cut out the portion containing the first user from the voice segment to obtain a target voice segment;
a word-segmentation module configured to extract target voice information from the target voice segment, convert the target voice information into text information, and segment the text information into a plurality of words;
a first judgment module configured to determine whether a word among the plurality of words matches a preset sensitive word;
an extraction module configured to, when a matching word is detected, extract it from the plurality of words to obtain a target sensitive word;
a second query module configured to query a severity level corresponding to the target sensitive word from a preset sensitive-word comparison table;
a second judgment module configured to determine whether the severity level is a target severity level; and
a sending module configured to, if so, acquire the voice segment containing the target sensitive word, determine identity information of the second user, package the voice segment containing the target sensitive word and the identity information of the second user to obtain alarm information, and send an alarm request carrying the alarm information to an alarm center.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the remote monitoring method of any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the remote monitoring method of any one of claims 1 to 7.
CN202110691922.1A 2021-06-22 2021-06-22 Remote monitoring method and device, computer equipment and storage medium Withdrawn CN113411455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110691922.1A CN113411455A (en) 2021-06-22 2021-06-22 Remote monitoring method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110691922.1A CN113411455A (en) 2021-06-22 2021-06-22 Remote monitoring method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113411455A true CN113411455A (en) 2021-09-17

Family

ID=77682568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110691922.1A Withdrawn CN113411455A (en) 2021-06-22 2021-06-22 Remote monitoring method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113411455A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117556790A (en) * 2024-01-02 2024-02-13 四川大学华西医院 Medical information processing method, device, equipment and storage medium
CN117556790B (en) * 2024-01-02 2024-04-16 四川大学华西医院 Medical information processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109729383B (en) Double-recording video quality detection method and device, computer equipment and storage medium
US10810510B2 (en) Conversation and context aware fraud and abuse prevention agent
CN111104495A (en) Information interaction method, device, equipment and storage medium based on intention recognition
CN109344615B (en) Method and device for detecting malicious command
CN104966053B (en) Face identification method and identifying system
CN109743624B (en) Video cutting method and device, computer equipment and storage medium
CN108010513B (en) Voice processing method and device
CN112037799A (en) Voice interrupt processing method and device, computer equipment and storage medium
CN113411455A (en) Remote monitoring method and device, computer equipment and storage medium
CN113670434A (en) Transformer substation equipment sound abnormality identification method and device and computer equipment
Derakhshan et al. Detecting telephone-based social engineering attacks using scam signatures
CN114493902A (en) Multi-mode information anomaly monitoring method and device, computer equipment and storage medium
CN113435753A (en) Enterprise risk judgment method, device, equipment and medium in high-risk industry
CN111723364A (en) Collision detection method and device, computer equipment and storage medium
CN111464687A (en) Strange call request processing method and device
TWI691923B (en) Fraud detection system for financial transaction and method thereof
EP4009581A1 (en) System and method for anonymizing personal identification data in an audio / video conversation
CN115314268A (en) Malicious encrypted traffic detection method and system based on traffic fingerprints and behaviors
CN111339829B (en) User identity authentication method, device, computer equipment and storage medium
CN111601000B (en) Communication network fraud identification method and device and electronic equipment
CN114257688A (en) Telephone fraud identification method and related device
CN112328998A (en) Computer information security monitoring method
CN117614743B (en) Phishing early warning method and system thereof
CN116503879B (en) Threat behavior identification method and device applied to e-commerce platform
CN111832317B (en) Intelligent information flow guiding method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210917