CN116580844A - Method, device, equipment and storage medium for processing injury event notification information

Info

Publication number
CN116580844A
CN116580844A · CN202310347582.XA · CN202310347582A
Authority
CN
China
Prior art keywords
injury
event notification
information
keywords
notification information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310347582.XA
Other languages
Chinese (zh)
Inventor
张进军
李斗
刘江
廉惠欣
田思佳
赵晖
蔡苗
魏爽
王继坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuanmeng Health Technology Co ltd
BEIJING FIRST AID CENTER
Original Assignee
Beijing Yuanmeng Health Technology Co ltd
BEIJING FIRST AID CENTER
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuanmeng Health Technology Co ltd, BEIJING FIRST AID CENTER filed Critical Beijing Yuanmeng Health Technology Co ltd
Priority to CN202310347582.XA priority Critical patent/CN116580844A/en
Publication of CN116580844A publication Critical patent/CN116580844A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/289 - Phrasal analysis, e.g. finite state techniques or chunking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, an apparatus, a device and a storage medium for processing injury event notification information. The method comprises the following steps: acquiring injury event notification information; extracting injury keywords from the injury event notification information; and determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy. The processing method provided by the embodiments of the application can judge the injury level of the injured person timely and accurately, providing a reference for the relevant personnel, so that symptomatic first-aid guidance content can be provided in time and the disability and mortality rate of injury events can be reduced.

Description

Method, device, equipment and storage medium for processing injury event notification information
Technical Field
The present application relates to the field of signal processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing injury event notification information.
Background
In daily life, when an injury accident such as a burn or scald occurs, timely and reasonable rescue of the injured person can greatly reduce the severity of the injury. In the related art, however, there is no technical scheme capable of processing injury event notification information timely and accurately. Such a technical scheme remains to be developed, so that a processing result can be provided to the relevant personnel for reference, enabling them to judge the degree of injury quickly and accurately, provide appropriate first-aid guidance content to the injured person in time, and reduce the disability and mortality rate of injury events.
Disclosure of Invention
The application aims to provide a method, an apparatus, a device and a storage medium for processing injury event notification information, so as to process such information timely and accurately and provide the processing result to the relevant personnel for reference, enabling them to judge the degree of injury quickly and accurately, which helps to provide appropriate first-aid guidance content to the injured person in time and to reduce the disability and mortality rate of injury events. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended neither to identify key or critical elements nor to delineate the scope of the embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description presented later.
According to an aspect of the embodiment of the present application, there is provided a method for processing injury event notification information, including:
acquiring injury event notification information;
extracting injury keywords from the injury event notification information;
and determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy.
In some embodiments of the application, the injury event notification information comprises one or more pictures, or the injury event notification information comprises injury event notification video information;
in the case that the injury event notification information includes one or more pictures, the extracting injury keywords from the injury event notification information includes extracting injury keywords from all the pictures;
in the case where the injury event notification information includes injury event notification video information, the extracting injury keywords from the injury event notification information includes:
splitting the injury event notification video information into a plurality of pictures, and extracting injury keywords from all the pictures;
the extracting the injury keywords from all the pictures comprises the following steps:
marking each picture according to a preset matching identification database to obtain a marked picture;
processing each marked picture by utilizing a pre-trained image recognition model to obtain image recognition data of each marked picture;
and extracting the injury keywords from the image identification data of all the marked pictures according to a preset keyword library.
In some embodiments of the present application, the marking each picture according to the preset matching identification database to obtain a marked picture includes:
marking each picture by using a feature extraction algorithm according to the preset matching identification database to obtain marked pictures.
In some embodiments of the present application, the processing each of the marked pictures using a pre-trained image recognition model to obtain image recognition data for each of the marked pictures includes:
performing injury position locking, injury area judgment and injury depth judgment on each marked picture by using the pre-trained image recognition model to obtain injury position data, injury area data and injury depth data of each marked picture.
In some embodiments of the application, the splitting the injury event notification video information into a plurality of pictures comprises:
reading a video file of the injury event notification video information;
calculating the md5 value of the video file;
extracting pictures of the corresponding frame segments in the video file according to the md5 value;
determining the number of captured pictures according to a preset frame interval;
and storing and outputting all the pictures according to a preset file name format for the stored pictures.
In some embodiments of the application, the injury event notification information comprises injury event notification voice information, and the extracting injury keywords from the injury event notification information comprises the following steps:
converting the injury event notification voice information into text information;
and extracting the injury keywords from the text information according to a preset keyword library.
In some embodiments of the present application, the converting the voice information into text information includes:
extracting voice features of the voice information;
acquiring a minimum speech unit sequence corresponding to the voice features through a pre-trained acoustic processing model;
and decoding the minimum speech unit sequence through a pre-trained language decoding model to obtain the text information corresponding to the voice information.
In some embodiments of the present application, the extracting the injury keywords from the text information according to the preset keyword library includes:
performing data cleaning on the text information to obtain cleaned text;
comparing the cleaned text with the preset keyword library;
and determining the words in the cleaned text that exist in the preset keyword library as the injury keywords according to the comparison result.
In some embodiments of the present application, the performing data cleaning on the text information to obtain cleaned text includes:
deleting punctuation marks in the text information through regular matching to obtain text without punctuation marks;
performing word segmentation on the text without punctuation marks to obtain a word segmentation processing result;
and deleting irrelevant words in the word segmentation processing result according to a preset rule to obtain the cleaned text.
In some embodiments of the present application, the comparing the cleaned text with the preset keyword library includes:
comparing each word in the cleaned text with the preset keyword library respectively.
In some embodiments of the application, the acquiring injury event notification voice information includes:
acquiring voice help-call audio information or video help-call audio information of the injury event.
In some embodiments of the present application, the determining and outputting the injury level corresponding to the injury event notification information according to all the keywords and a preset injury level determination policy based on the keywords includes:
determining the score of each keyword according to a preset keyword scoring rule;
calculating the sum of the scores of the keywords to obtain a total score;
and determining and outputting the injury level corresponding to the injury event notification information according to the total score and a preset injury level score interval.
In some embodiments of the application, the processing method further comprises:
and determining and outputting preset injury treatment prompt information corresponding to the injury level, wherein the prompt information comprises text, pictures, video and/or voice.
In some embodiments of the application, the processing method further comprises:
and if the injury level corresponding to the injury event notification information is not determined, determining and outputting a visual display signal according to the touch input signal.
According to another aspect of an embodiment of the present application, there is provided a processing apparatus for injury event notification information, including:
the notification information acquisition module is used for acquiring the notification information of the injury event;
the keyword extraction module is used for extracting injury keywords from the injury event notification information;
the injury level determining module is used for determining and outputting an injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset injury level determining strategy based on the injury keywords.
According to another aspect of an embodiment of the present application, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method according to any of the embodiments of the present application.
According to another aspect of an embodiment of the present application, there is provided a computer-readable storage medium having stored thereon a computer program to be executed by a processor to implement the method according to any of the embodiments of the present application.
One of the technical solutions provided in one aspect of the embodiments of the present application may include the following beneficial effects:
according to the processing method for the wounded event notification information, the wounded event notification information is obtained, the wounded condition keywords are extracted from the wounded event notification information, the wounded condition level corresponding to the wounded event notification information is determined and output according to all the extracted wounded condition keywords and the preset wounded condition level determination strategy based on the wounded condition keywords, timely and accurate processing of the wounded event notification information is achieved, and accordingly processing results are provided for relevant personnel to be referred to, so that the wounded degree of the relevant personnel can be judged quickly and accurately, appropriate first-aid guidance content can be provided for the wounded personnel timely, and disability and mortality of the wounded event can be reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic application scenario diagram of a method for processing injury event notification information according to an embodiment of the present application.
FIG. 2 is a flow chart illustrating a method of processing injury event notification information according to one embodiment of the application.
FIG. 3 illustrates a schematic diagram of speech-to-text conversion content in one example of the application.
Fig. 4 illustrates a flow chart of processing one or more pictures in one example of the application.
Fig. 5 shows a flow chart of processing video information in one example of the application.
FIG. 6 illustrates a flow chart of training an image recognition algorithm model in one example of the application.
Fig. 7 shows a schematic of an injury level output in one example of the application.
FIG. 8 illustrates a schematic diagram of an exemplary injury event processing flow.
FIG. 9 is a block diagram showing the structure of a processing apparatus for injury event notification information according to an embodiment of the application.
Fig. 10 shows a block diagram of an electronic device according to an embodiment of the application.
FIG. 11 shows a schematic diagram of a computer-readable storage medium of one embodiment of the application.
Detailed Description
The present application will be further described with reference to the drawings and the specific embodiments in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 is a schematic diagram of an application scenario of a method for processing injury event notification information according to an embodiment of the present application, in which a user terminal communicates with a server through a network. The server can extract injury keywords from the injury event notification information sent by the user terminal, and determine and output the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy; the user terminal receives the injury level. The user terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, a portable wearable device, or the like. The server may be implemented by a stand-alone server or by a server cluster formed by a plurality of servers.
Referring to fig. 2, an embodiment of the present application provides a method for processing injury event notification information, including steps S10 to S30:
s10, acquiring the injury event notification information.
The injury event notification information may include, for example, injury event notification voice information, injury event notification video information, and/or injury event notification image information. The injury event is, for example, a burn or scald event, a mechanical injury event, or the like.
In one embodiment, the injury event notification information comprises one or more pictures, or the injury event notification information comprises injury event notification video information.
In the case where the injury event notification information includes one or more pictures, the extracting injury keywords from the injury event notification information includes extracting injury keywords from all of the pictures.
In the case where the injury event notification information includes injury event notification video information, the extracting injury keywords from the injury event notification information includes: splitting the injury event notification video information into a plurality of pictures, and extracting injury keywords from all the pictures.
Illustratively, extracting the injury keywords from all the pictures includes: marking each picture according to a preset matching identification database to obtain a marked picture; processing each marked picture by utilizing a pre-trained image recognition model to obtain image recognition data of each marked picture; and extracting the injury keywords from the image identification data of all the marked pictures according to a preset keyword library.
In one embodiment, marking each picture according to a preset matching identification database to obtain a marked picture includes: marking each picture by using a feature extraction algorithm according to the preset matching identification database to obtain marked pictures.
In one embodiment, processing each marked picture using a pre-trained image recognition model to obtain image recognition data for each marked picture includes: performing injury position locking, injury area judgment and injury depth judgment on each marked picture by using the pre-trained image recognition model to obtain injury position data, injury area data and injury depth data of each marked picture.
In one embodiment, splitting the injury event notification video information into a plurality of pictures includes: reading the video file of the injury event notification video information; calculating the md5 value of the video file; extracting pictures of the corresponding frame segments in the video file according to the md5 value; determining the number of captured pictures according to a preset frame interval; and storing and outputting all the pictures according to a preset file name format for the stored pictures.
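As a non-authoritative illustration of this splitting step, the following Python sketch uses OpenCV to read the video, computes the md5 value of the file, keeps one frame per preset interval, and saves the frames under an assumed file-name format; the interval value and the naming pattern are assumptions, not values prescribed by the application.

```python
import cv2
import hashlib
from pathlib import Path

def split_video_into_pictures(video_path: str, out_dir: str, frame_interval: int = 30) -> list:
    """Split an injury-event notification video into pictures at a preset frame interval."""
    # Read the video file and compute its md5 value (used here to build unique file names).
    md5 = hashlib.md5(Path(video_path).read_bytes()).hexdigest()
    cap = cv2.VideoCapture(video_path)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Keep one picture per preset frame interval.
        if index % frame_interval == 0:
            # Assumed preset file-name format: <md5>_<frame index>.jpg
            name = str(Path(out_dir) / f"{md5}_{index:06d}.jpg")
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved
```

For example, `split_video_into_pictures("call.mp4", "frames", frame_interval=25)` would keep roughly one picture per second for a 25 fps video.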
In one embodiment, the acquiring of the injury event notification voice information includes: acquiring the voice help-call audio information or the video help-call audio information of the injury event.
When an injury accident occurs, the relevant personnel usually call a medical care department for help by voice over the telephone, or call for help by video through a smartphone. The injury event notification voice information can be obtained, for example, by recording the voice help call or the video help call to obtain the corresponding voice audio.
S20, extracting injury keywords from the injury event notification information.
In some embodiments, the injury event notification information includes injury event notification voice information, and the extracting injury keywords from the injury event notification information comprises the following steps: converting the injury event notification voice information into text information; and extracting the injury keywords from the text information according to a preset keyword library.
When an injury event occurs, extracting the key information in the voice data through voice recognition can provide a reference for staff to judge the injury level and severity of the patient, so that symptomatic first-aid guidance content can be provided in time and the disability and mortality rate can be reduced.
In some embodiments, the models used for speech-to-text conversion include a pre-trained acoustic processing model, a pre-trained language decoding model, and a decoder. Determining the pre-trained acoustic processing model may include: performing feature extraction on the voice corpus in a voice database, and training the acoustic model according to the extracted features. Determining the language decoding model may include: training the language decoding model according to the text corpus in a text database. The decoder is used to determine the phrase most probably corresponding to a given minimum speech unit sequence, from which the text information corresponding to the voice information can be obtained. Minimum speech units are divided according to the natural attributes of speech and analyzed according to the pronunciation actions within a syllable, where one pronunciation action corresponds to one minimum speech unit.
In one embodiment, the converting of the voice information into text information includes: extracting voice features of the voice information; acquiring a minimum speech unit sequence corresponding to the voice features through a pre-trained acoustic processing model; and decoding the minimum speech unit sequence through a pre-trained language decoding model to obtain the text information corresponding to the voice information. Referring to fig. 3, fig. 3 is a schematic diagram illustrating speech-to-text content in one example.
Illustratively, converting the voice information into text information may include: extracting the voice features of the voice information and inputting them into a pre-trained decoder; the decoder first obtains a minimum speech unit sequence corresponding to the voice features through the pre-trained acoustic processing model; the decoder then decodes the output of the acoustic processing model through the pre-trained language decoding model to obtain the text information corresponding to the voice information. The decoder may be, for example, a dynamic decoder that uses breadth-first search to generate multiple hypotheses simultaneously in the original search network, or a speech decoder such as a Viterbi decoder implemented via token passing, which relies on pruning algorithms to keep the search network from growing too large.
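A minimal sketch of this pipeline is given below, assuming Python with librosa for feature extraction; `acoustic_model.predict_units` and `language_decoder.decode` are hypothetical stand-ins for the pre-trained acoustic processing model and language decoding model described above, not real library APIs.

```python
import librosa

def speech_to_text(audio_path: str, acoustic_model, language_decoder) -> str:
    """Hypothetical speech-to-text pipeline following the structure described above."""
    # 1. Extract speech features (MFCCs are used here as one common choice).
    waveform, sample_rate = librosa.load(audio_path, sr=16000)
    features = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=13).T  # (frames, 13)
    # 2. The pre-trained acoustic processing model maps the features to a
    #    minimum-speech-unit (e.g. phone) sequence. `predict_units` is hypothetical.
    unit_sequence = acoustic_model.predict_units(features)
    # 3. The pre-trained language decoding model turns the unit sequence into text.
    #    `decode` is hypothetical; real decoders use e.g. Viterbi/token-passing search.
    return language_decoder.decode(unit_sequence)
```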
The acoustic processing model may include an artificial neural network model and a hidden Markov model, where the artificial neural network model provides acoustic modeling support to the hidden Markov model; the granularity of the acoustic modeling support may be words, syllables, minimum speech units, or the like. The hidden Markov model may determine the minimum speech unit sequence based on the acoustic modeling support provided by the artificial neural network model. The artificial neural network model may include an input layer, hidden layers, and an output layer; the hidden layers may include a feedforward neural network layer and a self-attention neural network layer.
The self-attention neural network layer applies self-attention to the voice features; in the acoustic modeling process, higher attention is given to the voice features related to the acoustic modeling unit and attention to unrelated voice features is reduced, which can improve output accuracy. In addition, the self-attention neural network layer does not depend on relations between different moments, so it can compute in parallel, which improves the operating efficiency of the neural network model and hence the speech recognition efficiency.
In some embodiments, the hidden layers may include at least one hidden layer pair, and one hidden layer pair may include one of the feedforward neural network layers and one of the self-attention neural network layers connected to each other.
The feedforward neural network layer may include a time delay neural network (TDNN, Time Delay Neural Network) layer or a convolutional neural network (CNN, Convolutional Neural Network) layer.
The time delay neural network layer can take into account continuous input information at a plurality of moments, so context information can be used in the computation, improving the accuracy of the output text information.
The fusion degree between the feedforward neural network layer and the self-attention neural network layer can be improved by the plurality of hidden layer pairs, and then the output accuracy can be improved.
In some embodiments, the output layers may include a first output layer and a second output layer; the first output layer is arranged after the last hidden layer pair, and the second output layer is arranged after the middle hidden layer pair; the first output layer is connected to the hidden Markov model and outputs its processing result to it.
During training of the artificial neural network model, the error information of the model is propagated backwards, and the error usually becomes smaller and smaller during this back-propagation, causing the vanishing gradient problem. Arranging the second output layer after the middle hidden layer pair increases the error gradient, which alleviates vanishing gradients and improves the accuracy of the model.
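For concreteness, a minimal PyTorch sketch of such a network follows; the layer sizes, the use of a 1-D convolution as the TDNN-style layer, and the exact placement of the auxiliary output are illustrative assumptions rather than the application's fixed architecture.

```python
import torch
from torch import nn

class HiddenLayerPair(nn.Module):
    """One feedforward (TDNN-style) layer followed by one self-attention layer."""
    def __init__(self, dim: int, context: int = 3, heads: int = 4):
        super().__init__()
        # A 1-D convolution over time acts as a time delay neural network layer.
        self.tdnn = nn.Conv1d(dim, dim, kernel_size=context, padding=context // 2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, dim)
        x = torch.relu(self.tdnn(x.transpose(1, 2))).transpose(1, 2)
        attn_out, _ = self.attn(x, x, x)  # self-attention over the feature sequence
        return x + attn_out

class AcousticNet(nn.Module):
    """Stacked hidden layer pairs with a first (final) and second (intermediate) output layer."""
    def __init__(self, dim: int, units: int, pairs: int = 4):
        super().__init__()
        self.pairs = nn.ModuleList([HiddenLayerPair(dim) for _ in range(pairs)])
        self.first_output = nn.Linear(dim, units)   # after the last hidden layer pair
        self.second_output = nn.Linear(dim, units)  # after the middle pair

    def forward(self, x):
        mid, aux = len(self.pairs) // 2, None
        for i, pair in enumerate(self.pairs):
            x = pair(x)
            if i == mid - 1:
                aux = self.second_output(x)  # auxiliary logits, used only in training
        return self.first_output(x), aux
```

During training, a loss computed on the auxiliary logits from `second_output` is added to the main loss, injecting gradient in the middle of the stack; at inference only the first output is used.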
In one embodiment, the extracting of keywords from the text information according to a preset keyword library includes: performing data cleaning on the text information to obtain cleaned text; comparing the cleaned text with the preset keyword library; and determining the words in the cleaned text that exist in the preset keyword library as keywords according to the comparison result.
In one embodiment, the step of performing data cleaning on the text information to obtain cleaned text includes: deleting punctuation marks in the text information through regular matching to obtain text without punctuation marks; performing word segmentation on the text without punctuation marks to obtain a word segmentation processing result; and deleting irrelevant words in the word segmentation processing result to obtain the cleaned text.
The irrelevant words may be, for example, meaningless function words, and the preset rule may be to delete such function words, for example words like "of" and "also". The preset rules may be set according to the needs of the practical application and are not limited herein.
The word segmentation process divides text into word units (tokens) and is a sequence labeling task. It may be implemented, for example, by a maximum matching algorithm; maximum matching algorithms mainly include forward maximum matching, reverse maximum matching, bidirectional matching and the like. The maximum matching algorithm cuts out a candidate string and compares it with a dictionary word library: if the string is a word, it is recorded; otherwise a character is removed (or added) and the comparison continues, until only a single character remains, at which point the comparison terminates.
For example, for the input sentence "Hello, I need help, my face was scalded, and the surface is now red and swollen", word segmentation is performed and the output result is: "hello", "I", "need", "help", "my", "face", "scalded", "now", "surface", "red and swollen".
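A compact sketch of this cleaning, segmentation and matching chain is shown below in Python; forward maximum matching is used since the text names it as one option, it operates over characters as in Chinese word segmentation, and the dictionary, stop-word list and keyword library entries are illustrative assumptions only.

```python
import re

# Illustrative entries only - the real preset libraries are assumed to be much larger.
DICTIONARY = {"你好", "我的", "脸部", "烫伤", "现在", "表面", "红肿"}  # segmentation word library
STOP_WORDS = {"你好", "我的", "现在"}                                 # "irrelevant words" to delete
KEYWORD_LIBRARY = {"脸部", "烫伤", "红肿"}                            # preset injury keyword library

def forward_maximum_match(text: str, dictionary: set, max_len: int = 4) -> list:
    """Forward maximum matching: at each position take the longest dictionary word."""
    tokens, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if size == 1 or piece in dictionary:  # fall back to a single character
                tokens.append(piece)
                i += size
                break
    return tokens

def extract_injury_keywords(sentence: str) -> list:
    # 1. Delete punctuation marks through regular matching.
    cleaned = re.sub(r"[^\w]", "", sentence)
    # 2. Word segmentation on the punctuation-free text.
    tokens = forward_maximum_match(cleaned, DICTIONARY)
    # 3. Delete irrelevant words, then keep words found in the preset keyword library.
    return [t for t in tokens if t not in STOP_WORDS and t in KEYWORD_LIBRARY]

print(extract_injury_keywords("你好，我的脸部烫伤，现在表面红肿。"))  # ['脸部', '烫伤', '红肿']
```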
In some embodiments, after the voice call content has been recognized as text content, keywords are extracted from the text using the NLP Chinese word segmentation maximum matching algorithm: the word libraries are matched according to the word segmentation, and injury-related content words in the text are recognized. According to the matched speech-to-text content, the NLP Chinese word segmentation maximum matching algorithm is likewise used to match a human body position information base and judge the injured position of the injured person.
S30, determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset injury level determination strategy based on the injury keywords.
In the field of injury identification, established injury grading methods, such as the new rule of nines and the three-degree four-grade method, are commonly accepted in the industry. The specifics of the new rule of nines and the three-degree four-grade method are described later in the examples. It should be emphasized that the injury level determination strategy and its rules are not simple manual stipulations: they are determined according to objective conditions of the injury, such as the burn and scald area, redness and swelling, and the degree of bleeding, and therefore constitute a technical scheme conforming to natural law.
In a specific example, in the case where the injury event notification information includes one or more pictures, referring to fig. 4, the injury level is determined from the image information uploaded by the caller by identifying the burn and scald position and depth on the human body in the image. When the injury event notification information includes injury event notification video information, referring to the flowchart shown in fig. 5, the injury level and the burn and scald position of the injured person are determined by recognizing the video dialogue content and capturing key frames of the video, using content keywords together with key frame image recognition. Extracting the injury keywords from all the pictures can be realized through a pre-trained image recognition model. First, an algorithm library is created: referring to fig. 6, different burn and scald pictures are collected for algorithm training, a large number of burn and scald injury pictures are recognized, and burn and scald cases are extracted, mainly the burn and scald grades and depths corresponding to the clinical manifestations of the burn surface, generating a model library based on these clinical manifestations. When a caller submits a burn and scald picture, the picture is processed, the burn and scald position is marked using a feature extraction technique, and database matching is performed according to the surface manifestation to give depth data. Burn and scald marks on the picture surface are identified and the area is calculated. The data are then integrated, and the burn and scald grade is obtained from the depth and area given by the matched model library.
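The picture-side flow just described can be summarized by the following Python sketch; every call on `matching_db` and `recognition_model` is a hypothetical placeholder standing in for the preset matching identification database and the pre-trained image recognition model, since no concrete API is fixed here.

```python
def assess_burn_picture(picture, recognition_model, matching_db, keyword_library):
    """Hypothetical picture-side pipeline mirroring the flow described above."""
    # 1. Mark the burn/scald region via feature extraction against the matching database.
    marked = matching_db.mark_injury_region(picture)    # hypothetical call
    # 2. The pre-trained image recognition model yields position, area and depth data.
    position = recognition_model.lock_position(marked)  # hypothetical call
    area = recognition_model.estimate_area(marked)      # share of body surface
    depth = recognition_model.estimate_depth(marked)    # matched to the model library
    # 3. Map the recognition data to injury keywords from the preset keyword library.
    return [kw for kw in keyword_library if kw.matches(position, area, depth)]
```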
In one embodiment, the keyword-based preset injury level determination strategy may include, for example: each injury level corresponds to a preset total-score interval for the keywords. Determining and outputting the injury level corresponding to the injury event notification information according to all keywords and the preset keyword-based injury level determination strategy includes: determining the score of each keyword according to a preset keyword scoring rule; calculating the sum of the scores of the keywords to obtain a total score; and determining and outputting the injury level corresponding to the injury event notification information according to the total score and the preset injury level score intervals. The preset keyword scoring rule may be, for example, that a score is preset for each keyword. Specifically, each injury level corresponds to an injury level score interval, and the injury level corresponding to the injury event notification information can be determined according to the score interval into which the total score of the keywords falls. For example, if the score interval corresponding to a moderate injury level is [60, 80), then when the total score of the keywords falls within [60, 80), the corresponding injury level is determined to be moderate.
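A minimal sketch of this scoring strategy follows; only the moderate interval [60, 80) comes from the text above, and all keyword scores and the remaining intervals are illustrative assumptions.

```python
# Illustrative preset scoring rules; the real values would live in the preset database.
KEYWORD_SCORES = {"face": 30, "one palm size": 15, "surface redness": 10, "hoarseness": 20}
LEVEL_INTERVALS = [      # [low, high) -> injury level; only [60, 80) is from the text
    (0, 60, "mild"),
    (60, 80, "moderate"),
    (80, 95, "severe"),
    (95, 1000, "critical"),
]

def determine_injury_level(keywords):
    # Sum the preset score of every extracted injury keyword.
    total = sum(KEYWORD_SCORES.get(k, 0) for k in keywords)
    # Return the level whose preset score interval contains the total score.
    for low, high, level in LEVEL_INTERVALS:
        if low <= total < high:
            return level
    return "undetermined"

print(determine_injury_level(["face", "one palm size", "surface redness", "hoarseness"]))
# -> moderate (total score 75 falls in [60, 80))
```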
In some specific examples, taking burn and scald events as an example, the new rule of nines evaluation algorithm can be applied to the keyword content: head and neck = 1 × 9%; torso = 3 × 9%; both upper limbs = 2 × 9%; both lower limbs = 5 × 9% + 1%, locking the burn and scald position and area. If the burn and scald area is small, the number of palm-sized areas covered by the burn can be identified from the voice content: since one palm accounts for about 1% of the body surface area, if the caller describes n palm sizes, the calculation formula is burn and scald area = n × 1%, i.e. the burn and scald area is n% of the body surface.
Specifically, for judging the burn and scald depth, when the voice call content is recognized, the caller's description of the burn and scald surface is identified and compared against the identification library using the NLP Chinese word segmentation maximum matching algorithm:
(1) Local redness and swelling, pain and a burning sensation, slightly increased skin temperature, and no blisters.
(2) Severe pain and hyperesthesia; local redness and swelling; blisters of varying sizes containing yellow or reddish plasma-like liquid or a protein-coagulated jelly. After the blister skin is removed, the wound surface is flushed red and a dilated, congested capillary network can be seen.
(3) Local swelling and dulled pain; after the epidermis is removed, the wound surface is slightly wet and reddish or mottled red and white, with reticular embolized blood vessels; it feels tougher to the touch, its temperature is lower, and pulling out a hair is painful.
(4) Skin pain disappears; the skin is inelastic, dry and without blisters, leather-like, waxy white, charred yellow or even carbonized; needling and hair removal are painless, and a dendritic network of coarsely embolized blood vessels can be seen.
By grade comparison against the identification library, the burn and scald depth is judged to be (1).
For example, in one example, the conclusion that the burn and scald grade is severe is drawn from: burn and scald grade = [face (position)] + [1 palm size (area)] + [surface redness (depth)] + [hoarseness].
The judging rules specified in the burn and scald grade library are as follows:
Mild: total second-degree burn and scald area less than 10%;
Moderate: total second-degree burn and scald area between 11% and 30%, or third-degree burn and scald area less than 10%;
Severe: total area between 31% and 50%, or third-degree burn and scald area between 11% and 20%, or burn and scald area less than 30% but with one of the following conditions: (1) shock or moderate shock; (2) moderate or severe respiratory tract burns and scalds (burns of the respiratory tract at or below the throat);
Critical: total area greater than 50%, or third-degree burn and scald area of 20% or more.
Guidance content is then recommended: after the area and depth have been judged, the guidance corresponding to the database entry is displayed.
The dictionary library is a preset database, mainly comprising a human body part library, a burn and scald area algorithm library (new rule of nines), a burn and scald depth algorithm library (three-degree four-grade method) and a burn and scald grade library.
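The grade rules above translate directly into code; the sketch below is a non-authoritative rendering under stated assumptions (the boundary at exactly 20% third-degree area is ambiguous in the rules and resolved here in favour of the critical grade, and the shock/airway conditions are applied whenever present).

```python
def burn_grade(area_ii: float, area_iii: float,
               shock: bool = False, airway_burn: bool = False) -> str:
    """Map burn/scald areas (percent of body surface) to a grade per the rules above."""
    total = area_ii + area_iii
    if total > 50 or area_iii >= 20:
        return "critical"   # total area > 50%, or third-degree area 20% or more
    if 31 <= total <= 50 or 11 <= area_iii < 20 or shock or airway_burn:
        return "severe"     # includes smaller burns with shock or airway involvement
    if 11 <= area_ii <= 30 or 0 < area_iii < 10:
        return "moderate"
    return "mild"           # second-degree area below 10%
```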
In one embodiment, the keyword-based preset injury level determination policy may be, for example, the rule corresponding to the new rule of nines, as shown in Table 1.
TABLE 1
Burns and scalds are injuries caused by factors such as flame, hot water and hot vapor acting on the human body.
Estimating the burn and scald area means estimating the percentage of the burned skin area relative to the whole body surface area. In the rule corresponding to the new rule of nines, the body surface area is divided into 11 equal parts of 9% each, plus 1% to make up 100% of the total surface area: head and neck = 1 × 9%; torso = 3 × 9%; both upper limbs = 2 × 9%; both lower limbs = 5 × 9% + 1%; in total, 11 × 9% + 1% (perineum).
Women and children differ in area estimation. Typically, the buttocks and the feet of an adult female each account for 6%; children have a relatively large head and small lower limbs, with head and neck area = [9 + (12 - age)]% and both lower limbs area = [46 - (12 - age)]%.
The rule of nines for burn and scald area is the evaluation method adopted in China: the body surface is divided into 11 parts of 9% each and 1 part of 1%.
The measuring method is as follows: the palm of the burn and scald patient, with fingers together, accounts for about 1% of the body surface area regardless of sex and age. If the palm size of an on-site person or of the caller is similar to that of the patient, their palm may be used for the estimation. This method can supplement the rule of nines and makes calculating small-area burns and scalds more convenient.
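The following sketch puts the new-rule-of-nines figures and the palm method into runnable form; the region names are illustrative, and the pediatric formulas are the ones given in the text above.

```python
# Adult body-region shares under the new rule of nines: 11 parts of 9% plus 1% (perineum).
RULE_OF_NINES = {
    "head_and_neck": 1 * 9.0,           # 9%
    "torso": 3 * 9.0,                   # 27%
    "both_upper_limbs": 2 * 9.0,        # 18%
    "both_lower_limbs": 5 * 9.0 + 1.0,  # 46%, including the 1% perineum share
}

def child_shares(age: int) -> dict:
    """Pediatric adjustment from the text: larger head, smaller lower limbs."""
    shares = dict(RULE_OF_NINES)
    shares["head_and_neck"] = 9 + (12 - age)
    shares["both_lower_limbs"] = 46 - (12 - age)
    return shares

def palm_area_percent(n_palms: float) -> float:
    """Palm method: each patient palm (fingers together) is about 1% of body surface."""
    return n_palms * 1.0

# Example: a 4-year-old's head and neck count for 17%, and a burn the size of
# two palms is estimated at 2% of body surface area.
assert child_shares(4)["head_and_neck"] == 17
assert palm_area_percent(2) == 2.0
```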
The preset keyword-based injury level determination policy may also be, for example, the three-degree four-grade injury determination rule shown in Table 2.
TABLE 2
The determined injury level can be output in text form for reference by the relevant staff. In addition, the text output can be supplemented with a visual presentation, so that the relevant staff can grasp the injury level information more intuitively.
The visual presentation may display a human model image of the injured person on a display screen; the model image may be shown in one of three forms (child, adult female or adult male) according to the age of the injured person.
In some embodiments, the injury event notification information includes injury event notification video information and/or injury event notification image information; the extracting of injury keywords from the injury event notification information comprises: performing image processing on the injury event notification video information and/or the injury event notification image information, and extracting the injury keywords.
Specifically, performing image processing on the injury event notification image information and extracting injury keywords may include: extracting keywords from the injury event notification image information using an image keyword extraction network to obtain a plurality of keywords corresponding to the image information. The image keyword extraction network may be a multi-label classification network obtained through sample image training; each sample image is annotated with a description sentence, and its keyword set comprises a plurality of keywords. The multi-label classification network is trained with the keywords corresponding to the sample image as supervision information, combined with the predicted keywords, to obtain the trained image keyword extraction network.
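One plausible shape for such a network is sketched below in PyTorch; the backbone, feature dimension and loss choice are assumptions (binary cross-entropy is the standard fit for 0/1 multi-label targets), not details fixed by the text.

```python
import torch
from torch import nn

class ImageKeywordExtractor(nn.Module):
    """Multi-label classification network: one independent score per keyword."""
    def __init__(self, backbone: nn.Module, feature_dim: int, num_keywords: int):
        super().__init__()
        self.backbone = backbone                  # any image feature extractor
        self.head = nn.Linear(feature_dim, num_keywords)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(images))  # one logit per keyword

# Supervision: the keyword set annotated on each sample image becomes a 0/1 target
# vector, so binary cross-entropy over the logits trains the network.
loss_fn = nn.BCEWithLogitsLoss()
```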
Illustratively, performing image processing on the injury event notification video information and extracting injury keywords may include: extracting key images from the injury event notification video information; and performing image processing on the key images to extract the injury keywords. The image processing of the key images may include: extracting keywords from the key images using the image keyword extraction network to obtain a plurality of corresponding keywords.
In some examples, when help is sought through a voice call, a video call or photos uploaded by the caller, keywords can be identified from the voice content, the video call content or the images; the identified keywords are compared with the preset keyword database to determine the injured position, injured area, injured depth and injury level, the injured position is then automatically outlined or selected on the visually displayed human model image, the area of the injured position is displayed, and the corresponding preset guidance suggestions are given according to the injury. Specifically, corresponding keywords may be set in advance for each part of the human model image and compared with the identified keywords, and the corresponding part on the human model image is determined and displayed.
In some embodiments, the processing method may further include:
s40, determining and outputting preset injury treatment prompt information corresponding to the injury level.
For example, after determining the injury level of the injured person, the preset injury processing prompt information corresponding to the current injury level may be found according to the preset injury processing prompt information corresponding to each injury level, and then the preset injury processing prompt information may be output, where the prompt information may include text, pictures, video and/or voice.
The preset injury treatment prompt information can be output in text form for reference by the relevant personnel. In addition, when the preset injury treatment prompt information is output in text form, a visual presentation can be added so that the relevant personnel can grasp it more intuitively. Referring to fig. 7, fig. 7 illustrates an example injury level output schematic.
In some embodiments, the processing method may further include:
and S50, if the injury level corresponding to the injury event notification information is not determined according to all the extracted injury keywords and a preset injury level determination strategy based on the injury keywords, determining and outputting a visual display signal according to a touch input signal.
The visual display signal may include, for example, a human body model image on which an injured part corresponding to the touch input signal is displayed, and text description information including text describing the injured part.
Specifically, the touch input signal may be a touch signal generated when the relevant staff operate the touch screen manually, or manually entered text information. If the injury level corresponding to the injury event notification information cannot be determined from all the extracted injury keywords and the preset keyword-based injury level determination strategy, the relevant staff can judge the injury manually and enter the result through manual signal input. The information to be displayed is determined according to this manually entered injury judgment and then output. For the visual display, reference may be made to the example shown in fig. 7, which shows a human model image and text description information: the injured part is marked on the human model image, and the text description information describes the injury.
For example, when the injury level cannot be recognized automatically through voice, video, pictures or the like, the dispatcher can manually select the injured part by clicking the body part directly on the human model image on the touch display screen. After the corresponding part is selected, the area ratio of that part (as a share of the total body surface) is displayed by default, and the number can be clicked to increase or decrease the injured area.
Fig. 8 is a schematic diagram of an exemplary processing flow for an injury event according to the present application, here a burn and scald event.
According to the method for processing injury event notification information provided by the embodiments of the application, the injury event notification information is acquired, injury keywords are extracted from it, and the injury level corresponding to the notification information is determined and output according to all the extracted injury keywords and a preset keyword-based injury level determination strategy. The injury event notification information is thereby processed timely and accurately, and the processing result is provided to the relevant personnel for reference, so that they can judge the degree of injury quickly and accurately, appropriate first-aid guidance content can be provided to the injured person in time, and the disability and mortality rate of injury events can be reduced.
Referring to fig. 9, another embodiment of the present application provides a device for processing injury event notification information, which may include:
the notification information acquisition module is used for acquiring the notification information of the injury event;
the keyword extraction module is used for extracting injury keywords from the injury event notification information;
the injury level determining module is used for determining and outputting an injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset injury level determining strategy based on the injury keywords.
In some embodiments, the injury event notification information comprises injury event notification video information, or the injury event notification information comprises one or more pictures;
in the case that the injury event notification information includes one or more pictures, the extracting injury keywords from the injury event notification information includes extracting injury keywords from all the pictures;
in the case where the injury event notification information includes injury event notification video information, the extracting injury keywords from the injury event notification information includes:
splitting the injury event notification video information into a plurality of pictures, and extracting injury keywords from all the pictures;
the extracting the injury keywords from all the pictures comprises the following steps:
marking each picture according to a preset matching identification database to obtain a marked picture;
processing each marked picture by utilizing a pre-trained image recognition model to obtain image recognition data of each marked picture;
and extracting the injury keywords from the image identification data of all the marked pictures according to a preset keyword library.
In some embodiments, the marking each picture according to the preset matching identification database to obtain a marked picture includes:
marking each picture by using a feature extraction algorithm according to the preset matching identification database to obtain marked pictures.
In some embodiments, the processing each of the marked pictures using a pre-trained image recognition model to obtain image recognition data for each of the marked pictures includes:
performing injury position locking, injury area judgment and injury depth judgment on each marked picture by using the pre-trained image recognition model to obtain injury position data, injury area data and injury depth data of each marked picture.
In some embodiments, the splitting the injury event notification video information into a plurality of pictures comprises: reading the video file of the injury event notification video information; calculating the md5 value of the video file; extracting pictures of the corresponding frame segments in the video file according to the md5 value; determining the number of captured pictures according to a preset frame interval; and storing and outputting all the pictures according to a preset file name format for the stored pictures.
In some embodiments, the injury event notification information includes injury event notification voice information; the keyword extraction module comprises:
the text information conversion unit is used for converting the injury event notification voice information into text information;
and the injury key word extraction unit is used for extracting injury key words from the text information according to a preset key word library.
In some embodiments, the text information converting unit may include:
a voice feature extraction subunit, used for extracting voice features of the voice information;
an acquisition subunit, used for acquiring a minimum speech unit sequence corresponding to the voice features through a pre-trained acoustic processing model;
and a decoding subunit, used for decoding the minimum speech unit sequence through a pre-trained language decoding model to obtain the text information corresponding to the voice information.
In some embodiments, the injury keyword extraction unit may include:
a cleaning subunit, used for performing data cleaning on the text information to obtain cleaned text;
a comparison subunit, used for comparing the cleaned text with the preset keyword library;
and a keyword determining subunit, used for determining, according to the comparison result, the words in the cleaned text that exist in the preset keyword library as the injury keywords.
In some embodiments, the performing data cleaning on the text information to obtain cleaned text comprises: deleting punctuation marks in the text information through regular-expression matching to obtain text without punctuation marks; performing word segmentation on the text without punctuation marks to obtain a word segmentation result; and deleting irrelevant words from the word segmentation result to obtain the cleaned text.
In some embodiments, the comparing the cleaned text with the preset keyword library comprises: comparing each word in the cleaned text with the preset keyword library respectively.
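Since the notification text is Chinese, the cleaning and comparison steps can be sketched with a regular expression for punctuation and a common segmentation library such as jieba; the keyword library and irrelevant-word list below are assumptions of this sketch, not the presets of the disclosure.

```python
import re

import jieba  # a common Chinese word-segmentation library (pip install jieba)

PRESET_KEYWORDS = {"出血", "骨折", "烧伤"}   # assumed preset keyword library
IRRELEVANT_WORDS = {"的", "了", "我们"}      # assumed irrelevant-word list

def extract_keywords_from_text(text: str) -> list:
    no_punct = re.sub(r"[^\w\s]", "", text)   # delete punctuation via regular matching
    words = jieba.lcut(no_punct)              # word segmentation
    cleaned = [w for w in words if w.strip() and w not in IRRELEVANT_WORDS]
    # compare each cleaned word with the preset keyword library
    return [w for w in cleaned if w in PRESET_KEYWORDS]

print(extract_keywords_from_text("伤者腿部骨折，大量出血！"))  # -> ['骨折', '出血']
```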
In some embodiments, the obtaining injury event notification voice information comprises: acquiring the audio information of a voice help call or a video help call of the injury event.
In some embodiments, the injury level determining module may include:
the score determining unit is used for determining the score of each injury keyword according to a preset keyword scoring rule;
the calculating unit is used for calculating the sum of the scores of the injury keywords to obtain a total score;
and the injury level determining unit is used for determining and outputting the injury level corresponding to the injury event notification information according to the total score and a preset injury level score interval.
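As an illustration of this module, the sketch below sums per-keyword scores and maps the total into level intervals; the scores, interval bounds, and level names are invented for the example, since the application leaves the scoring rule and intervals as presets.

```python
# Assumed scoring rule and level intervals; the real presets are configurable.
KEYWORD_SCORES = {"bleeding": 3, "fracture": 5, "amputation": 9}
LEVEL_INTERVALS = [(0, 4, "minor"), (4, 9, "moderate"), (9, float("inf"), "severe")]

def determine_injury_level(keywords):
    total = sum(KEYWORD_SCORES.get(k, 0) for k in keywords)  # total score
    for low, high, level in LEVEL_INTERVALS:
        if low <= total < high:                              # preset score interval
            return level, total

print(determine_injury_level(["bleeding", "fracture"]))  # -> ('moderate', 8)
```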
In some embodiments, the injury event notification information includes injury event notification video information and/or injury event notification image information; the extracting injury keywords from the injury event notification information comprises: performing image processing on the injury event notification video information and/or the injury event notification image information to extract the injury keywords.
In some embodiments, the processing device of the injury event notification information may further include an injury treatment prompt determining module, which is used for determining and outputting preset injury treatment prompt information corresponding to the injury level, where the prompt information includes text, pictures, video and/or voice.
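The prompt lookup itself can be as simple as a table keyed by injury level; the levels, prompt texts, and file names below are placeholders, and in practice each entry could carry text, picture, video and/or voice resources.

```python
# Assumed mapping from injury level to preset treatment prompt information.
TREATMENT_PROMPTS = {
    "minor":    {"text": "Clean and dress the wound.", "video": None},
    "moderate": {"text": "Immobilize the limb and seek medical care.", "video": "splint.mp4"},
    "severe":   {"text": "Apply direct pressure to stop bleeding; call emergency services.",
                 "video": "hemorrhage_control.mp4"},
}

def get_treatment_prompt(injury_level: str) -> dict:
    return TREATMENT_PROMPTS[injury_level]  # may combine text, picture, video, voice
```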
According to the processing device for injury event notification information provided by this embodiment, the injury event notification information is obtained, injury keywords are extracted from it, and the injury level corresponding to the information is determined and output according to all the extracted injury keywords and a preset keyword-based injury level determination strategy. Timely and accurate processing of injury event notification information is thereby achieved, and the processing result is provided for relevant personnel to consult, so that the degree of injury can be judged quickly and accurately, appropriate first-aid guidance can be provided to the injured in time, and the disability and mortality rates of injury events can be reduced.
Another embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method according to any one of the embodiments.
Referring to fig. 10, the electronic device 10 may include: a processor 100, a memory 101, a bus 102 and a communication interface 103, where the processor 100, the communication interface 103 and the memory 101 are connected by the bus 102; the memory 101 stores a computer program executable on the processor 100, which, when executed by the processor 100, performs the method provided by any of the foregoing embodiments of the application.
The memory 101 may include a high-speed random access memory (RAM: Random Access Memory), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between this device and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
The bus 102 may be an ISA bus, a PCI bus, an EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 101 is configured to store a program, and the processor 100 executes the program after receiving an execution instruction; the method disclosed in any of the foregoing embodiments of the present application may be applied to the processor 100 or implemented by the processor 100.
The processor 100 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, any of which can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the above method.
Since the electronic device provided by this embodiment of the application arises from the same inventive concept as the method provided above, it has the same beneficial effects as the method it adopts, runs or implements.
Another embodiment of the present application provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the method according to any of the above embodiments. Referring to fig. 11, the computer readable storage medium is shown as an optical disc 20 having a computer program (i.e., a program product) stored thereon which, when executed by a processor, performs the method provided by any of the embodiments described above.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
Since the computer-readable storage medium provided by the above embodiment arises from the same inventive concept as the method provided by the embodiments of the present application, it has the same beneficial effects as the method adopted, run or implemented by the application program stored on it.
It should be noted that:
the term "module" is not intended to be limited to a particular physical form. Depending on the particular application, modules may be implemented as hardware, firmware, software, and/or combinations thereof. Furthermore, different modules may share common components or even be implemented by the same components. There may or may not be clear boundaries between different modules.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may also be used with the examples herein, and the structure required to construct such devices is apparent from the description above. In addition, the present application is not directed to any particular programming language; the teachings of the present application may be implemented in a variety of programming languages, and the description of specific languages above is provided to disclose the enablement and best mode of the present application.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least a portion of the sub-steps or stages of other steps.
The foregoing examples merely illustrate embodiments of the application in some detail and are not to be construed as limiting its scope. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be determined by the appended claims.

Claims (17)

1. A method for processing injury event notification information, comprising:
acquiring injury event notification information;
extracting injury keywords from the injury event notification information;
and determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy.
2. The method of claim 1, wherein the injury event notification information comprises one or more pictures or the injury event notification information comprises injury event notification video information;
in the case that the injury event notification information includes one or more pictures, the extracting injury keywords from the injury event notification information includes extracting injury keywords from all the pictures;
in the case where the injury event notification information includes injury event notification video information, the extracting injury keywords from the injury event notification information comprises:
splitting the injury event notification video information into a plurality of pictures, and extracting injury keywords from all the pictures;
the extracting injury keywords from all the pictures comprises:
marking each picture according to a preset matching identification database to obtain a marked picture;
processing each marked picture by utilizing a pre-trained image recognition model to obtain image recognition data of each marked picture;
and extracting the injury keywords from the image recognition data of all the marked pictures according to a preset keyword library.
3. The method according to claim 2, wherein the marking each picture according to the preset matching identification database to obtain marked pictures comprises:
marking each picture by using a character feature extraction algorithm according to the preset matching identification database to obtain the marked pictures.
4. The method of claim 2, wherein the processing each marked picture by using a pre-trained image recognition model to obtain image recognition data of each marked picture comprises:
performing injury position locking, injury area judgment and injury depth judgment on each marked picture by using the pre-trained image recognition model to obtain injury position data, injury area data and injury depth data of each marked picture.
5. The method of claim 2, wherein splitting the injury event notification video information into a plurality of pictures comprises:
reading a video file of the injury event notification video information;
calculating the md5 value of the video file;
extracting pictures of the corresponding frame segments in the video file according to the md5 value;
obtaining the number of intercepted pictures according to a preset frame interval;
and storing and outputting all the pictures according to a preset picture file name format.
6. The method of claim 1, wherein the injury event notification information comprises injury event notification voice information; the extracting injury keywords from the injury event notification information comprises:
converting the injury event notification voice information into text information;
and extracting injury keywords from the text information according to a preset keyword library.
7. The method of claim 6, wherein said converting said voice information to text information comprises:
extracting the voice features of the voice information;
acquiring the minimum speech unit sequence corresponding to the voice features through a pre-trained acoustic processing model;
and decoding the minimum speech unit sequence through a pre-trained language decoding model to obtain the text information corresponding to the voice information.
8. The method of claim 6, wherein the extracting the injury keywords from the text information according to the preset keyword library comprises:
performing data cleaning on the text information to obtain cleaned text;
comparing the cleaned text with the preset keyword library;
and determining, according to the comparison result, the words in the cleaned text that exist in the preset keyword library as the injury keywords.
9. The method of claim 8, wherein the performing data cleaning on the text information to obtain cleaned text comprises:
deleting punctuation marks in the text information through regular-expression matching to obtain text without punctuation marks;
performing word segmentation on the text without punctuation marks to obtain a word segmentation result;
and deleting irrelevant words from the word segmentation result to obtain the cleaned text.
10. The method of claim 9, wherein the comparing the cleaned text with the preset keyword library comprises:
comparing each word in the cleaned text with the preset keyword library respectively.
11. The method of claim 6, wherein the obtaining injury event notification voice information comprises:
acquiring the audio information of a voice help call or a video help call of the injury event.
12. The method of claim 1, wherein the determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy comprises:
determining the score of each injury keyword according to a preset keyword scoring rule;
calculating the sum of the scores of the keywords to obtain a total score;
and determining and outputting the injury level corresponding to the injury event notification information according to the total score and a preset injury level score interval.
13. The method of claim 1, wherein the processing method further comprises:
and determining and outputting preset injury treatment prompt information corresponding to the injury level, wherein the prompt information comprises text, pictures, video and/or voice.
14. The method of claim 1, wherein the processing method further comprises:
and if the injury level corresponding to the injury event notification information is not determined, determining and outputting a visual display signal according to a touch input signal.
15. A device for processing injury event notification information, comprising:
the notification information acquisition module is used for acquiring the notification information of the injury event;
the keyword extraction module is used for extracting injury keywords from the injury event notification information;
and the injury level determining module is used for determining and outputting the injury level corresponding to the injury event notification information according to all the extracted injury keywords and a preset keyword-based injury level determination strategy.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the computer program to implement the method of any one of claims 1-14.
17. A computer readable storage medium having stored thereon a computer program, wherein the computer program is executed by a processor to implement the method of any of claims 1-14.
CN202310347582.XA 2023-04-03 2023-04-03 Method, device, equipment and storage medium for processing injury event notification information Pending CN116580844A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310347582.XA CN116580844A (en) 2023-04-03 2023-04-03 Method, device, equipment and storage medium for processing injury event notification information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310347582.XA CN116580844A (en) 2023-04-03 2023-04-03 Method, device, equipment and storage medium for processing injury event notification information

Publications (1)

Publication Number Publication Date
CN116580844A true CN116580844A (en) 2023-08-11

Family

ID=87536682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310347582.XA Pending CN116580844A (en) 2023-04-03 2023-04-03 Method, device, equipment and storage medium for processing injury event notification information

Country Status (1)

Country Link
CN (1) CN116580844A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117357104A (en) * 2023-12-07 2024-01-09 深圳市好兄弟电子有限公司 Audio analysis method based on user characteristics
CN117357104B (en) * 2023-12-07 2024-04-26 深圳市好兄弟电子有限公司 Audio analysis method based on user characteristics

Similar Documents

Publication Publication Date Title
Giegerich English phonology: An introduction
WO2018097091A1 (en) Model creation device, text search device, model creation method, text search method, data structure, and program
CN110569354B (en) Barrage emotion analysis method and device
CN105551485B (en) Voice file retrieval method and system
JP2007018234A (en) Automatic feeling-expression word and phrase dictionary generating method and device, and automatic feeling-level evaluation value giving method and device
CN116580844A (en) Method, device, equipment and storage medium for processing injury event notification information
WO2021074459A1 (en) Method and system to automatically train a chatbot using domain conversations
WO2020048062A1 (en) Intelligent recommendation method and apparatus for product sales, computer device and storage medium
KR101508059B1 (en) Apparatus and Method for pleasant-unpleasant quotient of word
KR101991486B1 (en) Sentence similarity-based polysemy database expansion apparatus and method therefor
US10770068B2 (en) Dialog agent, reply sentence generation method, and non-transitory computer-readable recording medium
CN110705313A (en) Text abstract generation method based on feature extraction and semantic enhancement
CN114298021A (en) Rumor detection method based on sentiment value selection comments
WO2023124837A1 (en) Inquiry processing method and apparatus, device, and storage medium
CN116704591A (en) Eye axis prediction model training method, eye axis prediction method and device
KR101567789B1 (en) Apparatus and Method for pleasant-unpleasant quotient of word using relative emotion similarity
Im et al. The function of Zechariah 7-8 within the book of Zechariah
CN116595970A (en) Sentence synonymous rewriting method and device and electronic equipment
Matuszak A Complete Reconstruction, New Edition and Interpretation of the Sumerian Morality Tale ‘The Old Man and the Young Girl’
CN109284364B (en) Interactive vocabulary updating method and device for voice microphone-connecting interaction
CN108763229B (en) Machine translation method and device based on characteristic sentence stem extraction
Argenter et al. Jewish texts in Old Catalan
Maclagan Reflecting connections with the local language: New Zealand English
CN115546355B (en) Text matching method and device
CN116861902B (en) Analysis data processing method and device based on life meaning sense and sleep quality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination