WO2021184468A1 - Behavior recognition method, device, equipment, and medium - Google Patents

Behavior recognition method, device, equipment, and medium

Info

Publication number
WO2021184468A1
WO2021184468A1 (Application No. PCT/CN2020/084694)
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
data
serialized
behavior recognition
result
Prior art date
Application number
PCT/CN2020/084694
Other languages
English (en)
French (fr)
Inventor
韩亚宁
黄康
蔚鹏飞
王立平
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Publication of WO2021184468A1 publication Critical patent/WO2021184468A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques

Definitions

  • the embodiments of the present application relate to the technical field of behavior recognition, for example, to a behavior recognition method, device, equipment, and medium.
  • In the development of anxiolytic drugs, the Elevated Plus Maze (EPM) test or the Open Field Test (OFT) is often used to assess the anxiety level of experimental mice before and after medication.
  • An OFT apparatus generally consists of a square box with a camera on top. The square arena can be divided into a central area and a peripheral area; the more a mouse moves in the central area, the lower its relative anxiety level.
  • In clinical practice, however, anxiety symptoms take many forms, and different symptoms call for different types of medication and different ways of administering it.
  • the embodiments of the present application provide a behavior recognition method, device, equipment, and medium, so as to improve the accuracy of behavior recognition, and further improve the efficiency and accuracy of detection results determined based on behavior recognition.
  • an embodiment of the present application provides a behavior recognition method, including:
  • an embodiment of the present application also provides a behavior recognition device, including:
  • the serialized data acquisition module is configured to acquire original behavior data, preprocess the original behavior data, and obtain serialized behavior data;
  • the output result obtaining module is configured to input the serialized behavior data into the pre-trained behavior recognition model to obtain the output result of the behavior recognition model;
  • the recognition result output module is configured to generate a behavior recognition result according to the output result, and output the behavior recognition result.
  • an embodiment of the present application also provides a computer device, and the device includes:
  • one or more processors;
  • a storage device for storing one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the behavior recognition method as provided in any embodiment of the present application.
  • an embodiment of the present application also provides a computer-readable storage medium storing a computer program;
  • when the computer program is executed by a processor, the behavior recognition method provided in any embodiment of the present application is implemented.
  • FIG. 1a is a flowchart of a behavior recognition method provided by Embodiment 1 of the present application.
  • FIG. 1b is a schematic diagram of an animal behavior glimpse graph provided in Embodiment 1 of the present application.
  • FIG. 2 is a flowchart of a behavior recognition method provided by Embodiment 2 of the present application.
  • FIG. 3a is a schematic structural diagram of a serialized animal behavior recognition and classification system provided by Embodiment 3 of the present application.
  • FIG. 3b is a flowchart of a behavior recognition method provided by Embodiment 3 of the present application.
  • FIG. 4 is a schematic structural diagram of a behavior recognition device provided by Embodiment 4 of the present application.
  • FIG. 5 is a schematic structural diagram of a computer device provided by Embodiment 5 of the present application.
  • FIG. 1a is a flowchart of a behavior recognition method provided in Embodiment 1 of the present application. This embodiment is applicable to performing behavior recognition on original behavior data, for example, to recognizing animal behavior data.
  • the method may be executed by a behavior recognition device, and the behavior recognition device may be implemented in a software and/or hardware manner.
  • the behavior recognition device may be configured in a computer device. As shown in Figure 1a, the method includes:
  • the original behavior data may be behavior data collected through different collection methods.
  • the original behavior data may include image data, video data, physiological signal data, etc., where the physiological signal data may be data such as heart rate data, blood pressure data, and brain electrical data.
  • the original behavior data may include multi-modal signal acquisition data.
  • the original behavior data of an animal is a time series, that is, the behavioral patterns in the animal behavior data change over time; therefore, temporal dynamics must be taken into account when distinguishing different behavior patterns of animals.
  • preprocessing the multi-modal original behavior data into serialized behavior data may be done by fusing the multi-modal original behavior data and then segmenting the fused data to obtain multiple pieces of serialized behavior data.
  • in one implementation, the original behavior data includes multi-modal signal acquisition data, and the step of preprocessing the original behavior data to obtain serialized behavior data includes: aligning the signal acquisition data of each modality according to the timestamps in the signal acquisition data to obtain aligned behavior data; segmenting the aligned behavior data using a set segmentation algorithm to obtain segmented behavior data; and serializing and mapping the segmented behavior data to obtain the serialized behavior data.
  • the multi-modal signal acquisition data may include image data, video data, and various physiological signal data.
  • after the signal acquisition data of each modality is aligned according to its timestamps, the resulting aligned behavior data can be segmented into multiple pieces of segmented behavior data, which are then serialized and mapped to obtain multiple pieces of serialized behavior data.
  • when aligning the signal acquisition data of the different modalities, the sampling rates of the modalities may differ, so the data cannot be aligned at every moment; part of the signal acquisition data therefore needs to be resampled so that, after resampling, all modalities share the same sampling rate and can be aligned at every moment.
  • that is, data acquired by different devices must have corresponding samples at the same moments; for example, the moment an animal is recorded starting to run should be accompanied by a rise in its heart rate, and the rising edge of the running-speed curve should, to a certain extent, be aligned with the rising edge of the ECG signal curve.
  • a dynamic time warping (Dynamic Time Warping, DTW) method may be used to segment behavior data with similar patterns.
  • the DTW method preserves the continuity of the time series in the time dimension and finds the intrinsically most similar patterns in the data through an optimization search; the differences between patterns are abstracted into an optimal distance, and by means of this distance data of the same pattern are clustered together while data of different patterns are separated, ensuring high within-class similarity (small distance) and low between-class similarity (large distance).
  • an optional mapping character table is the American Standard Code for Information Interchange (ASCII). ASCII is the most widely used information exchange standard, a computer coding system based on the Latin alphabet mainly used to represent modern English and other Western European languages; it defines 128 characters in total. A subset of these 128 characters is mapped to the respective classes of similar data, forming a string of serialized animal behavior language and yielding multiple pieces of serialized behavior data.
  • the output result of the behavior recognition model may be each behavior and the probability corresponding to each behavior.
  • the behavior output by the behavior recognition model may be a single behavior or a complex behavior.
  • for a given piece of serialized behavior data, the behavior output by the behavior recognition model may be walking, or walking together with head-poking.
  • the output result may be output directly as the behavior recognition result; alternatively, statistics may be computed over the output result to generate a visualized behavior statistics result, which is then output.
  • outputting the visualized behavior statistics result enables inspectors to understand animal behaviors more vividly and intuitively.
  • the step of generating a behavior recognition result according to the output result includes: determining characteristic information corresponding to each behavior according to the output result, generating a visual behavior statistics result based on the characteristic information corresponding to each behavior, and combining The visual behavior statistics result is used as the behavior recognition result.
  • the visualized behavior statistical results may include the visualization of the statistical results of serialized animal behavior data and the visualization of the results of animal behavior recognition and classification.
  • a certain statistical method is used to describe the regularities of each behavior in the output result, yielding visualized statistics of the serialized animal behavior data.
  • examples are the proportion that a particular class of behavior occupies among all behaviors, or the difference in the behavior of a detection target before and after different operations are performed on it, such as the difference in a mouse's behaviors before and after different drugs are administered.
  • behavioral differences are reflected not only in the count or duration of different behaviors, but also in the state transitions between them. For example, mice of different transgenic strains may show no significant difference in overall behavior, that is, no difference in the count and timing of each behavior, while the intrinsic transition patterns between behaviors differ considerably.
  • for instance, a strain-1 mouse and a strain-2 mouse may both head-poke 10 times and sniff 5 times, yet the way they transition between head-poking and sniffing may differ, and this difference is statistically significant.
  • Animal behavior recognition and classification result visualization not only need to visualize the global animal behavior in the time dimension, but also need to visualize the local, that is, a single behavior.
  • the overall visualization of animal behaviors uses behavior maps, and the local visualization of animal behaviors uses glimpses.
  • the visualized behavior statistical results include behavioral glimpses and/or behavioral maps.
  • Fig. 1b is a schematic diagram of a glimpse of animal behavior provided in Example 1 of the present application.
  • the glimpse of animal behavior is composed of upper and lower parts.
  • the upper part is the behavioral glimpse graph
  • the lower part is the different behavior recognition probabilities of the behavior recognition model.
  • the glimpse graph samples a given behavior within the time period at a certain interval, removes the background, and splices the frames horizontally, efficiently displaying the behavior sequence.
  • the recognition probabilities directly describe the model's recognition result for the current behavior.
  • in FIG. 1b, the model recognizes that the mouse is curling up (Twisting) with a probability of about 70%, observing (Observing) with a probability of about 25%, and grooming (Grooming) with a probability of about 5%. The model is clearly rather accurate for the current behavior, and the glimpse graph corresponds well to the recognition probabilities.
  • in the behavior map, the abscissa is time and the ordinate is the different behavior classes. Different colors may be used to represent different behaviors, and the transparency of a color represents the probability with which the model recognizes that behavior. Multiple color bars are allowed in each column of the behavior map. This form of display can effectively represent high-dimensional animal behavior data; for describing different behavior patterns occurring at the same moment, together with the occurrence probability of each behavior, it contains considerably more behavioral information than a traditional behavior map without transparency.
  • the original behavior data is obtained, and the original behavior data is preprocessed to obtain serialized behavior data; the serialized behavior data is input into a pre-trained behavior recognition model to obtain the output result of the behavior recognition model; according to the output The result generates behavior recognition results, and outputs the behavior recognition results.
  • by using serialized behavior data for behavior recognition, the information in the original behavior data is fully utilized, the accuracy of behavior recognition is improved, and the efficiency and accuracy of detection results determined on the basis of behavior recognition are in turn improved.
  • Fig. 2 is a flowchart of a behavior recognition method provided in the second embodiment of the present application.
  • the method includes:
  • the sample serialized data may be serialized behavior data obtained after preprocessing the sample behavior data. It can be understood that the detection target to which the sample behavior data belongs and the detection target to which the original behavior data belongs are the same type of detection target. Exemplarily, if the original behavior data is the behavior data of mice, the sample behavior data should also be the behavior data of mice.
  • the label corresponding to the sample serialized data is realized by manual labeling.
  • the sample behavior data can be preprocessed to obtain the sample serialized data, and the sample serialized data and sample behavior data can then be played back in chronological order.
  • the experimenter observes the sample behavior data to determine the behaviors that need to be marked and annotates them in the sample serialized data, obtaining the labels corresponding to the sample serialized data.
  • the manner of obtaining sample serialized data from sample behavior data can refer to the manner of obtaining serialized behavior data from original behavior data in the foregoing embodiment, which will not be repeated here.
  • the label corresponding to the sample serialized data includes at least one of a single label, a multi-label, and a language description label.
  • the manually marked behavior tags may include multiple forms, such as single tags, multiple tags, language description tags, refined tags, and so on.
  • a single label marks different behaviors with a single label and is the most traditional form of annotation: behavior sequences of different kinds are each described by a single word serving as the class label; multi-labels mark different behaviors with multiple labels.
  • multi-labeling takes into account the high-dimensional nature of animal behavior, namely that at the same moment or within the same period several behaviors may occur simultaneously (for example, a mouse may sniff while walking), so multiple words are needed to describe the behaviors occurring at the same time; language description labels mark different behaviors in the form of a verbal description.
  • spontaneous behaviors account for a high proportion of all animal behaviors, and they often cannot be defined with a simple word or a few descriptive nouns; for example, a mouse stops while walking, raises its head, and scratches its right ear with its right front paw.
  • S220 Use the training sample data to train the pre-built behavior recognition model to obtain the trained behavior recognition model.
  • the pre-built behavior recognition model may use models commonly employed in Natural Language Processing (NLP) tasks, such as an attention-based sequence-to-sequence (seq2seq) network or a Bidirectional Encoder Representations from Transformers (BERT) model.
  • the seq2seq model is an encoder-decoder model commonly used in the NLP field, consisting of an Encoder part and a Decoder part. Natural language sequences have temporal dynamics, so encoding them along the time dimension requires a Recurrent Neural Network (RNN); in the seq2seq model, a Long Short-Term Memory (LSTM) network is used to encode the input sequence and an RNN is used to decode the features learned from it. LSTM performs excellently in handling long-range dependencies in time series.
  • in this embodiment, a seq2seq model with an attention mechanism is used as the behavior recognition model, and the attention module is used for decoding the semantic features, replacing the RNN of the traditional seq2seq model.
  • when humans handle natural language processing tasks such as Chinese-English translation, they selectively attend to the keywords in a sentence; this mechanism is called the attention mechanism.
  • in the model, raising the attention weights of keywords and lowering those of non-keywords yields an attention mechanism similar to that of humans. A similar conclusion holds in the recognition and classification of animal behavior.
  • during training of the behavior recognition model, the training performance can be visualized so that inspectors can understand how well the model has been trained; for example, the training loss (Training Loss), the precision and recall of recognition, and the confusion matrix can be visualized.
  • the training loss directly describes the optimization of the model. The smaller the training loss, the better the optimization effect of the model.
  • during parameter tuning, the way the training loss decreases can provide guidance.
  • a training loss that decreases over time indicates the model is still learning; a loss that increases over time indicates the model has not learned useful regularities in the data; an oscillating loss indicates the current model has reached its best performance, and further improving its recognition requires adjusting the parameters.
  • the recognition accuracy and recall rate directly reflect the recognition effect of the current test data.
  • precision describes the proportion of the model's predictions for a given class that are correct, that is, its ability to recognize and classify that class among all classes; recall describes the proportion of samples of a given class that the model judges correctly, that is, how well the model separates that class.
  • the confusion matrix is the most basic indicator of the machine learning model, which directly describes the degree of correspondence between the label of the data and the prediction result of the model.
  • S250 Generate a behavior recognition result according to the output result, and output the behavior recognition result.
  • in this embodiment of the application, sample serialized data and its corresponding labels are acquired, training sample data is generated from them, and the training sample data is used to train the pre-built behavior recognition model to obtain a trained behavior recognition model.
  • by basing the training sample data on sample serialized data annotated in multiple label forms, the behavior recognition model makes full use of the temporal characteristics of the behavior data when learning behavioral features, which improves the accuracy of its behavior recognition.
  • the behavior recognition method can be executed by a serialized animal behavior recognition and classification system.
  • the system first serializes the data acquired by the animal behavior collection device, then annotates the serialized animal behavior data by manually labeling data tags, and finally uses the labeled serialized animal behavior data to train a seq2seq recurrent neural network model with an attention mechanism, obtaining the animal behavior sequences corresponding to the behavior labels.
  • the system is suitable for different animal behavior recognition and classification tasks and big data analysis of animal behaviors, and automatically obtains the behaviors of experimental attention and the inherent transfer mode of behaviors, and improves the efficiency of animal behavior data analysis.
  • Fig. 3a is a schematic structural diagram of a serialized animal behavior recognition and classification system provided in the third embodiment of the present application.
  • the serialized animal behavior recognition and classification system includes six parts: an animal behavior data serialization unit, a serialized data annotation unit, a seq2seq model training unit, a behavior sequence recognition and classification unit, a data visualization unit, and a control host.
  • the animal behavior data serialization unit includes a data acquisition module and a data serialization module.
  • the data acquisition module acquires image, video, and physiological signal data of animal behavior.
  • the data serialization module discretizes, clusters, and encodes the temporal data acquired by the data acquisition module to generate serialized animal behavior data.
  • the serialized data labeling unit includes a data playing module and a data labeling module.
  • the data playing module maps the clustered data in the data serialization module to the original data space for playing and visualization for observation by experimenters.
  • the experimenters used the data labeling module to manually label the specific animal behavior sequence according to the pattern of the observed behavior data (image, video, and physiological signal).
  • the seq2seq model training unit includes a sequence data preprocessing module and a seq2seq model training module.
  • the sequence data preprocessing module performs different preprocessing of the data according to different data labeling forms.
  • the seq2seq model training module takes the preprocessed data and trains the seq2seq recurrent neural network model with the attention mechanism.
  • the behavior sequence recognition and classification unit includes the seq2seq model recognition and classification module and the recognition data segmentation and labeling module.
  • the seq2seq model recognition and classification module inputs the serialized animal behavior data to be recognized into the seq2seq model to realize automatic recognition and classification of animal behavior.
  • the recognized-data segmentation and labeling module obtains the recognized and classified data labels, maps the original animal behavior data and the serialized data labels onto the same time dimension, and at the same time sorts data of the same class into the same folder according to set rules.
  • the data visualization unit includes a data statistics module and a data drawing module.
  • the data statistics module performs statistics on the segmented data according to set rules, and explores the laws and differences in animal behavior data.
  • the data drawing module draws animal behavior data and statistical results.
  • the control host is the basis on which the entire algorithm runs: it supports the collection of animal behavior data and the storage and retrieval of large amounts of it, provides hardware computing power for data serialization, and provides a graphics processing unit for parallel computation for seq2seq model training and animal behavior recognition and classification, which speeds up model training and validation.
  • the control host also provides experimenters with an interactive interface that can use this method. The high-performance control host saves experimenters a lot of time for adjusting model parameters and statistical test data, ensuring the efficiency of data model operation, and shortening the experiment cycle.
  • Fig. 3b is a flowchart of a behavior recognition method provided in the third embodiment of the present application. As shown in Figure 3b, the method includes:
  • S310 Use the control host to collect and process animal behavior data to obtain serialized data.
  • animal behavior data can be collected in two ways: one is to connect the sensors that collect animal behavior to the control host, so that the data is collected through the data acquisition module in the control host; the other is to load animal behavior data previously collected by offline devices into the system via an attached hard disk.
  • after the animal behavior data is collected, the loaded data is serialized.
  • the serialization process is divided into three steps: multi-modal data alignment, animal behavior data segmentation, and animal behavior data serialization mapping. Among them, a more detailed solution for serializing data can be found in the foregoing embodiment, which will not be repeated here.
  • the serialized data and animal raw data can be played in chronological order by the data playback module in the serialized data marking unit, and the experimenter can observe the behaviors that need to be marked, and use the data marking module to set a specific behavior sequence Manually mark the label.
  • Manually labeled behavior labels can include multiple forms: a single label form for different behaviors, a multi-label form for different behaviors, a verbal description form for different behaviors, and a fine-labeled form for focused behaviors.
  • the final recognition and classification results are also different due to different data labeling formats.
  • the data is labeled with a single label for different behaviors, and the result of recognition and classification is a single label for different behaviors.
  • the results of recognition and classification may have multiple label forms, or a single label may appear.
  • serializing the behavior data amounts to compressing it: redundant information is removed while the main features are retained as far as possible, and the noise in the data is reduced.
  • during manual annotation, compound behaviors occur in certain proportions; for example, when a mouse head-pokes while walking, the walking component may account for 90%, the head-poking behavior for 5%, and other minor behaviors for the remaining 5%.
  • the markers here can only mark the more obvious behaviors they see.
  • the annotator labels the current behavior according to their subjective judgment; this subjectivity also varies over time, and the proportions of the behaviors cannot be judged accurately, so labels such as walking, head-poking, or compound labels such as walking, head-poking, and tail-wagging are all possible.
  • the model, however, extracts the most obvious and credible result in the data as its output, which is usually reflected in the expectation of the data. Thus, in this example, the final result may be that 90% of the data is labeled as walking alone, 5% as walking and head-poking, and the remaining 5% as walking, head-poking, sniffing, and other minor behaviors that are hard to judge with the naked eye.
  • use different behavior language description forms to label the data.
  • the results of recognition and classification are basically as described in the seq2seq model training unit above: the animal behavior language is translated into the language defined by the annotators, and a description of the animal behavior sequence is obtained.
  • the data visualization unit includes: visualization of neural network training performance results, visualization of statistical results of serialized animal behavior data, and visualization of animal behavior recognition and classification results.
  • this embodiment provides a general-purpose animal behavior recognition and classification system. It serializes the data acquired by the animal behavior collection device, fusing the multi-modal data while preserving the continuity of the behavior data along the time dimension; the seq2seq recurrent neural network model with the attention mechanism extracts high-dimensional semantic features from the behavior sequence data and decodes them into the manually annotated behavior label space, effectively extracting the low-dimensional information in high-dimensional behavior data while preserving the high-dimensional structure of animal behavior.
  • artificially marked behavior labels can contain various forms, which greatly enrich the description indicators of animal behavior, provide more reference data for drug research and development, and improve the efficiency and accuracy of drug efficacy testing.
  • FIG. 4 is a schematic structural diagram of a behavior recognition device provided by Embodiment 4 of the present application.
  • the behavior recognition device can be implemented in software and/or hardware.
  • the behavior recognition device can be configured in a computer device.
  • the device includes a serialized data acquisition module 410, an output result acquisition module 420, and a recognition result output module 430, where:
  • the serialized data acquisition module 410 is configured to acquire original behavior data, preprocess the original behavior data, and obtain serialized behavior data;
  • the output result obtaining module 420 is configured to input the serialized behavior data into the pre-trained behavior recognition model to obtain the output result of the behavior recognition model;
  • the recognition result output module 430 is configured to generate a behavior recognition result according to the output result, and output the behavior recognition result.
  • the original behavior data is obtained by the serialized data acquisition module, and the original behavior data is preprocessed to obtain the serialized behavior data; the output result acquisition module inputs the serialized behavior data into the pre-trained behavior recognition model to obtain The output result of the behavior recognition model; the recognition result output module generates behavior recognition results based on the output results, and outputs the behavior recognition results.
  • by using serialized behavior data for behavior recognition, the information in the original behavior data is fully utilized, the accuracy of behavior recognition is improved, and the efficiency and accuracy of detection results determined on the basis of behavior recognition are in turn improved.
  • the original behavior data includes multi-modal signal acquisition data
  • the serialized data acquisition module 410 may be configured to: align the signal acquisition data of each modality according to the timestamps in the signal acquisition data to obtain aligned behavior data; segment the aligned behavior data using a set segmentation algorithm to obtain segmented behavior data; and serialize and map the segmented behavior data to obtain the serialized behavior data.
  • the device further includes a model training module configured to: before the serialized behavior data is input into the pre-trained behavior recognition model, acquire sample serialized data and the labels corresponding to the sample serialized data, generate training sample data from them, and train the pre-built behavior recognition model with the training sample data to obtain a trained behavior recognition model.
  • the pre-built behavior recognition model is a sequence-to-sequence network based on the attention mechanism.
  • the label corresponding to the sample serialized data includes at least one of a single label, a multi-label, and a language description label.
  • the recognition result output module 430 may be used to:
  • determine, according to the output result, the feature information corresponding to each behavior, generate a visualized behavior statistics result based on the feature information corresponding to each behavior, and use the visualized behavior statistics result as the behavior recognition result.
  • the visualized behavior statistics result includes a behavior glimpse graph and/or a behavior map.
  • the behavior recognition device provided in the embodiment of the present application can execute the behavior recognition method provided in any embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • FIG. 5 is a schematic structural diagram of a computer device provided by Embodiment 5 of the present application.
  • Figure 5 shows a block diagram of an exemplary computer device 512 suitable for implementing embodiments of the present application.
  • the computer device 512 shown in FIG. 5 is only an example.
  • the computer device 512 is represented in the form of a general-purpose computing device.
  • the components of the computer device 512 may include: one or more processors 516, a system memory 528, and a bus 518 connecting different system components (including the system memory 528 and the processor 516).
  • the bus 518 represents one or more of several types of bus structures, including a memory bus or a memory controller, a peripheral bus, a graphics acceleration port, a processor 516, or a local bus using any bus structure among multiple bus structures.
  • these architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
  • Computer device 512 typically includes a variety of computer system readable media. These media may be any available media that can be accessed by the computer device 512, including volatile and nonvolatile media, removable and non-removable media.
  • the system memory 528 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 530 and/or cache memory 532.
  • the computer device 512 may include other removable/non-removable, volatile/non-volatile computer system storage media.
  • the storage device 534 may be used to read and write a non-removable, non-volatile magnetic medium (not shown in FIG. 5, usually referred to as a "hard drive").
  • a magnetic disk drive for reading and writing a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading and writing a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media) may also be provided.
  • each drive may be connected to the bus 518 through one or more data medium interfaces.
  • the memory 528 may include at least one program product, and the program product has a set of (for example, at least one) program modules, and these program modules are configured to perform the functions of the embodiments of the present application.
  • a program/utility tool 540 having a set of (at least one) program module 542 may be stored in, for example, the memory 528.
  • Such program module 542 may include an operating system, one or more application programs, other program modules, and program data. Each of the examples or some combination may include the realization of a network environment.
  • the program module 542 generally executes the functions and/or methods in the embodiments described in this application.
  • the computer device 512 can also communicate with one or more external devices 514 (such as a keyboard, pointing device, display 524, etc.), and can also communicate with one or more devices that enable a user to interact with the computer device 512, and/or communicate with Any device (such as a network card, modem, etc.) that enables the computer device 512 to communicate with one or more other computing devices. Such communication can be performed through an input/output (I/O) interface 522.
  • the computer device 512 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 520.
  • the network adapter 520 communicates with other modules of the computer device 512 through the bus 518. It should be understood that although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computer device 512, which may include: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and so on.
  • the processor 516 executes various functional applications and data processing by running programs stored in the system memory 528, for example implementing the behavior recognition method provided in the embodiments of the present application, which includes: acquiring original behavior data and preprocessing it to obtain serialized behavior data; inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the model; and generating a behavior recognition result according to the output result and outputting it.
  • processor may also implement the technical solution of the behavior recognition method provided in any embodiment of the present application.
  • the sixth embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, the behavior recognition method provided in the embodiments of the present application is implemented, and the method may include: acquiring original behavior data and preprocessing it to obtain serialized behavior data; inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the model; and generating a behavior recognition result according to the output result and outputting it.
  • of course, the computer program stored on the medium is not limited to the method operations above and can also perform related operations of the behavior recognition method provided in any embodiment of the present application.
  • the computer storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above.
  • examples of computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, and can include electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, which may include wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a behavior recognition method, device, equipment, and medium. The method includes: acquiring original behavior data and preprocessing the original behavior data to obtain serialized behavior data; inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model; and generating a behavior recognition result according to the output result and outputting the behavior recognition result.

Description

Behavior recognition method, device, equipment, and medium
This disclosure claims priority to Chinese patent application No. 202010190264.3, filed with the Chinese Patent Office on March 18, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the technical field of behavior recognition, for example, to a behavior recognition method, device, equipment, and medium.
Background
Animal behavior recognition plays an important role in drug development. Taking the development of neuropsychiatric drugs as an example, studying the behavioral differences of animals before and after medication is an important indicator for judging drug efficacy. For instance, in the development of anxiolytic drugs, the Elevated Plus Maze (EPM) test or the Open Field Test (OFT) is often used to assess the anxiety level of experimental mice before and after medication. The EPM consists of two open arms and two closed arms arranged in a cross, with the intersection forming a central area, and the whole cross-shaped maze is elevated at a certain height above the ground. A mouse facing an open arm is curious and wants to explore, while at the same time mice naturally prefer dark over bright spaces; the conflict between exploration and avoidance produces anxiety, so the anxiety behavior of mice can be evaluated by comparing the time spent and distance travelled in the open and closed arms. An OFT apparatus generally consists of a square box with a camera on top; the square arena can be divided into a central area and a peripheral area, and the more a mouse moves in the central area, the lower its relative anxiety level. In clinical practice, however, anxiety symptoms take many forms, and different symptoms call for different types of medication and different ways of administering it. Yet at the key stage of efficacy testing, namely judging the anxiety behavior of mice, the overly simple parameters (time spent in the EPM open arms, time spent in the OFT central area) conflict with the need for precise efficacy measurement. This contradiction appears not only in the development of anxiolytic drugs but, to varying degrees, in the development of almost all neuropsychiatric drugs, such as those for Alzheimer's disease and autism. How to recognize animal behavior in a fine-grained way is therefore a technical problem that urgently needs to be solved.
Summary
Embodiments of the present application provide a behavior recognition method, device, equipment, and medium, so as to improve the accuracy of behavior recognition and thereby improve the efficiency and accuracy of detection results determined on the basis of behavior recognition.
In a first aspect, an embodiment of the present application provides a behavior recognition method, including:
acquiring original behavior data, and preprocessing the original behavior data to obtain serialized behavior data;
inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model;
generating a behavior recognition result according to the output result, and outputting the behavior recognition result.
In a second aspect, an embodiment of the present application further provides a behavior recognition device, including:
a serialized data acquisition module, configured to acquire original behavior data and preprocess the original behavior data to obtain serialized behavior data;
an output result acquisition module, configured to input the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model;
a recognition result output module, configured to generate a behavior recognition result according to the output result and output the behavior recognition result.
In a third aspect, an embodiment of the present application further provides a computer device, including:
one or more processors;
a storage device for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the behavior recognition method provided in any embodiment of the present application.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the behavior recognition method provided in any embodiment of the present application.
Brief Description of the Drawings
FIG. 1a is a flowchart of a behavior recognition method provided by Embodiment 1 of the present application;
FIG. 1b is a schematic diagram of an animal behavior glimpse graph provided by Embodiment 1 of the present application;
FIG. 2 is a flowchart of a behavior recognition method provided by Embodiment 2 of the present application;
FIG. 3a is a schematic structural diagram of a serialized animal behavior recognition and classification system provided by Embodiment 3 of the present application;
FIG. 3b is a flowchart of a behavior recognition method provided by Embodiment 3 of the present application;
FIG. 4 is a schematic structural diagram of a behavior recognition device provided by Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of a computer device provided by Embodiment 5 of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the complete structure.
Embodiment 1
FIG. 1a is a flowchart of a behavior recognition method provided by Embodiment 1 of the present application. This embodiment is applicable to the case of performing behavior recognition on original behavior data, for example, to recognizing animal behavior data. The method may be executed by a behavior recognition device, which may be implemented in software and/or hardware; for example, the behavior recognition device may be configured in a computer device. As shown in FIG. 1a, the method includes:
S110: acquire original behavior data, and preprocess the original behavior data to obtain serialized behavior data.
In this embodiment, the original behavior data may be behavior data collected in different ways. Optionally, the original behavior data may include image data, video data, physiological signal data, and so on, where the physiological signal data may be heart rate data, blood pressure data, electroencephalogram data, and the like. In other words, the original behavior data may include multi-modal signal acquisition data. In addition, the original behavior data of an animal is a time series, that is, the behavioral patterns in the animal behavior data change over time, so temporal dynamics must be taken into account when distinguishing different behavior patterns of animals. In this embodiment, by preprocessing the multi-modal original behavior data into serialized behavior data and using the serialized behavior data for behavior recognition, the multi-modality and temporal dynamics of the original behavior data are fully exploited, making the recognition result more accurate. In some embodiments, preprocessing the multi-modal original behavior data into serialized behavior data may be done by fusing the multi-modal original behavior data and then segmenting the fused data to obtain multiple pieces of serialized behavior data.
In one implementation of the present application, the original behavior data includes multi-modal signal acquisition data, and the step of preprocessing the original behavior data to obtain serialized behavior data includes: aligning the signal acquisition data of each modality according to the timestamps in the signal acquisition data to obtain aligned behavior data; segmenting the aligned behavior data using a set segmentation algorithm to obtain segmented behavior data; and serializing and mapping the segmented behavior data to obtain the serialized behavior data.
Optionally, the multi-modal signal acquisition data may include image data, video data, various physiological signal data, and so on. The signal acquisition data of each modality may be aligned according to the timestamps carried in the data; the resulting aligned behavior data is then segmented into multiple pieces of segmented behavior data, which are serialized and mapped to obtain multiple pieces of serialized behavior data. When aligning the signal acquisition data of the different modalities, the sampling rates of the modalities may differ, so the data cannot be aligned at every moment; part of the signal acquisition data therefore needs to be resampled so that, after resampling, all modalities share the same sampling rate and can be aligned at every moment. That is, data acquired by different devices must have corresponding samples at the same moments: for example, the moment an animal is recorded starting to run should be accompanied by a rise in its heart rate, and the rising edge of the running-speed curve should, to a certain extent, be aligned with the rising edge of the electrocardiogram (ECG) curve.
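For illustration only, the alignment step described above can be sketched as resampling every modality onto one shared timeline. The snippet below is a minimal sketch, not the implementation disclosed here; the signal names, sampling rates, and the choice of linear interpolation are assumptions made for the example.

```python
import numpy as np

def resample_to_common_timeline(signals, target_rate_hz):
    """Resample timestamped 1-D signals onto one shared timeline.

    signals: dict mapping modality name -> (timestamps_s, values),
             both 1-D numpy arrays of equal length.
    Returns (common_t, dict of modality -> resampled values).
    """
    # The common window is the overlap of all modalities.
    start = max(t[0] for t, _ in signals.values())
    stop = min(t[-1] for t, _ in signals.values())
    common_t = np.arange(start, stop, 1.0 / target_rate_hz)

    aligned = {}
    for name, (t, v) in signals.items():
        # Linear interpolation onto the common timestamps.
        aligned[name] = np.interp(common_t, t, v)
    return common_t, aligned

# Example with made-up data: 30 Hz running speed and 500 Hz ECG.
t_speed = np.arange(0, 10, 1 / 30)
t_ecg = np.arange(0, 10, 1 / 500)
speed = np.abs(np.sin(t_speed))
ecg = np.sin(2 * np.pi * 8 * t_ecg)
common_t, aligned = resample_to_common_timeline(
    {"speed": (t_speed, speed), "ecg": (t_ecg, ecg)}, target_rate_hz=100)
```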
Exemplarily, after the aligned behavior data is obtained, a Dynamic Time Warping (DTW) method may be used to segment behavior data with similar patterns. The DTW method preserves the continuity of the time series in the time dimension and finds the intrinsically most similar patterns in the data through an optimization search; the differences between patterns are abstracted into an optimal distance, and by means of this distance data of the same pattern are clustered together while data of different patterns are separated, ensuring high within-class similarity (small distance) and low between-class similarity (large distance). After the animal behavior is segmented and clustered with DTW, data of the same class are mapped into a low-dimensional space; considering that the trained behavior recognition model includes a single vector-mapping part, it is sufficient to map the same-class data onto a character table. An optional mapping character table is the American Standard Code for Information Interchange (ASCII). ASCII is the most widely used information exchange standard, a computer coding system based on the Latin alphabet mainly used to represent modern English and other Western European languages; it defines 128 characters in total. A subset of these 128 characters is mapped to the respective data classes, forming a string of serialized animal behavior language and yielding multiple pieces of serialized behavior data.
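As a rough illustration of the segmentation and mapping idea, the sketch below computes a textbook dynamic-programming DTW distance between two one-dimensional segments and maps integer cluster indices onto printable ASCII characters. It assumes univariate segments and that clustering has been done elsewhere; it is not the optimization search of this embodiment.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def clusters_to_ascii(cluster_ids, offset=65):
    """Map integer cluster ids (0, 1, 2, ...) to ASCII letters 'A', 'B', ..."""
    return "".join(chr(offset + c) for c in cluster_ids)

# Example: per-segment cluster assignments become a behavior "sentence".
segment_clusters = [0, 0, 2, 1, 2, 0]
print(clusters_to_ascii(segment_clusters))  # "AACBCA"
```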
S120: input the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model.
In this embodiment, after the serialized behavior data is obtained, it is input into the trained behavior recognition model to obtain the model's output result. Optionally, the output result of the behavior recognition model may be each behavior and the probability corresponding to each behavior. For each piece of serialized behavior data, the behavior output by the model may be a single behavior or a compound behavior; exemplarily, for a given piece of serialized behavior data, the output behavior may be walking, or walking together with head-poking.
S130: generate a behavior recognition result according to the output result, and output the behavior recognition result.
In this embodiment, the output result may be output directly as the behavior recognition result; alternatively, statistics may be computed over the output result to generate a visualized behavior statistics result, which is then output. Optionally, computing statistics over the output result and outputting the visualized behavior statistics result enables inspectors to understand animal behaviors more vividly and intuitively.
In one implementation of the present application, the step of generating a behavior recognition result according to the output result includes: determining, according to the output result, the feature information corresponding to each behavior; generating a visualized behavior statistics result based on the feature information corresponding to each behavior; and using the visualized behavior statistics result as the behavior recognition result. Taking animal behavior recognition as an example, the visualized behavior statistics result may include visualization of statistics over the serialized animal behavior data and visualization of the animal behavior recognition and classification results.
In one embodiment, a certain statistical method is used to describe the regularities of each behavior in the output result, yielding visualized statistics of the serialized animal behavior data, for example the proportion that a particular class of behavior occupies among all behaviors, or the difference in the behavior of a detection target before and after different operations are performed on it, such as the difference in a mouse's behaviors before and after different drugs are administered. It should be noted that behavioral differences are reflected not only in the count or duration of different behaviors, but also in the state transitions between them. For example, mice of different transgenic strains may show no significant difference in overall behavior, that is, no difference in the count and timing of each behavior, while the intrinsic transition patterns between behaviors differ considerably. In plain terms, mice of different strains may behave identically overall; for instance, a strain-1 mouse and a strain-2 mouse may both head-poke 10 times and sniff 5 times, yet the way they transition between head-poking and sniffing differs, and this difference is statistically significant.
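The transition statistics mentioned above can be illustrated with a simple first-order transition matrix computed over a recognized label sequence. This is only a sketch under assumptions (hypothetical behavior names, plain pairwise counting); it is not a statistic prescribed by this application.

```python
from collections import Counter

def transition_matrix(labels, classes):
    """Row-normalized first-order transition probabilities between behaviors."""
    counts = Counter(zip(labels, labels[1:]))  # consecutive (src, dst) pairs
    matrix = []
    for src in classes:
        row_total = sum(counts[(src, dst)] for dst in classes)
        matrix.append([counts[(src, dst)] / row_total if row_total else 0.0
                       for dst in classes])
    return matrix

classes = ["walking", "head-poking", "sniffing"]
sequence = ["walking", "head-poking", "walking", "sniffing", "walking", "head-poking"]
for src, row in zip(classes, transition_matrix(sequence, classes)):
    print(src, [round(p, 2) for p in row])
```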
In behavioral research, direct observation of behavior is the most widely used method, so visualization of the animal behavior recognition and classification results is needed. Such visualization must cover not only the global animal behavior along the time dimension but also the local level, that is, individual behaviors. Globally, animal behavior is visualized with a behavior map; locally, it is visualized with a glimpse graph. Correspondingly, the visualized behavior statistics result includes a behavior glimpse graph and/or a behavior map.
FIG. 1b is a schematic diagram of an animal behavior glimpse graph provided by Embodiment 1 of the present application. As shown in FIG. 1b, the animal behavior glimpse graph consists of an upper part and a lower part. The upper part is the behavior glimpse graph, and the lower part shows the recognition probabilities of the different behaviors given by the behavior recognition model. The glimpse graph samples a given behavior within the time period at a certain interval, removes the background, and splices the frames horizontally, efficiently displaying the behavior sequence. The recognition probabilities directly describe the model's recognition result for the current behavior: in FIG. 1b, the model recognizes that the mouse is curling up (Twisting) with a probability of about 70%, observing (Observing) with a probability of about 25%, and grooming (Grooming) with a probability of about 5%. The model is clearly rather accurate for the current behavior, and the glimpse graph corresponds well to the recognition probabilities.
In the behavior map, the abscissa is time and the ordinate is the different behavior classes. Different colors may be used to represent different behaviors, and the transparency of a color represents the probability with which the model recognizes that behavior. Multiple color bars are allowed in each column of the behavior map. This form of display can effectively represent high-dimensional animal behavior data; for describing different behavior patterns occurring at the same moment, together with the occurrence probability of each behavior, it contains considerably more behavioral information than a traditional behavior map without transparency.
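A behavior map of the kind described can be sketched with matplotlib, using color for the behavior class and transparency (alpha) for the recognition probability. All concrete values below (class names, randomly generated probabilities, bar geometry) are assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
classes = ["walking", "head-poking", "sniffing", "grooming"]
colors = ["tab:blue", "tab:orange", "tab:green", "tab:red"]
n_frames = 60
# Fake per-frame class probabilities standing in for model output.
probs = rng.dirichlet(np.ones(len(classes)), size=n_frames)

fig, ax = plt.subplots(figsize=(8, 2.5))
for t in range(n_frames):
    for k, color in enumerate(colors):
        p = probs[t, k]
        if p > 0.1:  # draw a bar for every behavior above a small threshold
            ax.bar(t, 0.8, bottom=k, width=1.0, color=color, alpha=float(p))
ax.set_yticks(np.arange(len(classes)) + 0.4, classes)
ax.set_xlabel("time (frame)")
ax.set_title("Behavior map: color = behavior, transparency = probability")
plt.tight_layout()
plt.show()
```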
In this embodiment of the present application, original behavior data is acquired and preprocessed to obtain serialized behavior data; the serialized behavior data is input into a pre-trained behavior recognition model to obtain the model's output result; and a behavior recognition result is generated from the output result and output. By using serialized behavior data for behavior recognition, the information in the original behavior data is fully exploited, the accuracy of behavior recognition is improved, and the efficiency and accuracy of detection results determined on the basis of behavior recognition are in turn improved.
Embodiment 2
FIG. 2 is a flowchart of a behavior recognition method provided by Embodiment 2 of the present application. On the basis of the above embodiment, this embodiment adds the operation of training the behavior recognition model. As shown in FIG. 2, the method includes:
S210: acquire sample serialized data and the labels corresponding to the sample serialized data, and generate training sample data from the sample serialized data and the corresponding labels.
In this embodiment, the sample serialized data may be serialized behavior data obtained by preprocessing sample behavior data. It should be understood that the detection target to which the sample behavior data belongs and the detection target to which the original behavior data belongs are of the same type. Exemplarily, if the original behavior data is behavior data of mice, the sample behavior data should also be behavior data of mice.
Optionally, the labels corresponding to the sample serialized data are produced by manual annotation. After the sample behavior data is preprocessed to obtain the sample serialized data, the sample serialized data and the sample behavior data may be played back in chronological order; the experimenter observes the sample behavior data, determines the behaviors that need to be marked, and annotates them in the sample serialized data, obtaining the labels corresponding to the sample serialized data. The way sample serialized data is obtained from sample behavior data is the same as the way serialized behavior data is obtained from original behavior data in the foregoing embodiment and is not repeated here.
In one implementation of the present application, the labels corresponding to the sample serialized data include at least one of a single label, multiple labels, and a language description label. Optionally, manually annotated behavior labels may take multiple forms, such as single labels, multi-labels, language description labels, refined labels, and so on. A single label marks different behaviors with a single label and is the most traditional form of annotation: behavior sequences of different kinds are each described by a single word serving as the class label. Multi-labels mark different behaviors with multiple labels; multi-labeling takes into account the high-dimensional nature of animal behavior, namely that at the same moment or within the same period several behaviors may occur simultaneously (for example, a mouse may sniff while walking), so multiple words are needed to describe the behaviors occurring at the same time. Language description labels mark different behaviors in the form of a verbal description: spontaneous behaviors account for a high proportion of all animal behaviors and often cannot be defined with a single simple word or a few descriptive nouns, for example, a mouse stops while walking, raises its head, and scratches its right ear with its right front paw; such a complex sequence cannot be defined with a few simple words, so a descriptive way of defining behaviors is needed. Refined labels are a fine-grained annotation form focused on the behaviors of interest. In biological experiments, to control variables, the animal is given a certain stimulus while its response to the stimulus is observed, that is, the change in the animal's behavior before and after the stimulation time point. In this case, spontaneous behaviors and other behaviors the experiment does not focus on are of little research interest, so the classes of behaviors not of interest can be labeled as an "other behaviors" class, while the behaviors the experiment focuses on are finely annotated; this fine annotation can use any of the three label forms above, chosen according to the needs of the experiment.
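For illustration, the different annotation forms can be carried in a single record type such as the sketch below; the field names and example annotations are hypothetical and not part of the disclosed labeling scheme.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BehaviorAnnotation:
    """One annotated segment of serialized behavior data (illustrative only)."""
    segment: str                          # e.g. the ASCII-mapped sequence "AACBCA"
    single_label: Optional[str] = None    # e.g. "walking"
    multi_labels: List[str] = field(default_factory=list)   # e.g. ["walking", "sniffing"]
    description: Optional[str] = None     # free-text language description

samples = [
    BehaviorAnnotation("AACBA", single_label="walking"),
    BehaviorAnnotation("BBCAD", multi_labels=["walking", "sniffing"]),
    BehaviorAnnotation("DDE",
                       description="stops while walking, raises head, scratches right ear"),
]
```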
S220: train the pre-built behavior recognition model with the training sample data to obtain a trained behavior recognition model.
After the training sample data is obtained, it is used to train the pre-built behavior recognition model to obtain a trained behavior recognition model.
In one implementation of the present application, the pre-built behavior recognition model may use models commonly employed in Natural Language Processing (NLP) tasks, such as an attention-based sequence-to-sequence (seq2seq) network or a Bidirectional Encoder Representations from Transformers (BERT) model.
The seq2seq model is an encoder-decoder model commonly used in the NLP field, consisting of an Encoder part and a Decoder part. Natural language sequences have temporal dynamics, so encoding them along the time dimension requires a Recurrent Neural Network (RNN); in the seq2seq model, a Long Short-Term Memory (LSTM) network is used to encode the input sequence and an RNN is used to decode the features learned from it, and LSTM performs excellently in handling long-range dependencies in time series. Unlike the traditional seq2seq model, this embodiment uses a seq2seq model with an attention mechanism as the behavior recognition model, where the attention module is used for decoding the semantic features and replaces the RNN of the traditional seq2seq model. When humans handle natural language processing tasks such as Chinese-English translation, they selectively attend to the keywords in a sentence; this mechanism is called the attention mechanism. In the model, raising the attention weights of keywords and lowering those of non-keywords yields an attention mechanism similar to that of humans. A similar conclusion holds in the recognition and classification of animal behavior: the animal behaviors occurring at a given moment must contain one or several principal components (main or core behaviors), and the specific characters in the serialized animal behavior data corresponding to these principal components are naturally the important semantic features for recognizing them. Therefore, compared with the traditional seq2seq model, the seq2seq model with attention can better describe the main behavioral components and their constituent parts.
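For illustration only, the sketch below is a heavily simplified stand-in for the attention idea: an LSTM encoder over the serialized characters followed by attention pooling and a linear classifier over behavior labels. It is not the seq2seq architecture of this embodiment (there is no sequence decoder), and all hyperparameters are assumptions made for the example.

```python
import torch
import torch.nn as nn

class AttnBehaviorClassifier(nn.Module):
    """Minimal sketch: LSTM encoder + additive attention pooling + linear head."""

    def __init__(self, n_chars, n_labels, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each encoder time step
        self.classifier = nn.Linear(hidden, n_labels)

    def forward(self, x):
        # x: (batch, seq_len) of character ids from the serialized behavior data
        h, _ = self.encoder(self.embed(x))        # (batch, seq_len, hidden)
        scores = self.attn(h).squeeze(-1)         # (batch, seq_len)
        weights = torch.softmax(scores, dim=-1)   # attention over time steps
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # (batch, hidden)
        return self.classifier(context)           # per-behavior logits

# Toy usage: 128 possible ASCII symbols, 6 behavior classes.
model = AttnBehaviorClassifier(n_chars=128, n_labels=6)
batch = torch.randint(0, 128, (4, 50))            # 4 sequences of length 50
logits = model(batch)
probs = torch.softmax(logits, dim=-1)             # per-behavior probabilities
```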
In one implementation of the present application, the training performance may be visualized during training of the behavior recognition model, so that inspectors can understand how well the model has been trained. Exemplarily, the training loss (Training Loss), the precision and recall of recognition, and the confusion matrix of the behavior recognition model may be visualized. The training loss directly describes how well the model is being optimized: the smaller the training loss, the better the optimization. Meanwhile, during parameter tuning, the way the training loss decreases can provide guidance: a loss that decreases over time indicates the model is still learning; a loss that increases over time indicates the model has not learned useful regularities in the data; an oscillating loss indicates the current model has reached its best performance, and further improving its recognition requires adjusting the parameters. Precision and recall directly reflect the recognition performance on the current test data: precision describes the proportion of the model's predictions for a given class that are correct, that is, its ability to recognize and classify that class among all classes; recall describes the proportion of samples of a given class that the model judges correctly, that is, how well the model separates that class. The confusion matrix is the most basic indicator of a machine learning model, directly describing how well the data labels correspond to the model's predictions.
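Precision, recall, and the confusion matrix described above can be computed with scikit-learn, as in the sketch below; the label names and predictions are made up for illustration.

```python
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

classes = ["walking", "head-poking", "sniffing"]
y_true = ["walking", "walking", "sniffing", "head-poking", "sniffing", "walking"]
y_pred = ["walking", "sniffing", "sniffing", "head-poking", "walking", "walking"]

precision, recall, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=classes, zero_division=0)
cm = confusion_matrix(y_true, y_pred, labels=classes)

for name, p, r in zip(classes, precision, recall):
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
print("confusion matrix (rows = true, cols = predicted):")
print(cm)
```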
S230: acquire original behavior data, and preprocess the original behavior data to obtain serialized behavior data.
S240: input the serialized behavior data into the pre-trained behavior recognition model to obtain an output result of the behavior recognition model.
S250: generate a behavior recognition result according to the output result, and output the behavior recognition result.
In this embodiment of the present application, sample serialized data and its corresponding labels are acquired, and training sample data is generated from them; the training sample data is used to train the pre-built behavior recognition model to obtain a trained behavior recognition model. By basing the training sample data on sample serialized data annotated in multiple label forms, the behavior recognition model makes full use of the temporal characteristics of the behavior data when learning behavioral features, which improves the accuracy of its behavior recognition.
Embodiment 3
On the basis of the above embodiments, this embodiment provides an optional embodiment. In this embodiment, the behavior recognition method may be executed by a serialized animal behavior recognition and classification system. In some embodiments, the system first serializes the data acquired by the animal behavior collection device, then annotates the serialized animal behavior data by manually labeling data tags, and finally uses the labeled serialized animal behavior data to train a seq2seq recurrent neural network model with an attention mechanism, obtaining the animal behavior sequences corresponding to the behavior labels. The system is suitable for different animal behavior recognition and classification tasks and for big-data analysis of animal behavior; it automatically obtains the behaviors of experimental interest and their intrinsic transition patterns, improving the efficiency of animal behavior data analysis.
FIG. 3a is a schematic structural diagram of a serialized animal behavior recognition and classification system provided by Embodiment 3 of the present application. As shown in FIG. 3a, the system comprises six parts: an animal behavior data serialization unit, a serialized data annotation unit, a seq2seq model training unit, a behavior sequence recognition and classification unit, a data visualization unit, and a control host. The animal behavior data serialization unit includes a data acquisition module and a data serialization module: the data acquisition module acquires image, video, and physiological signal data of animal behavior, and the data serialization module discretizes, clusters, and encodes the temporal data acquired by the data acquisition module to generate serialized animal behavior data. The serialized data annotation unit includes a data playback module and a data annotation module: the data playback module maps the clustered data from the data serialization module back to the original data space for playback and visualization, for observation by the experimenters; based on the patterns of the observed behavior data (images, video, and physiological signals), the experimenters use the data annotation module to manually label specific animal behavior sequences. The seq2seq model training unit includes a sequence data preprocessing module and a seq2seq model training module: the preprocessing module preprocesses the data differently according to the different annotation forms, and the training module takes the preprocessed data and trains the seq2seq recurrent neural network model with the attention mechanism. The behavior sequence recognition and classification unit includes a seq2seq model recognition and classification module and a recognized-data segmentation and labeling module: the former inputs the serialized animal behavior data to be recognized into the seq2seq model to realize automatic recognition and classification of animal behavior, and the latter obtains the recognized and classified data labels, maps the original animal behavior data and the serialized data labels onto the same time dimension, and at the same time sorts data of the same class into the same folder according to set rules. The data visualization unit includes a data statistics module and a data plotting module: the statistics module computes statistics on the segmented data according to set rules and mines the regularities and differences present in the animal behavior data, while the plotting module renders the animal behavior data and the statistical results as bar charts, pie charts, behavior maps, and other statistical visualizations that display the data effectively. The control host is the basis on which the whole algorithm runs: it supports the collection of animal behavior data and the storage and retrieval of large amounts of it, provides hardware computing power for data serialization, and provides a graphics processing unit for parallel computation for seq2seq model training and animal behavior recognition and classification, speeding up model training and validation. The control host also provides experimenters with an interactive interface for using this method; a high-performance control host saves experimenters a great deal of time spent adjusting model parameters and running statistical tests on the data, ensures the efficiency of the data model, and shortens the experimental cycle.
FIG. 3b is a flowchart of a behavior recognition method provided by Embodiment 3 of the present application. As shown in FIG. 3b, the method includes:
S310: use the control host to collect and process animal behavior data to obtain serialized data.
In this embodiment, animal behavior data can be collected in two ways: one is to connect the sensors that collect animal behavior to the control host and collect the data through the data acquisition module in the control host; the other is to load animal behavior data previously collected by offline devices into the system via an attached hard disk. After the animal behavior data is collected, the loaded data is serialized; serialization consists of three steps: multi-modal data alignment, animal behavior data segmentation, and serialized mapping of the animal behavior data. A more detailed description of the serialization procedure can be found in the foregoing embodiments and is not repeated here.
S320: annotate the serialized data.
Optionally, the serialized data and the original animal data may be played back in chronological order by the data playback module in the serialized data annotation unit; the experimenters observe the behaviors they need to mark and use the data annotation module to manually label specific behavior sequences. Manually annotated behavior labels can take multiple forms: a single-label form for different behaviors, a multi-label form for different behaviors, a language-description form for different behaviors, and a fine-grained annotation form for the behaviors of interest.
S330: train the seq2seq recurrent neural network model with the attention mechanism using the annotated serialized data to obtain a trained model.
S340: use the trained model to recognize and classify unannotated serialized data.
It should be noted that, because the annotation forms differ, the final recognition and classification results also differ. If the data is annotated with single labels for different behaviors, the recognition and classification result is a single-label annotation of the different behaviors. If the data is annotated with multi-labels for different behaviors, the recognition and classification results may come in several label forms, and a single label may also appear. Serializing the behavior data amounts to compressing it: redundant information is removed while the main features are retained as far as possible, and the noise in the data is reduced. During manual annotation, compound behaviors occur in certain proportions; for example, when a mouse head-pokes while walking, the walking component may account for 90%, the head-poking behavior for 5%, and other minor behaviors for the remaining 5%, and the annotator can only mark the more obvious behaviors they see. In such a case the annotator labels the current behavior according to their subjective judgment; this subjectivity also varies over time, the proportions of the behaviors cannot be judged accurately, and labels such as walking, head-poking, or compound labels such as walking, head-poking, and tail-wagging are all possible. The model, however, extracts the most obvious and credible result in the data as its output, which is usually reflected in the expectation of the data. Thus, in this example, the final result may be that 90% of the data is labeled as walking alone, 5% as walking and head-poking, and the remaining 5% as walking, head-poking, sniffing, and other minor behaviors that are hard to judge with the naked eye. Finally, if the data is annotated in the language-description form for different behaviors, the recognition and classification results are basically as described in the seq2seq model training unit above: the animal behavior language is translated into the language defined by the annotators, and a description of the animal behavior sequence is obtained.
S350: display the model's recognition results with the data visualization unit.
In this embodiment, the data visualization unit includes visualization of neural network training performance, visualization of statistics over the serialized animal behavior data, and visualization of the animal behavior recognition and classification results. More detailed visualization schemes can be found in the foregoing embodiments and are not repeated here.
This embodiment provides a general-purpose animal behavior recognition and classification system. By serializing the data acquired by the animal behavior collection device, it fuses the multi-modal data while preserving the continuity of the behavior data along the time dimension; the seq2seq recurrent neural network model with the attention mechanism extracts high-dimensional semantic features from the behavior sequence data and decodes these features into the manually annotated behavior label space, effectively extracting the low-dimensional information in high-dimensional behavior data while preserving the high-dimensional structure of animal behavior. Moreover, the manually annotated behavior labels can take many forms, which greatly enriches the descriptive indicators of animal behavior, provides more reference data for drug development, and improves the efficiency and accuracy of drug efficacy testing.
Embodiment 4
FIG. 4 is a schematic structural diagram of a behavior recognition device provided by Embodiment 4 of the present application. The behavior recognition device may be implemented in software and/or hardware; for example, it may be configured in a computer device. As shown in FIG. 4, the device includes a serialized data acquisition module 410, an output result acquisition module 420, and a recognition result output module 430, where:
the serialized data acquisition module 410 is configured to acquire original behavior data and preprocess the original behavior data to obtain serialized behavior data;
the output result acquisition module 420 is configured to input the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model;
the recognition result output module 430 is configured to generate a behavior recognition result according to the output result and output the behavior recognition result.
In this embodiment of the present application, the serialized data acquisition module acquires original behavior data and preprocesses it to obtain serialized behavior data; the output result acquisition module inputs the serialized behavior data into the pre-trained behavior recognition model to obtain the model's output result; and the recognition result output module generates a behavior recognition result from the output result and outputs it. By using serialized behavior data for behavior recognition, the information in the original behavior data is fully exploited, the accuracy of behavior recognition is improved, and the efficiency and accuracy of detection results determined on the basis of behavior recognition are in turn improved.
Optionally, on the basis of the above scheme, the original behavior data includes multi-modal signal acquisition data, and the serialized data acquisition module 410 may be configured to:
align the signal acquisition data of each modality according to the timestamps in the signal acquisition data to obtain aligned behavior data;
segment the aligned behavior data using a set segmentation algorithm to obtain segmented behavior data;
serialize and map the segmented behavior data to obtain the serialized behavior data.
Optionally, on the basis of the above scheme, the device further includes a model training module configured to:
before the serialized behavior data is input into the pre-trained behavior recognition model, acquire sample serialized data and the labels corresponding to the sample serialized data, and generate training sample data from the sample serialized data and the corresponding labels;
train the pre-built behavior recognition model with the training sample data to obtain a trained behavior recognition model.
Optionally, on the basis of the above scheme, the pre-built behavior recognition model is an attention-based sequence-to-sequence network.
Optionally, on the basis of the above scheme, the labels corresponding to the sample serialized data include at least one of a single label, multiple labels, and a language description label.
Optionally, on the basis of the above scheme, the recognition result output module 430 may be configured to:
determine, according to the output result, the feature information corresponding to each behavior, generate a visualized behavior statistics result based on the feature information corresponding to each behavior, and use the visualized behavior statistics result as the behavior recognition result.
Optionally, on the basis of the above scheme, the visualized behavior statistics result includes a behavior glimpse graph and/or a behavior map.
The behavior recognition device provided in this embodiment of the present application can execute the behavior recognition method provided in any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method.
Embodiment 5
FIG. 5 is a schematic structural diagram of a computer device provided by Embodiment 5 of the present application. FIG. 5 shows a block diagram of an exemplary computer device 512 suitable for implementing embodiments of the present application. The computer device 512 shown in FIG. 5 is only an example.
As shown in FIG. 5, the computer device 512 takes the form of a general-purpose computing device. Its components may include: one or more processors 516, a system memory 528, and a bus 518 connecting the different system components (including the system memory 528 and the processors 516).
The bus 518 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, the processor 516, or a local bus using any of a variety of bus architectures. By way of example, these architectures may include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The computer device 512 typically includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the computer device 512, including volatile and non-volatile media and removable and non-removable media.
The system memory 528 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 530 and/or a cache memory 532. The computer device 512 may include other removable/non-removable, volatile/non-volatile computer-system storage media. By way of example only, a storage device 534 may be used to read from and write to a non-removable, non-volatile magnetic medium (not shown in FIG. 5, commonly called a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (such as a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 518 through one or more data media interfaces. The memory 528 may include at least one program product having a set of (for example, at least one) program modules configured to perform the functions of the embodiments of the present application.
A program/utility 540 having a set of (at least one) program modules 542 may be stored, for example, in the memory 528. Such program modules 542 may include an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 542 generally perform the functions and/or methods of the embodiments described in the present application.
The computer device 512 may also communicate with one or more external devices 514 (such as a keyboard, a pointing device, a display 524, and so on), with one or more devices that enable a user to interact with the computer device 512, and/or with any device (such as a network card or a modem) that enables the computer device 512 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 522. In addition, the computer device 512 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 520. As shown in the figure, the network adapter 520 communicates with the other modules of the computer device 512 through the bus 518. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the computer device 512, including: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and so on.
By running the programs stored in the system memory 528, the processor 516 executes various functional applications and data processing, for example implementing the behavior recognition method provided in the embodiments of the present application, which includes:
acquiring original behavior data, and preprocessing the original behavior data to obtain serialized behavior data;
inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model;
generating a behavior recognition result according to the output result, and outputting the behavior recognition result.
Of course, those skilled in the art will understand that the processor may also implement the technical solution of the behavior recognition method provided in any embodiment of the present application.
Embodiment 6
Embodiment 6 of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the behavior recognition method provided in the embodiments of the present application, which may include:
acquiring original behavior data, and preprocessing the original behavior data to obtain serialized behavior data;
inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model;
generating a behavior recognition result according to the output result, and outputting the behavior recognition result.
Of course, the computer program stored on the computer-readable storage medium provided in this embodiment of the present application is not limited to the method operations above and may also perform related operations of the behavior recognition method provided in any embodiment of the present application.
The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media (a non-exhaustive list) include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including wireless, wireline, optical cable, RF, and so on, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

Claims (10)

  1. A behavior recognition method, comprising:
    acquiring original behavior data, and preprocessing the original behavior data to obtain serialized behavior data;
    inputting the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model; and
    generating a behavior recognition result according to the output result, and outputting the behavior recognition result.
  2. The method according to claim 1, wherein the original behavior data comprises multimodal signal acquisition data;
    the step of preprocessing the original behavior data to obtain the serialized behavior data comprises:
    performing data alignment on the signal acquisition data of each modality according to timestamps in the signal acquisition data to obtain aligned behavior data;
    segmenting the aligned behavior data using a set segmentation algorithm to obtain segmented behavior data; and
    performing serialized mapping on the segmented behavior data to obtain the serialized behavior data.
  3. The method according to claim 1, wherein, before the step of inputting the serialized behavior data into the pre-trained behavior recognition model, the method further comprises:
    acquiring sample serialized data and labels corresponding to the sample serialized data, and generating training sample data according to the sample serialized data and the labels corresponding to the sample serialized data; and
    training a pre-built behavior recognition model using the training sample data to obtain the trained behavior recognition model.
  4. The method according to claim 3, wherein the pre-built behavior recognition model is an attention-based sequence-to-sequence network.
  5. The method according to claim 3, wherein the labels corresponding to the sample serialized data comprise at least one of a single label, multiple labels, and a language description label.
  6. The method according to claim 1, wherein the step of generating the behavior recognition result according to the output result comprises:
    determining feature information corresponding to each behavior according to the output result, generating a visualized behavior statistics result based on the feature information corresponding to each behavior, and using the visualized behavior statistics result as the behavior recognition result.
  7. The method according to claim 6, wherein the visualized behavior statistics result comprises at least one of a behavior snapshot diagram and a behavior atlas.
  8. A behavior recognition device, comprising:
    a serialized data acquisition module, configured to acquire original behavior data and preprocess the original behavior data to obtain serialized behavior data;
    an output result obtaining module, configured to input the serialized behavior data into a pre-trained behavior recognition model to obtain an output result of the behavior recognition model; and
    a recognition result output module, configured to generate a behavior recognition result according to the output result and output the behavior recognition result.
  9. A computer device, comprising:
    one or more processors; and
    a storage device configured to store one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the behavior recognition method according to any one of claims 1 to 7.
  10. A computer-readable storage medium, on which a computer program is stored, wherein, when the computer program is executed by a processor, the behavior recognition method according to any one of claims 1 to 7 is implemented.
PCT/CN2020/084694 2020-03-18 2020-04-14 Behavior recognition method, device, equipment and medium WO2021184468A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010190264.3A CN111325289A (zh) 2020-03-18 2020-03-18 Behavior recognition method, device, equipment and medium
CN202010190264.3 2020-03-18

Publications (1)

Publication Number Publication Date
WO2021184468A1 true WO2021184468A1 (zh) 2021-09-23

Family

ID=71173397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/084694 WO2021184468A1 (zh) 2020-03-18 2020-04-14 Behavior recognition method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN111325289A (zh)
WO (1) WO2021184468A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111883252A (zh) * 2020-07-29 2020-11-03 济南浪潮高新科技投资发展有限公司 一种婴儿自闭症辅助诊断方法、装置、设备和存储介质
CN112057079B (zh) * 2020-08-07 2022-07-29 中国科学院深圳先进技术研究院 一种基于状态与图谱的行为量化方法和终端
CN113705296A (zh) * 2021-03-11 2021-11-26 腾讯科技(深圳)有限公司 生理电信号分类处理方法、装置、计算机设备和存储介质
CN113255597B (zh) * 2021-06-29 2021-09-28 南京视察者智能科技有限公司 一种基于transformer的行为分析方法、装置及其终端设备
CN113989728A (zh) * 2021-12-06 2022-01-28 北京航空航天大学 一种动物行为分析方法、装置及电子设备
CN114140682B (zh) * 2021-12-14 2024-09-06 湖南师范大学 一种基于增强连接时序分类网络的步态识别方法
CN114241376A (zh) * 2021-12-15 2022-03-25 深圳先进技术研究院 行为识别模型训练和行为识别方法、装置、系统及介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056043B (zh) * 2016-05-19 2019-07-30 中国科学院自动化研究所 基于迁移学习的动物行为识别方法和装置
CN108764176A (zh) * 2018-05-31 2018-11-06 郑州云海信息技术有限公司 一种动作序列识别方法、系统及设备和存储介质
CN110689041A (zh) * 2019-08-20 2020-01-14 陈羽旻 一种多目标行为动作识别预测方法、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8463721B2 (en) * 2010-08-05 2013-06-11 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for recognizing events
US10503967B2 (en) * 2014-11-21 2019-12-10 The Regents Of The University Of California Fast behavior and abnormality detection
US20190294881A1 (en) * 2018-03-22 2019-09-26 Viisights Solutions Ltd. Behavior recognition
CN110276259A (zh) * 2019-05-21 2019-09-24 平安科技(深圳)有限公司 唇语识别方法、装置、计算机设备及存储介质
CN110765939A (zh) * 2019-10-22 2020-02-07 Oppo广东移动通信有限公司 身份识别方法、装置、移动终端及存储介质

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461468A (zh) * 2022-01-21 2022-05-10 电子科技大学 一种基于人工神经网络的微处理器应用场景识别方法
CN114817635A (zh) * 2022-04-15 2022-07-29 阿里巴巴达摩院(杭州)科技有限公司 行为识别方法、行为识别装置、视频检测方法及存储介质
CN115115967A (zh) * 2022-05-13 2022-09-27 清华大学 模式生物的视频动作分析方法、装置、设备及介质
CN115204753A (zh) * 2022-09-14 2022-10-18 深圳市深信信息技术有限公司 一种智慧农贸场所行为监测方法、系统及可读存储介质
CN116011633A (zh) * 2022-12-23 2023-04-25 浙江苍南仪表集团股份有限公司 区域燃气用量预测方法、系统、设备及物联网云平台
CN116011633B (zh) * 2022-12-23 2023-08-18 浙江苍南仪表集团股份有限公司 区域燃气用量预测方法、系统、设备及物联网云平台
CN116030272A (zh) * 2023-03-30 2023-04-28 之江实验室 一种基于信息抽取的目标检测方法、系统和装置
CN116467500A (zh) * 2023-06-15 2023-07-21 阿里巴巴(中国)有限公司 数据关系识别、自动问答、查询语句生成方法
CN116467500B (zh) * 2023-06-15 2023-11-03 阿里巴巴(中国)有限公司 数据关系识别、自动问答、查询语句生成方法

Also Published As

Publication number Publication date
CN111325289A (zh) 2020-06-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20925705; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20925705; Country of ref document: EP; Kind code of ref document: A1)

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.07.2023))
