CN112633190A - Deep learning method - Google Patents

Deep learning method

Info

Publication number
CN112633190A
Authority
CN
China
Prior art keywords
video
behavior
learning model
sound
supervised learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011582478.1A
Other languages
Chinese (zh)
Inventor
赵嘉
黄学平
付雪峰
侯家振
韩龙哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Institute of Technology filed Critical Nanchang Institute of Technology
Priority to CN202011582478.1A
Publication of CN112633190A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification techniques
    • G10L17/26 Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Social Psychology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning method. First, a behavior video, a behavior sound and a state video of a wild animal are obtained by capturing segments of the footage from a monitoring camera installed in an environment simulator. Second, data are extracted from the behavior video and from the behavior sound respectively, while the state information of the wild animal is obtained from the state video. A supervised learning model is then established and trained to obtain a trained supervised learning model. Finally, a newly recorded behavior video and behavior sound of the wild animal are input into the trained supervised learning model, which outputs the state information of the wild animal. By monitoring the behavior of the wild animal, extracting the feature values of all video frames of its behavior video as a character string, and training the model together with the current environmental parameters and the animal's calls, the method yields a trained model from which the state of the wild animal can be determined quickly and accurately.

Description

Deep learning method
Technical Field
The invention relates to the field of data processing, in particular to a deep learning method.
Background
Wild animals are an indispensable part of the natural environment, and their behavior and state influence it; research on wild animals is therefore an indispensable part of research on the natural environment. Studying the behavior and state of wild animals reveals their living habits, from which their influence on the natural environment can be inferred.
However, many environmental researchers do not specialize in wildlife, are unfamiliar with animal behavior, and therefore spend a great deal of time on observation. Typically, the researcher places the animal in an environment simulator that reproduces its normal living environment, where it can perform its full range of actions; the researcher must then sit beside the simulator for long periods waiting for the animal to clearly exhibit a given behavioral state. This wastes considerable research time and reduces research efficiency.
Disclosure of Invention
The invention aims to overcome the problems in the prior art by providing a deep learning method. By monitoring the behavior of a wild animal, extracting the feature values of all video frames of its behavior video as a character string, and training a model together with the current environmental parameters and the animal's calls, the method produces a trained model from which the state of the wild animal can be determined quickly and accurately, saving research time and improving research efficiency.
Therefore, the invention provides a deep learning method, which comprises the following steps:
acquiring a behavior video, a behavior sound and a state video of the wild animal by capturing segments of the footage from a monitoring camera installed in an environment simulator;
respectively extracting data of the behavior video and data of the behavior sound, and obtaining state information of the wild animal according to the state video;
establishing a supervised learning model, taking the obtained data of the behavior video and the behavior sound as the input of the supervised learning model, and taking the state information as the output of the supervised learning model to train the supervised learning model to obtain the trained supervised learning model;
inputting the recorded behavior video and behavior sound of the wild animal into the trained supervised learning model to obtain the output state information of the wild animal;
and comparing the output state information of the wild animals with the state data input by the user, and inputting the recorded behavior video and behavior sound of the wild animals into the unsupervised learning model when the state information of the wild animals is inconsistent with the state data input by the user.
Further, when the supervised learning model is trained, the input of the supervised learning model also comprises environmental parameters, wherein the environmental parameters are internal environmental parameters of the environmental simulator acquired by a sensor at the current moment, and the current moment is the moment when the behavior video starts; after the supervised learning model is trained, the input also comprises the environmental parameters.
Further, the input received by the supervised learning model is in the form of a vector, and the data of the behavior video and the data of the behavior sound are both represented in the form of a vector.
Further, the data of the behavior video is X, where X = {x1, x2, …, xn}, n is a positive integer, n denotes the nth video frame in the behavior video, and xn denotes the feature value of the nth video frame in the behavior video.
Further, the data of the behavior sound is S, where S = {s1, s2, …, si}, i is a positive integer, i denotes the ith moment in the behavior sound, and si denotes the sound parameter at the ith moment in the behavior sound.
Further, the sound parameter is a weighted average of the intensity parameter, the pitch parameter, and the audio parameter.
Further, when the state information of the wild animal is obtained according to the state video, the method comprises the following steps:
decomposing the state video to obtain each video frame of the state video;
extracting key video frames, and arranging the extracted key video frames according to the time sequence of the timestamps of the key video frames;
sequentially extracting the characteristic value of each key video frame to form a characteristic sequence;
searching in a database according to the characteristic sequence to obtain the state information of the wild animal;
the database is used for storing the state information and the corresponding characteristic sequence.
Further, the unsupervised learning model is used to modify the supervised learning model at a set point in time.
The deep learning method provided by the invention has the following beneficial effects:
1. By monitoring the behavior of the wild animal, extracting the feature values of all video frames of its behavior video as a character string, and training the model together with the current environmental parameters and the animal's calls, the method produces a trained model from which the state of the wild animal can be determined quickly and accurately, saving research time and improving research efficiency;
2. When determining the state of the wild animal, the state is obtained by comparison against the states stored in a database; the database is a cloud database classified according to the species of the wild animal, which makes the obtained state more accurate.
Drawings
FIG. 1 is a schematic block diagram of an overall process of a deep learning method according to the present invention;
FIG. 2 is a schematic block diagram of the overall process by which the deep learning method provided by the invention obtains the state information of a wild animal from the state video.
Detailed Description
An embodiment of the present invention will be described in detail below with reference to the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the embodiment.
Specifically, as shown in FIG. 1, an embodiment of the present invention provides a deep learning method, including the following steps:
firstly, acquiring a behavior video, a behavior sound and a state video of a wild animal by capturing segments of the footage from a monitoring camera installed in an environment simulator;
In the above step, an environment simulator is provided to facilitate data collection and observation: the wild animal to be studied is placed inside it, so that its actions and states remain fully consistent with those in its natural environment. The corresponding technical effect is achieved as long as the behavior video, behavior sound and state video of the wild animal are collected inside the simulator. The environment simulator may take various forms and can be built from prior-art controllers and sensors; a larger space may be chosen so that the animal has more room to move about and its behavior is more natural.
Secondly, respectively extracting data of the behavior video and data of the behavior sound, and obtaining state information of the wild animal according to the state video;
In the above step, the behavior video and the behavior sound are each converted into data, generally in the form of character strings, and the state information of the wild animal is obtained from the state video. The state information may be recorded manually after the state video is watched, or obtained automatically by extracting the action strings from the state video and matching them against stored state information.
Thirdly, establishing a supervised learning model, taking the obtained data of the behavior video and the behavior sound as the input of the supervised learning model, and taking the state information as the output of the supervised learning model to train the supervised learning model to obtain the trained supervised learning model;
In the above step, the established supervised learning model may be any model meeting the conditions, such as a regression model or a multilayer perceptron model; the invention places no specific restriction. The obtained behavior video and behavior sound data serve as the input of the supervised learning model and the state information as its output, so the model can be trained. Training requires a certain amount of data, the exact amount being determined by the technician according to the actual situation; the result is the trained supervised learning model.
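For illustration only, the following is a minimal Python sketch of this training step, assuming the behavior-video vector X, the behavior-sound vector S and the environment vector E have already been extracted at fixed lengths. The multilayer perceptron is merely one of the model choices the description permits, and every identifier and dimension below is a hypothetical stand-in, not taken from the patent.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def build_input(video_vec, sound_vec, env_vec):
    # Concatenate behavior-video, behavior-sound and environment features
    # into the single vector the supervised model receives.
    return np.concatenate([video_vec, sound_vec, env_vec])

# Hypothetical training set: each sample pairs one recording with the state
# label read off the corresponding state video (0 = resting, 1 = foraging, ...).
rng = np.random.default_rng(0)
X_train = np.stack([build_input(rng.random(64), rng.random(32), rng.random(8))
                    for _ in range(200)])
y_train = rng.integers(0, 3, size=200)

model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# After training, a newly recorded behavior video/sound plus the current
# environment parameters yield the predicted state information.
sample = build_input(rng.random(64), rng.random(32), rng.random(8))
print(model.predict(sample[None, :]))
```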
Fourthly, inputting the recorded behavior video and behavior sound of the wild animal into the trained supervised learning model to obtain the output state information of the wild animal;
In this step, the trained supervised learning model is applied: the recorded behavior video and behavior sound of the wild animal serve as its input, from which the model produces its output, namely the state information of the wild animal.
Fifthly, comparing the output state information of the wild animals with the state data input by the user, and inputting the recorded behavior video and behavior sound of the wild animals into the unsupervised learning model when the state information of the wild animals is inconsistent with the state data input by the user.
In the above step, when the output state information of the wild animal is inconsistent with the state data input by the user, there is a deviation either in the supervised learning model or in the user's own input; the recorded behavior video and behavior sound of the wild animal are therefore input into the unsupervised learning model so that the deviation is reduced.
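A sketch of this routing step only: K-means clustering stands in here for whichever unsupervised learner is actually used (the patent names none), and all identifiers are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def route_mismatches(inputs, predicted, user_labels, n_clusters=3):
    """Collect the recordings whose model output disagrees with the user's
    state data and hand them to an unsupervised model for later correction."""
    inputs = np.asarray(inputs)
    disagree = np.asarray(predicted) != np.asarray(user_labels)
    mismatched = inputs[disagree]
    if len(mismatched) < n_clusters:
        return None  # too little disagreement to learn anything from yet
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(mismatched)
```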
In this embodiment, when the supervised learning model is trained, the input of the supervised learning model further includes an environmental parameter, where the environmental parameter is an internal environmental parameter of the environmental simulator, which is acquired by a sensor at the current time, and the current time is a time when the behavior video starts; after the supervised learning model is trained, the input also comprises the environmental parameters.
In this technical scheme, environmental parameters are added as conditions during supervised learning model training, making the final result more accurate. The environmental parameters include the concentrations of the various gases in the air, soil indices such as temperature and humidity, and air indices such as temperature and humidity. They can be read directly from the setting information of the environment simulator, or collected by sensors installed inside it.
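A small illustrative sketch of packing these sensor readings into a fixed-order environment vector; the field names and units are assumptions, since the patent fixes no schema.

```python
import numpy as np

# Deterministic field order so the environment vector is comparable across
# samples; the keys below are hypothetical examples of the listed indices.
ENV_FIELDS = ["air_temp_c", "air_humidity", "soil_temp_c", "soil_humidity",
              "co2_ppm", "o2_percent"]

def environment_vector(readings: dict) -> np.ndarray:
    """Pack the simulator sensor readings taken when the behavior video starts."""
    return np.array([readings[k] for k in ENV_FIELDS], dtype=float)

E = environment_vector({"air_temp_c": 22.5, "air_humidity": 0.61,
                        "soil_temp_c": 18.0, "soil_humidity": 0.34,
                        "co2_ppm": 410.0, "o2_percent": 20.9})
```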
In this embodiment, the input received by the supervised learning model is in vector form, and the data of the behavior video and the data of the behavior sound are both represented as vectors. This makes the input easier for the model to accept and to extract from, and makes the training result more accurate.
That is, in the present embodiment, the data of the behavior video is X = {x1, x2, …, xn}, where n is a positive integer, n denotes the nth video frame in the behavior video, and xn denotes the feature value of the nth video frame. Because the feature values of the behavior videos corresponding to a given piece of state information are similar to one another and share a certain association, and because each video frame in a behavior video corresponds to exactly one feature value, the combination of the frame feature values corresponds uniquely to one behavior video; hence the data of the behavior video is X = {x1, x2, …, xn}.
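For illustration, a sketch of one way to realize "one feature value per video frame", using the mean gray level as a stand-in scalar feature; the patent leaves the actual feature extractor unspecified.

```python
import cv2
import numpy as np

def behavior_video_vector(path: str) -> np.ndarray:
    """Return X = (x1, ..., xn): one scalar feature value per frame, in frame order."""
    cap = cv2.VideoCapture(path)
    features = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        features.append(float(gray.mean()))  # x_n for the n-th frame
    cap.release()
    return np.array(features)
```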
That is, in the present embodiment, the data of the behavior sound is S = {s1, s2, …, si}, where i is a positive integer, i denotes the ith moment in the behavior sound, and si denotes the sound parameter at the ith moment. As with the behavior video data above, the sound is decomposed at time nodes to obtain the sound parameter at each moment, and arranging these parameters into a vector gives the sound data S = {s1, s2, …, si}.
Meanwhile, the sound parameter is a weighted average of the sound intensity parameter, the pitch parameter and the audio parameter; in the embodiment of the invention the three weights are equal. Each of these parameters reflects one index of the sound.
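A hedged sketch of computing the per-moment sound parameter s_i as the equally weighted average of three acoustic measures. RMS energy, YIN fundamental frequency and spectral centroid are stand-ins chosen here for the intensity, pitch and "audio" parameters; the patent does not name concrete measures, and the per-parameter rescaling is likewise an assumption.

```python
import librosa
import numpy as np

def behavior_sound_vector(path: str, hop: int = 512) -> np.ndarray:
    """Return S = (s1, ..., si): one sound parameter per analysis frame."""
    y, sr = librosa.load(path, sr=None)
    intensity = librosa.feature.rms(y=y, hop_length=hop)[0]           # intensity stand-in
    pitch = librosa.yin(y, fmin=60, fmax=4000, sr=sr, hop_length=hop) # pitch stand-in
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]

    def norm(v):
        # Rescale each parameter to [0, 1] so equal weighting is meaningful.
        v = np.nan_to_num(np.asarray(v, dtype=float))
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)

    m = min(len(intensity), len(pitch), len(centroid))
    # Equal weights, per the embodiment: s_i is the mean of the three parameters.
    return (norm(intensity[:m]) + norm(pitch[:m]) + norm(centroid[:m])) / 3.0
```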
In this embodiment, when obtaining the state information of the wild animal according to the state video, as shown in FIG. 2, the method includes the following steps:
(I) decomposing the state video to obtain each video frame of the state video;
(II) extracting key video frames, and arranging the extracted key video frames in the time order of their timestamps;
(III) sequentially extracting the feature value of each key video frame to form a feature sequence;
(IV) searching a database according to the feature sequence to obtain the state information of the wild animal;
(V) the database being used to store the state information and the corresponding feature sequences.
In this technical scheme, the state video is decomposed and then processed, and the feature values of its key video frames are extracted, yielding a feature sequence that corresponds uniquely to the state video and therefore to exactly one piece of state information. The state information of the wild animal can thus be obtained by searching the database; because it is obtained automatically, a better model learning effect is achieved.
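A minimal sketch of steps (I)-(V) above, under two stated assumptions: a plain dict stands in for the cloud database, and a "key video frame" is approximated as a frame whose feature differs sharply from its predecessor's (the patent defines neither).

```python
import cv2

def state_from_video(path: str, database: dict, threshold: float = 12.0) -> str:
    """Decompose the state video, keep key frames in timestamp order,
    form the feature sequence, and look it up in the database."""
    cap = cv2.VideoCapture(path)
    key_features, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        feat = float(gray.mean())
        # Treat a large frame-to-frame change as marking a key video frame.
        if prev is None or abs(feat - prev) > threshold:
            key_features.append(round(feat, 1))
        prev = feat
    cap.release()
    # The feature sequence is the lookup key; frames were read in timestamp order.
    return database.get(tuple(key_features), "unknown state")
```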
In this embodiment, the unsupervised learning model is used to modify the supervised learning model at a set point in time, so that the accuracy of the obtained wild animal state information increases as the amount of use grows.
The above disclosure covers only a few specific embodiments of the present invention; however, the present invention is not limited to these embodiments, and any variation conceivable to those skilled in the art falls within the scope of the present invention.

Claims (8)

1. A deep learning method is characterized by comprising the following steps:
acquiring a behavior video, a behavior sound and a state video of the wild animal by capturing segments of the footage from a monitoring camera installed in an environment simulator;
respectively extracting data of the behavior video and data of the behavior sound, and obtaining state information of the wild animal according to the state video;
establishing a supervised learning model, taking the obtained data of the behavior video and the behavior sound as the input of the supervised learning model, and taking the state information as the output of the supervised learning model to train the supervised learning model to obtain the trained supervised learning model;
inputting the recorded behavior video and behavior sound of the wild animal into the trained supervised learning model to obtain the output state information of the wild animal;
and comparing the output state information of the wild animals with the state data input by the user, and inputting the recorded behavior video and behavior sound of the wild animals into the unsupervised learning model when the state information of the wild animals is inconsistent with the state data input by the user.
2. The deep learning method as claimed in claim 1, wherein, when the supervised learning model is trained, the input further includes environmental parameters, the environmental parameters are environmental simulator internal environmental parameters collected by a sensor at the current time, and the current time is the time when the behavior video starts; after the supervised learning model is trained, the input also comprises the environmental parameters.
3. The deep learning method as claimed in claim 1, wherein the input received by the supervised learning model is in the form of vector, and the data of the behavior video and the data of the behavior sound are both expressed in the form of vector.
4. The deep learning method as claimed in claim 3, wherein the data of the behavior video is X, where X = {x1, x2, …, xn}, n is a positive integer, n denotes the nth video frame in the behavior video, and xn denotes the feature value of the nth video frame in the behavior video.
5. The deep learning method as claimed in claim 3, wherein the data of the behavior sound is S, where S = {s1, s2, …, si}, i is a positive integer, i denotes the ith moment in the behavior sound, and si denotes the sound parameter at the ith moment in the behavior sound.
6. The deep learning method as claimed in claim 5, wherein the sound parameter is a weighted average of the intensity parameter, the pitch parameter and the audio parameter.
7. The deep learning method as claimed in claim 1, wherein when obtaining the status information of the wild animal according to the status video, the method comprises the following steps:
decomposing the state video to obtain each video frame of the state video;
extracting key video frames, and arranging the extracted key video frames according to the time sequence of the timestamps of the key video frames;
sequentially extracting the characteristic value of each key video frame to form a characteristic sequence;
searching in a database according to the characteristic sequence to obtain the state information of the wild animal;
the database is used for storing the state information and the corresponding characteristic sequence.
8. A deep learning method as claimed in claim 1, wherein the unsupervised learning model is used to modify the supervised learning model at a set point in time.
CN202011582478.1A 2020-12-28 2020-12-28 Deep learning method Pending CN112633190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011582478.1A CN112633190A (en) 2020-12-28 2020-12-28 Deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011582478.1A CN112633190A (en) 2020-12-28 2020-12-28 Deep learning method

Publications (1)

Publication Number Publication Date
CN112633190A true CN112633190A (en) 2021-04-09

Family

ID=75326047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011582478.1A Pending CN112633190A (en) 2020-12-28 2020-12-28 Deep learning method

Country Status (1)

Country Link
CN (1) CN112633190A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056043A (en) * 2016-05-19 2016-10-26 中国科学院自动化研究所 Animal behavior identification method and apparatus based on transfer learning
CN107862387A (en) * 2017-12-05 2018-03-30 深圳地平线机器人科技有限公司 The method and apparatus for training the model of Supervised machine learning
CN108182423A (en) * 2018-01-26 2018-06-19 山东科技大学 A kind of poultry Activity recognition method based on depth convolutional neural networks
CN110826358A (en) * 2018-08-08 2020-02-21 杭州海康威视数字技术股份有限公司 Animal emotion recognition method and device and storage medium
CN110457999A (en) * 2019-06-27 2019-11-15 广东工业大学 A kind of animal posture behavior estimation based on deep learning and SVM and mood recognition methods

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨盛春 et al.: "A comparison of supervised and unsupervised learning in neural networks" (神经网络内监督学习和无监督学习之比较), Journal of Xuzhou Institute of Architectural Technology *
石文兵 et al.: "Recognition of Hu sheep maintenance behavior based on deep belief networks" (基于深度信念网络的湖羊维持行为识别), Chinese Journal of Sensors and Actuators *

Similar Documents

Publication Publication Date Title
WO2018058821A1 (en) Disease and insect pest forecasting method and apparatus based on planting equipment
WO2018099220A1 (en) Method and device for predicting plant diseases and pests in planter
CN107463958A (en) Insect identifies method for early warning and system
CN111553806B (en) Self-adaptive crop management system and method based on low-power-consumption sensor and Boost model
CN106614273A (en) Pet feeding method and system based on big data analysis of Internet of Things
CN112001370A (en) Crop pest and disease identification method and system
CN117073768B (en) Beef cattle cultivation management system and method thereof
CN112418498A (en) Temperature prediction method and system for intelligent greenhouse
CN117557914B (en) Crop pest identification method based on deep learning
CN111915097B (en) Water quality prediction method for optimizing LSTM neural network based on improved genetic algorithm
CN111681755A (en) Pig disease diagnosis and treatment system and method
CN115496300A (en) Method for monitoring growth information and environment of Chinese rose seedlings
Stouffer A critical examination of models of annual‐plant population dynamics and density‐dependent fecundity
Kalaiarasi et al. Crop yield prediction using multi-parametric deep neural networks
KR102387765B1 (en) Method and apparatus for estimating crop growth quantity
Kellner et al. Simulation of oak early life history and interactions with disturbance via an individual-based model, SOEL
CN112633190A (en) Deep learning method
CN117473041A (en) Programming knowledge tracking method based on cognitive strategy
CN107895215A (en) The prediction of community network influence power and maximization System and method for based on neutral net
CN108133737A (en) Rodent fear tests video analysis method and device
Imai et al. A quantitative method for analyzing species-specific vocal sequence pattern and its developmental dynamics
CN116307879A (en) Efficient cultivation method, system and medium for penaeus monodon larvae
CN112836138B (en) User recommendation method and device
CN114613371A (en) Chick sex identification method based on bidirectional gated recurrent neural network
CN113743461A (en) Unmanned aerial vehicle cluster health degree assessment method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210409